Posted on 09/02/2009 12:47:38 AM PDT by neverdem
YORKTOWN HEIGHTS, N.Y. Gaze into the electron microscope display in Frances Ross's laboratory here and it is possible to persuade yourself that Dr. Ross, a 21st-century materials scientist, is actually a farmer in some Lilliputian silicon world.
Dr. Ross, an I.B.M. researcher, is growing a crop of mushroom-shaped silicon nanowires that may one day become a basic building block for a new kind of electronics. Nanowires are just one example, although one of the most promising, of a transformation now taking place in materials science as researchers push to create the next generation of switching devices: smaller, faster and more powerful than today's transistors.
The reason that many computer scientists are pursuing this goal is that the shrinking of the transistor has approached fundamental physical limits. Increasingly, transistor manufacturers grapple with subatomic effects, like the tendency for electrons to leak across material boundaries. The leaking electrons make it more difficult to know when a transistor is in an on or off state, the information that makes electronic computing possible. They have also led to excess heat, the bane of the fastest computer chips.
The transistor is not just another element of the electronic world. It is the invention that made the computer revolution possible. In essence it is an on-off switch controlled by the flow of electricity. For the purposes of computing, when the switch is on it represents a one. When it is off it represents a zero. These zeros and ones are the most basic language of computers.
For more than half a century, transistors have gotten smaller and cheaper, following something called Moore's Law, which states that circuit density doubles roughly every two years. This was predicted by the computer scientist Douglas Engelbart in 1959, and then described by Gordon Moore, the co-founder of Intel,...
(Excerpt) Read more at nytimes.com ...
Eh?
And he came up with it while trying to get away from seasonal allergies.
For more than half a century, transistors have gotten smaller and cheaper, following something called Moore's Law, which states that circuit density doubles roughly every two years. This was predicted by the computer scientist Douglas Engelbart in 1959, and then described by Gordon Moore, the co-founder of Intel, in a now-legendary 1965 article in Electronics, the source of Moore's Law.
Moore's Law describes the phenomenon, but IMHO the more fundamental point is that the R&D funding implicit in that "law" rests on two facts: you can cut the production cost of a product by about 20-25% any time you double the production rate, and the market for transistors on computer chips has not saturated as transistor production has been ramped up at a geometric rate. And IMHO it is unlikely to saturate any time soon, as transistor counts grow toward the point where artificial intelligence applications start to kick in in a big way.
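The compounding in that 20-25%-per-doubling claim is easy to underestimate, so here is a minimal sketch of the arithmetic. The function name, the 0.22 midpoint rate, and the starting cost of 1.0 are all illustrative assumptions, not figures from the article or the comment.

```python
# Experience-curve sketch: each doubling of production cuts unit cost
# by a fixed fraction (the comment's 20-25% range; 0.22 is an assumed
# midpoint, not a sourced number).

def unit_cost(initial_cost, doublings, learning_rate=0.22):
    """Unit cost after `doublings` successive doublings of production.

    Each doubling multiplies cost by (1 - learning_rate), so the
    decline compounds geometrically.
    """
    return initial_cost * (1 - learning_rate) ** doublings

# After 10 doublings (1024x the original volume), a 22% rate leaves
# only about 8% of the original unit cost.
print(round(unit_cost(1.0, 10), 4))
```

At that assumed rate the cost collapse, not the density doubling itself, is what keeps opening new markets and funding the next generation of fabs.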
You want to know why there is such a phenomenon as the Steve Jobs aura? It is because people know that Moore's Law is a "law" of progress - one that depends fundamentally on innovation delivering at least a linear increase in utility as transistor quantity increases geometrically. And that requires vision, which requires visionaries.
Shining a light on DNA-binding drugs in living cells
Lungs of fatal swine flu patients badly damaged
This Time, City Says Its Ready for Swine Flu (NYC)