Posted on 12/06/2014 1:48:55 PM PST by 2ndDivisionVet
During the last few years, the semiconductor industry has been having a harder and harder time miniaturizing transistors, the latest problem being Intel's delayed roll-out of its new 14 nm process. The best way to confirm this slowdown in the progress of computing power is to try to run your current programs on a 6-year-old computer. You will likely have few problems, since computers have not sped up greatly during the past 6 years. If you had tried this experiment a decade ago, you would have found a 6-year-old computer close to useless, because Intel and others were then getting much greater performance gains per year than they are getting today.
Many are unaware of this problem, as improvements in software and the current trend of having software rely on specialized GPUs instead of CPUs have made this slowdown in performance gains less evident to the end user. (The more specialized a chip is, the faster it runs.) But despite such workarounds, people are already changing their habits, such as upgrading their personal computers less often. Recently, people upgraded their ancient Windows XP machines only because Microsoft forced them to by discontinuing support for the still-popular operating system. (Windows XP was the second most popular desktop operating system in the world the day after Microsoft ended all support for it, at which point it was a 12-year-old operating system.)
It would be unlikely for AIs to become as smart as us by 2029, as Ray Kurzweil has predicted, if we depended on Moore's Law to create the hardware for AIs to run on. But all is not lost. Previously, electromechanical technology gave way to relays, then to vacuum tubes, then to solid-state transistors, and finally to today's integrated circuits. One possibility for the sixth paradigm to provide exponential growth of computing has been to go from 2D integrated circuits to 3D integrated circuits. There have been small incremental steps in this direction; for example, Intel introduced 3D tri-gate transistors with its first 22 nm chips in 2012. While these chips were slightly taller than the previous generation, the performance gains from this technology were not great. (Intel is simply making its transistors taller and thinner; it is not stacking transistors on top of each other.)
But quietly this year, 3D technology has finally taken off. The recently released Samsung 850 Pro, which uses 42 nm flash memory, is competitive with rival products that use 19 nm flash memory. Considering that, on a conventional flat chip, a 42 nm cell takes up (42 × 42) / (19 × 19) ≈ 4.9 times as much area, and therefore stores about 4.9 times less data in the same die space, how did Samsung pull this off? They used their new 3D V-NAND architecture, which stacks 32 cell layers on top of one another. It wouldn't be that hard for them to go from 32 layers to 64, then to 128, and so on. Expect flash drives to have greater capacity than hard drives in a couple of years! (Hard drives are running into their own end-of-Moore's-Law situation.) Note that by using 42 nm flash memory instead of 19 nm flash memory, Samsung is able to use bigger cells that can handle more read and write cycles.
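To see why stacking wins even on an older process, here is a rough back-of-the-envelope sketch in Python (illustrative only; it treats density as simple planar cell area and ignores cell geometry, bits per cell, and peripheral circuitry):

planar_node_nm = 19   # competing planar flash process
vnand_node_nm = 42    # process node used by Samsung's 3D V-NAND
layers = 32           # cell layers stacked in V-NAND

# A 42 nm cell occupies roughly (42/19)^2, about 4.9x, the area of a 19 nm cell.
area_penalty = (vnand_node_nm / planar_node_nm) ** 2

# Stacking 32 layers multiplies the cells available per unit of die area by 32.
net_density_gain = layers / area_penalty

print(f"Area penalty per cell: {area_penalty:.1f}x")             # -> 4.9x
print(f"Net density vs. 19 nm planar: {net_density_gain:.1f}x")  # -> about 6.5x

Under this simplified model, 32 layers at 42 nm still work out to roughly 6.5 times the planar 19 nm density, which is how the 850 Pro stays competitive while using larger, more durable cells.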
Samsung is not the only one with this 3D idea. For example, Intel has announced that it will be producing its own 32-layer 3D NAND chips in 2015. And 3D integrated circuits are, of course, not the only potential solution to the end of Moore's Law. For example, Google is getting into the quantum computer business, which is another possible solution. But there is a huge difference between a theoretical solution being tested in a lab somewhere and something you can buy on Amazon today.
Finally, to give you an idea of how fast things are progressing: a couple of months ago, Samsung's best technology was based on 24-layer 3D MLC chips, and now Samsung has already announced that it is mass-producing 32-layer 3D TLC chips that hold 50 percent more data per cell (three bits instead of two) than the 32-layer 3D MLC chips currently used in the Samsung 850 Pro.
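As a quick sanity check on the per-cell numbers (a minimal sketch, assuming the usual definitions of two bits per MLC cell and three bits per TLC cell):

mlc_bits_per_cell = 2   # MLC: two bits per cell (four voltage levels)
tlc_bits_per_cell = 3   # TLC: three bits per cell (eight voltage levels)

gain = tlc_bits_per_cell / mlc_bits_per_cell
print(f"TLC stores {gain:.1f}x the data per cell of MLC")  # -> 1.5x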
The Singularity is near!
486 computers can do text processing in near real time.
Audio in real time too.
The best way to confirm this slowdown in progress of computing power is to try to run your current programs on a 6-year-old computer. You will likely have few problems since computers have not sped up greatly during the past 6 years.
What has changed is feature size. As feature size shrinks, so do power consumption and heat (generally). Smaller gates mean more gates on the same size die. This means larger caches, more cores, etc. Most "real" speedup lately has come from doing multiple things at once (see the sketch below).
Now I guess even that run is coming to an end. As feature size shrinks, quantum effects, random radiation effects, etc. become a problem. So, will 3D technology keep us increasing performance??? Just have to see...
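To put a number on that last point, here is a minimal Amdahl's-law sketch in Python (illustrative figures only; the 75% parallel fraction is an assumption, not a measurement):

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    # Ideal speedup when only parallel_fraction of the work can use all cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 16):
    print(f"{cores} cores: {amdahl_speedup(0.75, cores):.2f}x")

With 75% of the work parallelizable, 16 cores deliver only about 3.4x, not 16x, which is why piling on cores never felt like the old per-core clock-speed gains.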
It’s been said that there are only two types of computers:
those which waste your precious time and those which waste your precious time much faster.
Quantum dots and optical computing?
DK
liquid memory solves all problems.
There’s a big price hurdle to go beyond 8 Gig RAM. Requires a pricier chip set. To quote the master, no one should need more than 640K.
There are Physics limitations for everything. ;-)
Bourbon processing has only limited productivity gains.
So said the sail and buggy makers.
lol no spirits needed.
Already they have produced a chip with one million 'neurons' and 256 million 'synapses' -- roughly equivalent to an earthworm's brain.