Thanks, for I surely have no understanding of the underlying physical laws. Do you know why 5 silicon atoms, and not 4 or 6, is the limit?
Basically, the doublings happen not in terms of feature length but in terms of area: a linear shrink from 10 nm to 7 nm roughly halves the area, since 7 squared (49) is about half of 10 squared (100).
So the point is that there just aren't many doublings left.
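To put numbers on that, here's a quick Python back-of-envelope (the node sizes are just illustrative, not anyone's official roadmap):

```python
# A linear shrink gives a quadratic gain in density.
# Node sizes in nm (illustrative figures, not an official roadmap).
nodes_nm = [14, 10, 7, 5]

for old, new in zip(nodes_nm, nodes_nm[1:]):
    area_ratio = (new / old) ** 2  # area of a feature relative to the old node
    print(f"{old} nm -> {new} nm: area shrinks to {area_ratio:.0%}, "
          f"i.e. ~{1 / area_ratio:.1f}x the density")
```

Each of those steps is roughly one doubling, which is why the step from 10 nm to 7 nm counts as a full generation even though the linear shrink looks modest.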
OK, the part that I understand without really understanding is that when the features get too small, the light or electrons bleed through the walls of the circuits, so there's interference.
What I don't understand is how they can get smaller without increasing the bleed.
Stronger walls? More finely etched circuits?
As 2 Kool explained, I'm not sure there's anything significant about 7 nm (or 5 atoms), and I'm not sure that's what Intel is saying either.
Ten or twenty years ago, there were plenty of people who thought they'd never be able to get useful circuit functionality at feature sizes below about twenty atoms across, a threshold they are already well past.
Also keep in mind that there's a whole manufacturing dimension to this. Getting to the current feature size of 19 nm (or is it 14 nm now?) takes an incredible amount of technology.
For example, back in the 1990s, they were able to use deep-ultraviolet (DUV) lasers as light sources to transfer the chip-scale artwork to the various layers of photoresist on the big (300 mm) wafers used to make microprocessor and memory chips.
To get below 22 nm, they have to use EUV (extreme ultraviolet) light sources that are remarkable. In them, very high-power lasers ionize tiny droplets of metallic tin (Sn). When I say "ionize," I mean highly ionize: they blow the entire outer electron shell off the tin atoms, and the resultant plasma emits photons of such high energy that they are close to being X-rays. This process is in some ways similar to what they planned to do with laser fusion, except this is practical; the tin droplets are ionized at a rate of tens of thousands per second (I'm not sure of the details).
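To put a number on "close to being X-rays": the wavelength used in these tools is usually quoted as 13.5 nm, and the photon energy follows from E = hc/λ. A quick sketch (the 13.5 nm and 193 nm figures are the standard published ones, not something specific to Intel):

```python
# Photon energy E = h*c / lambda for DUV vs. EUV light.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

for label, wavelength_nm in [("ArF DUV", 193.0), ("EUV", 13.5)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{label} ({wavelength_nm} nm): ~{energy_eV:.0f} eV per photon")
# EUV comes out around 92 eV, soft-X-ray territory, vs. ~6 eV for DUV.
```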
Just the light sources for these fab machines cost as much as an entire fab machine of twenty years ago (those used to be called "wafer steppers" or "mask aligners"; I don't know if those terms are still used).
To get to 7 nm, they'll have to go to literal X-ray sources, if I remember correctly, which means the whole process has to be done under high-vacuum conditions. Also keep in mind that they don't get to lay down the tiny lines and features just once. They have to do it over and over again, five, six, even seven times, to make working chips. Each overlay has to be done to a tiny fraction of the feature line width. Thus, to achieve 7 nm feature sizes, they have to reach an overlay accuracy of (again, I don't know the details) no more than 2 or 3 nanometers.
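For what my 2-3 nm guess is worth, it comes from the usual rule of thumb that overlay has to be a modest fraction of the feature size; here's the arithmetic (the 20-30% fraction is my assumption, not a published spec):

```python
# Overlay budget as a fraction of feature size. The 20-30% fraction is
# a common rule of thumb I'm assuming here, not an Intel number.
feature_nm = 7.0
si_lattice_nm = 0.543  # silicon lattice constant, for scale

for fraction in (0.2, 0.3):
    budget_nm = feature_nm * fraction
    print(f"{fraction:.0%} of {feature_nm} nm -> ~{budget_nm:.1f} nm overlay,"
          f" about {budget_nm / si_lattice_nm:.0f} silicon lattice spacings")
```

That lands in the 1.4 to 2.1 nm range, i.e. the positioning error has to stay within a few atomic spacings.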
That means they have to be able to control the "wafer stage" of these machines (which are big, about the size of a small car) to an accuracy on the order of the size of one atom, over and over, at production speeds, turning out tens of thousands of chips per hour. That's because these machines will be enormously expensive and must be kept in production 24/7/365 to pay off the money Intel will borrow to build, install, and operate them.
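And on the economics, this is the kind of back-of-envelope that forces the 24/7 schedule; every number below is a made-up placeholder, just to show the shape of the calculation:

```python
# Why the machine can't sit idle: amortize the tool cost over its output.
# Every figure below is a hypothetical placeholder, not a real price.
tool_cost_usd = 150e6      # assumed cost of one litho tool
lifetime_years = 5         # assumed depreciation period
chips_per_hour = 20_000    # "tens of thousands of chips per hour"

for utilization in (1.00, 0.50):  # running 24/7/365 vs. half-idle
    hours = lifetime_years * 365 * 24 * utilization
    cost_per_chip = tool_cost_usd / (hours * chips_per_hour)
    print(f"utilization {utilization:.0%}: ~${cost_per_chip:.3f} tool cost per chip")
# Halving the utilization doubles the capital cost baked into every chip.
```

Whatever the real numbers are, the structure of the calculation is the same: every idle hour raises the capital cost carried by every chip that does get made.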