Stands to reason...someday they will have no choice BUT to leave the x86 pattern of doing things.
I seriously doubt they will.
There have been several architectures with better designs that have been thumped rather soundly in the market (e.g., PowerPC, MIPS) because they didn't have the huge base of software on them that the x86 does.
Don’t get me wrong: I don’t like the x86 architecture. To me, the x86 architecture is every bit as infuriating as the S/370 architecture was. The only difference was that you could get more useful work done on the 370.
But thanks to Microsoft, there is simply no getting around the fact that there is a huge base of software out there now that needs x86 execution, and users want this code executed *fast*. Look at how many years "Virtual PC" was available on PPC Macs, and the Mac still didn't take off.
Out comes Apple with x86 Macs and full-speed execution of x86 code, and Apple machine sales take off. This is just one example.
Intel was reaching a really hard point when they got to the Pentium D: it was screamin' hot, increases in clock speed were yielding increasingly marginal increases in performance, etc. And along comes the Israeli Intel group, who created what you now see as the "Core" technology: in effect, a RISC architecture for the "real" chip, with an "x86 interpreter" running on top of it. They broke the x86 instruction set away from the actual internal chip architecture and are now effectively emulating it on the chip.
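To make that "interpreter on top of the real chip" idea concrete, here's a toy sketch in Python of the kind of translation the decoder front-end does: a CISC-style memory-operand instruction gets split into simple, fixed-format micro-ops (a load, then a register-register ALU op) that the internal core actually executes. All the names, the instruction format, and the micro-op format here are invented for illustration; the real hardware decoders are nothing this simple.

```python
def decode(instruction):
    """Toy decoder: split an x86-style instruction into RISC-like micro-ops.

    Instruction format (invented for this example): "op dst src",
    where src may be a memory operand written as "[addr]".
    """
    op, dst, src = instruction.split()
    if src.startswith("[") and src.endswith("]"):
        # Memory operand: break it into a load plus a register ALU op,
        # so the internal core only ever sees simple uniform operations.
        addr = src[1:-1]
        return [
            ("LOAD", "tmp0", addr),     # micro-op 1: fetch operand from memory
            (op.upper(), dst, "tmp0"),  # micro-op 2: register-register ALU op
        ]
    # Register-register form is already a single micro-op.
    return [(op.upper(), dst, src)]

print(decode("add eax [ebx+8]"))
# -> [('LOAD', 'tmp0', 'ebx+8'), ('ADD', 'eax', 'tmp0')]
```

The point of the split is exactly what the paragraph above describes: the messy, variable x86 instruction set lives only at the decoder boundary, while everything behind it runs a uniform internal format that Intel is free to redesign from generation to generation.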
Using this kind of thinking, Intel can push the x86 instruction set quite far into the future, and new opcodes will let them simply extend it, as they've already done with 64-bit operations.
Like it or not (and I don’t, but now I’m in a position where I no longer really care), the x86 instruction set is here to stay, probably for a while.
“Stands to reason...someday they will have no choice BUT to leave the x86 pattern of doing things.”
They tried, remember?