“The 16 TB limit is per process, and since it’s approximately 8,000 times larger than the memory in a decently equipped computer today, it provides some serious headroom.”
Something’s not right here. I calculate that 16 TB can be reached with 44 bits. A 64 bit address space spans approximately 18 million TB . . . but that still seems arbitrary and not obviously necessary.
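For anyone who wants to check the arithmetic in this exchange, here is a quick C sketch. The 2 GB figure for a "decently equipped computer today" is my own assumption, plugged in only to show where the roughly-8,000x factor comes from.

    /* Sanity-checking the figures above: 2^44 bytes is 16 TiB, the full
       64 bit space is about 18 million TB (decimal), and 16 TiB is about
       8,000x a machine with 2 GB of RAM (the 2 GB is an assumption). */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        double per_process = (double)(1ULL << 44);        /* 2^44 bytes        */
        double full_space  = 2.0 * (double)(1ULL << 63);  /* 2^64 bytes        */
        double ram_today   = 2.0 * (double)(1ULL << 30);  /* assumed 2 GiB RAM */

        printf("per-process limit : %.0f TiB\n", per_process / (double)(1ULL << 40));
        printf("full 64 bit space : %.1f million TB\n", full_space / 1e12 / 1e6);
        printf("headroom vs. 2 GB : %.0fx\n", per_process / ram_today);
        return 0;
    }

That prints 16 TiB, about 18.4 million TB, and a factor of 8192, which lines up with both sets of numbers quoted above.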
At current prices 16 TB of RAM runs approximately $280,000 . . . also providing some headroom. ;-)
Since RAM cost has been declining according to Moore's Law, I look at "headroom" on a log scale. It took 16 bits to address the 64K memories readily available in the early 1980s, and it takes 32 bits to address the 4 Gig of RAM readily available now, one human generation later. Were Moore's Law to continue in effect for another human generation, that would suggest the need for about 48 bits to address the memory that would then be in common currency. And it would take another generation again to exhaust the full 64 bit address space.
Recall the big issue over the Y2K transition, and it gives one pause about assuming that Moore's Law will break down before we reach 44 bits of address space. Of course I have to admit that it would be no trick at all to go to 128 bit cores long before that - were there a reason - but still. What bang do they get for that buck?
What it means is that your son could very easily see the time when an unnecessary software limitation creates a crisis in the operating system. Which, looked at in that way, is pretty optimistic after all. Why would OS X necessarily last two human generations?
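To put rough numbers on that extrapolation, here is a sketch in C. The assumption baked in is that commodity RAM keeps doubling on its historical schedule: 64K around 1982 to 4 Gig around 2008 is 16 doublings in roughly 26 years, or about one extra address bit every year and a half.

    /* Rough extrapolation: one extra address bit per RAM doubling, at the
       historical rate of ~16 doublings per human generation (an assumption,
       not a law of nature). */
    #include <stdio.h>

    int main(void) {
        double years_per_bit = 26.0 / 16.0;   /* 64K (~1982) -> 4 Gig (~2008) */

        for (int bits = 32; bits <= 64; bits += 4) {
            double year = 2008.0 + (bits - 32) * years_per_bit;
            printf("%2d address bits needed around %.0f\n", bits, year);
        }
        return 0;
    }

By this crude math, 48 bits runs out in the mid-2030s and all 64 bits around 2060, give or take: the "two human generations" in question.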
“Which, looked at in that way, is pretty optimistic after all. Why would OS X necessarily last two human generations?”
Exactly. My point was that 16 TB of memory per process represents considerably more room for expansion than we’ve had in previous memory addressing jumps. Many other things are more likely to be an issue than this limitation.
You’re thinking like a software user.
The real limitation here is in the chips where the memory controller and MMU come together.
Carrying around a whole lot of extra bits that cannot possibly be used (and wouldn't be - because there quite simply are no applications that demand that much memory - yet) costs you everywhere: a wider address bus, a slower bus, bigger MMU internal data structures, you name it. So the designers strike a balance here — how many bits of physical address are really needed in, oh, say, the next 10 years?
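As a concrete illustration of that balance, x86-64 chips will tell you how many address bits they actually implement. Here is a small C sketch (assuming GCC or Clang on an x86-64 machine): CPUID leaf 0x80000008 reports the physical address width in the low byte of EAX and the virtual width in the next byte, and on desktop parts of this era both come back well short of 64.

    /* Ask the CPU how many address bits it really implements (x86-64,
       GCC/Clang). Typical answers are far below 64 -- that is the balance
       the hardware designers struck. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0x80000008 not available\n");
            return 1;
        }
        printf("physical address bits: %u\n", eax & 0xff);        /* often 36-40 */
        printf("virtual address bits : %u\n", (eax >> 8) & 0xff); /* typically 48 */
        return 0;
    }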
I'm not a programmer or an engineer, so please forgive me if any of this is misinformed, over-simplistic, or just plain stupid.
The difference between pre-Y2K and today is that a programmer is less likely to hard-code limitations into critical software -- such limits would live in subroutines or at the OS level, making it much easier to update or port a program than it was with the old, patched-to-the-Nth-degree, and mostly undocumented COBOL and FORTRAN code.
Another difference is that we might have actually learned from Y2K (stranger things have happened), and databases have come a long way. If the critical data is stored in some standardized form, it would be a lot easier to move to another program or platform, even running the old and new systems in parallel to make the switch smoother.
And finally, virtualization is a pretty mature technology. It's easier now than before to run old software in its own little sandbox while making a transition to the new hotness. Bringing it back to Apple, this is something they're old hands at -- 680x0 emulation on PPC, PPC emulation on Intel, and Classic on OS X all made those transitions shockingly smooth.
“What it means is that your son could very easily see the time when an unnecessary software limitation creates a crisis in the operating system. Which, looked at in that way, is pretty optimistic after all. Why would OS X necessarily last two human generations?”
I guess my point is that it's a lot more modular than it used to be. OS X might not be around in two generations, just like few modern-day admins have even seen the big iron the Internet was built on. But TCP/IP survives, and if you get a couple of beers in a cranky old-timer, he'll start ranting about how "Web 2.0" is really just telnet 5.0, or gopher with pictures. Or, for that matter, that it's all just an extension of the telegraph, which was a packet-switched digital network before the telephone gummed things up with all that analog stuff.
Why _wouldn't_ it??
You DO realize, don't you, that the core of OS X, that is to say Unix, is two generations old already (39 years), and is not only going strong, but expanding?
And that the so-called "Unix Year-2038 Problem" only has meaning because of the general assumption that Unix operating systems will be around for ANOTHER 30 years?? Your statement suggests that you could review the history of operating systems a little more, or at least acknowledge that OS X is a great modern GUI over an already mature OS foundation.
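For what it's worth, the 2038 deadline itself is easy to pin down: a signed 32-bit time_t simply runs out of seconds. A minimal C illustration:

    /* The Unix Year-2038 problem in miniature: a signed 32-bit time_t
       counts seconds since 1970-01-01 and tops out at 2^31 - 1. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        time_t last_second = (time_t)INT32_MAX;   /* 2,147,483,647 seconds */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&last_second));
        printf("a 32-bit time_t overflows right after %s\n", buf);
        return 0;
    }

That lands on 2038-01-19 03:14:07 UTC, which only matters if you expect Unix systems to still be running then. Apparently we do.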
My personal opinion is that OS X, as the most widespread version of Unix in current use, might last two generations all by itself, if Steve Jobs or his successor lets it. OS X is far and away the most successful and complete Unix OS developed to date, and its numbers continue to grow at a prodigious rate, so it could well do so.
OTOH, will it overtake Windows? Nah, I doubt it. But Windows carries the seeds of its own destruction, and is already beginning to fall after one generation. OS X (or whatever grows from it) only needs to still be there when Windows eventually crumbles on its own, which incidentally I think is no more than 5-6 years ahead, well within OS X's likely lifespan.