To: 2 Kool 2 Be 4-Gotten

One of the reasons why Linus won that debate was that the state of the MMU on x86 chips back then was... crude by today’s standards.

Where I used Neutrino was on MIPS R4K or similar architectures, which had an MMU that was light years ahead of what was available on x86 even in the late 90’s.

If you’re working on a mapped memory system and you don’t have a really good MMU which can do *fast* mappings, the micro-kernel idea falls down fast. If you’re working on a mapped memory system and you don’t have enough TLB entries... the micro-kernel idea falls down fast too. If you’re going to do anything other than trivial IO in the system, you really have to solve the problem of how you get data from one user process memory space to another, FAST.

Too many system designers are used to copying up out of kernel-mapped pages into user space. You have to start thinking in terms of simply re-mapping pages, and being clever about how you do it. You have to start being really clever about how to handle the sync/scheduling issues too, which ultimately gets down to the IPC mechanism. If you don’t have a very good IPC design that really addresses what people want to do with it... the whole thing implodes pretty quickly.

I saw some commercial micro-kernels from Europe (France, actually) that looked oh-so-groovy on paper... and then when you went to use them, you found out that their “fast” IPC mechanism, while it looked slick on paper, had almost no actual uses in our application. Most everything in the real world needed to use their “slow” IPC mechanism, and the implementation just folded under the IO load in seconds.

Back then, an embedded system with 4MB was a BFD. Today, feh. 4GB isn’t unheard of in embedded systems. So today, having 4KB pages, or 8KB pages, and re-mapping a whole page for one byte of actual transfer... who cares? Why copy? Get the re-mapping as fast as possible and suddenly the IPC mechanism starts to scale. Back then, people weren’t willing to re-map pages for a couple of bytes - or even a TCP ACK packet.
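To make the “why copy? just re-map” point concrete, here’s a rough sketch in plain C. It uses POSIX shared memory as a stand-in for what a micro-kernel would do with page-table remapping - it’s an analogy, not Neutrino’s actual kernel interface, and names like SHM_NAME and PAGE_SZ are just made up for the example. The payload lands in a page both processes can see, so the only thing that has to cross the IPC channel is a tiny “data’s ready” message.

```c
/* Rough sketch of the "remap, don't copy" idea using plain POSIX calls.
 * This is only an analogy for the micro-kernel page-remapping trick, not
 * any real kernel's interface; SHM_NAME and PAGE_SZ are made up here.
 * (Link with -lrt on older Linux systems.)
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/ipc_demo_page"   /* assumed name, for illustration only */
#define PAGE_SZ  4096               /* one page: the unit being "remapped" */

int main(void)
{
    /* Producer side: create one page of shared memory and map it in. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, PAGE_SZ) < 0) {
        perror("shm setup");
        return 1;
    }
    char *page = mmap(NULL, PAGE_SZ, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* The payload goes straight into the page the consumer will map.
     * The only thing that ever needs to travel over the IPC channel is
     * a tiny "data ready, length N" notification - no bulk copy. */
    const char *msg = "one byte or one page, same cost once it's mapped";
    memcpy(page, msg, strlen(msg) + 1);

    /* A consumer process would shm_open(SHM_NAME, O_RDWR, 0), mmap the
     * same page, and read the data in place once it gets the notice. */

    munmap(page, PAGE_SZ);
    close(fd);
    shm_unlink(SHM_NAME);
    return 0;
}
```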

So while Linus won that debate with Tanenbaum in, what, the early 90’s, the chips are different today. Today, we have systems which are commercially deployed which are based on real micro-kernels. One of the things that can be done in system architecture to improve this would be to push more and more functionality off the main CPU and into attached task-specific processors - GPU’s, IO processors, etc. This is how IBM viewed the kernel in the early 70’s, fer cryin’ out loud (insert another round of harrumphing on my part here... VM/370 could be viewed as a forerunner of the micro-kernel idea if we want...).

Instead, we had these micro-computer kiddies too (*&(*& cheap to up and use anything other than the CPU for everything. Witness the early Macintosh systems - they used the 68K, unmapped, for everything. Video, sound, disk IO, serial IO, you name it. Well, that was OK when systems were small. The system programming on those early Mac OS’s was crude, to say the least. When I was hacking Macs back then, there was nothing akin to a semaphore or event flag. You had to do everything in loops - you’d do as much IO or whatever as you could, then stick your driver or task on an event/IO/interrupt/VBL loop queue, and wait to be called again. It was absurd.

I think that packet IO off the ‘net should be handled by a CPU on the Ethernet card, for example. Why should I have to worry about maintaining a TCP connection in the kernel? Tell the card “Hey, doofus... I want to set up a pipe between you and this here IP address... step and fetch and tell me when you’re done!” And the card should handle all the retries, windowing, buffering, etc. I should just have an in-order stream of data and a series of IPC’s coming at me to tell me that my data is done, whichever way it was going. Same deal for screen data, same deal for sound, disks, etc.
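Here’s a hypothetical sketch of what that “hey, doofus” interface to a smart Ethernet card might look like. None of this is a real NIC or driver API - every struct and function name (nic_cmd, nic_completion, nic_submit, nic_wait) is invented just to show the command/completion-queue shape of the idea, with the card keeping all the TCP state and the host only seeing in-order data plus completion events.

```c
/* Hypothetical "TCP lives on the card" interface.  Every name here is
 * invented for illustration; the submit/wait functions are stubbed so
 * the example compiles and runs on its own. */
#include <stdint.h>
#include <stdio.h>

/* Host -> card command: "set up a pipe between you and this IP address". */
struct nic_cmd {
    uint8_t  opcode;        /* e.g. OPEN_STREAM, SEND, CLOSE                */
    uint32_t remote_ip;     /* peer address                                 */
    uint16_t remote_port;
    uint64_t buf_addr;      /* buffer the card delivers in-order bytes into */
    uint32_t buf_len;
};

/* Card -> host completion: one notification per finished chunk, already
 * in order.  Retries, windowing, and buffering all stay on the card.     */
struct nic_completion {
    uint32_t stream_id;
    uint32_t bytes_ready;   /* contiguous, in-order bytes now in the buffer */
    int32_t  status;        /* 0 = ok, negative = connection trouble        */
};

enum { OPEN_STREAM = 1 };

/* Stubbed "doorbell" operations - in a real design these would poke the
 * card's command queue and block on its completion queue.                */
static int nic_submit(const struct nic_cmd *cmd)
{
    printf("card: opening stream to %u:%u\n",
           (unsigned)cmd->remote_ip, (unsigned)cmd->remote_port);
    return 0;
}

static int nic_wait(struct nic_completion *out)
{
    /* Pretend the card just finished assembling 1460 in-order bytes. */
    out->stream_id = 1;
    out->bytes_ready = 1460;
    out->status = 0;
    return 0;
}

int main(void)
{
    char buf[4096];
    struct nic_cmd open_cmd = {
        .opcode      = OPEN_STREAM,
        .remote_ip   = 0x0A000001,              /* 10.0.0.1, for illustration */
        .remote_port = 80,
        .buf_addr    = (uint64_t)(uintptr_t)buf,
        .buf_len     = sizeof buf,
    };

    nic_submit(&open_cmd);      /* "step and fetch..."                      */

    struct nic_completion done;
    nic_wait(&done);            /* "...and tell me when you're done"        */
    printf("host: %u in-order bytes ready on stream %u\n",
           (unsigned)done.bytes_ready, (unsigned)done.stream_id);
    return 0;
}
```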

Today, OS X is... sorta a micro-kernel and it works OK. WinNT *could* have been taken down the road of being a micro-kernel, if Microsoft hadn’t had cranial/rectal compaction. Unix systems can go there, certainly. But it gets really messy when you go halfway... and that’s what I’m getting at. What we see is a whole lot of systems that don’t “get” the micro-kernel “gestalt”, if you will, and they keep failing to solve their problems outside priv’ed memory space or kernel access. It takes a bunch of work and thinking to break with the monolithic kernel ideas of the last 30+ years and say “No, we’re not putting this into kernel space... let’s find a way to do it in user space.”

It really is hard, lemme tell you. The temptation to just throw up your hands, look at the schedules and slap it into the kernel is terrific.
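For what “do it in user space” looks like in the small, here’s a toy sketch: an unprivileged server process that owns some resource and services requests over an ordinary IPC channel. A POSIX message queue stands in for a real micro-kernel channel, and the queue name and request format are made up for the example - the point is only that the whole service loop runs with no kernel module and no privileged code.

```c
/* Toy sketch of a user-space service: own a resource, take requests over
 * an ordinary IPC channel, answer them, never touch kernel space.  The
 * POSIX message queue is just an analogy for a micro-kernel channel;
 * QUEUE_NAME and the request format are invented for this example.
 * (Link with -lrt on Linux.) */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

#define QUEUE_NAME "/userspace_svc"     /* assumed name */

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDONLY, 0600, &attr);
    if (q == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    /* Classic server loop: block for a request, service it entirely in
     * this process, then go back for the next one. */
    char req[128];
    for (;;) {
        ssize_t n = mq_receive(q, req, sizeof req, NULL);
        if (n < 0) {
            perror("mq_receive");
            break;
        }
        printf("serviced %zd-byte request in user space: %.*s\n",
               n, (int)n, req);
    }

    mq_close(q);
    mq_unlink(QUEUE_NAME);
    return 0;
}
```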


32 posted on 06/29/2011 4:25:16 PM PDT by NVDave


To: NVDave

Great post Dave. I co-op’ed at IBM for a summer and then some. Worked on VM on the 370 - or I think it might have been MVS? or something - can’t remember exactly. Had to have JCL to run jobs - the whole thing. It was a great experience - frustrating at first, but one soon got the hang of it. Sounds like there is a lot of back to the future going on. Ah, but such is life.

I remember in my computer graphics class at grad school they talked about the cycle of reinvention, or some such: how graphics processing cycles between being done all on the main CPU (to minimize bus traffic) and being offloaded to a GPU (computationally more efficient), and how this is as predictable as the sun rising and setting.

It sounds like we’re sort of saying the same thing here, but generalized to other devices like MMU’s, I/O devices and such, with the further twist that it ripples through into OS architecture. I know people talk about IBM’s channel processors reverentially - that the beauty of big iron was the ability to do industrial amounts of I/O and do it independently of the CPU.

I think the points that I can take away from this are:

1. Hardware drives software and has since the beginning of time.
2. OS’s can and do evolve in response to #1 but it takes time because you don’t revamp an OS overnight.
3. Just because something looks good on paper doesn’t mean it will fly.

Should be interesting to see where this is all headed. And where multicore fits into all of this.


35 posted on 06/30/2011 5:39:47 AM PDT by 2 Kool 2 Be 4-Gotten (Welcome to the USA - where every day is Backwards Day!)
