Free Republic
Browse · Search
General/Chat
Topics · Post Article

To: NVDave
The reason the Mac GUI is good is that they stole it from Xerox. The Star interface was cleaner than any I have seen since (circa 1985). The only problem with the Star system was that it ran on an 8-bit system and the OS was way too much for the hardware. I told the Xerox engineers they needed a 32-bit system and they just looked at me like I was from Mars.
16 posted on 05/03/2010 12:32:51 PM PDT by sleepwalker (Palin 2012)


To: sleepwalker

Well, I have to clear up some confusion for you.

1. Apple didn’t “steal” the Xerox GUI from Xerox. Tesler went to work for Apple in 1980 - in other words, a person from Xerox who had been working on the Altos and D-machines went to work with the GUI ideas in his head. He worked on the Lisa project, where they implemented a bunch of ideas drawn not so much from the Star/D-machines’ desktop as from the SmallTalk-80 environment.

That said, Apple did a LOT of work on the UI themselves. The SmallTalk-80 UI had a three-button mouse; the Star desktop had a two-button mouse. Apple decided this was too confusing, and they worked to get it down to a one-button mouse. The Xerox interface also had a lot more “modal dialogs” than the Apple software did. Apple hammered on developers pretty hard to avoid modal dialog boxes in the UI.

Apple took more than just the GUI ideas from Xerox. AppleTalk was lifted from XNS rather directly and without shame. It is funny how no one ever seems to mention that Xerox might have had a stronger case for IP theft in AppleTalk than in the UI.

2. The Star didn’t run on an 8-bit system - it was a custom CPU made from four AMD 2901 bit-slice processors, each of which processed four bits. You could say that 4x4=16 bits in an instruction word, but that’s not exactly so. It had a cycle time that would equate to about 8 MHz, and a “soft” word size. It is difficult to describe how to make a CPU out of bit-slice processors to someone without a background in computer architecture - but I’ll try to distill it down.
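To make the slice-chaining idea concrete, here’s a minimal sketch in Python - purely illustrative, and the names `alu_slice_add` and `add16` are mine, not anything from the 2901 data sheet - of how four 4-bit slices can be chained through their carry lines into one 16-bit adder:

```python
def alu_slice_add(a_nib, b_nib, carry_in):
    """One 4-bit 'slice' of an adder, like a single bit-slice ALU chip.

    Takes two 4-bit operands plus a carry in; returns the 4-bit sum
    and the carry out that feeds the next slice up the chain.
    """
    total = a_nib + b_nib + carry_in
    return total & 0xF, total >> 4

def add16(a, b):
    """Chain four 4-bit slices into a 16-bit adder via the carry line."""
    result, carry = 0, 0
    for i in range(4):                      # slice 0 = low nibble
        nib, carry = alu_slice_add((a >> (4 * i)) & 0xF,
                                   (b >> (4 * i)) & 0xF,
                                   carry)
        result |= nib << (4 * i)
    return result & 0xFFFF                  # drop the final carry out
```

The point is that the “16-bit” behavior isn’t in any one chip - it emerges from how the microcode wires the slices together, which is why the effective word size is “soft.”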

You use the bit-slice processors to implement what you want in each instruction - in effect, you’re using the bit-slice chips, which are controlled via a microcode program, to create your instruction set and architecture. Writing microcode is sometimes likened to writing assembly language - and that’s usually wrong. Writing microcode is like writing the assembly code for each instruction; you need to explicitly do things like set up the address bus, lock the address, fetch the contents, release the bus, etc. You can’t just say “load ‘source address’, r0.”
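As a rough illustration of the difference, here’s a toy Python model - all names hypothetical; a real micro-engine is horizontal microcode driving latches, not method calls - of what a single assembly-level “LOAD r0, addr” might expand to as explicit micro-operations:

```python
class ToyMachine:
    """A toy machine state: just enough to show micro-steps."""
    def __init__(self, memory):
        self.memory = memory      # address -> value
        self.mar = 0              # memory address register
        self.mdr = 0              # memory data register
        self.bus_locked = False
        self.regs = {"r0": 0}

    # --- each method below stands in for one micro-operation ---
    def set_address_bus(self, addr): self.mar = addr
    def lock_bus(self):              self.bus_locked = True
    def fetch_contents(self):        self.mdr = self.memory[self.mar]
    def release_bus(self):           self.bus_locked = False
    def latch_register(self, r):     self.regs[r] = self.mdr

def micro_load(m, reg, addr):
    """What one assembly-level 'LOAD reg, addr' costs at the micro level."""
    m.set_address_bus(addr)   # drive the address onto the bus
    m.lock_bus()              # claim the bus for this transfer
    m.fetch_contents()        # memory responds into the data register
    m.release_bus()           # free the bus for other users
    m.latch_register(reg)     # finally, move the data into the register
```

One assembler mnemonic, five explicit steps - and every instruction in the architecture has to be spelled out at that level.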

At cisco, we used bit-slice chips to implement the AGS+ packet controller, which allowed us to “fast switch” packets from one line card to another - doing ACLs and routing during the switch. Our “word” was 80 bits wide, and was actually two sub-words, each with 8 bits of instruction and 32 bits of address pointing to an input or output packet. Code to do relatively straightforward stuff, even stuff that would be straightforward in 68K assembly language, went on for page after page in microcode.

But it ran like a raped ape.

The reason the Xerox engineers looked at you like you were from Mars is that at the time, 32-bit systems were the province of mainframes and super-minis like the VAX. You weren’t about to get 32 bits into a desk-side system for the approximately $16K that the Star (8010) sold for. Motorola had released the 68000 CPU in 1979, and that was a huge leap forward, but even it did only 24 bits of addressing. The 68K was slow, though, and it lacked the “bitblt” type of functionality that could be made to work in bit-slice CPU microcode. A bitblt was necessary to make the GUI move along at a reasonable rate. The first GUI on a microcomputer that could do the sorts of things the D-machines could was the Amiga - and it had a bitblt co-processor chip.
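For those who haven’t run into it: a bitblt (“bit block transfer”) copies a rectangular block of pixels between bitmaps, shifting and masking when the block isn’t aligned. A minimal Python sketch of the idea, assuming a 1-bit-per-pixel framebuffer with each scan line packed into one integer (the function name and layout are my own simplification, not the Alto/Amiga implementation):

```python
def bitblt(src, dst, sx, sy, w, h, dx, dy):
    """Copy a w-pixel-wide, h-row block from src to dst.

    src/dst: framebuffers as lists of ints, one per scan line,
    with bit 0 as the leftmost pixel. (sx, sy) is the source
    origin, (dx, dy) the destination origin.
    """
    mask = (1 << w) - 1
    for row in range(h):
        bits = (src[sy + row] >> sx) & mask    # extract the source span
        dst[dy + row] &= ~(mask << dx)         # clear the destination span
        dst[dy + row] |= bits << dx            # shift and merge it in
```

Doing that shift-mask-merge per scan line in a software loop is exactly what made a plain 68K crawl when dragging windows - and why hardware (or microcode) bitblt mattered so much.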

3. In the early 90’s, I got the opportunity to help my girlfriend (now wife), who was working at Xsoft, with their X.25 software - and as a result, I got to crawl around inside the D-machines’ networking software, the Mesa2 compiler, etc. It was a slick, slick system for the early 80’s, but by the early 90’s it was clear that it wasn’t going to scale. Mesa was a far better system programming and implementation language than C, C++ or any C variant ever was, is, or will be. Alas, Xerox just could never see the point of releasing the language spec for Mesa to the world. The closest language I could give you for reference is Modula-2 - which is because Wirth spent a summer at PARC, got to see Mesa, and it completely changed his thinking away from toy languages like Pascal.

Xerox people by then (the early 90’s) were migrating to Apple in ever larger numbers, and with these people went years of experience in GUIs, internationalization, fonts, etc.


28 posted on 05/03/2010 5:06:15 PM PDT by NVDave

FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson