Posted on 12/17/2001 4:33:52 AM PST by damnlimey
Rethinking 'Software Bloat'
Fred Langa takes a trip into his software archives and finds some surprises--at two orders of magnitude. By Fred Langa
Reader Randy King recently performed an unusual experiment that provided some really good end-of-the-year food for thought:

I have an old Gateway here (120 MHz, 32 Mbytes RAM) that I "beefed up" to 128 Mbytes and loaded with--get ready--Win 95 OSR2. OMIGOD! This thing screams. I was in tears laughing at how darn fast that old operating system is. When you really look at it, there's not a whole lot missing from later operating systems that you can't add through some free or low-cost tools (such as an Advanced Launcher toolbar). Of course, Win95 is years before all the slop and bloat was added. I am saddened that more engineering for good solutions isn't performed in Redmond. Instead, it seems to be a "code fast, make it work, hardware will catch up with anything we do" mentality.

It was interesting to read about Randy's experiment, but it started an itch somewhere in the back of my mind. Something about it nagged at me, and I concluded there might be more to this than meets the eye. So, in search of an answer, I went digging in the closet where I store old software.

Factors Of 100

When Windows 3.0 shipped, systems typically operated at around 25 MHz or so. Consider that today's top-of-the-line systems run at about 2 GHz. That's roughly two orders of magnitude--about 100 times--faster. But today's software doesn't feel 100 times faster. Some things are faster than I remember in Windows 3.0, yes, but little (if anything) in the routine operations seems to echo the speed gains of the underlying hardware. Why?

The answer--on the surface, no surprise--is in the size and complexity of the software. The complete Windows 3.0 operating system was a little less than 5 Mbytes total; it fit on four 1.2-Mbyte floppies. Compare that to current software: today's Windows XP Professional comes on a setup CD filled with roughly 100 times as much code, a little less than 500 Mbytes total.

That's an amazing symmetry. Today, we have a new operating system with roughly 100 times as much code as a decade ago, running on systems roughly 100 times as fast as a decade ago. By themselves, those "factors of 100" are worthy of note, but they raise the question: Are we 100 times more productive than a decade ago? Are our systems 100 times more stable? Are we 100 times better off?

While I believe that today's software is indeed better than that of a decade ago, I can't see how it's anywhere near 100 times better. Mostly, that two-orders-of-magnitude increase in code quantity is not matched by anything close to an equal increase in code quality. And software growth without obvious benefit is the very definition of "code bloat."

What's Behind Today's Bloated Code?

Very little of today's code is hand-crafted in lean, low-level assembly language. Instead, most of today's software is produced with high-level programming languages that often include code-automation tools, debugging routines, the ability to support projects of arbitrary scale, and so on. These tools can add an astonishing amount of baggage to the final code.

This real-life example from the Association for Computing Machinery clearly shows the effects of bloat: A simple "Hello, World" program written in assembly comprises just 408 bytes. But the same "Hello, World" program written in Visual C++ takes fully 10,369 bytes--that's 25 times as much code! (For many more examples, see http://www.latech.edu/~acm/HelloWorld.shtml. Or, for a more humorous but less-accurate look at the same phenomenon, see http://www.infiltec.com/j-h-wrld.htm. And, if you want to dive into assembly-language programming in any depth, you'll find this list of links helpful.)
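A minimal sketch of how a reader could reproduce that kind of size comparison today, assuming a GCC/Unix-style toolchain (the file name, compiler flags, and commands below are illustrative assumptions, not something from Fred's column):

/* hello.c -- the same one-line "Hello, World" the ACM example measures */
#include <stdio.h>

int main(void)
{
    printf("Hello, World\n");  /* the only line of useful work */
    return 0;                  /* everything else in the binary is language and runtime overhead */
}

Build it two ways and compare the results, for example "gcc hello.c -o hello" versus "gcc -Os -s hello.c -o hello_small", followed by "ls -l hello hello_small". The default build typically carries noticeably more baggage (symbol tables and other metadata) than the size-optimized, stripped one, and both dwarf a hand-written assembly version; exact byte counts vary by compiler, platform, and runtime library, but the pattern Fred describes holds.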
Human skill also affects bloat. Programming is wonderfully open-ended, with a multitude of ways to accomplish any given task. All the programming solutions may work, but some are far more efficient than others. A true master programmer may be able to accomplish in a couple lines of Zen-pure code what a less-skillful programmer might take dozens of lines to do. But true master programmers are also few and far between. The result is that code libraries get loaded with routines that work, but are less than optimal. The software produced with these libraries then institutionalizes and propagates these inefficiencies.

You And I Are To Blame, Too!

Take Windows. That lean 5-Mbyte version of Windows 3.0 was small, all right, but it couldn't even play a CD without add-on third-party software. Today's Windows can play data and music CDs, and even burn new ones. Windows 3.0 could only make primitive noises (bleeps and bloops) through the system speaker; today's Windows handles all manner of audio and video with relative ease. Early Windows had no built-in networking support; today's version natively supports a wide range of networking types and protocols. These--and many more built-in tools and capabilities we've come to expect--all help bulk up the operating system.

What's more, as each version of Windows gained new features, we insisted that it also retain compatibility with most of the hardware and software that had gone before. This never-ending aggregation of new code atop old eventually resulted in Windows 98, by far the most generally compatible operating system ever--able to run a huge range of software on a vast array of hardware. But what Windows 98 delivered in utility and compatibility came at the expense of simplicity, efficiency, and stability.

It's not just Windows. No operating system is immune to this kind of featuritis. Take Linux, for example. Although Linux can do more with less hardware than can Windows, a full-blown, general-purpose Linux workstation installation (complete with graphical interface and an array of the same kinds of tools and features that we've come to expect on our desktops) is hardly what you'd call "svelte." The current mainstream Red Hat 7.2 distribution, for example, calls for 64 Mbytes of RAM and 1.5-2 Gbytes of disk space, which also happens to be the rock-bottom minimum requirement for Windows XP. Other Linux distributions ship on as many as seven CDs. That's right: Seven! If that's not rampant featuritis, I don't know what is.

Is The Future Fat Or Lean?

But there are signs that we may have reached some kind of plateau with the simpler forms of code bloat. For example, with Windows XP, Microsoft has abandoned portions of its legacy support. With fewer variables to contend with, the result is a more stable, reliable operating system. And over time, with fewer and fewer legacy products to support, there's at least the potential for Windows bloat to slow or even stop.

Linux tends to be self-correcting. If code bloat becomes an issue within the Linux community, someone will develop some kind of a "skinny penguin" distribution that will pare away the needless code. (Indeed, there already are special-purpose Linux distributions that fit on just a floppy or two.)

While it's way too soon to declare that we've seen the end of code bloat, I believe the signs are hopeful. Maybe, just maybe, the "code fast, make it work, hardware will catch up" mentality will die out, and our hardware can finally get ahead of the curve.
Maybe, just maybe, software inefficiency won't consume the next couple orders of magnitude of hardware horsepower.

What's your take? What's the worst example of bloat you know of? Are any companies producing lean, tight code anymore? Do you think code bloat is the result of the forces Fred outlines, or is it more a matter of institutional sloppiness on the part of Microsoft and other software vendors? Do you think code bloat will reach a plateau, or will it continue indefinitely? Join in the discussion!
'Course, XP blows that away.
This isn't more fodder. This is one of the reasons I am an MS hater. The other is that, up until Windows 2000, application software could crash and lock up the OS. That is inexcusable.
A long time ago we had text-based BASIC, which allowed anyone to write great, fast, and useful programs to solve all kinds of problems, including astronomical, engineering, and technical problems of every sort.
HP BASIC in particular was extremely rich in commands that made just about anything easy to program and output.
Where did it go?
With today's machines, that interpreted language would be incredibly fast. And useful.
Why is there no current version?
(That I know of...)
Frankly, there's darn little performance difference I can detect between the two machines. In fact, I suspect I could do a blind test with a dozen random users and have a 50-50 distribution of accurate guesses as to which was the "fast" machine and which was the "dog".
I used to think my 8-MHz "Turbo" XT w/640 KB RAM was the bee's knees (compared to the 4-MHz (and slower!) 8-bit, 64-KB iron I'd cut my teeth on). Then I got my 10- (or was it 12-?) MHz 286, and I was pickin' bugs from my teeth every time I got up from the keyboard. That sucker was fast.
Now, it seems that the iron has gotten so much faster than the apps that it's a "paper competition" with little real-world meaning for the vast majority of users.
Who needs the super iron? I see two classes of users, and for one class, the term "need" is applied in the loosest of all possible senses. The two classes are "gamers", and "network admins".
The only time I start feeling "cramped" on my machine is when I'm running multiple concurrent major apps, i.e., one or two instances of Visual Studio (running an app or two), SQL Server, IIS, and IE. IOW, when I'm doing that, I'm essentially running a whole network in one cramped little box. Most people don't do that.
To chime in on the author's theme, it wasn't that long ago (at least not at the rate that the years seem to keep peeling by at my age) that a 10- to 16-MHz 286/386-class machine with 1-3 Mbytes of XMS memory was a high-end network server, and cost a pile of money. Nowadays, we've got secretaries using machines that would have cost millions (and occupied rooms) a few years ago -- as glorified typewriters.
So, my two cents is that the "bloatware" thing is overblown. When 128 megs of RAM costs less than fifty bucks, and a 60 gig hard drive costs a buck a gig (I remember paying $275 for a 20 megabyte drive -- wholesale!), and no mix of OS and apps comes anywhere near taxing the capabilities for 99% of the users, "bloatware" is a non-issue.
Two things: one, there's nothing preventing you from deploying character-mode apps to an NT/2K/XP platform, and two, if a GUI-based data entry app has worse usability than a character-mode counterpart, it's the programmer's fault, not the GUI's. Granted, too many people do little more than drag and drop textboxes and then bind them to fields, but that's their fault. I can drive my car into a brick wall. If I do, that's not an indictment of Toyota.
For more than 10 years I have watched with amusement as PCs that are much faster than the old ones take LONGER to boot up -- typically, a 4.77-MHz PC XT vintage 1986 would be fully ready to go in a few seconds, and the current models are more than 400 times as fast and take nearly 10 times as long because they are doing roughly 4,000 times as much computational work!
But the REAL problem with software bloat is not the slowness, it is the complexity which makes applications almost impossible to properly debug. NOBODY I know, and I know a LOT of computer types, makes any attempt to fix Microsoft-related errors themselves as they would with Unix or Linux, nor do they bother trying to get Microsoft to fix them because it just won't happen; instead they just shrug, reboot, and work around. A certain level of "Your program has performed an illegal operation and will be shut down" and a (lower) level of total freeze-ups and blue screens of death are simply accepted as a tolerable inconvenience.
But every single time this happens, there are one or more theoretically identifiable HUMANS who made specific MISTAKES that could be tracked down and blamed on them. The practical difficulties of this are sufficient that most of us are willing to simply let them be condemned to hand-simulate the infinite loops of their own programs in programmers' hell after they pass on.
This is an interesting "discovery." I guess the author is saying that the latest h/w resources allow the older OS to provide better-than-ever performance because older OSes don't have the unneeded overhead.
I hate unneeded overhead anyway.
Russ
But there NEEDS to be another Hardware solution...
Simultaneous calls and virtual multiple clocks or something...
Then it'll all work!
Something big, yes sir... that's it!
Ahh, the good ole days..