Free Republic

To: Myrddin
I'd have to say that with Unix systems, the situation you describe, having to preserve data on an established system, is much easier to deal with. All you have to do to keep most user-customized stuff is back up /home to an external (or just another internal) drive. Even if you have one of those really unusual situations where you find you're better off wiping the box and re-installing from scratch, all you have to do once the OS load is done is copy /home back.
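Something like this is all it takes. Just a rough sketch in Python, and the /mnt/backup mount point is only a placeholder for wherever your backup drive actually lives:

#!/usr/bin/env python3
# Rough sketch: mirror /home onto another drive before a wipe-and-reload.
# Assumes rsync is installed; /mnt/backup is a made-up mount point.
import subprocess

SRC = "/home/"             # trailing slash: copy the contents, not the directory itself
DST = "/mnt/backup/home/"  # hypothetical mount point for the backup drive

# -a preserves permissions, ownership, timestamps and symlinks;
# --delete keeps the mirror exact on repeat runs.
subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)

# After the fresh OS install, run the same command with SRC and DST swapped
# to put everything back where it was.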

When I went from the 2.4 to the 2.6 kernel on my desktop, rather than just do an upgrade in place, I opted to reload because the /dev filesystem changed in such a major way. Since my /home partition was a separate physical drive, I simply told the installer to leave that drive alone while wiping the rest of my stuff. Once the install was complete, everything just worked. All my data was exactly where I left it, and all my preferences were maintained. That freaking rocks, IMO.

73 posted on 10/04/2007 12:31:21 PM PDT by zeugma (Ubuntu - Linux for human beings)


To: zeugma
I configured the production systems at PacBell for rapid recovery as well. System, applications and user data were carefully separated. I could recover the root filesystem from a mirror image in under 2 minutes. That was a traditional UNIX system, before the world of shared libraries became common.

Today, I have embedded Linux running from FLASH disks on my rail cars. Disk space is at a premium. The kernel is custom generated for minimum size...just the features required to do the job. Executables are examined with "ldd" to identify the minimum necessary set of shared libraries. Those libraries go on the disk in the correct spots to support the application. Boot time from power on is under 15 seconds to fully operational.
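That ldd pass is easy to script. Here's a rough sketch of that step in Python; the binary path and staging directory are made up, and the dynamic loader itself still has to be copied over separately:

#!/usr/bin/env python3
# Rough sketch of the "ldd" step: find the shared libraries one executable needs
# and copy them into a staging tree that mirrors where they live on the real root.
import shutil
import subprocess
from pathlib import Path

BINARY = "/opt/railcar/bin/controller"   # hypothetical application executable
STAGING = Path("/tmp/flash-root")        # tree that will be written to the FLASH disk

out = subprocess.run(["ldd", BINARY], capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    # Typical ldd line: "libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)"
    parts = line.split()
    if "=>" in parts and parts.index("=>") + 1 < len(parts):
        lib = parts[parts.index("=>") + 1]
        if lib.startswith("/"):
            dest = STAGING / lib.lstrip("/")
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(lib, dest)  # same spot on the target as on the host
# Note: the loader (e.g. /lib64/ld-linux-*.so.*) shows up without "=>" and must
# be copied to the staging tree by hand.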

Once those FLASH images are deployed, the fun begins. Debian or Fedora patches are coming out on a regular basis. The compiler, headers, libraries and kernel on the development system keep getting updated. The next time the application tree gets recompiled... oops. The libraries underneath have changed. You can't just replace the applications, because many of them require a later version of the shared libraries. In Windows parlance, welcome to DLL Hell. Further, when the kernel changes, there are special kernel modules required to support the interrupt-driven I/O processes. They must be built against the exact kernel build tree that will be used for the execution environment.
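A quick way to see that constraint is to compare a module's vermagic string against the kernel that's actually running. Just a sketch; the .ko path is made up:

#!/usr/bin/env python3
# Rough illustration of why modules must match the kernel exactly:
# compare a module's "vermagic" against the running kernel release.
import platform
import subprocess

MODULE = "/opt/railcar/modules/io_irq.ko"   # hypothetical out-of-tree module

out = subprocess.run(["modinfo", "-F", "vermagic", MODULE],
                     capture_output=True, text=True, check=True)
built_for = out.stdout.split()[0]   # e.g. "2.6.22.9-custom"
running = platform.release()        # kernel the box is actually booted on

if built_for != running:
    print(f"module built for {built_for}, kernel is {running}: "
          "rebuild against the matching kernel build tree before deploying")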

Linux, QNX and Windows are equally painful when working with embedded systems. Trying to incrementally support a bunch of embedded Linux systems while the Linux world continues to morph underneath isn't particularly pleasant. A desktop Linux system with connections to the internet does a pretty good job of keeping pace with patches. Even so, I have a server on the east coast that was "upgraded" to Fedora Core 5 when the Debian 2 install ran out of gas (and Debian support). Now Core 5 is itself out of date. I have to back up all the special web server applications and MySQL databases before upgrading to Core 7. The company network security people are constantly doing security scans. A Core 5 system is eventually going to fail to get a crucial update. The penalty is immediate loss of access to the network until the discrepancy is remedied.
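The database side of that backup is scriptable too. Roughly, assuming mysqldump can authenticate from ~/.my.cnf or similar, and with a made-up backup path:

#!/usr/bin/env python3
# Rough sketch of the pre-upgrade step: dump every MySQL database to a dated
# file before wiping the box for the new release.
import subprocess
from datetime import date

dump_file = f"/var/backups/mysql-all-{date.today().isoformat()}.sql"  # hypothetical location

with open(dump_file, "w") as f:
    # --all-databases captures every schema in one restorable dump
    subprocess.run(["mysqldump", "--all-databases"], stdout=f, check=True)

# Restore on the rebuilt system with:  mysql < mysql-all-YYYY-MM-DD.sql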

74 posted on 10/04/2007 1:19:07 PM PDT by Myrddin
