I would really like to know how they measure uptime, too. Is it the OS itself or the whole system?
For example, I know I can do a lot with an Apache HTTP server on Linux while it stays up, but I need to restart IIS quite often for various bits of maintenance or installation. When you're talking about a Web server, nobody cares what the uptime of the OS is; they care about the uptime of the HTTP service, which is the OS uptime minus the service's own downtime.
IOW, this fails on its face. Do uptimes of 2003/IIS6/SQL Server vs. LAMP. Do uptimes of 2003/AD vs. RHEL Directory Server. Do MS file and print with 2003 vs. Linux/Samba.
What is the unix command to determine how long your server has been up (or the last time it was rebooted)?
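(For reference, the standard answer is the uptime command; who -b is the POSIX way to see the last boot time, and the /proc path below is Linux-specific:)

    # Print how long the system has been up, plus user count and load averages:
    uptime

    # Print the time of the last system boot:
    who -b

    # On Linux, the raw seconds since boot (uptime and idle time):
    cat /proc/uptime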
I confess to never having used a Windows Server product, but creating my BSD file server was easy to install and set up (and free, except for a hard drive and network card; I got the rest of the hardware from someone getting rid of an older machine), and it's been running without problems for around 2 months. I don't have a thousand users hitting it every minute (more like 2 a day), but even with the pains I went through to learn some unix, installing a Windows server couldn't have been any easier. And it wouldn't have worked on the hardware I'm using.
Sure you do, and I'll tell you why:
a) When are you most likely to lose a disk or have some other fatal hardware failure? The answer is during a boot-up, especially if the drives have had a chance to cool down a bit.
b) What takes longer: applying, say, an RPM patch to Apache and restarting the service, or rebooting the operating system? Answer: rebooting the OS. See the sketch below.
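To make that comparison concrete, here's roughly what the in-place patch path looks like on an RPM-based distro; the "httpd" package name is the Red Hat-style convention, and the file name is an illustrative placeholder, not a real release:

    # Upgrade the Apache package in place (file name is illustrative):
    rpm -Uvh httpd-<version>.rpm

    # Restart only the service; the OS and every other service stay up:
    /etc/init.d/httpd restart

That path takes seconds of downtime for one service; a full reboot takes minutes and drops everything else on the box, too.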