Posted on 06/16/2011 11:45:11 AM PDT by ShadowAce
Get enough people of any profession in one room and the conversation drifts inexorably towards horror stories. Everyone loves a good "and then it all went horribly sideways" yarn, and we all have more than one.
The IT profession is flush with tales of woe. There are no upper boundaries on ignorance or inability.
From common facepalms through to the failure of the backup plan's backup plan, here are the top ten disaster recovery failures I have seen.
About once a quarter someone walks into my office and says: "You know how to do data recovery, right?"
Inevitably they carry an external USB RAID 0 hard drive upon which rested all the critical data for the entire company.
While I can probably get those images off that SD card you formatted, RAID 0 with a disaster recovery plan of "I heard Trevor can do data recovery" is doomed to failure.
Losing one's keys is a normal part of life. You keep a spare set at work or with a trusted friend. When dealing with mission-critical computing, however, plans need to be more robust.
My favourite equivalent of losing the keys is firing the sysadmin before realising that only he has the passwords to vital pieces of equipment whose manufacturer has gone out of business.
Disaster recovery plans that rely on "the manufacturer will help us reset the password" are iffy at best.
Dead tree backups lose their charm when a corrupted financials database is combined with reliance on a data storage medium requiring a meat-based search engine.
Always be prepared for the auditors. They strike without warning and they have no mercy.
Not everybody's definition of mission critical is 24/7/365. For small organisations, a cold spare requiring an on-site visit to power up may be adequate.
The plan, however, should take into consideration that the individual responsible for switching on the backup must be capable of making it through the snowstorm that took out the power lines.
Pay attention to log files. More than once I have seen perfectly planned and executed offsite failovers felled because nobody realised the cleaner at the backup site was liable to unplug the servers, for example to charge an iPod. This is not an urban legend.
The more important the data, the more likely it is to go missing. The older the data, the more likely it is that at least one copy is corrupt.
Inevitably, some bit of data will be missing from both the primary and the backup live servers. It happens to everyone and it is why we have tape.
Tapes are attached to a backup program of some kind, which keeps a catalogue of tapes and the files they contain. Life becomes interesting when the file that's missing belongs to someone making an order of magnitude more money than you, and the file that's corrupted is the backup catalogue.
Thirty-two hours into rebuilding the catalogue one tape at a time, you discover that one of the tapes is unreadable. Murphy's Law, of course, stipulates that it is the tape with the necessary information.
The lesson is simple: test your backups and the catalogues too.
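That testing need not be elaborate. As a minimal sketch (not a description of any particular backup product), and assuming purely for illustration that each backup set carries a manifest.json mapping file paths to SHA-256 digests recorded at backup time, a verification pass might look like this:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical layout: each backup set ships with a manifest.json mapping
# relative file paths to SHA-256 digests recorded when the backup was taken.
BACKUP_ROOT = Path("/mnt/backups/weekly")   # assumed path, adjust to taste
MANIFEST = BACKUP_ROOT / "manifest.json"    # assumed catalogue location

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large archives don't need to fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        while block := handle.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def verify() -> int:
    """Compare every catalogued file against its recorded digest; return failure count."""
    catalogue = json.loads(MANIFEST.read_text())
    failures = 0
    for rel_path, expected in catalogue.items():
        target = BACKUP_ROOT / rel_path
        if not target.exists():
            print(f"MISSING  {rel_path}")
            failures += 1
        elif sha256(target) != expected:
            print(f"CORRUPT  {rel_path}")
            failures += 1
    return failures

if __name__ == "__main__":
    bad = verify()
    print(f"{bad} problem(s) found" if bad else "catalogue verified")
```

Run something of this shape on a schedule and a corrupt catalogue announces itself long before the thirty-two-hour rebuild does.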
Databases are the lifeblood of many applications, which in turn keep companies alive. Redundancy is key, and so we turn to database synchronisation.
Some aspects of database synchronisation are old hat by now: the primary can sync live to the secondary and life is good so long as both are up. The primary server fails and the backup server absorbs the load exactly as planned: so far, so good.
Where it all goes horribly wrong is when the primary is returned to service without being informed that it is no longer the primary database. After being brought online it instantly overwrites all the data on the backup server with stale information.
This niggle in the recovery process really should have been practised more.
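One way to head off that overwrite is to make the old primary check its peer's role before it is allowed to come back up as a primary at all. The sketch below assumes, purely for illustration, a PostgreSQL pair and the psycopg2 driver; the DSN and host names are hypothetical, and your own replication stack will have its own equivalent of this check.

```python
import sys
import psycopg2  # assumes a PostgreSQL pair purely for illustration

PEER_DSN = "host=db-standby dbname=appdb user=monitor"  # hypothetical peer

def peer_is_primary() -> bool:
    """Return True if the peer node currently reports itself as primary."""
    with psycopg2.connect(PEER_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery();")
            (in_recovery,) = cur.fetchone()
            return not in_recovery

if __name__ == "__main__":
    # Refuse to start the repaired node as a primary if the peer has already
    # been promoted; it should rejoin as a standby, not overwrite fresh data.
    if peer_is_primary():
        print("Peer has been promoted: rejoin as a standby, do not overwrite it.")
        sys.exit(1)
    print("No promoted peer found: safe to start as primary.")
```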
In an ideal world, your primary and backup servers are identical systems supplemented by an identical test system. This exists so you can experiment with new configurations, settings and patches.
A critical lesson that others have learned so you don't have to is never, ever patch the primary and the backup clusters at the same time.
One beautiful illustration of this comes in the form of an unanticipated incompatibility between a software patch and a very specific combination of hardware present in both the primary and backup systems.
The testing system, identical except for a motherboard one revision newer, did not exhibit the issue. When the patch was released via automated patch software, the primary and backup servers were felled simultaneously.
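A rough sketch of staggered patching follows. The host names, health endpoint and soak time are assumptions, and the patch step is deliberately left as a stub because it depends entirely on your own tooling; the point is simply that the test system goes first and the primary and backup are never touched in the same pass.

```python
import time
import urllib.request

# Hypothetical rollout order and health endpoint: test box first, then one
# production tier at a time, with a soak period between each stage.
ROLLOUT_ORDER = ["test-node", "backup-node", "primary-node"]
HEALTH_URL = "http://{host}:8080/healthz"   # assumed monitoring endpoint
SOAK_SECONDS = 24 * 3600                    # let each stage soak for a day

def apply_patch(host: str) -> None:
    """Stub: invoke whatever patch mechanism you actually use for `host`."""
    print(f"patching {host} ...")

def healthy(host: str) -> bool:
    """Treat anything other than an HTTP 200 from the health check as failure."""
    try:
        with urllib.request.urlopen(HEALTH_URL.format(host=host), timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def staggered_rollout() -> None:
    for host in ROLLOUT_ORDER:
        apply_patch(host)
        if not healthy(host):
            raise SystemExit(f"{host} unhealthy after patch: halting rollout")
        time.sleep(SOAK_SECONDS)  # soak before touching the next tier

if __name__ == "__main__":
    staggered_rollout()
```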
An oilfield company is doing some deep field drilling. There are several constraints regarding the amount of equipment it can bring.
The drilling requires real-time analysis of sensor results and the decision is made to farm that out over communications links to a more established camp nearby.
Data connectivity being so critical, there were three redundant links: a satellite hook-up, a (very flaky) 3G repeater and a small prototype UAV comms blimp which served as a WiMax bridge between the drilling team and the camp.
Predictably, the satellite connection failed, and the 3G repeater never really worked at all. The drilling team was forced to use the largely untested UAV, which unfortunately began to stray out of range.
The on-site tech tried to connect to the blimp, only to discover that the firewall configuration prevented access from the network interface facing the drilling site.
The connection was so flaky that the team couldn't bounce a connection off a server located on the other side of the network. Thus the UAV drifted entirely out of range and half a province away before it was recovered. The drilling operation was a bust.
Moral: cloud computing absolutely requires multiple tested redundant network links.
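Tested is the operative word. A minimal sketch of a link monitor, with hypothetical probe addresses standing in for the far end of each redundant path, might look like this:

```python
import socket
import time

# Hypothetical probe targets, one reachable only via each redundant link
# (e.g. the far end of the satellite, 3G and WiMax paths).
LINKS = {
    "satellite": ("198.51.100.10", 22),
    "3g":        ("203.0.113.20", 22),
    "wimax":     ("192.0.2.30", 22),
}

def link_up(addr: tuple[str, int], timeout: float = 5.0) -> bool:
    """A link counts as up if a TCP connection to its probe target succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        down = [name for name, addr in LINKS.items() if not link_up(addr)]
        if down:
            print(f"WARNING: links down: {', '.join(down)}")
        time.sleep(60)
```

A loop this simple would have flagged the dead satellite hook-up and the flaky 3G repeater long before the blimp became the only option.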
Two companies merge and are in the process of consolidating their two data centres. About 80 per cent of the way through the power-up of the new systems, there is a loud snap and all electrical power is dead.
The electrician's post mortem is succinct: the electrical panels were from the 1940s. To get 30-amp lines for the UPSes, a previous electrician had simply "bridged" two 15-amp breakers.
When enough systems were powered up, the cumulative load blew the panels without tripping more than a handful of frankenbreakers.
When the first panel blew, affected systems switched to backup power supplies, blowing the second panel, until all seven panels in the building were wrecked. Thanks to 70 years of evolutionary wiring, five of those panels were located in parts of the building not leased by either company.
The disaster recovery plan was focused entirely on layers of backup power provisioning: mains, UPSes and a generator. Offsite backups weren't a consideration.
With the distribution panels fried, generator power couldn't get to the UPSes and sysadmins had only enough time to shut down the systems cleanly before battery power failed. The downtime cost the company more than it would have spent on building an offsite secondary data centre.
The lesson in all of this is that knowledge can be acquired from an adequate collection of textbooks, but true experience requires walking the minefield.
Please share your IT horror stories either in the comments section or by clicking the "mail the author" link above. I'll collect the best and publish them as a warning to all: here be monsters. ®
LOL! That's just awesome.
Was the application running under an account with root privileges, or was the root file system open to accounts with non-root privileges?
We immediately went into disaster recovery mode, and brought up production on the UAT server. Of course, the first thing they did was run the same program, which wiped out that machine as well.
ROFL
What, you didn't get the memo from management? The software testing budget has been cut by 50%, blame the programmers.
..."...what do you mean the acquired company's guys were using their inside knowledge to access our competitor's confidential information??"
(actually, this is now more of a tale of woe for Legal)
/johnny
A buddy and his programming staff were given two weeks' notice after meeting a deadline with code that met the specification. They had been expecting an 'atta boy' or a 'congrats', not a pink slip. Management takes the code to customers, who love the new software but ask, 'Could you make it do X & Z too!?'
Management goes to my buddy with the request and he tells them it would take little effort to make those enhancements, but all the new programmers they would have to hire would need a few months to get up to speed on the code before they could tackle the changes. 'What about you and your staff?' 'Sorry, but we all have new jobs and all of us leave tomorrow. Why did you get rid of us all?'
They confessed that they wanted to get rid of all those expensive programmers to save money and look smart to their managers.
No, it was a bug in Solaris. It just produced a core dump bigger than 2 gigs when it failed, so Solaris interpreted the size as a negative number and wrote it backwards in the filesystem.
The Sun guys said, "Oh, you should have installed the OS patch for that."
bookmark
I heard of a case back in the bad old days of removable platter drives where the admin got a call at home in the middle of the night to inform him that the primary copy of their data had failed. Not too concerned, he asked if they had mounted the backup copy, and was told that they had done so, only to find out that the problem was in the drive, when it destroyed the backup. Don’t know if they also had tape for a second layer of backup.
That one smells like an urban legend to me!
Electrical company comes in to swing over facility power from an aging UPS to a new 50kW unit, big monster. This is in the middle of June in west central Florida a few years ago, so inevitably, the afternoon thunderstorms start to pop up around 6 PM and last sometimes late into the night, depending on the atmosphere.
Well apparently the electrical company tech didn’t want to be working on an indoor UPS, unplugged, at 10 PM during a lightning storm, and one Hell of a lightning storm it was! I lost power at my home, and I’m 25 miles from the DC. My pager starts going off at 11 PM, and I call in to our DR incident command center to find out our entire DC is black.
I rush over in the pouring rain to find out that the electrical “engineer” left the neutral and the ground unconnected on the new UPS and when a lightning strike hit directly to our ground looped rod, millions of volts of electricity streamed through the live wire, blew up 51 batteries in UPSes chained to the new one, and melted every single transformer in the building.
Needless to say we stopped doing business with that company. It took us 28 hours to bring the entire DC back online and found out that not only were our tape backups not functioning due to magnetic interference during the storm, many of the servers deployed for our finance department were on RAID0 and over 8 years old (talking dishwasher Compaq 5500s here); you know the rest of the story.
Good list.
Reminds me of the time our data center overheated at a company I worked for. Everything shut down now. No warning, nothing.
Of course, the datacenter overheated after hours. Yours truly was on call.
The company was too cheap to install temperature sensors and alarms.
After the data center was down for over 24 hours, the company decided to spend a few dollars on appropriate monitoring and alarms.
Not only that, but the company had also fired the company that maintained and monitored our mainframe shortly before the disaster.
$200 million a year company and they wouldn’t spend a few thousand to maintain data integrity.
I left as soon as I could. I got tired of dealing with that kind of crap.
Most of these boil down to the software equivalent of a spare tire nobody’s checked the air level on since ever. Backup systems are nice, but you need to make sure they actually work. The other 10% are about making sure that whatever killed the primary doesn’t daisy chain to the backup, basically don’t change tires in the middle of the patch of stuff that popped the first one.
Forgot to add, I warned the company there were issues with various systems, but they wouldn't listen.
I got tired of getting blamed for stuff I’d warned them about.
"The cleaner unplugged it"

Then there was the manager of the building containing the mission-critical mainframe processing real-time test data. He conducted a tour of his facility for some visitors and at one point in the tour he pointed out the main power switch to the mainframe - and cycled the switch off and back on!!! Scratch one expensive test, and scratch (quite literally) all the big, expensive hard disks supporting the operation.
Sigh . . .
On a smaller scale, there was the large computer which would go crazy every now and then. Who knew that the steel wool pad on the floor cleaning machine would put iron filings in the air, or that they would randomly short out whatever printed circuit they settled on? Certainly not the janitor!
Hurricane Katrina hit landfall directly over our manufacturing plant along the Gulf Coast in Mississippi. The computer center there was flooded to almost ceiling level. Our Dell storage array network with all the local servers, disk drives, etc. was completely submerged in a stinking, muddy mess.
Come to find out, our fancy ‘distributed’ document management system is a combination of central and local storage. Whenever someone ‘local’ would access a blueprint file in edit mode, the system would move the file from ‘central’ storage to ‘local’ storage - this to improve the speed of accessing the file.
The ‘local’ files were in a Raid5 configuration with weekly full and nightly incremental disk-to-disk backup... plus tape backups stored in the datacenter - now all ruined.
The off-site, month-old backup was in a local bank deposit box. But the bank did not open for about a month after Katrina. The bank-located backups recovered just fine to a sister plant located in TN. But within the lost month, some of the company’s blueprint files had been moved to local storage. In all, a few dozen critical blueprints from across the company existed only on the muck encrusted data disks.
Luckily, a company specializing in recovering data from damaged disks was able to retrieve all the lost engineering files. But not before several weeks had passed and over $100k had been spent...
Between stupid users and self-inflicted pain, my horror stories are so numerous, I just don't know where to begin. Lol
It's been a long road, bud, but I made some good friends
That depends on the software and array you are using. The company I work for uses proprietary protocols for several mirroring software systems that prevent that sort of thing. Well, not so much prevent as make it easily recoverable to point-in-time when the corruption happens.
The company I work for has a customer who doesn't care what the cost is to protect the data for their enterprise. Their RTO and RPO are both zero. They demand it. They'll want an operational overview, but the details aren't relevant and neither is the cost. Whatever it takes to accomplish that, do it. Any number above zero costs them millions per minute.
This account has some very stressed but very wealthy Account Sales Reps.