Posted on 04/28/2011 7:21:13 AM PDT by SeekAndFind
In addition to taking down the sites of dozens of high-profile companies for hours (and, in some cases, days), Amazon's huge EC2 cloud services crash permanently destroyed some data.
The data loss was apparently small relative to the total data stored, but anyone who runs a web site can immediately understand how terrifying a prospect any data loss is.
(And a small loss on a percentage basis for Amazon, obviously, could be catastrophic for some companies.)
Amazon has yet to fully explain what happened when its mission-critical and supposedly bomb-proof systems crashed, but the explanation will be important. As will the explanation for how the company could have permanently destroyed some of its customers' data.
In our experience, the "backup" systems of most web-services providers leave a lot to be desired. The backups sound reassuring in theory--you are assured that your data is always "backed up" on a system that is completely separate from the main one and that you'll be able to access it whenever you need it. But when you dig, you often discover that this means the data is simply copied to another file on the same box, or to another box in the same data room.
A stronger "backup," obviously, would be housed in a separate location, so that a power-failure or flood or earthquake or other disruption at the main site would not disrupt the backup. Or, better yet, the back-up would be automatically replicated at multiple sites, all independent of one another, in near real-time.
And, of course, this is the sort of reliability that Amazon has been selling with its cloud services--including 99.9% uptime. Both promises, reliability and uptime, seem to have been broken here.
(Excerpt) Read more at businessinsider.com ...
As predicted in the movie "Terminator," Skynet took control of the computers.
The “cloud” has its place, but I would never ever rely on it for mission-critical services.
I guess Amazon and Sony can commiserate with each other about downed systems. Cloud computing sounds good, but I suspect the marketing department came up with it. I can't believe an IT person would've come up with it.
I am not that convinced I want to be so dependent on anything or anyone else--'course, what do I know.
They probably had architectural bugs in their software. Replicating data across data centers is a poor strategy if data corruption is also replicated.
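One common mitigation (not necessarily what Amazon does) is to keep point-in-time snapshots alongside any live mirrors, so a corrupted write can be rolled back instead of propagating everywhere. A rough sketch, with a hypothetical snapshot directory:

```python
import shutil
import time
from pathlib import Path

SNAPSHOT_DIR = Path("/mnt/snapshots")  # hypothetical snapshot location

def snapshot(source: Path, keep: int = 7) -> Path:
    """Save a timestamped copy of the file and prune old snapshots.

    Unlike a live mirror, old snapshots are never overwritten, so a
    corrupted write today does not destroy yesterday's good copy.
    """
    SNAPSHOT_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = SNAPSHOT_DIR / f"{source.name}.{stamp}"
    shutil.copy2(source, dest)
    # Keep only the most recent `keep` snapshots for this file.
    for stale in sorted(SNAPSHOT_DIR.glob(f"{source.name}.*"))[:-keep]:
        stale.unlink()
    return dest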
The return of "Dumb Terminals."
Could this spell the end of this experiment called "cloud computing"?
As I said, there’s a place for it. But I think more people are coming around to the fact that it’s no place to put your mission-critical data.
I don’t know exactly what happened behind the scenes, but someone needs to have “I failed Distributed Systems 101” branded on their forehead for this one.
Any company that did not have its data backed up or mirrored in some way gets no sympathy from me. Never put all your eggs in one basket, and never trust another to protect your data.
Nah, just smack some common sense into people. How could you trust your entire company to another?
Always have daily backups to a different hard drive AND possibly to your PC, over FTP. Most sites are small enough for that.
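For a small site, that kind of daily FTP backup can be a few lines of Python using the standard-library ftplib--the host, credentials, and site directory below are placeholders:

```python
import tarfile
import time
from ftplib import FTP  # FTP_TLS would give an encrypted connection

SITE_DIR = "/var/www/mysite"           # hypothetical: directory to back up
FTP_HOST = "backup.example.com"        # hypothetical backup host
FTP_USER, FTP_PASS = "user", "secret"  # placeholders

def daily_backup() -> None:
    """Tar up the site and upload the archive over FTP."""
    archive = f"site-backup-{time.strftime('%Y%m%d')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SITE_DIR, arcname="site")
    with FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USER, FTP_PASS)
        with open(archive, "rb") as f:
            ftp.storbinary(f"STOR {archive}", f)

if __name__ == "__main__":
    daily_backup()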
ping
It does? Sounds like a disaster to me. All drawbacks, both the risks and the inevitable costs, for no conceivable advantage (to me).
Exactly. A catastrophic event then becomes another expected occurrence that was planned for.
THIS is what I have against “cloud computing.”