Sometimes a crash can be a catastrophe; other times it can be a very timely convenience.
I used to be a system manager. We had backups upon backups. Just saying.
I mean, isn’t that one of the biggest jobs of being a systems manager? Isn’t that why it’s an important job? How do they seriously not have a functioning backup server that they can just pull online, losing maybe one or two days’ work?
My brother was the MIS guy at a medium-sized corp.
For years he carried a briefcase home with a backup cartridge that was loaded into a duplicate system at his house.
Each key exec had a part of the system at home. They had practice drills to reassemble a working system.
To cover for a physical event at the main office.
He says companies that lose their data disappear.
And once a month we test restoring those backups of backups just to make sure we can if we need to. What incompetent boobs. Jail time should follow.
You back up every week or every month and keep at least a year's worth of backups. This sounds intentional.
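For what it's worth, a schedule like that is easy to sanity-check automatically. Here's a rough Python sketch (the file names, dates, and thresholds are all made up for illustration) that flags anything older than the retention window and complains if the newest backup has slipped past the weekly schedule:

```python
from datetime import datetime, timedelta

# Hypothetical backup catalog: file name -> date taken (invented data).
backups = {
    "weekly-2024-01-07.tar.gz": datetime(2024, 1, 7),
    "weekly-2024-06-02.tar.gz": datetime(2024, 6, 2),
    "weekly-2025-05-25.tar.gz": datetime(2025, 5, 25),
}

now = datetime(2025, 6, 1)          # pretend "today" so the example is repeatable
keep_for = timedelta(days=365)      # keep at least a year's worth
max_gap = timedelta(days=7)         # back up at least weekly

# Anything older than the retention window is a candidate for pruning.
for name, taken in sorted(backups.items(), key=lambda kv: kv[1]):
    if now - taken > keep_for:
        print(f"prune candidate: {name}")

# If the newest backup is older than the weekly schedule allows, complain loudly.
newest = max(backups.values())
if now - newest > max_gap:
    print("WARNING: newest backup is over a week old -- the schedule is slipping")
else:
    print("Schedule looks OK: newest backup is", newest.date())
```

None of which replaces the monthly restore test, of course; a catalog full of files proves nothing until you've actually restored from one.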
However.... (and you knew this was coming)
We need to know that the data exists in order to back it up.
I've worked in a lot of places where departments would spin up databases - or, more likely, move things from testing into production - without bothering to mention it to Infrastructure. "Oh yeah, that thing? Yeah, I forgot to tell you, we've been using it in production for the past 2 years, even though it's on the Dev side of the house, sitting on an antique server underneath some guy's desk, and the server is labeled 'TESTING ONLY, THIS IS NEVER, EVER, EVER BACKED UP!!!'. So, can you restore it? Why the @$#@$@$@ not?"
Just sayin'. It happens. But it shouldn't happen to critical systems.
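The only fix I've seen for that is making "is it in the backup inventory?" a check somebody actually runs. Even something as dumb as diffing a list of discovered systems against the backup catalog catches most of it. Rough Python sketch, all the host names invented:

```python
# Hypothetical inventories -- in real life these would come from a network scan
# and the backup server's catalog, not hard-coded sets.
discovered_databases = {
    "payroll-db.prod.example.com",
    "orders-db.prod.example.com",
    "mystery-box-under-desk.dev.example.com",   # the 'TESTING ONLY' server
}

backed_up = {
    "payroll-db.prod.example.com",
    "orders-db.prod.example.com",
}

# Anything visible on the network but missing from the backup catalog is
# exactly the kind of system that surfaces two years too late.
for host in sorted(discovered_databases - backed_up):
    print(f"NOT BACKED UP: {host}")
```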