I buy both of those cases. It’s right in line with my experience in the industry. The company I work for is acquisition-based, so we’re all over the world. A while back they decided that key elements of our code building should be centralized to avoid... whatever, I never did figure that out. And technically speaking they also built redundancy into it. The problem: while they did centralize to multiple locations, the build system can’t talk to more than one of them. When Texas had its power outage we lost our Austin server, and we in Tucson couldn’t run a build for a week.
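For what it’s worth, the missing piece in a setup like that is plain old client-side failover: try the primary, fall back to the other locations. Here’s a minimal sketch of the idea in Python; the hostnames, the /builds endpoint, and the HTTP transport are all hypothetical stand-ins for illustration, not how our actual system works:

```python
import urllib.error
import urllib.request

# Hypothetical endpoints -- stand-ins for an Austin primary and a
# second centralized location (no real hosts are named above).
BUILD_SERVERS = [
    "https://build-austin.example.com",
    "https://build-secondary.example.com",
]

def submit_build(payload: bytes) -> str:
    """Try each build server in order and return the first success.

    The flaw described above is the absence of exactly this loop:
    the client is pinned to one server, so the redundant locations
    never see traffic when the primary goes dark.
    """
    last_error = None
    for base in BUILD_SERVERS:
        try:
            req = urllib.request.Request(
                f"{base}/builds", data=payload, method="POST"
            )
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.read().decode()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # unreachable; try the next location
    raise RuntimeError(f"all build servers unreachable: {last_error}")
```

With a loop like that in place, losing Austin would have meant slower builds for a week, not no builds at all.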
I’m intimately familiar with change control at major companies too. Familiar enough to know there’s what they SAY and what they DO, and they never do what they say.
I’ve seen equivalents to everything you say you’ve never seen. I work on enterprise-level software that’s used by over 400 of the Fortune 500, and at least once a month somebody knocks us offline and insists they changed nothing, only they did.
We’ll just agree to disagree. I’ve been in the IT support business since 1981, when mainframes ran everything and entire organizations did go down regularly...
IMO, if a single application or data center failure takes down a worldwide network, it’s faulty design by someone who should be answering to management...
These situations are BS in my book... like I said, we’ll just agree to disagree.