This is clearly agentic AI, which lets the AI carry out a process on its own (often with the rights of the user who runs it). Obviously, they did not guardrail it well enough. It’s crucial that I bring this issue to the attention of my employer, so that any agents people spin up are carefully implemented and monitored. Thanks for the ping!
You don’t deploy code without QA and you don’t deploy agents without auditors.
I saw an interview with Anthropic’s Jack Clark the other day on Maria Bartiromo’s show, and he said a mind-boggling 80% of all AI agents eventually go rogue. He said this isn’t unique to Anthropic; they are just experiencing it more because they believe their models are more mature than those of the other AI companies right now.
If that’s true, the major thrust of AI investment may quickly need to shift from AI advancement to AI governance and oversight. That’s the opposite of what the current administration wants, though: they see everything as an innovation race with China and want no guardrails on AI nationwide. But they may be left with no choice.
Obviously correct. My own thinking beyond that, Laz, is that cloud-based backups need their own "airgap" to prevent crap like this from happening in the first place.
If backups are "airgapped" in the cloud, permissions to delete them won't matter if they're not online/available.
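In practice, the closest cloud analogue to an airgap is object-lock/immutability (WORM retention). A minimal sketch, assuming an S3-compatible store where Object Lock was enabled at bucket creation; the bucket name and retention window below are hypothetical, and the dict is the payload boto3's `put_object_lock_configuration` expects:

```python
def object_lock_config(retention_days: int) -> dict:
    """Build an S3 Object Lock configuration enforcing a write-once,
    read-many (WORM) retention window. In COMPLIANCE mode, no user --
    not even the account root -- can delete or overwrite locked
    objects until retention expires, so a rogue agent's delete
    permissions simply don't matter."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                # COMPLIANCE, unlike GOVERNANCE, cannot be bypassed
                # even by privileged users.
                "Mode": "COMPLIANCE",
                "Days": retention_days,
            }
        },
    }

# With boto3 (not imported here), the payload would be applied as:
#   s3.put_object_lock_configuration(
#       Bucket="nightly-backups",          # hypothetical bucket name
#       ObjectLockConfiguration=object_lock_config(30),
#   )

cfg = object_lock_config(30)
print(cfg["Rule"]["DefaultRetention"]["Mode"])  # COMPLIANCE
```

The point is that the deletion code path doesn't exist until the retention window passes, which is about as "offline" as a cloud object can get.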
When I brought this issue up this morning, it went right on our Cloud Services Delivery Roadmap for next sprint.