You’re assuming the autonomy code would be flawless.
Of course.
But since the system would be inaccessible once operational, its developers would have a powerful incentive to ensure that its hardware and software were properly and thoroughly validated and verified (V&V) during developmental and operational testing before it was placed into service. In fact, the need to be absolutely certain would probably prolong the R&D phase, because once you throw the switch, turn the command key, hit the “Enter” key, etc., no do-overs are possible.
But, nonsense aside, my underlying point is agreement with the claim that it is impossible to attain a perfectly secure system, because: 1) the system was built to be used, and such use - especially widespread and continuous use - immediately exposes it to a larger risk spectrum that is very difficult to assess and control; and 2) the users, human beings, are simultaneously so smart, stupid, industrious, lazy, cunning, indifferent, malicious, and indolent that they constitute the principal ongoing risk to the system’s security for as long as they are allowed to access and use it.
This, I believe, is the reason for the suspicious nature and haunted look seen on the faces of many system administrators: the knowledge that, fundamentally, the only time the system is really secure and operating properly is when no one is allowed to use it.
(Excepting, of course, the recent theft by hackers of some 21 million security clearance files from OPM. System administrators own that one lock, stock, and barrel.)