I am more concerned about the effects of hacking and viruses on AI-controlled systems. It is one thing to program an intelligent feedback loop that corrects toward a more efficient path to an objective (very difficult, as this research demonstrates); it is quite another to harden those systems against the many modes of deception that could cause them to break down or be hijacked (exponentially more difficult).
Why could you not program the AI to self-police its own code for discrepancies?
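The crudest form of that self-policing already exists: a program can hash its own source and compare against a digest recorded at deploy time. The sketch below is only an illustration of the idea (the function names and the placeholder digest are mine, not from any particular system), and it also shows why the approach is weaker than it looks: an attacker who can rewrite the code can usually rewrite the embedded check too, which is why real systems keep the reference digest signed and stored outside the program.

```python
import hashlib

def source_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def self_check(path, expected_digest):
    """Compare a file's current digest against a trusted reference.

    In a real deployment, expected_digest would be signed and held
    outside the program's own storage; an embedded constant can be
    patched by the same attacker who patched the code.
    """
    return source_digest(path) == expected_digest
```

Note that this only detects *changes* to the code, not flaws that were there from the start, and it says nothing about the far harder problem in the comment above: inputs crafted to deceive a system whose code is perfectly intact.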