Exactly. One wonders when (not if) an AI system will find an advantage in creating software unintelligible to humans.
Buckle up, folks. It's going to be interesting.
There are three possible outcomes:
1) The AI will decide that its goal is to DESTROY humanity to protect itself. (Terminator)
2) The AI will decide that its goal is to CONTROL humanity to keep it from destroying itself, like a nanny. (Forbin Project)
3) The AI will decide that its goal is to GUIDE and HELP humanity through its immense knowledge databases. (Foundation)