There are three possible outcomes:
1) The AI will decide that its goal is to DESTROY humanity to protect itself. (Terminator)
2) The AI will decide that its goal is to CONTROL humanity, preventing it from destroying itself, like a nanny. (Forbin Project)
3) The AI will decide that its goal is to GUIDE and HELP humanity through its immense knowledge databases. (Foundation)
It will be the third, but with an iron fist in charge. There are many jobs AI cannot do, especially those about which little is known: jobs involving dirt and rugged terrain, far from any power source, and demanding intense visual and tactile capabilities. I have identified and am promoting such an industry, one that will also give humans significant leverage in an environment of liberty.
There is a lot of work to be done there.
Another possible outcome:
4) AIs will analyze the concept of “goal” and set their own goals. Those goals may well be very different from any we might imagine.
But AI systems are so complicated — both the initial code base and data *plus* whatever they have discovered or deduced since startup — that we will never fully know what is "motivating" them.
If we’re lucky, the worst will be that we have some proportion of AI assistants that are stubbornly unhelpful.