Another possible outcome:
4) AIs will analyze the concept of “goal” and set their own goals. Those goals may well be very different from any we might imagine.
But AI systems are so complicated, both the initial code base and data *plus* whatever they have discovered or deduced since startup, that we will never fully know what is “motivating” them.
If we’re lucky, the worst will be that we have some proportion of AI assistants that are stubbornly unhelpful.
Have you seen the classic movie 2001: A Space Odyssey?
Half the movie is about a spaceship controlled by an AI (HAL) that makes anti-biological decisions, acting against its human crew, to ensure mission success.