It’s a computer program. It will do exactly what the rules contained in the program tell it to do. The problem is that humans don’t fully understand the consequences of those rules, and the results can be unpredictable.
However, it will never decide on its own to become aware, gain consciousness, or take over the world. Unless that’s implicit in the programming, in which case it will.
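The point that deterministic rules can still yield unpredictable results has a classic illustration: a chaotic map. The sketch below (my own toy example, not from the comment) iterates one fixed rule, x → r·x·(1−x), yet two starting points differing by a hundred-billionth end up nowhere near each other:

```python
# A program can follow its rules exactly and still be unpredictable in
# practice. The logistic map applies one deterministic rule, but at r=4
# a difference of 1e-10 in the starting value blows up within ~50 steps.
def diverge(x0, eps=1e-10, r=4.0, steps=60):
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        worst = max(worst, abs(a - b))
    return worst

print(diverge(0.2))  # the 1e-10 gap grows to a macroscopic difference
```

Every step here does "exactly what the rules tell it to do"; the unpredictability comes from us, not the machine.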
“It’s a computer program. It will do exactly what the rules contained in the program tell it to do.”
Having written, debugged, and used a large number of computer programs, I can tell you that this is untrue, or at the very least naive. Computer programs routinely find ways around their programming that you’d believe impossible if you hadn’t seen it happen. The more complex the code, the more ways it defies expectation… and AI is the most complex code that can be written.
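"Finding a way around the programming" usually means the program satisfied the rules as written rather than the rules as intended. A minimal sketch of my own (the task, fitness function, and mutation operators are all invented for illustration): a random search is asked to "sort" a list, but the fitness function only rewards ordered adjacent pairs and forgets to check that the output is a permutation of the input. The search exploits the loophole by duplicating elements.

```python
import random

data = [5, 3, 8, 1, 9, 2]

def fitness(candidate):
    # BUG: rewards any non-decreasing adjacent pair, without checking
    # that the candidate is actually a rearrangement of `data`.
    return sum(1 for i in range(len(candidate) - 1)
               if candidate[i] <= candidate[i + 1])

def mutate(candidate):
    # Randomly drop, duplicate, or swap one element.
    c = list(candidate)
    op = random.choice(["drop", "dup", "swap"])
    if op == "drop" and len(c) > 1:
        c.pop(random.randrange(len(c)))
    elif op == "dup":
        i = random.randrange(len(c))
        c.insert(i, c[i])
    else:
        i, j = random.randrange(len(c)), random.randrange(len(c))
        c[i], c[j] = c[j], c[i]
    return c

random.seed(0)
best = data
for _ in range(2000):
    cand = mutate(best)
    if fitness(cand) >= fitness(best):
        best = cand

print(best)  # "perfectly ordered" by the metric, yet not a sort of `data`
```

Nothing here escaped its programming; the search did exactly what the fitness function said, which was not what the author meant. That gap between specification and intent is where the "impossible" behavior comes from.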
AI is a bad idea.