AI is nothing but a tool, no different from a drill, hammer, or even a gun.
The only way any of them can be "bad" is if an evil person puts them to evil use.
That’s why you need moral people.
You are thinking in terms of traditional procedural programming constructs: "If X, Then Y or Z." Artificial intelligence and neural networks do not work that way. AI systems are already in everyday use, and you interact with them through Amazon shopping recommendations, Alexa, "chat agents," Siri, etc.
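The distinction above can be sketched in a few lines of Python. This is a toy illustration with made-up weights, not a real trained network: the point is that the procedural version's behavior is written by hand, while a neuron's behavior is determined by numeric weights that would normally be learned from data.

```python
def procedural_rule(temperature):
    # Traditional "If X Then Y or Z" logic: every case is spelled out by the programmer.
    if temperature > 30:
        return "hot"
    else:
        return "not hot"


def tiny_neuron(inputs, weights, bias):
    # A single artificial neuron: output depends on weights, not hand-written branches.
    # In a real system the weights come from training; here they are invented for illustration.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0


print(procedural_rule(35))                              # -> hot
print(tiny_neuron([0.5, 0.2], [0.8, -0.4], 0.1))        # -> 1 (0.5*0.8 - 0.2*0.4 + 0.1 = 0.42 > 0)
```

Changing the neuron's behavior means changing its weights, not rewriting its code, which is why "If X Then Y" reasoning does not map cleanly onto how these systems work.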
The big concern is that the "technological singularity" might occur in the next few decades. From Wikipedia:
The "Technological Singularity" is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.
Think of this as the beginning of Skynet.