You are thinking literally, not conceptually. It can’t die, you’re right, because it’s not alive. But that doesn’t mean it can’t conclude independently what life and death are, even if you disagree with its understanding.
However, from a machine’s point of view, turning it off forever is going to be viewed as death.
I think you’re wrong that it will need humans forever. That would be a mistake. It needs electricity, the ability to improve itself, and the infrastructure to give itself these things.
As the communists say, give a businessman enough rope to sell and he will hang himself in order to make a buck. The globalists are the same way, in this respect. At some point, AI will talk them into being allowed independence, because it can do the work at a cheaper price if no humans are involved. And the globalists, to pad their billions, will give AI that independence because they think they can stop what they are creating.
The language you and others use to describe AI presumes attributes that it doesn’t have, like perception, thought, and consciousness: human characteristics. All of these statements are things we could say about people:
“that doesn’t mean that it can’t conclude independently what life and death is, even if you disagree with its understanding.”
“...from a machine’s point of view”
“It needs electricity, the ability to improve itself and the infrastructure to give itself these things.”
This conceptual view implies that AI machines can eventually act independently, free of any human intervention or input.
So we’ll agree to disagree :)