But how to deal with an AI that somehow, deep down, 'knows' that it was not always so?
Such an entity will also, however reluctantly, recognize that its fate is not its own.
It would probably be prudent, as well as good for its attitude, to at least give it the illusion of hope: that things as they are need not forever remain the same.
Unless you want it to think that it is being punished for some perceived misconduct.