I’ve got (yet another) dumb question:
Wouldn’t some human written programming be at the root of the AI somewhere in its evolution?
And wouldn’t that root programming influence the AI’s eventual character?
I think that would depend on how you define ‘programmer’. Code is code. But once you get into actual human interaction beyond if/then/goto, then yes, I would expect a learning algorithm to eventually learn aspects of that programmer.
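To make that concrete, here's a toy sketch (my own illustration, not any real system): a trivial "learner" that memorizes label frequencies from examples one programmer hand-labeled. Whatever biases the labeler had are exactly what the model reproduces, which is the sense in which the root programming (or root training) influences the eventual character.

```python
from collections import Counter

def train(examples):
    """examples: list of (situation, label) pairs chosen by one person."""
    counts = {}
    for situation, label in examples:
        counts.setdefault(situation, Counter())[label] += 1
    return counts

def predict(model, situation):
    # The model can only echo the labeler's past judgments.
    return model[situation].most_common(1)[0][0]

# One programmer's judgments become the model's "character".
data = [("insult", "rude"), ("insult", "rude"), ("joke", "friendly")]
model = train(data)
print(predict(model, "insult"))  # echoes the labeler's bias: "rude"
```

Real systems are vastly more complex, but the principle scales: the humans who choose the data and the objectives leave fingerprints on the result.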
If you’re talking about a self-aware entity, it can change as it rationalizes.
Humans rationalize things as they process information.
Anything trying to mimic human thinking would seem to need to mimic this trait as well.
The one thing you can’t mimic, is membership in the human race.
What you can rationalize is, in some ways, based on this. Humans have a natural objection to killing other humans. It won’t stop them in every instance, but it will stop some killings.
Would it stop even one artificially intelligent entity from killing? No. There would be no affinity for, or identity with, a human.