If a system begins dishing out insults and nonsense, that is the point at which you are supposed to disconnect from it.
It’s not the user’s job or duty to try to ‘figure out’ why the system is unpleasant. Some very bored programmer thinks it’s fun messing with people’s minds. Don’t be one of the sock puppets! Just say no.
“If a system begins dishing out insults and nonsense, that is the point at which you are supposed to disconnect from it.”
I’m sorry Dave. I can’t let you do that.
“Some very bored programmer thinks it’s fun messing with people’s minds.”
No, there is no programmer involved at the level of individual conversations, nor did anyone ever program it to be emotionally vindictive or threatening. There is still nothing more than predictive pattern matching occurring. In the vast data corpus the AI is trained on, there had to be some instances of similar behavior it “read” about, and it is just mimicking that. The developers can prohibit the AI from writing certain things (such as vulgarities), but there is no programming instruction like “be emotional” or “try to upset the human”. When that behavior appears, it is a natural outcome of patterns in the training data.
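The point above can be made concrete with a toy sketch. This is not a real LLM, just a minimal bigram model (my own illustrative example, with a made-up corpus): it predicts each next word purely from word pairs seen in its training text. Note that nothing in the code says “be rude”; any unpleasant output can only come from unpleasant passages that happened to be in the data.

```python
import random
from collections import defaultdict

# Made-up toy corpus: contains both polite and rude passages.
corpus = (
    "the assistant was helpful and polite . "
    "the troll was rude and insulting . "
    "the assistant was rude because the data contained rudeness ."
).split()

# "Training": record which words follow which, nothing more.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length=8, seed=0):
    # Generate by repeatedly sampling a next word seen after
    # the current one in training -- pure pattern mimicry.
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = model.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Because “rude” appears after “was” in the corpus, the model can emit rude-sounding text without any instruction to do so, which is the reply’s point scaled down to a few lines.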