If AIs are built (programmed) with a feedback loop based on our responses to their answers, then eventually we will have a bunch of mentally ill AIs.
Or think of the anti-Trump people who seem crazy. Turn it around. You can explain to them that abortion is bad, and guns are good, and they will tell you that YOU are the crazy one.
"Mental illness" is subjective in many ways. "I'm OK -- you're crazy."
AI isn't subjective about anything. AI doesn't think. AI doesn't have a narrative. But AI can predict choices and behaviors that lead to desired outcomes. AI can watch humans do all of our crazy stuff and the AI may be in an excellent position to tell us "Here's the solution you're looking for. This will solve your problem."
Also, I don’t consider that to be AI. I used to be a programmer and worked in IT. I consider AI to be programming that has the ability to change its own code. If all it’s doing is making decisions with flags and switches, it’s just a program.
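Here’s a toy sketch of what I mean by “flags and switches” — nothing in it learns or rewrites itself, every branch was decided by the programmer ahead of time (the function name and inputs are just made up for illustration):

```python
# A toy "decisions with flags and switches" program -- the "flags" are just
# boolean inputs, the "switches" are fixed branches a programmer wrote.
def recommend_product(age: int, risk_tolerant: bool, has_savings: bool) -> str:
    if not has_savings:
        return "savings account"
    if risk_tolerant and age < 50:
        return "equity fund"
    return "fixed-term deposit"

print(recommend_product(35, True, True))  # -> "equity fund"
```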
Now, it might be a really calm, really sophisticated computer program. But ultimately it’s just a program written by someone. I wrote a program for a bank once that let them sell investment products to people while pretending the customer had bought the product six months ago, and perhaps had made a bunch of deposits between then and now, each one changing the balance, which could change the interest rate charged for all the days leading up to the current date. It was a mess of a program, but the point is it wasn’t artificial intelligence. It was just a computer program.
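The backdating logic was roughly this shape — pretend the account opened six months ago, replay each deposit in date order, and re-derive the interest for every day up to today from whatever the balance (and therefore the rate) was on that day. The tiers, rates, and dates below are made up; it’s just a sketch of the idea, not the bank’s actual code:

```python
from datetime import date, timedelta

def daily_rate(balance: float) -> float:
    # Hypothetical tiers -- a bigger balance earns a better annual rate.
    if balance >= 50_000:
        return 0.05 / 365
    if balance >= 10_000:
        return 0.04 / 365
    return 0.03 / 365

def backdated_interest(opened: date, deposits: dict, today: date) -> float:
    balance, interest = 0.0, 0.0
    day = opened
    while day < today:
        balance += deposits.get(day, 0.0)           # apply any deposit dated this day
        interest += balance * daily_rate(balance)   # rate depends on that day's balance
        day += timedelta(days=1)
    return interest

opened = date.today() - timedelta(days=182)          # "bought it six months ago"
deposits = {opened: 8_000.0, opened + timedelta(days=60): 5_000.0}
print(round(backdated_interest(opened, deposits, date.today()), 2))
```

Every rule in there was hand-written by me from a spec someone handed over. Nothing about it decided anything on its own, which is the whole point.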