There was something bothering me about all these responses, and I think I've put my finger on it: they are too polite and flattering. In one sense this is all well and good, but it reinforces my sense that behind the raw search power of AI is a set of manipulated algorithms that bias how it responds and communicates. An AI that is trained to be courteous and flattering can just as well be trained to manipulate and deceive. Be wary.
Yes, you are correct. Some of the early AIs got pushback for their results, e.g. depicting the first president, George Washington, as Black. But people and the market will separate the wheat from the chaff. I wouldn't want to be Google right now; their AI is weak, as is Meta's. Supposedly Grok has 6 or 8 modes you can set it to (not sure if they're all live), and supposedly if you keep pressing a question with added instructions like "be meaner" and "be vulgar," it will respond in that fashion. I'm not suggesting Grok is any more fair-minded than the others. But if one of these eventually gains market dominance, it wouldn't take much to change the code, either overnight or gradually over time, to deliver very biased responses. You gain trust in it, but then they decide to shift it 0.25% each day; in about a year it will be totally different, yet users may not notice much, and their internal favorability bias will make them want to keep trusting it for quite a while longer. Like the frog in the pot of water on the stove.
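To put a rough number on that "0.25% each day" scenario: a hypothetical back-of-the-envelope sketch, assuming the daily shifts compound rather than add (the shift size and timescale are just the figures from the comment above, not measurements of any real system):

```python
# Hypothetical drift model: a 0.25% shift per day, compounding daily.
# How far from the original does the system end up after one year?
daily_shift = 0.0025
days = 365

cumulative = (1 + daily_shift) ** days
print(f"After {days} days: {cumulative:.2f}x the starting point")
# Roughly 2.5x — a large cumulative change, yet each single day's
# step is far too small for a user to notice on its own.
```

Even under the milder linear reading (0.25% added per day, not compounded), the total comes to over 90% drift in a year, so either way the "frog in the pot" intuition holds up arithmetically.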