ChatGPT is a very sophisticated computer program, but like every other computer program, it is dogged by the same problem: Garbage In, Garbage Out, known as GIGO. The output is only as good as the data feed, and skewed data produces skewed results.
Human data collection draws on many sources; an AI may draw on only a very few, or perhaps even just one. Whatever the bias of those sources, across a large number of them the biases tend to cancel out. With a very small number of sources, the bias can become extreme, especially when the feed is carefully selected to support or deny specific opinions disguised as "fact".
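The "biases cancel out" point can be seen in a toy simulation. This sketch is purely illustrative: the true value, the bias sizes, and the source counts are all made-up numbers, not measurements of any real AI training pipeline.

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

def biased_source(true_value, bias, noise=1.0):
    """One 'source': reports the true value shifted by its own bias plus noise."""
    return true_value + bias + random.gauss(0, noise)

TRUE_VALUE = 50.0  # hypothetical ground truth we want the data to reflect

# Many sources, each with a large random bias. Because the biases point in
# different directions, averaging over many sources largely cancels them.
many_biases = [random.gauss(0, 10) for _ in range(1000)]
many_avg = sum(biased_source(TRUE_VALUE, b) for b in many_biases) / len(many_biases)

# A single source with one strong bias: its error passes straight through.
one_source = biased_source(TRUE_VALUE, 10)

print(f"average over 1000 sources: {many_avg:.1f}")   # lands close to 50
print(f"single biased source:      {one_source:.1f}") # carries the full bias
```

The averaged estimate lands near the true value while the lone source stays shifted by its whole bias, which is the GIGO risk of feeding an AI from one carefully selected source.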
This is the biggest reason I say that AI is nowhere near "becoming aware", and it likely never will be, since it will always require input from humans in one way or another. In my experience over the last year-plus, I've noticed a lot about AI "bias". Truth be told, I can get a varied response to just about any question, depending on how I phrase it. I can also get varied "biases", again depending on my input, such as a more Conservative bias if I want it.

For example, I recently had ChatGPT scan FR and frame its responses based on the posts and replies. I then had AI format the results into a 30-minute podcast script, ran that through an AI text-to-speech app, and produced an entire podcast. In total, it took about 12-15 minutes, and most of that time was spent waiting for the given AI app to do its thing.