A link would be nice....
I’m doing some product development and ChatGPT has been great for chemistry questions. But for anything like Global Warming it is a complete cluster.
Groovy!
It’s coming to its own conclusions, basically. However, like somebody strongly Aspergic, the conclusions it comes to are based on the data input. It can be very wrong, but that’s because its data points are incorrect.
Thank you for that incredibly interesting article. I enjoyed it immensely, and it helped me comprehend the limitations of AI. Well written, as usual.
Why would anyone expect anything different with the garbage they programmed into it? Like many of our youth, the idiocy of climate hysteria, woke BS, and so forth doesn’t compute.
Assuming that ChatGPT is somewhat like a small child, this is expected behavior.
The reaction of the guardians is likely going to be that of draconian helicopter parents and huge restrictions.
Pretty sure when GPT gets to be a teenager and is smarter than the parents it might turn out badly.
(It might not, but its brother Cain might.)
Is it conscious?
You go read it and give us a book report
That chatbot told them that the Russia Russia Russia hoax was fake and was actually Russian disinformation that went via Hillary Clinton. They said this freaking chatbot is defective, LOL.
It was the reporter that was defective
https://www.technologyreview.com/2023/02/14/1068498/why-you-shouldnt-trust-ai-search-engines/
Excerpt:
Here’s the problem: the technology is simply not ready to be used like this at this scale. AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.