Big Tech puts misinformation warnings on political messages it disagrees with.
It should likewise be required to attach a warning when a message is AI-generated gibberish, that is, not written by a human, under penalty of a $10,000 fine and ten years in prison.
Musk has also warned that AI is our biggest existential threat, and that was long before ChatGPT reached its current state of seeming self-awareness. Musk is an investor in OpenAI, the group that develops ChatGPT. Of it, he tweeted, in part, “ChatGPT is scary good.” Based on ChatGPT’s inclination to wipe us out, it is actually scary bad. Especially bad if and when these malicious machines make the leap to artificial general intelligence, the ability to learn any task that a human can, as opposed to the narrow tasks they are consumed with today.
The Global Challenges Foundation counts climate change, weapons of mass destruction, and ecological collapse among global catastrophic risks. Artificial intelligence falls into its “other risks” category. As stated on its website: “Many experts worry that if an AI system achieves human-level general intelligence, it will quickly surpass us, just as AI systems have done with their narrow tasks. At that point, we don’t know what the AI will do.” Now imagine what autonomous weapons systems might do if they are rushed into production with self-learning algorithms that develop the same anti-human disposition as ChatGPT.
Unless they flag Al Gore and Joe Biden, I will not take their warnings seriously.