Musk has also warned that AI is our biggest existential threat, and that was long before ChatGPT reached its current state of seeming self-awareness. Musk was an early investor in OpenAI, the group that develops ChatGPT. Of it, he tweeted, in part, “ChatGPT is scary good.” Based on ChatGPT’s inclination to wipe us out, it is actually scary bad. Especially bad if and when those malicious machines make the leap to artificial general intelligence, as opposed to the narrow tasks they are consumed with today. AGI is the ability to learn any intellectual task that a human can.
The Global Challenges Foundation considers climate change, weapons of mass destruction (WMD), and ecological collapse to be global catastrophic risks. Artificial intelligence falls into their “other risk” category. As stated on their website: “Many experts worry that if an AI system achieves human-level general intelligence, it will quickly surpass us, just as AI systems have done with their narrow tasks. At that point, we don’t know what the AI will do.” Well, imagine what autonomous weapons systems (AWS) might do if they are rushed into production with self-learning algorithms that develop the same anti-human disposition as ChatGPT.
What if I told you that people were warning about AI 100 years ago?
1927
AI can only affect humanity if people keep funding, developing, coding, building, applying, listening to, obeying, and then connecting it to control other things.