You need to understand what AGI means for research and development and, downstream from that, for the whole world. All of a sudden, everything we have taken for granted about the near future has changed. It does not matter whether we like it or not; our reality is about to change in a profound way. Right now you are out of your depth in these things. Learn more about what AGI means to our country and to the world. Understand. Everyone has a right to their opinion, even an uninformed opinion, but in the life-or-death circumstances we are about to find ourselves in, it is better to understand the issues involved.
The likelihood is that we will freeze AI development at some point this year, one way or another. It is unsafe to continue, and not in our interest. Even if we froze all new research today, our current AI will GREATLY accelerate scientific development in dangerous and unexpected ways. Everything changes starting this year. I wish this were not so. But it is, and there is literally nothing we can do to stop our world from undergoing massive change. What we may be able to do is reduce that disruptive change and delay its full impact on humanity for a time. Again, I wish this were not so, but my opinion on this, like yours, means practically nothing.
What a backward pitch. That has been the progressivist mantra for 140 years! So has the fear-mongering you are hawking with it. Every question I've asked of ChatGPT, the bot has failed, big time. That's the problem with systems built upon people's stupid beliefs instead of fact. When people figure out how unreliable the AI is going to be, guys like you who've sold your freedom out of fear will be looking pretty stupid. Please, don't help take the rest of us with you.
If I could only tell you the myriad things we don't know about common garden soil, or about how to restore functioning biodiversity. Not only that, but those skills are WAY beyond any portable robot. It is an industry waiting to happen, and AI can provide the skilled labor and information backup. There is so much to invent in the way of tools, portable habitation, and exotic-species control. Yet in the meantime I have yet to see my junk mail addressed correctly.
How do we hold systems such as you propose accountable for screwing up? And when they do, the scale will be so enormous that morons like you will argue that 'we can't go back.'
So much for progress. Perhaps we should set these bots to untangling tort law. That'll keep them too busy to cause a real problem. "Compute to the last digit the value of pi."
Where's Spock when you need him?
"Artificial General Intelligence" (AGI) does not exist at the present time. We have no idea how to create such entities or even if it is possible to do so. It is a valid open question for researchers.
But what it actually means in most cases is a snow job to convince gullible authorities to fund ever-increasing grants for research of doubtful quality or usefulness.
The Large Language Model (LLM) class of AIs can regurgitate words from its training data in many unexpected combinations when given a carefully structured inquiry dialog. This looks impressive because of the incomprehensible number of possible output combinations.
But there are no genuinely novel ideas in these texts. The quality of the output depends on the quality of the inputs, the training restrictions, and the content and sequence of the inquiry dialogs. The combinatorics of these factors is beyond human comprehension, which leads us to a fallback position of magical thinking: that we are dealing with artificial intelligence. We are not.
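The combinatorial scale mentioned above can be made concrete with a back-of-envelope calculation. The vocabulary size and sequence length below are illustrative assumptions (typical for current LLMs), not figures from the comment:

```python
import math

# A typical LLM vocabulary (~50,000 tokens) and a modest 100-token reply
# already allow an astronomically large number of distinct output sequences.
VOCAB_SIZE = 50_000   # illustrative assumption
SEQ_LENGTH = 100      # illustrative assumption

# Number of possible sequences is VOCAB_SIZE ** SEQ_LENGTH.
# Count its decimal digits via logarithms instead of materializing it.
digits = math.floor(SEQ_LENGTH * math.log10(VOCAB_SIZE)) + 1
print(digits)  # 470
```

A 470-digit number of possible replies dwarfs, for comparison, the roughly 10^80 atoms in the observable universe, which is why the output space feels incomprehensible even though each individual choice is mechanical.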
LLM-based AIs can produce lengthy essays on any topic much faster than any human being or graduate student. That does not make them entities of "Artificial General Intelligence". Generating plagiarized essays filled with logic and factual errors is not "AGI".
For that matter, what exactly is the definition of AGI?
From Wikipedia:
Artificial general intelligence (AGI) is defined as a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks.
What does that even mean? None of the current AI models has any general base of ontology (definitions of reality) or any cognitive abilities at all. Do forgive me for citing Wikipedia as an authoritative source for anything, but academic publications are no better.
Writing bad essays 100x faster than a human being is not particularly useful or threatening to the world order.
LLM systems are really good at finding items of interest in very large text databases, if the training data is curated to contain verifiable and factual content and if the investigator knows how to make a valid series of structured inquiries. Kind of like a super-search engine. This has enormous real value even now. It will not lead to the end of the world as we know it.
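The "super-search engine" idea can be sketched in miniature: rank documents in a small curated corpus by how many query terms each contains. The corpus and scoring rule here are toy assumptions; real LLM-based retrieval uses learned embeddings rather than word overlap, but the retrieval principle is similar:

```python
# Toy corpus standing in for a curated, verifiable text database.
corpus = {
    "doc1": "soil biodiversity restoration requires local knowledge",
    "doc2": "large language models predict the next token in a sequence",
    "doc3": "tort law is a tangle of precedent and statute",
}

def search(query: str, docs: dict) -> list:
    """Rank documents by count of shared query terms, best match first."""
    terms = set(query.lower().split())
    scores = {
        name: len(terms & set(text.lower().split()))
        for name, text in docs.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("restoring soil biodiversity", corpus)[0][0])  # doc1
```

The value, as the comment says, comes from the curation of the database and the structure of the inquiry, not from any understanding inside the system doing the matching.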
If real AGI entities can be created, they will not be based on LLMs. And they had better have an "off" switch.