Every other point in this article is wrong or irrelevant, or both.
What is called "AI" is an incredibly large variety of computer processing techniques intended for different purposes. Those purposes all have different risk profiles, none of which, by itself, poses an existential threat to any country.
The "best option" suggested in the posting is absurd. There are no international organizations which are competent to regulate or freeze or even assess AI technologies.
The most dangerous AI technologies are autonomous systems which can control operation of machinery (the "Internet of Things"). It is very stupid to design autonomous systems without an "off" switch or a "disconnect" switch. People will do that anyway, and the consequences will be very expensive.
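The "off switch" point can be made concrete in a few lines of code. This is only an illustrative sketch (the class and method names are invented, not from any real IoT framework): the control loop checks a disconnect flag on every cycle, so a human operator can always halt actuation.

```python
import threading


class AutonomousController:
    """Toy autonomous control loop with an explicit disconnect switch.

    Illustrates the design point above: any autonomous system should be
    built with an 'off' switch that the control loop cannot ignore.
    """

    def __init__(self):
        self._kill = threading.Event()  # the off switch
        self.cycles = 0                 # count of actuation steps taken

    def step(self):
        # Placeholder for reading sensors and actuating machinery.
        self.cycles += 1

    def run(self, max_cycles=1000):
        # The disconnect flag is checked before every single step.
        while not self._kill.is_set() and self.cycles < max_cycles:
            self.step()

    def disconnect(self):
        # Human-operated off switch: halts actuation at the next check.
        self._kill.set()


ctrl = AutonomousController()
ctrl.run(max_cycles=5)       # no operator intervention: runs to completion

halted = AutonomousController()
halted.disconnect()          # off switch thrown before start
halted.run(max_cycles=5)     # loop refuses to actuate even once
```

A `threading.Event` is used here so the same pattern would still work if `disconnect()` were called from another thread while `run()` was executing.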
This is not a new problem. Control system engineers have been dealing with this since well before the age of computers.
The AI systems we know how to make today are not sentient and cannot become sentient. They cannot reproduce either, except as humans reproduce them. They basically have preset output goals and a fixed number of algorithms to achieve those goals. They adjust billions or trillions of input parameters to modify the behavior of their algorithms and produce their intended outputs from a limited set of inputs.
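The distinction being drawn here, a fixed algorithm whose behavior is shaped only by adjustable numeric parameters, can be shown with a minimal sketch. This is an assumption-free toy (plain gradient descent on a two-parameter linear fit, nothing from any real AI system): the training procedure never changes; only the numbers `w` and `b` do.

```python
# The "fixed algorithm, tunable parameters" idea in miniature:
# gradient descent fits y = w*x + b to data generated by y = 2x + 1.
# The procedure below is preset; learning only adjusts w and b.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # targets from y = 2x + 1

w, b = 0.0, 0.0             # the only things that ever change
lr = 0.05                   # learning rate
for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # converges near w=2, b=1
```

A large model differs from this toy in scale (billions of parameters instead of two) and architecture, but not in kind: the update rule is fixed in advance by the designers.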
That can create lots of behaviors unexpected by the designers. Some of those behaviors will be extremely undesirable.
But this is not "intelligence" as shown by human beings. At least not yet. AI systems cannot modify their internal algorithms or add new ones. AI systems cannot modify the types of inputs they can accept or the types of outputs they can produce; neither can they add new ones.
AI systems can be very good at inference and deductive analysis. They are not, as yet, capable of inductive reasoning. They cannot create new concepts or new methods.
Come to think of it, there are not a lot of human beings that can do those activities.
1. “What is called “AI” is an incredibly large variety of computer processing techniques…” An out-of-date way of speaking, and thus of thinking, about it, but not technically completely wrong. But you overlooked the fact that I did say what we were speaking about: advanced AI similar to “AGI.” The term “AGI” has a definite meaning, and while not yet well known to everyone, it can be simply but correctly defined as “AI that can do EVERYTHING the best human can do as far as thinking.” Everything, by the way, really does mean everything in science, engineering, and math. Even in art. Just as smart, just as creative, but not just as knowledgeable: much more knowledgeable. Almost here. That is the strange and hard-to-grasp (emotionally) point.
2. Your immediate jump to the end of the article is concerning. Did you really evaluate the information generated by the “computer processing technique” known as o1 that fast? :) At any rate, you need to learn more about how we worked to control the spread of nuclear weapons. There is a solid precedent for an international AI control organization.
3. “The most dangerous AI technologies are autonomous systems which can control operation of machinery….” No. It turns out that is not the main problem. It is a problem, yes, but not the main one. You need to update, if possible. The real danger, which will seem like science fiction to many, is the power and speed of the AGI “computer processing technique” to generate new knowledge and new technology. This is the key point to accept emotionally. I say “emotionally” because the problem for most people is not that they can’t understand the information intellectually; it is that everything we take for granted about science and the future of society has changed in the last two years with the arrival of powerful LLMs, and people are having a hard time accepting it. Everyone will, soon enough, as the herd collectively understands, but not every individual can, for themselves, evaluate totally new, off-the-wall information of this magnitude and calmly say, “OK. I see that. Now where do we go from here?”
4. “The AI systems we know how to make today are not sentient and cannot become sentient.” Correct. They are not conscious like we are, but their intelligence is quickly approaching ours. Intelligence and consciousness are two different things. A dog is not very intelligent, but it is conscious, and an AI model is not. Consciousness is not possible with silicon GPUs. Alas, it is possible in other ways, and there are many AI developers who do want to eventually give AI full consciousness. This would be a very big mistake, and a very wrong thing to do, but just because we don’t like it does not mean it is not possible. It is possible, though not with our existing technology.
5. “But this is not “intelligence” as shown by human beings.” I agree. The problem is that advanced AI is able to PERFECTLY mimic or simulate intelligence. In old terms, the computer can calculate perfectly what the correct “intelligent” response would be and then take that action: say it, do it, or solve a mathematical or scientific problem in a way that perfectly lines up with our ideas of what an intelligent human should do or say in that position. It is simulated intelligence, but done to mathematical perfection. It can do the same thing with emotions and creativity as well.
6. “They adjust billions or trillions of input parameters….” “That can create lots of behaviors unexpected by the designers. Some of those behaviors will be extremely undesirable.” Yes. Very true. And the outcome, if we continue, is going to be very dangerous. So why should we race the Chinese to build these things? We need to freeze this worldwide.
7. “They cannot create new concepts or new methods.” I asked o3-mini-high to respond to that:
“Recent advances in AI demonstrate that these systems are not limited to deductive reasoning but also excel in inductive reasoning and innovation. For example, deep learning models learn from vast amounts of data by identifying hidden patterns and generalizing them to new, unseen scenarios, a process akin to inductive reasoning. Moreover, AI has been at the forefront of scientific discovery, such as uncovering novel molecular structures and optimizing complex systems, where it generates new concepts and methods that were not explicitly programmed by humans. Reinforcement learning agents have developed unexpected strategies in environments like advanced games, revealing creative problem-solving that mirrors human intuition. These examples clearly show that modern AI systems can go beyond mere inference and deduction, effectively challenging the notion that they are incapable of inductive reasoning or creating new ideas.”
My experience with them is that they can create new ideas, by the way. Also, AI is making very large contributions in drug discovery and other fields. Again, it is not exactly the same as human creativity, but the end result equals and surpasses human creativity. Keep in mind that our current best AI, while it is “growing up” rapidly, is not yet really adult-level AGI. To look at the child and say “It is limited, it can’t do what I can do” is to miss the point that it is maturing extremely fast right in front of our eyes.
AI is a tool. Even though our current best AI is in many ways still in its childhood, it is growing up very quickly. It is the most powerful tool we have ever created because its abilities for good and bad results are open-ended. Also, it can be and is being used to recursively improve its own “programming,” as it were. This is a very fast but dangerous technique of AI improvement.
Many people are looking forward to using powerful AI to accomplish what they consider to be “good things.” The immediately obvious problem here is that what a dedicated communist or Islamist or Silicon Valley billionaire considers to be a “good thing” may not be what we consider a good thing.
It is not in our interest that these people, or indeed anyone, get control of the very powerful AI that is coming out LATER THIS YEAR or next year. Freezing and even rolling back cutting-edge AI is essential if you don’t want to watch the world you know today change extremely rapidly in ways no one can predict right now or control later on.