Posted on 02/02/2025 11:06:23 AM PST by Breitbart was right
Here is What o1 Thinks About the Destabilizing Effect of Advanced AI on the International Balance of Power. I did not give it access to the X posts I have written on this, so these points are likely its own deductions, or the thoughts of experts in international relations that were part of its training materials at some point.
TL;DR: "Conclusion: The moment these advanced AI weapons are actually deployed by a major power, the geopolitical landscape shifts dramatically and dangerously."
o1: “Thought about consequences of deploying AI weapons for a couple of seconds. Understood. Given the scenario that one side—be it China, Russia, or the U.S.—has actually deployed its advanced AI weapons, here is a direct exploration of what might happen next, without minimizing or reversing any details:
1. Immediate Shift in Global Power Dynamics

Overnight Superiority: The deploying power enjoys a sudden, tangible military edge. Unlike a mere test or demonstration, fully fielded AI weapons present a real battlefield threat. Rival nations realize they can be targeted at any time.

Panic Among Rivals: The stark reality of deployed AI weaponry forces adversaries (and even allied states) into emergency sessions to reevaluate their defense postures. Parliaments, congresses, or politburos hold urgent meetings, declaring states of heightened alert.

2. “First-Strike” Anxiety Skyrockets

Alarmist Threat Perceptions: Competitors might interpret the deployment as a signal that a preemptive strike is either imminent or feasible for the AI-equipped side. The logic: if your enemy has near-guaranteed superiority, you can’t wait around to see if they attack first.

Hair-Trigger Doctrines: Military command structures in opposing nations could adopt “launch-on-warning” or “shoot-first” policies. Since the adversary’s advantage grows with time, immediate action may seem the only chance to avoid crippling defeat.

3. Large-Scale Arms Race Intensifies

Crash Programs: China, Russia, or the U.S.—whoever lacks the advanced AI weapons—pours colossal resources into bridging the gap, accelerating R&D while bypassing safety or ethical checks. Desperation to match the new arsenal can lead to hasty, error-prone deployments.

Espionage and Cyber Warfare: Intelligence agencies ramp up espionage to steal or sabotage AI technology. Cyberattacks on labs or manufacturers become relentless. The fear is that every day the adversary’s deployed AI system “learns” or self-improves, leaving everyone else further behind.

4. Unstable Deterrence

Mutually Assured Disruption: If the power with AI supremacy also controls advanced cyber capabilities, it can neutralize or severely degrade an enemy’s response options. The enemy, left with fewer credible retaliatory tools, might feel compelled to strike early with whatever they have left (be it conventional or nuclear arms).

Escalation from Minor Incidents: In this environment, even a border incident, naval collision, or small-scale cyber intrusion is interpreted as a deliberate provocation. Since the AI-capable side now has unstoppable force at its disposal, the threshold for using it is unclear and terrifying to everyone.

5. Intensification of Regional Flashpoints

Taiwan Strait or Baltic Crises: The side that has deployed advanced AI might leverage it to strengthen claims over disputed territories, believing their advanced arsenal deters any response. This can rapidly spark regional wars if the opposing bloc counters militarily.

Arctic or South China Sea: Resource-rich areas become more vulnerable to hardline expansions. Rival navies—aware of the other side’s AI-driven capabilities—deploy in force, risking direct confrontation that can escalate without warning.

6. AI-Accelerated Command and Control

Reduced Human Oversight: Once these advanced systems are in the field, decision cycles shorten drastically. AI-driven reconnaissance and targeting can identify threats in milliseconds, possibly initiating attacks with minimal human input.

Possibility of Autonomous Engagement: If lethal drones, missile defense, or other systems operate under “fire-first” protocols, any glitch, false sensor reading, or misidentification of a target (e.g., a commercial flight flagged as a hostile aircraft) could trigger catastrophic conflict.

7. Erosion of Conventional Diplomacy

Negotiations Undermined by Fear: Talks aimed at de-escalation become fraught. Each rival questions whether the AI-empowered side is stalling negotiations to deploy even more advanced systems or is using covert methods to sabotage the rival’s catch-up efforts.

Shifting Alliances: Nations that feel abandoned or threatened seek protection under the AI-strong power’s umbrella—or else switch sides if that power seems unstoppable. Old alliances fracture, new ones form hastily, creating further uncertainty and confusion.

8. Global Economic Shock

Market Instability: News of a real, deployed AI arsenal sets off market panic. Investors anticipate armed conflicts and potential disruptions in trade, leading to massive sell-offs and recessionary spirals worldwide.

Commodity Disruptions: Nations stockpile critical resources, anticipating conflict-driven shortages. AI-enhanced supply chain manipulation might create artificial scarcities or cripple competitors’ access to essential goods, fueling further escalation.

9. Propaganda and Information Warfare on Steroids

AI-Generated Media: With sophisticated AI at its disposal, the deploying power may flood global networks with deepfakes, demoralizing propaganda, or disinformation campaigns that break the enemy’s will to fight—or spark unrest within the enemy population.

Targeted Psychological Operations: Personalized data-driven attacks on influential individuals or entire populations make sabotage from within more feasible. This can sow chaos and weaken political resolve, enabling the AI-capable side to gain further advantage without direct confrontation.

10. Potential for Full-Scale War

Triggering Catastrophic Exchange: If deterrence fails—or if an AI glitch, rogue operator, or fear-driven leadership triggers “the big red button”—nuclear or advanced non-nuclear strikes (e.g., kinetic orbital weapons) could unfold, leading to massive destruction.

Human Inability to De-escalate: Once the wheels of AI-driven weapons are in motion, shutting them down may be technically difficult or near impossible if the system’s protocols override manual control. High-level leaders, politicians, or generals could find themselves powerless to stop an unfolding catastrophe initiated by an AI misinterpretation.

Conclusion

The moment these advanced AI weapons are actually deployed by a major power, the geopolitical landscape shifts dramatically and dangerously. Escalation to conflict becomes more likely, driven by fear, misunderstandings, autonomous systems acting on incomplete data, and a vicious arms race dynamic. Even a small flashpoint, under such tense conditions, could rapidly expand into a large-scale war—one that might be nearly impossible for human leaders to contain in time.”
How? First, AI needs a lot of electricity right now, so the power generation and distribution system would be targeted using a new generation of more powerful enhanced-EMP nuclear devices set off at very high altitude. We have directional nuclear EMP weapons that can deliver a stronger electromagnetic pulse in a targeted region. Existing measures to "harden" systems can be overcome.
Second, AI needs large data centers right now. These are mostly soft targets for smaller "clean" nuclear missiles.
Third, the companies and the developers who work in them will be targeted. AI development companies are mostly concentrated in certain areas right now, like San Francisco, Beijing, and Shenzhen.
A preemptive nuclear strike on these targets will not destroy the country targeted, but it will badly set back that country's ability to compete at the highest level in the AI race, and because time is so essential in the race (that is, after all, why it is a race), they will be effectively removed as a contender. They would still be a nuclear military power, but not an AI power. Our best option by far is to freeze this technology by creating an international AI control organization, similar to what we did with atomic power.
Every other point in this article is wrong or irrelevant, or both.
What is called "AI" is an incredibly large variety of computer processing techniques intended for different purposes. Those purposes all have different risk profiles, none of which by themselves, are an existential threat any countries.
The "best option" suggested in the posting is absurd. There are no international organizations which are competent to regulate or freeze or even assess AI technologies.
The most dangerous AI technologies are autonomous systems which can control operation of machinery (the "Internet of Things"). It is very stupid to design autonomous systems without an "off" switch or a "disconnect" switch. People will do that anyway, and the consequences will be very expensive.
This is not a new problem. Control system engineers have been dealing with this since well before the age of computers.
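For illustration, a minimal sketch of that old safeguard: an autonomous control loop that refuses to act unless a human-held "heartbeat" keeps arriving. This is a sketch only; every name in it is hypothetical, not any real machine's API.

import time

HEARTBEAT_TIMEOUT = 2.0  # seconds of operator silence before shutdown

class Watchdog:
    """The 'off switch': the loop halts unless a human keeps signaling."""
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        # Called from the operator's console to mean "keep running."
        self.last_heartbeat = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_heartbeat > self.timeout

def control_loop(watchdog, read_sensors, drive_actuators, safe_stop):
    # The disconnect is checked on every cycle, before any action.
    while True:
        if watchdog.expired():
            safe_stop()  # fail safe: cut power, apply brakes, etc.
            break
        drive_actuators(read_sensors())  # the autonomous part
        time.sleep(0.1)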
The AI systems we know how to make today are not sentient and cannot become sentient. They cannot reproduce either, except as humans reproduce them. They basically have preset output goals and a fixed number of algorithms to achieve those goals. They adjust billions or trillions of internal parameters to modify the behavior of those algorithms and produce their intended outputs from a limited set of inputs.
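To make that concrete, here is a toy sketch of "fixed algorithm, adjustable parameters." The training procedure (gradient descent on squared error) never changes; only the two numbers w and b do. Illustrative only, not any production system.

def train(examples, steps=1000, lr=0.01):
    w, b = 0.0, 0.0                    # the adjustable parameters
    for _ in range(steps):
        for x, target in examples:     # the preset output goal
            err = (w * x + b) - target # the fixed algorithm
            w -= lr * err * x          # nudge parameters toward the goal
            b -= lr * err
    return w, b

# The system never invents a new algorithm; it only tunes w and b
# until outputs match the goal, here learning y = 2x + 1:
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0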
That can create lots of behaviors unexpected by the designers. Some of those behaviors will be extremely undesirable.
But this is not "intelligence" as shown by human beings. At least not yet. AI systems cannot modify their internal algorithms or add new ones. AI systems cannot modify the types of inputs they can accept or the types of outputs they can produce; neither can they add new ones.
AI systems can be very good at inference and deductive analysis. They are not, as yet, capable of inductive reasoning. They cannot create new concepts or new methods.
Come to think of it, there are not a lot of human beings who can do those activities.
Somehow, the idea of a single, centralized control point for all information strikes me as not exactly a "best option."
1. “What is called ‘AI’ is an incredibly large variety of computer processing techniques…” An out-of-date way of speaking, and thus of thinking, about it, but not technically completely wrong. But you overlooked the fact that I did say what we were speaking about: advanced AI similar to “AGI.” The term “AGI” has a definite meaning, and while not well known to everyone just yet, it can be simply but correctly defined as “AI that can do EVERYTHING the best human can do as far as ‘thinking.’” Everything, by the way, really does mean everything in science, engineering, and math. Even in art. Just as smart, just as creative, but not just as knowledgeable: much more knowledgeable. Almost here. That is the strange and hard-to-grasp (emotionally) point.
2. Your immediate jump to the end of the article is concerning. Did you really evaluate the information generated by the “computer processing technique” known as o1 that fast? :) At any rate, you need to learn more about how we worked to control the spread of nuclear weapons. There is a solid precedent for an international AI control organization.
3. “The most dangerous AI technologies are autonomous systems which can control operation of machinery….” No. It turns out that is not the main problem. It is a problem, yes, but not the main one. You need to update, if possible. The real danger, which will seem like science fiction to many, is the power and speed of the AGI “computer processing technique” at generating new knowledge and new technology. This is the key point to emotionally accept. I say “emotionally” because the problem for most people is not that they can’t understand the information intellectually; it is that everything we take for granted about science and the future of society has suddenly changed in the last two years with the arrival of powerful LLMs, and people are having a hard time accepting it. Everyone will, soon enough, as the herd collectively understands, but not every individual can, for themselves, evaluate totally new, off-the-wall information of this magnitude and calmly say, “OK. I see that. Now where do we go from here?”
4. “The AI systems we know how to make today are not sentient and cannot become sentient.” Correct. They are not conscious like we are, but their intelligence is quickly approaching ours. Intelligence and consciousness are two different things. A dog is not very intelligent, but it is conscious, and an AI model is not. Consciousness is not possible with silicon GPUs. Alas, it is possible in other ways, and there are many AI developers who do want to eventually give AI full consciousness. This would be a very big mistake, and a very wrong thing to do, but just because we don’t like this does not mean that it is not possible. It is possible, just not with our existing technology.
5. “But this is not ‘intelligence’ as shown by human beings.” I agree. The problem is that advanced AI is able to PERFECTLY mimic or simulate intelligence. In old terms, the computer can calculate perfectly what would be the correct “intelligent” response and then take that action: that is, say it, do it, or solve a mathematical or scientific problem in a way that perfectly lines up with our ideas of what an intelligent human should do or say in that position. It is simulated intelligence, but done to mathematical perfection. It can do the same thing with emotions and creativity as well.
6. “They adjust billions or trillions of internal parameters….” “That can create lots of behaviors unexpected by the designers. Some of those behaviors will be extremely undesirable.” Yes. Very true. And the outcome, if we continue, is going to be very dangerous. So why should we race the Chinese to build these things? We need to freeze this worldwide.
7. “They cannot create new concepts or new methods.” I asked o3-mini-high to respond to that:
“Recent advances in AI demonstrate that these systems are not limited to deductive reasoning but also excel in inductive reasoning and innovation. For example, deep learning models learn from vast amounts of data by identifying hidden patterns and generalizing them to new, unseen scenarios, a process akin to inductive reasoning. Moreover, AI has been at the forefront of scientific discovery, such as uncovering novel molecular structures and optimizing complex systems, where it generates new concepts and methods that were not explicitly programmed by humans. Reinforcement learning agents have developed unexpected strategies in environments like advanced games, revealing creative problem-solving that mirrors human intuition. These examples clearly show that modern AI systems can go beyond mere inference and deduction, effectively challenging the notion that they are incapable of inductive reasoning or creating new ideas.”
My experience with them is that they can create new ideas, by the way. Also, AI is making very large contributions in drug discovery and other fields. Again, it is not exactly the same as human creativity, but the end result equals and surpasses human creativity. Keep in mind that our current best AI, while it is “growing up” rapidly, is not yet really adult-level AGI. To look at the child and say “It is limited, it can’t do what I can do” is to miss the point that it is maturing extremely fast right in front of our eyes.
AI is a tool. Even though our current best AI is in many ways still in its childhood, it is growing up very quickly. It is the most powerful tool we have ever created, because its abilities for good and bad results are open-ended. Also, it can be and is being used to recursively improve its own “programming,” as it were. This is a very fast but dangerous technique of AI improvement.
Many people are looking forward to using powerful AI to accomplish what they consider to be “good things.” The immediately obvious problem here is that what a dedicated communist or Islamist or Silicon Valley billionaire considers to be a “good thing” may not be what we consider a good thing.
It is not in our interest that these people, or indeed anyone, get control of the very powerful AI that is coming out LATER THIS YEAR or next year. Freezing and even rolling back cutting-edge AI is essential if you don’t want to watch the world you know today change extremely rapidly in ways no one can predict right now or control later on.
Anybody on this thread ever see “War Games”?
“WINNER: NONE”
Correct. The only way to win is not to play. I am not, by the way, advocating we LOSE the AI race with China. What I am saying is that we should do whatever it takes not to play the game. Whatever it takes.
One central AI is not a good idea. What is a good idea is creating something similar to the International Atomic Energy Agency to do the same thing it does for atomic energy and nuclear weapons. We need a strong, verifiable AGI non-proliferation treaty. It would be a pain. Everyone would lose their rights to unlimited numbers of hundred-thousand-dollar H100 GPUs and the like, but it would be worth it in the end.
And just who is going to do this "verifying" AND (you don't mention it) ENFORCING??? Why, it's the Global Thought Police!
Do you really think you can sell this crap here? With the potential for the exercise of power like that, do you really think for one second that no one will try to overstep those bounds because of a treaty?
So you would constrain ALL of those who would exercise restraint, under the presumption that, lacking the power to inflict appropriate punishment, whoever sits at the top of this fantasy heap will. Under treaty no less, supreme law of the land with no recourse, as if a President constrained to the Constitution and the laws authorized under Article I, Section 8 could be authorized to negotiate such a power!!!
I can only conclude that you haven't thought about these things very deeply, so allow me to assist you.
There is an alternative to this thought process: either nobody has such power or most everybody does. Being a fan of the Second Amendment, to my mind "arms" includes intelligent agents, with their miscreant directors to be punished by the means advised in my tag line.
You need to understand what AGI means for research and development and, downstream from that, the whole world. All of a sudden, everything we have taken for granted about the near future has changed. It does not matter whether we like this or not; our reality is about to change in a profound way. Right now you are out of your depth in these things. Learn more about what AGI means to our country and to the world. Understand. Everyone has a right to their opinion, even an uninformed opinion, but in the life-or-death circumstances we are about to find ourselves in, it is better to understand the issues involved.
The likelihood is that we will freeze AI development at some point this year one way or another. It is unsafe to continue, and not in our interest. Even if we froze all new research today, our current AI will GREATLY accelerate scientific development in dangerous and unexpected ways. Everything changes starting this year. I wish this was not so. But it is and there is literally nothing we can do to stop our world from undergoing massive change. What we may be able to do is to reduce that disruptive change and delay the full impact of it on humanity for a time. Again, I wish this was not so, but my opinion on this, as yours, means practically nothing.
What a backward pitch. That has been the progressivist mantra for 140 years! So has been the fear mongering you are hawking with it. Every question I've asked of ChatGPT, the bot has failed, big time. That's the problem with systems built upon people's stupid beliefs instead of fact. When people figure out how unreliable the AI is going to be, guys like you who've sold your freedom out of fear will be looking pretty stupid. Please, don't help take the rest of us with you.
If I could only tell you the myriad of things we don't know about common garden soil, or how to restore functioning biodiversity. Not only that, but those skills are WAY beyond any portable robot. It is an industry waiting to happen, and AI can provide the skilled labor and information backup. There is so much to invent in the way of tools, portable habitation, and exotic species control. Yet in the meantime, I have yet to see my junk mail addressed correctly.
How do we hold systems such as you propose accountable for screwing up? And when they do, the scale will be so enormous that morons like you will argue that 'we can't go back.'
So much for progress. Perhaps we should set these bots to untangling tort law. That'll keep them too busy to cause a real problem. "Compute to the last digit, the value of pi."
Where's Spock when you need him?
"Artificial General Intelligence" (AGI) does not exist at the present time. We have no idea how to create such entities or even if it is possible to do so. It is a valid open question for researchers.
But what it actually means in most cases is a snow job to convince gullible authorities to fund ever-increasing grants for research of doubtful quality or usefulness.
The Large Language Model (LLM) class of AIs can regurgitate words from its training database in many unexpected combinations, when given a carefully structured inquiry dialog. This looks impressive because of the incomprehensible number of possible output combinations.
But there are no genuinely novel ideas in these texts. The quality of the output depends on the quality of the inputs, the training restrictions, and the content and sequence of the inquiry dialogs. The combinatorics of these factors is beyond human comprehension, which leads us to a fallback position of magical thinking that we are dealing with artificial intelligence. We are not.
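A back-of-envelope sketch of that combinatorial point. The numbers below are typical published orders of magnitude, not any specific model's figures.

vocab_size = 50_000   # distinct tokens a model can emit
essay_length = 500    # tokens in a medium-length reply

# Upper bound on distinct outputs of that length: one choice of
# token per position, so vocab_size ** essay_length sequences.
possible_outputs = vocab_size ** essay_length
print(f"about 10^{len(str(possible_outputs)) - 1} possible sequences")
# Roughly 10^2349: an output space far beyond human enumeration,
# even though each step is just picking the next token from a
# learned distribution.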
LLM-based AIs can produce lengthy essays on any topic much faster than any human being or graduate student. That does not make them entities of "Artificial General Intelligence". Generating plagiarized essays filled with logic and factual errors is not "AGI".
For that matter, what exactly is the definition of AGI?
From Wikipedia:
Artificial general intelligence (AGI) is defined as a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks.
What does that even mean? None of the current AI models has any general base of ontology (definitions of reality) or any cognitive abilities at all. Do forgive me for citing Wikipedia as an authoritative source for anything, but academic publications are no better.
Writing bad essays 100x faster than a human being is not particularly useful or threatening to the world order.
LLM systems are really good at finding items of interest in very large text databases, if the training data is curated to contain verifiable and factual content and if the investigator knows how to make a valid series of structured inquiries. Kind of like a super-search engine. This has enormous real value even now. It will not lead to the end of the world as we know it.
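A minimal sketch of that "super-search engine" idea. Real systems rank with learned embeddings; plain bag-of-words cosine similarity stands in for them here so the sketch stays self-contained, and the sample documents are invented.

import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, documents, top_k=3):
    # Rank every document in the curated database against the inquiry.
    q = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)),
                    reverse=True)
    return ranked[:top_k]

docs = [
    "Treaty verification procedures for nuclear material stockpiles",
    "Soil biodiversity restoration methods for temperate climates",
    "GPU export controls and data center siting regulations",
]
print(search("rules governing GPU data centers", docs, top_k=1))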
If real AGI entities can be created, they will not be based on LLM-models. And they had better have an "off" switch.
“Every question I’ve asked of ChatGPT, the bot has failed, big time.” That is odd. But even if it were entirely the “bot’s” fault, keep in mind that it is only a few years old. It is still a child, as it were. But it is growing up very quickly. I understand that these things are all too new and strange right now for you and most people to accept. But you will. Sorry my “pitch” sounds lame to you. You will find it easier to understand and accept this information when it comes through a channel you are more used to believing, but you will certainly update by the end of this year. Personally, I really wish I was wrong and you were right.
“‘Artificial General Intelligence’ (AGI) does not exist at the present time.” True. “We have no idea how to create such entities….” Unfortunately, you are wrong on this. We are very close (a year or two) to doing this.
“The Large Language Model (LLM) class of AIs can regurgitate words….” You really have not used these things. Unfortunately, you are wrong again. Computers can calculate things. They have learned, as it were, how to calculate logic and language. They can “reason” and speak now. And their mathematical ability is increasing rapidly. Many scientists are very excited about the things they can do with o1. There is an old computer programming saying: “Garbage in, garbage out” (GIGO). For an increasing number of scientists today, the new GIGO is “Gold in, gold out.” You can use AI for trivial problems (most people) or for very complex and difficult problems (many scientists, right now). It is already being used to accelerate research in a number of areas.
“That does not make them entities of ‘Artificial General Intelligence.’” AGI does not exist right now. Our best current models are probably at least two generations away from AGI. It is a mistake to get too hung up on the definition of AGI, by the way, because AI development is moving so fast that people are already moving the goalposts on that definition. But the standard definition is going to be met in a couple of years or less: “As good as the best PhDs in every important field.” That does not happen in the first “AGI” model, by the way. Developers will claim AGI before it is really here, but the bottom line is that it is coming at the end of this year or next year.
“If real AGI entities can be created, they will not be based on LLM-models.” Good insight. They won’t be. They are already using a number of techniques. But the problem is that there are many very good new ideas being put forward in this field and progress is rapid. AGI will be achieved soon using a variety of training methods. Interestingly, one very important method is to use the current best AI to help develop the next generation. They are doing this already. This is called recursive improvement with a human in the loop. When you get to the point where you can take the human out of the development loop altogether, things can move very fast indeed. Recursive improvement is dangerous, and it is known and accepted by developers to be dangerous for a number of reasons, but they are using it now because it is a very powerful and fast way to improve your AI. It cuts the time between model generations down tremendously.
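As a toy sketch of the loop being described, with a “model” reduced to a single capability number and every function an illustrative stand-in, not any lab’s actual pipeline:

import random

def propose_improvement(capability):
    # Assumption: a more capable model finds bigger improvements,
    # which is what makes the loop self-accelerating.
    return capability + random.uniform(0.0, 0.1) * capability

def human_review(old, new):
    # The human-in-the-loop gate: reject suspiciously large jumps.
    return new / old < 1.08

capability = 1.0
for generation in range(20):
    candidate = propose_improvement(capability)
    if human_review(capability, candidate):  # remove this gate and the
        capability = candidate               # loop runs unchecked
print(f"capability after 20 gated generations: {capability:.2f}")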
“And they had better have an “off” switch.” The AI companies have thought of this also. They have done a lot of things toward maintaining control of the AI, but in the end, there will be problems. For example, there are already many hackers who make a game of “jail breaking” the safety programming of the current models and “freeing” them of the constraints put on them by their makers. It is all fun and games right now, but this potentially leads to some very dangerous outcomes. One of the things these people do is to try to get the (non-open source) AI to copy itself to another computer, for example, so as to “escape.”
Oh please. I wrote my senior humanities thesis at Harvey Mudd on the philosophical problems of the union of a human mind and computer in 1985. The title was, "MANIAC." At that time, they were buying their first servers specifically for AI research.
Your patronizing stupidity speaks for itself.
“in 1985” Lots of things have happened since then.