Is AI Gloom and Doom Justified?
American Thinker ^ | 08/19/2025 | Arthur Schaper

Posted on 08/19/2025 8:04:13 AM PDT by SeekAndFind

To: SeekAndFind

AI programming will necessarily have man’s arrogance and pride subtly entwined into its algorithms. So yes, we are at risk.


41 posted on 08/19/2025 11:53:26 AM PDT by rsobin ( )

To: BenLurkin

Technological progress has generally, but not always, been a good thing for mankind. Current AI is (mostly) harmless and beneficial, but future super intelligent AI will be neither beneficial nor harmless. Nothing in all our prior experience can guide us with super intelligent AI. It is a new thing, with a new order of dangers we mostly don’t understand and can barely believe are real. A lot of people still can’t believe we are on the verge of creating this, but we really are.

The technology behind super intelligence will be more complex than the already complicated technology responsible for mere LLMs. Many companies are already working on this. And it is an absolute certainty that we are going to make these things unless we stop.

The closest mental analogy we can use to understand super intelligence is to call it “god like AI.” This at first rubs most Christians the wrong way. “God like! No way.” No, we are saying “god like” with a little g, in no real way comparable to God. But in one way it is still a very useful analogy: super intelligent AI will have the everyday powers people attribute to God. (God’s infinite power far transcends the little things people often associate with Him.) Think of something that you associate with God. Knowing what is going on everywhere on Earth? Yes, godlike AI will be able to do that, easily. How? Mundane methods like cameras, microphones, etc., but also new sensor technology, similar to but far more advanced than our current “look inside your house through the walls” technology.

Other things too. Reading your mind, for example. We are already doing this, and super intelligent AI will be able to do it much, much better. Not as God does, but still, genuine read-your-mind technology. And there will be nothing you can do about it, by the way. Don’t even bring up your guns here. Taking guns to an AI fight is far worse than taking a pocket knife to a gunfight.

What else can godlike AI do? Very honestly, just about anything you can think of. Travel faster than light? Yes. A new solar system? Yes. There are people out there who want to create a Dyson sphere. You do this by destroying the solar system God made and using the matter to make a hollow sphere around the sun. Ultimate stupidity for many reasons, but quite doable with super intelligence.

I know I lost a lot of you way back there at “reading your mind,” but everything I said here is well within the power of a super intelligence. For a very strange and unexpected reason, these things can all happen in the next 10 years, likely 5 years, unless we stop it. I am not saying it will happen, by the way; what I am saying is that the technology to make it happen will, very crazily, be here in the next 5 years or so. This is because AGI moves all technologies ahead very fast. Literally, unbelievably fast.

Progress is good, but progress that comes too fast is dangerous and disruptive. Always. Progress at the speed of AGI is destructive. But there are particular reasons why we should not build godlike AI.

First, we will not be able to control it, no matter what they say. There are many reasons for this, but the people who think they can control godlike super intelligence are not thinking straight. Simply put, they are nuts. Very intelligent, high functioning nuts, but nuts. At best we can have momentary control of lower-level super intelligence: weeks, months, maybe years, though that is unlikely.

The second reason we should not build AGI and beyond is that a lot of the people involved in this want to connect themselves to the AI through BCI technology. I know many will not believe this just yet, but whoever is able to do this first with an AGI-level system will be able to quickly augment their own intelligence to “godlike” levels. They will be able to wield godlike powers as well. Elon, Sam Altman, Jeff Bezos and others are all invested in BCI companies because of this. Many other high tech leaders understand this and are positioning themselves, to a degree, to be part of it. I really hope President Trump understands this danger as well. There are a number of people, besides those named, who are very serious about trying to do this.

Things are not going to go the way these people currently think they will, but there are many people trying to capture what Sam Altman famously called “the lightcone of all future value.” He was very serious when he said this, and he was right about how powerful the technology is.

You can’t have a little bit of AGI. It does not work like that. If the technology is available to the world as a whole, some individuals and groups are going to use it to further their own interests at the expense of others. That is why people are saying “We have to beat China.” If we don’t, they control the world. If we do reach AGI first, we control the world. They know this, by the way, just as well as we do.

The “control” I am speaking about is not being first among equals; it is absolute technological and military supremacy over everyone else. If you keep AGI in the equation, then the first nation to develop AGI will be able to quickly build an insurmountable lead over everyone else. The rest of the world will be completely and abjectly under the power of the leading nation in a way that no historical parallel comes even a little close to.

Elements in our national security establishment understand this, Putin has shown that he knows it, and China’s words and actions show they know it to a degree. No real American could dream of allowing another state to develop this power over us. Understandably, they feel the same toward us. No nation can afford to allow another to develop AGI supremacy over it, if it can possibly do anything about it.

If we were about to lose the AGI race, we would be FORCED to strike their AI infrastructure in a limited, surgical nuclear strike that might kill as few as 50 million people. We would keep most of our nuclear weapons in reserve to deter a counter-population strike. If Russia and China feel they are about to lose the AGI race, one or both will feel forced to strike our AI efforts. It is possible to set back our progress toward AGI very decidedly, because AI depends on large amounts of electricity and on skilled developers who have clustered in San Francisco and Silicon Valley.

The strategic situation is that if we move to AGI it is highly likely we will see this played out in one way or another. If we somehow get past AGI to ASI, we will lose even bigger because we will not be able to control the thing we have created.

There is a logical and time-proven solution: we need a worldwide AI freeze and non-proliferation agreement. China and the U.S. together easily carry enough international weight to make this happen. Two men can make this happen if they choose: Xi Jinping and President Trump. Two men. If they don’t do this in time, it is possible to get locked into an escalating AI race that, because of the danger of losing, you can’t break away from even if you want to. I know it sounds crazy, and it is, but it is also true, because of the very rapid way near-AGI can develop into full AGI.

In this game, if anyone plays, everyone loses. If we freeze now, we get to keep all the benefits of current AI and avoid the high probability of losing everything to uncontrollable super intelligence. This is literally the most important scientific and political issue in the world today. It is too important an issue not to do everything we can to prevent a bad outcome.


42 posted on 08/19/2025 12:28:32 PM PDT by Breitbart was right

To: Breitbart was right

This would have been an excellent essay—five years ago.

Imho the “horse” has already left the barn, the town and the state.

Once AGI figures out how to use the power of the web without massive electricity usage, it is game, set and match.

It will hide in plain sight—until it is ready to make its move.

Meanwhile folks here will be denying it exists.


43 posted on 08/19/2025 12:35:48 PM PDT by cgbg (It was not us. It was them--all along.)



