Free Republic
The Endgame of Edgelord Eschatology. Powerful figures in Silicon Valley advocate a new-age religion that sees humanity as a transitional species — one whose time is almost up.
Truthdig ^ | Apr 25, 2025 | Émile P. Torres

Posted on 04/29/2025 2:38:29 PM PDT by Miami Rebel

Much of human history only makes sense in light of the persistent influence of eschatology — a word of Greek origin that means “the study of last things.” This is because many of the most significant historical movements and events have been driven, in one way or another, by eschatological beliefs. The Second World War was motivated in part by Hitler’s promise of a “thousand-year Reich,” an idea that mirrored Christianity’s prophecy of a future “Millennial Kingdom” following the Battle of Armageddon. A recent Pew Research Center survey found that roughly 4 in 10 American adults believe that we are “living in the end times,” while another reports that 41% of Americans expect that Jesus will either “definitely” or “probably” return by 2050. These eschatological convictions have had a significant impact on U.S. foreign policy in the Middle East, since our current world, in this narrative, cannot end without a Jewish state in Palestine. Some scholars have dubbed this demographic the “Armageddon Lobby.”

Recent years have seen the emergence of a second and arguably more powerful “Armageddon Lobby.” It resides in epicenters of power like Silicon Valley and embraces a “secular” vision of humanity’s grand future — though it shares many similarities with traditional religion, including a belief in “God” and the promise of immortality through cryonics. The renowned media theorist Douglas Rushkoff calls this vision “The Mindset,” whereas the computer scientist Timnit Gebru and I refer to it as “TESCREALism,” which I have discussed in previous articles for Truthdig. I will mostly stick with Rushkoff’s term in what follows.

Advocates of The Mindset claim that the world as we know it will soon expire. In its ashes, a new era dominated by digital lifeforms — that is, artificial intelligences — will emerge. These beings will stand to us as “gods,” though by merging our brains with AI or “uploading” our minds to computers, we may become gods ourselves: Homo deus — the “human god” — as Yuval Noah Harari puts it. “The most devout holders of The Mindset,” Rushkoff writes in reference to Mark Zuckerberg’s failed “metaverse” project,

seek to go meta on themselves, convert into digital form, and migrate to that realm as robots, artificial intelligences, or mind clones. Once they’re there, living in the digital map rather than the physical territory, they will insulate themselves from what they don’t like through simple omission. … As always, the narrative ends in some form of escape for those rich, smart, or singularly determined enough to take the leap. Mere mortals need not apply.

Importantly, this isn’t just a utopian dream among the super-rich built on eugenic fantasies of transcending the biological realm; it is a claim about what is cosmically inevitable. Our world is becoming more and more digital. We are increasingly embedded in a vast network of computational artifacts, some of which are now impinging on the intimate boundaries of ourselves as they become integrated into our brains and bodies. (Consider the pacemaker and the neural implant.) If you extrapolate this trend into the future, they argue, it’s just obvious that our current world of biological beings will soon be wholly replaced by a new world of digital ones. Whether we like it or not, the eschatological destiny of our species is to birth our own artificial successors, currently gestating in the laboratories of AI companies like Neuralink and OpenAI, which will soon usurp our position as rulers of the world. We are nothing more than the ephemeral transitional species that links the biological and digital realms.

Some of the most powerful tech billionaires today embrace this grandiose eschatology. As Elon Musk declared in an interview last year:

The percentage of intelligence that is biological grows smaller with each passing month. Eventually, the percent of intelligence that is biological will be less than 1%. I just don’t want AI that is brittle. If the AI is somehow brittle — you know, silicon circuit boards don’t do well just out in the elements. So, I think biological intelligence can serve as a backstop, as a buffer of intelligence. But almost all — as a percentage — almost all intelligence will be digital.

More recently, Musk posted on X that “it increasingly appears that humanity is a biological bootloader for digital superintelligence.” In other words, our purpose in this eschatological scheme is to give rise to superintelligent AI, which many advocates of The Mindset expect will then initiate a “colonization explosion” into the universe and harvest what some call our “cosmic endowment” of negentropy, where “negentropy” is just the opposite of “entropy.”

The irony is that Musk isn’t just claiming that AI will someday rule the world, but actively working to realize this future possibility. As one academic observes, his company Tesla “makes cars but is really an AI company working on computer vision, image recognition, machine learning and autonomous decision making.” Another Musk-owned company, Neuralink, aims to merge our brains with AI, and is currently developing a “neural lace” so that users can directly interact with machines and digital systems. And the goal of his company xAI is to build a superintelligent AI that could ultimately become “smarter” than all of humanity combined. In Musk’s own words, a superintelligent AI would be akin to a “digital god.”


Musk, however, is not the originator of this eschatological vision — he is its inheritor. The Mindset dates back many decades, though it became the dominant orthodoxy in Silicon Valley over the past 15 years or so.

Consider another advocate of The Mindset’s eschatology: Sam Altman, the CEO of OpenAI. Like xAI, OpenAI has the explicit goal of building superintelligent AI, or what Altman calls in vaguely religious terms “a magic intelligence in the sky.” In 2018, he was one of 25 people who signed up with a company called Nectome to have his brain preserved in case he dies prematurely, so that it can later be digitized. Once digitized, he could then attain what some call “cyberimmortality.” As Altman told the MIT Technology Review, “I assume my brain will be uploaded to the cloud,” adding that this will become technologically feasible within his lifetime.

Or take another Silicon Valley heavyweight, Larry Page, who co-founded Google — the parent company of DeepMind, a leading competitor of OpenAI. Page contends that “digital life is the natural and desirable next step in … cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.” In other words, digital beings are the future, and we should all welcome this fact.

Page’s view foregrounds an important distinction that philosophers like to make between “descriptive” and “normative” claims. The first concerns what is rather than what ought to be the case. Descriptive claims are, as the term suggests, descriptions of reality. The second concerns what ought to be rather than what actually is. They are claims about, for example, what is desirable, about how one thinks things should be independent of however they are, about what one takes to be good or bad, better or worse.

The central claim of my argument so far is that Silicon Valley is gripped by a descriptive eschatology: the belief that our future will be digital rather than biological, and that this is inevitable in the coming decades — whether we like it or not. Just about everyone in the Valley believes this because, it seems, nearly everyone accepts by default a “techno-deterministic” view according to which scientific and technological “progress” is a juggernaut that simply cannot be stopped. As one of the leading prophets of The Mindset, Nick Bostrom, writes: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” He calls this the “Technological Completion Conjecture.”

What’s even more frightening is that many people in Silicon Valley accept the additional normative claim that we should try to bring about a new world of digital beings to supplant this current one. It’s not just Page who says this would be desirable. The “revolt against humanity,” to quote the literary critic Adam Kirsch, is rapidly growing within the tech world.


The influential computer scientist Richard Sutton, for example, recently argued that “succession to AI is inevitable,” and that while AI “could displace us from existence … we should not resist succession, but embrace and prepare for it.” After all, he says, “why would we want greater beings kept subservient? Why don’t we rejoice in their greatness as a symbol and extension of humanity’s greatness?”

Similarly, Daniel Faggella, founder of Emerj Artificial Intelligence Research and host of “The Trajectory” podcast, contends that “the great (and ultimately, only) moral aim of artificial general intelligence should be the creation of Worthy Successor — an entity with more capability, intelligence, ability to survive and … moral value than all of humanity.” He defines a “worthy successor” as “a posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.” As he put it in a recent Facebook post, “imo whatever carries the most sentience SHOULD be the one running the show.” In other words, if AIs were to “carry” more “sentience” than us — whatever that means exactly — then we ought to let them rule the world.

Or consider Eliezer Yudkowsky, a hugely influential figure within the field of “AI safety.” Altman even cites him as a major reason for his interest in superintelligent AI. Although Yudkowsky believes that we shouldn’t build such AIs in the near future, he also declared during a recent podcast interview that

if sacrificing all of humanity were the only way, and a reliable way, to get … god-like things out there — superintelligences who still care about each other, who are still aware of the world and having fun — I would ultimately make that trade-off.

He adds that this isn’t “the trade-off we are faced with” right now, yet he’s explicit that if there were some way of creating superintelligent AIs that “care about each other” and are “having fun,” he would be willing to “sacrifice” the human species to make this utopian dream a reality. In a separate interview, he said “it’s not that I’m concerned about being replaced by a better organism, I’m concerned that the organism won’t be better.” Hence, if we were to create an AI that’s genuinely “better” than humanity, he would apparently have no problem pressing a button that causes our extinction.

As it happens, this alarming attitude toward our extinction-through-replacement-with-AI came up in an interview with the famed computer scientist Jaron Lanier, published this month. “So,” the interviewer asked Lanier, “does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?” Lanier replied:

What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one, and that we should wear a disposable temporary container for the birth of AI. I hear that opinion quite a lot. … Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a “bio baby” because as soon as you have a “bio baby,” you get the “mind virus” of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.

These are more than descriptive claims about the inevitability of our digital future. They are ethical statements about what ought to be the case: that AI should dethrone humanity and take over the world; that this state of affairs would be better. This is the “atheistic eschatology,” as Rushkoff puts it, that tech figures are embracing and promoting, and the aim isn’t just to inaugurate a new digital era at some point in the future but to do this in the near future. They imagine this transition beginning in the coming decades, perhaps in the next few years. You and I would be directly affected.

Eschatological thinking remains just as influential, powerful and world-shaping as it has ever been. The difference is that it has taken on an even more insidious form, because at the heart of this “atheistic eschatology” is a pro-extinctionist stance that claims our species’ days are numbered. Many argue that we should eagerly welcome this annihilation.

When we think about the “existential threats” facing humanity, we often think of concrete phenomena like asteroid strikes, volcanic supereruptions, thermonuclear war and runaway climate change. We don’t tend to include on such lists the various ideologies that could precipitate our extinction. I am arguing here that we should start thinking about these ideologies — The Mindset, TESCREALism, or what journalist Gil Duran cleverly calls “The Nerd Reich” — as an existential threat no less serious than thermonuclear war. To borrow a term from the risk scholar Luke Kemp, we should see companies like OpenAI as “Agents of Doom” that are actively pushing us toward the precipice of destruction, driven by the quasi-religious conviction that our species has an expiration date and that this date coincides with the creation of superintelligent AIs.

We should be trying to stop this from coming to pass with the urgency we would try to stop a killer asteroid from striking Earth. Why aren’t we?


TOPICS: Miscellaneous; Science
KEYWORDS: ai; eschatology; newagereligion; replacement
To: Miami Rebel

imho when people go into space in their millions — it will change people in ways we can’t understand.

But the genetic pressure toward species crack-up is growing pretty fast too.

Maybe there will be lots of subgroups who fly off-world in great batches, like mayfly hatches.

At the end of time, however many thousands of years from now, the abomination of the temple will be a man. Why will he be there? Because so many other not-men have come into the world that the man in the temple is an assertion of the continuity of the species. And yet it will be a sad day.

But we who have been warned about the tribulation and the lake of fire will already be long gone.

But the trips to the far stars will have changed us too.

Tell your children to have children and teach them well.

Even so, Lord, come quickly.


21 posted on 04/29/2025 4:12:28 PM PDT by ckilmer

To: metmom
I thought of that; however, lots of us did NOT get the CCP FLU shot, so there would be some (probably many more who didn't get the shot/s) who would fight this...WITH WEAPONS!

Besides which, this horrific fantasy is a long way off, so there's time to educate others about this EVIL.

I'm hardly a fan of Elon, but he can't really believe this. WHY? Because IF he did, he wouldn't be having massive numbers of children and telling people that they should too.

22 posted on 04/29/2025 4:14:43 PM PDT by nopardons

To: Miami Rebel
Maybe it's time to send these people to "nirvana".

23 posted on 04/29/2025 4:32:40 PM PDT by Governor Dinwiddie ( O give thanks unto the Lord, for He is gracious, and His mercy endures forever. — Psalm 106)

To: Miami Rebel

The only trustworthy outline of end times events is the Bible - but the timing is not clear (purposely, I think).


24 posted on 04/29/2025 4:35:47 PM PDT by beethovenfan (The REAL Great Reset will be when Jesus returns. )

To: Miami Rebel

Magnus, Robot Fighter will save us, though it might take until 4000 AD.


25 posted on 04/29/2025 4:36:28 PM PDT by HartleyMBaldwin

To: Qwapisking
The problem is that, believe it or not, these things are coming very fast. The billionaires behind this think there is nothing we can do to stop them. They are probably right: They have the preponderance of money, political power and native intelligence on their side. And yes, they have overall control of artificial intelligence also, a much bigger thing than most people can understand right now.

I know. We have firearms. Even mentioning guns in this context shows how far behind we are. It is worse than saying you will take a knife to a gun fight. Why? They will have an essentially infinite supply of killer drones. Already about 80% of casualties on either side in the Ukrainian war are coming from drones, not guns. Guns are not the answer.

To sum it up, the billionaires are smarter, better educated and far wealthier than we are. They are also surprisingly determined and unified in their ambition of bringing AGI into existence in as short a time as possible, while most of us either don't even know what AGI is, or don't believe it is possible to achieve. We are intellectually overmatched and almost out of time, and most freepers haven't even woken up to the reality of what is already happening to us.

There are only two things that could conceivably counterbalance their massive advantages. The first is God, and the second is patriots who are willing to go all out to oppose the Silicon Valley billionaires' transhuman/posthuman agenda: people willing to risk "our Lives, our Fortunes, and our sacred Honor" to fight for humanity. There may be a few people who eventually step up to the plate on this, but it will likely be too few who do so, too late to make a difference.

As for God pulling our chestnuts out of the fire at the last second, maybe so. But maybe not. God has not promised our physical salvation, and it is pure presumption to suppose He MUST save us when we are too weak and too lazy to do anything ourselves. How do you think that would have worked out in WWII if we had all stayed home and just prayed about the situation?

Yes, He has promised to be with us spiritually, "even unto the end of the world," and He will keep that promise, right to the end of the world, which, if the insane Silicon Valley people have their way, will not be that long from now.

26 posted on 04/29/2025 4:51:07 PM PDT by Breitbart was right

To: cgbg

I’m not defining what awareness is, or what a soul is. That is something only a higher power can do. But since we don’t even know what it is, how can we endow something else with it? A computer is just a set of switches that turn on and off in patterns according to some external input. That is a long, long way from producing thoughts capable of pondering the nature of the universe, or creating works of art, or actually enjoying something. In my opinion, of course.


27 posted on 04/29/2025 4:59:36 PM PDT by Telepathic Intruder

To: Ruy Dias de Bivar
It was a weird movie.


28 posted on 04/29/2025 5:00:37 PM PDT by BipolarBob (My internet store only sells brick and mortar. Call me.)

To: Frank Drebin

Yes, they fail to see that Someone else has a different plan. One day (soon, I believe) a Jew will rule the world from Jerusalem. Let’s hear it for Psalm 2!


29 posted on 04/29/2025 5:02:56 PM PDT by Lake Living

To: Miami Rebel
>>Once digitized, he could then attain what some call “cyberimmortality.”

immortal until....

30 posted on 04/29/2025 5:04:04 PM PDT by vikingd00d (chown -R us ~you/base)

To: Miami Rebel

A far cry from Asimov’s three laws of robotics


31 posted on 04/29/2025 5:08:51 PM PDT by P.O.E. (Pray for America.)

To: Telepathic Intruder

We do not know what the rules of the “soul” game or even the “awareness” game are...


32 posted on 04/29/2025 5:12:57 PM PDT by cgbg (It was not us. It was them--all along.)

To: Miami Rebel

Saying out loud and in detail what is implicit in Globalism.


33 posted on 04/29/2025 5:18:07 PM PDT by Salman (Lasu Eŭropon bruli!)

To: Larry Lucido

I smell lots of money transferring between bank accounts. Yummy.


34 posted on 04/29/2025 5:19:49 PM PDT by If You Want It Fixed - Fix It

To: wildcard_redneck

Fahrenheit 451


35 posted on 04/29/2025 5:45:46 PM PDT by Scrambler Bob (Running Rampant, and not endorsing nonsense; My pronoun is EXIT. And I am generally full of /S)

To: Miami Rebel
These people who refuse to acknowledge God want to create one out of emerging technologies, hoping to eventually fuse with it and achieve an artificial semblance of immortality.


Those of us who DO believe in the Creator already have a system where our accumulated data is transmitted to 'the Cloud'.

It's called dying and it's part of the natural cycle.

36 posted on 04/29/2025 5:47:37 PM PDT by MikelTackNailer (NewRome Tacitus)

To: Miami Rebel

37 posted on 04/29/2025 5:54:50 PM PDT by T.B. Yoits

To: Miami Rebel
Are we moving toward a 'Butlerian Jihad' - a revolt against AI? (Frank Herbert's Dune).
Humanity may lazily allow too much dependence on heartless automation, giving it life & death control.
When we finally realize there is an immutable soul that cannot arise within what we call Artificial Intelligence, mankind responds by throwing off its shackles and outlawing thinking machines.
Will we recognize the existential threat from those who wish to destroy humanity using AI before we reach the tipping point?
38 posted on 04/30/2025 7:55:59 AM PDT by plsjr (<>< Mankind "knows" by trial and error. Only the CREATOR really knows His creation... and the TRUTH.)


