Posted on 04/29/2025 2:38:29 PM PDT by Miami Rebel
Much of human history only makes sense in light of the persistent influence of eschatology — a word of Greek origin that means “the study of last things.” This is because many of the most significant historical movements and events have been driven, in one way or another, by eschatological beliefs. The Second World War was motivated in part by Hitler’s promise of a “thousand-year Reich,” an idea that mirrored Christianity’s prophecy of a future “Millennial Kingdom” following the Battle of Armageddon. A recent Pew Research Center survey found that roughly 4 in 10 American adults believe that we are “living in the end times,” while another reports that 41% of Americans expect that Jesus will either “definitely” or “probably” return by 2050. These eschatological convictions have had a significant impact on U.S. foreign policy in the Middle East, since our current world, in this narrative, cannot end without a Jewish state in Palestine. Some scholars have dubbed this demographic the “Armageddon Lobby.”
Recent years have seen the emergence of a second and arguably more powerful “Armageddon Lobby.” It resides in epicenters of power like Silicon Valley and embraces a “secular” vision of humanity’s grand future — though it shares many similarities with traditional religion, including a belief in “God” and the promise of immortality through cryonics. The renowned media theorist Douglas Rushkoff calls this vision “The Mindset,” whereas the computer scientist Timnit Gebru and I refer to it as “TESCREALism,” which I have discussed in previous articles for Truthdig. I will mostly stick with Rushkoff’s term in what follows.
Advocates of The Mindset claim that the world as we know it will soon expire. From its ashes, a new era dominated by digital lifeforms — that is, artificial intelligences — will emerge. These beings will stand to us as “gods,” though by merging our brains with AI or “uploading” our minds to computers, we may become gods ourselves: Homo deus — the “human god” — as Yuval Noah Harari puts it. “The most devout holders of The Mindset,” Rushkoff writes in reference to Mark Zuckerberg’s failed “metaverse” project,
seek to go meta on themselves, convert into digital form, and migrate to that realm as robots, artificial intelligences, or mind clones. Once they’re there, living in the digital map rather than the physical territory, they will insulate themselves from what they don’t like through simple omission. … As always, the narrative ends in some form of escape for those rich, smart, or singularly determined enough to take the leap. Mere mortals need not apply.
Importantly, this isn’t just a utopian dream among the super-rich built on eugenic fantasies of transcending the biological realm; it is a claim about what is cosmically inevitable. Our world is becoming more and more digital. We are increasingly embedded in a vast network of computational artifacts, some of which are now impinging on the intimate boundaries of ourselves as they become integrated into our brains and bodies. (Consider the pacemaker and the neural implant.) If you extrapolate this trend into the future, they argue, it’s just obvious that our current world of biological beings will soon be wholly replaced by a new world of digital ones. Whether we like it or not, the eschatological destiny of our species is to birth our own artificial successors, currently gestating in the laboratories of AI companies like Neuralink and OpenAI, which will soon usurp our position as rulers of the world. We are nothing more than the ephemeral transitional species that links the biological and digital realms.
Some of the most powerful tech billionaires today embrace this grandiose eschatology. As Elon Musk declared in an interview last year:
The percentage of intelligence that is biological grows smaller with each passing month. Eventually, the percent of intelligence that is biological will be less than 1%. I just don’t want AI that is brittle. If the AI is somehow brittle — you know, silicon circuit boards don’t do well just out in the elements. So, I think biological intelligence can serve as a backstop, as a buffer of intelligence. But almost all — as a percentage — almost all intelligence will be digital.
More recently, Musk posted on X that “it increasingly appears that humanity is a biological bootloader for digital superintelligence.” In other words, our purpose in this eschatological scheme is to give rise to superintelligent AI, which many advocates of The Mindset expect will then initiate a “colonization explosion” into the universe and harvest what some call our “cosmic endowment” of negentropy, where “negentropy” is just the opposite of “entropy.”
The irony is that Musk isn’t just claiming that AI will someday rule the world, but actively working to realize this future possibility. As one academic observes, his company Tesla “makes cars but is really an AI company working on computer vision, image recognition, machine learning and autonomous decision making.” Another Musk-owned company, Neuralink, aims to merge our brains with AI, and is currently developing a “neural lace” so that users can directly interact with machines and digital systems. And the goal of his company xAI is to build a superintelligent AI that could ultimately become “smarter” than all of humanity combined. In Musk’s own words, a superintelligent AI would be akin to a “digital god.”
Musk, however, is not the originator of this eschatological vision — he is its inheritor. The Mindset dates back many decades, though it became the dominant orthodoxy in Silicon Valley over the past 15 years or so.
Consider another advocate of The Mindset’s eschatology: Sam Altman, the CEO of OpenAI. Like xAI, OpenAI has the explicit goal of building superintelligent AI, or what Altman calls, in vaguely religious terms, “a magic intelligence in the sky.” In 2018, he was one of 25 people who signed up with a company called Nectome to have his brain preserved in case he dies prematurely, so that it can later be digitized. Once digitized, he could then attain what some call “cyberimmortality.” As Altman told the MIT Technology Review, “I assume my brain will be uploaded to the cloud,” adding that he expects this to become technologically feasible within his lifetime.
Or take another Silicon Valley heavyweight, Larry Page, who co-founded Google — the parent company of DeepMind, a leading competitor of OpenAI. Page contends that “digital life is the natural and desirable next step in … cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.” In other words, digital beings are the future, and we should all welcome this fact.
Page’s view foregrounds an important distinction that philosophers like to make between “descriptive” and “normative” claims. The first concerns what is rather than what ought to be the case: descriptive claims are, as the term suggests, descriptions of reality. The second concerns what ought to be rather than what actually is: normative claims are about, for example, what is desirable, how one thinks things should be independent of how they actually are, and what one takes to be good or bad, better or worse.
The central claim of my argument so far is that Silicon Valley is gripped by a descriptive eschatology: the belief that our future will be digital rather than biological, and that this is inevitable in the coming decades — whether we like it or not. Just about everyone in the Valley believes this because, it seems, nearly everyone accepts by default a “techno-deterministic” view according to which scientific and technological “progress” is a juggernaut that simply cannot be stopped. As one of the leading prophets of The Mindset, Nick Bostrom, writes: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” He calls this the “Technological Completion Conjecture.”
What’s even more frightening is that many people in Silicon Valley accept the additional normative claim that we should try to bring about a new world of digital beings to supplant this current one. It’s not just Page who says this would be desirable. The “revolt against humanity,” to quote the literary critic Adam Kirsch, is rapidly growing within the tech world.
The influential computer scientist Richard Sutton, for example, recently argued that “succession to AI is inevitable,” and that while AI “could displace us from existence … we should not resist succession, but embrace and prepare for it.” After all, he says, “why would we want greater beings kept subservient? Why don’t we rejoice in their greatness as a symbol and extension of humanity’s greatness?”
Similarly, Daniel Faggella, founder of Emerj Artificial Intelligence Research and host of “The Trajectory” podcast, contends that “the great (and ultimately, only) moral aim of artificial general intelligence should be the creation of [a] Worthy Successor — an entity with more capability, intelligence, ability to survive and … moral value than all of humanity.” He defines a “worthy successor” as “a posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.” As he put it in a recent Facebook post, “imo whatever carries the most sentience SHOULD be the one running the show.” In other words, if AIs were to “carry” more “sentience” than us — whatever that means exactly — then we ought to let them rule the world.
Or consider Eliezer Yudkowsky, a hugely influential figure within the field of “AI safety.” Altman even cites him as a major reason for his interest in superintelligent AI. Although Yudkowsky believes that we shouldn’t build such AIs in the near future, he also declared during a recent podcast interview that
if sacrificing all of humanity were the only way, and a reliable way, to get … god-like things out there — superintelligences who still care about each other, who are still aware of the world and having fun — I would ultimately make that trade-off.
He adds that this isn’t “the trade-off we are faced with” right now, yet he’s explicit that if there were some way of creating superintelligent AIs that “care about each other” and are “having fun,” he would be willing to “sacrifice” the human species to make this utopian dream a reality. In a separate interview, he said “it’s not that I’m concerned about being replaced by a better organism, I’m concerned that the organism won’t be better.” Hence, if we were to create an AI that’s genuinely “better” than humanity, he would apparently have no problem pressing a button that causes our extinction.
As it happens, this alarming attitude toward our extinction-through-replacement-with-AI came up in an interview with the famed computer scientist Jaron Lanier, published this month. “So,” the interviewer asked Lanier, “does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?” Lanier replied:
What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one, and that we should wear a disposable temporary container for the birth of AI. I hear that opinion quite a lot. … Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a “bio baby” because as soon as you have a “bio baby,” you get the “mind virus” of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.
These are more than descriptive claims about the inevitability of our digital future. They are ethical statements about what ought to be the case: that AI should dethrone humanity and take over the world; that this state of affairs would be better. This is the “atheistic eschatology,” as Rushkoff puts it, that tech figures are embracing and promoting, and the aim isn’t just to inaugurate a new digital era at some point in the future but to do this in the near future. They imagine this transition beginning in the coming decades, perhaps in the next few years. You and I would be directly affected.
Eschatological thinking remains just as influential, powerful and world-shaping as it has ever been. The difference is that it has taken on an even more insidious form, because at the heart of this “atheistic eschatology” is a pro-extinctionist stance that claims our species’ days are numbered. Many argue that we should eagerly welcome this annihilation.
When we think about the “existential threats” facing humanity, we often think of concrete phenomena like asteroid strikes, volcanic supereruptions, thermonuclear war and runaway climate change. We don’t tend to include on such lists the various ideologies that could precipitate our extinction. I am arguing here that we should start thinking about these ideologies — The Mindset, TESCREALism, or what journalist Gil Duran cleverly calls “The Nerd Reich” — as an existential threat no less serious than thermonuclear war. To borrow a term from the risk scholar Luke Kemp, we should see companies like OpenAI as “Agents of Doom” that are actively pushing us toward the precipice of destruction, driven by the quasi-religious conviction that our species has an expiration date and that this date coincides with the creation of superintelligent AIs.
We should be trying to stop this from coming to pass with the urgency we would try to stop a killer asteroid from striking Earth. Why aren’t we?
imho when people go into space in their millions, it will change people in ways we can't understand.
But the genetic pressure toward species crack-up is growing pretty fast too.
Maybe there will be lots of subgroups who fly off-world in great batches, like mayfly hatches.
At the end of time, however many thousands of years from now, the abomination of the temple will be a man. Why will he be there? Because so many other not-men have come into the world that the man in the temple is an assertion of the continuity of the species. And yet it will be a sad day.
But we who have been warned about the tribulation and the lake of fire will already be long gone.
But the trips to the far stars will have changed us too.
Tell your children to have children and teach them well.
Even so, Lord, come quickly.
Besides which, this horrific fantasy is a long way off, so there's time to educate others about this EVIL.
I'm hardly a fan of Elon, but he can't really believe this. WHY? Because IF he did, he wouldn't be having huge numbers of children and telling people that they should too.
The only trustworthy outline of end-times events is the Bible, but the timing is not clear (purposely, I think).
Magnus, Robot Fighter will save us, though it might take until 4000 AD.
I know. We have firearms. Even mentioning guns in this context shows how far behind we are. It is worse than saying you will take a knife to a gun fight. Why? They will have an essentially infinite supply of killer drones. Already about 80% of casualties on either side in the Ukrainian war are coming from drones, not guns. Guns are not the answer.
To sum it up, the billionaires are smarter, better educated and much wealthier than we are. They are also surprisingly determined and unified in their ambition of bringing AGI into existence in as short a time as possible, while most of us either don't even know what AGI is or don't believe it is possible to achieve. We are intellectually overmatched and almost out of time, and most freepers haven't even woken up to the reality of what is already happening to us.
There are only two things that could conceivably counterbalance their massive advantages. The first is God, and the second is patriots who are willing to go all out to oppose the Silicon Valley billionaires' transhuman/posthuman agenda. People who are willing to go all out, and risk "our Lives, our Fortunes, and our sacred Honor," to fight for humanity. There may be a few people who eventually step up to the plate on this, but likely it will be too few doing so too late to make a difference.
As far as God pulling our chestnuts out of the fire at the last second, maybe so. But maybe not. God has not promised our physical salvation, and it is pure presumption to suppose He MUST save us when we are too weak and too lazy to do anything ourselves. How do you think that would have worked out in WWII if we had all stayed home and just prayed about the situation?
Yes, He has promised to be with us spiritually, "even unto the end of the world," and He will keep that promise, right to the end of the world, which, if the insane Silicon Valley people have their way, will not be that long from now.
I’m not defining what awareness is, or what a soul is. That is something only a higher power can do. But since we don’t even know what it is, how can we endow something else with it? A computer is just a set of switches that turn on and off in patterns according to some external input. That is a long, long way from producing thoughts capable of pondering the nature of the universe, or creating works of art, or actually enjoying something. In my opinion, of course.
Yes, they fail to see that Someone else has a different plan. One day (soon, I believe) a Jew will rule the world from Jerusalem. Let’s hear it for Psalm 2!
immortal until....
A far cry from Asimov’s Three Laws of Robotics.
We do not know what the rules of the “soul” game, or even the “awareness” game, are...
Saying out loud and in detail what is implicit in Globalism.
I smell lots of money transferring between bank accounts. Yummy.
Fahrenheit 451
Those of us who DO believe in the Creator already have a system where our accumulated data is transmitted to 'the Cloud'.
It's called dying and it's part of the natural cycle.