Posted on 02/08/2025 2:07:21 AM PST by Lazamataz
"Reasoning," semi-intelligent (and maybe fully intelligent) Artificial Intelligence is right around the corner.
What is Artificial General Intelligence?
According to Dr. Nivash Jeevanandam, who writes in a paper titled "Reasoning in AI: Can AI Actually Think and Reason Like Humans?", an AI should show the following capabilities:
AI systems should be able to learn from data by identifying patterns and relationships, use algorithms to analyze data and make decisions, and adapt and improve their performance over time. AI systems should be able to use logical reasoning to solve problems, use causality to make decisions based on cause-and-effect relationships, and use contextuality to evaluate data in its broader context. Note that current Large Language Models (LLMs) already have this last capability.
Some people will say that this is going to be yet another case of the old computer axiom "garbage in, garbage out", and this is absolutely true for the early models of Artificial Intelligence, namely Large Language Models such as ChatGPT o1. Those models merely presented summaries of everything they could read on the internet. In some ways, such a model is merely a clever plagiarist. As Noam Chomsky said in a recent interview:
"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations . . . " Noam Chomsky in an interview with Dr. Ian Roberts and Dr. Jeffrey Watumull
What are the ramifications?
Once Artificial Intelligence is successfully "reasoning" it will put nearly every white-collar job at risk of being replaced by an AI. The AI can work 24 hours a day, 7 days a week, 365 days a year. It will do so at a very small cost, and at some point in the relatively near future (with initial prototype models being available in the next year or two), will do those jobs better, faster, with no breaks and no demands for wages. Millions of people in America -- even worldwide -- will be impacted, including lawyers, doctors, computer professionals, engineers of every type, content writers, you name it.
This will greatly lower the costs of living in many arenas. Law, medicine, engineering of every kind, will all become far cheaper. It also makes the concern about importing cheap foreign knowledge workers via the H-1B program a moot issue.
What happens to the many millions of white-collar professionals? They are rendered obsolete, and wages will be cut or those workers will be without jobs altogether. "But," you counter, "What of the careers of people who engage in trades or other forms of physical labor?" Those jobs, too, are at risk of being rendered obsolete. Robotics has advanced to a startling degree, and quite soon, all jobs that would have been performed by human laborers will be performed, instead, by robots.
If both knowledge and physical workers are made obsolete, and they no longer have incomes, it would also greatly lower the amount of income-related tax money the government will get. Without wages, where will tax money come from? We recently saw President Trump threaten to impose tariffs, and the pushback was severe. Perhaps a national sales tax, known as the FairTax, will gain traction.
There are other ramifications of true AGI, or even its predecessor: It will unquestionably bring transformative, expansive leaps in technology. It is likely that at some near point in the future, AGI can perform at the level of the best human researchers (or better, since it will have instantaneous access to a much broader array of human knowledge than any one person can have). Moreover, because it will be far faster, it will be able to log many thousands of hours of PhD-level research in an infinitesimal fraction of the time a single human would need. Take, for example, the field of medicine: An AGI system may be able to simulate or analyze thousands or millions of possible reactions of a human to a new compound in the time it would take a human researcher to study one.
Which, of course, brings us to the topic of application of AGI to military research. The first nation that successfully harnesses AGI in pursuit of military technology will achieve a leap above other nations akin to the leap of modern firearms over wooden clubs. Some technologists advocate for slowing down or even banning research into AGI, but this single ramification will thoroughly outweigh any protests.
There is also a more subtle effect that may well manifest: In my prime, I was quite capable of doing simple mathematics in my head. I could multiply, divide, add and subtract large numbers quickly, without pen and paper. Once I started using the hand-held calculator, however, that ability evaporated. So, if we have machines that can reason as well as (or better than) us, will our reasoning skills diminish as we rely on the machines?
In the shorter term, we also must be aware of the potential to use AI as a universal and utterly pervasive surveillance mechanism. If an AI has access to your entire history and can predict your motivations, and is allowed to monitor your day-to-day actions, this can become an extremely oppressive tool in the hands of a bad actor.
Biological Life versus Artificial Life
One of the advantages of AI is that it does not have certain biological imperatives, particularly -- in the case of research -- frustration. Frustration was evolved into us so that we bio-units don't waste finite time. AI has no such imperative, and time is not a concern for it. As presented above, consider the example of an AI trying various chemical compounds for a new medicine versus the comparatively slow pace of a pharmaceutical PhD. In some small fashion it is similar to the old chess-playing AIs. They would try every possible move and project 20 (or more) moves ahead.
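That exhaustive "try every possible move" style can be sketched with a plain minimax search. The game below is a toy stand-in I'm using purely for illustration (not a real chess engine): players alternately remove 1-3 stones, and whoever takes the last stone wins.

```python
def best_outcome(stones):
    """Exhaustive game-tree search (minimax) for a toy take-away game.

    Players alternate removing 1, 2, or 3 stones; whoever takes the
    last stone wins. Returns +1 if the player to move can force a win,
    -1 if perfect play by the opponent guarantees a loss.
    Like the old chess engines, it simply tries every possible move.
    """
    if stones == 0:
        return -1  # the opponent just took the last stone; we lost
    # Our best result is the worst position we can force the opponent into.
    return max(-best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

# best_outcome(5) -> +1 (take one stone, leave the losing count of 4)
# best_outcome(4) -> -1 (every move hands the opponent a winning count)
```

Real chess engines add pruning and depth limits, but the skeleton is the same: enumerate, recurse, pick the best reply.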
There are two ways intelligence can be manifested: biologically, and artificially. The route of achieving intelligence via biology carries with it certain biological imperatives. Some are strengths, others are weaknesses -- but even those classifications may be insufficient. We may not necessarily know which features are strengths, which are weaknesses, and which are a blend of the two. AI, having no biological imperatives, will have significant advantages over biology-based life forms... however, some imperatives -- such as empathy, love, compassion, and even fear, and a desire for survival and procreation -- were instilled in us by millions of years of evolution. Even death itself may be a biological imperative that motivates us to live to the fullest, and achieve what we can for the betterment of self or mankind. We succeeded in some small (or large?) part because of those imperatives. So, I cannot help but believe that those are an advantage.
An AI cannot know hunger, or thirst. These feelings are the sole domain of those of us who are flesh and blood. It is those feelings that give us strengths such as drive, desire, and will.
Yet, bear in mind, an AI can approximate evolution if it is aware enough to do this, and it can perform evolutionary iterations far faster than the glacial rate that biology can achieve. Still: the biocentric competition for survival helps to trim out spurious and useless evolutionary branches. AI will have no such constraint, so it may well exhibit bizarre and useless evolutionary steps.
How will humans react to all this?
Obviously, we will react in fear. It is another biological imperative: we fear the unknown. In the fictional universe that Frank Herbert created with his seminal work "Dune", mankind revolts against Artificial Intelligence in an uprising called the Butlerian Jihad. One of the results of this revolution is a new commandment in the Orange Catholic Bible: "Thou shalt not make a machine in the likeness of a human mind." There appears to be substance to our fear, but I suggest we not be entirely consumed by it. There are great opportunities, should we successfully navigate the dangers and benefits I've detailed. Besides, it's coming, and nothing will stop it.
And what about what I call the "Crisis of Meaning"? With vast numbers of people without gainful and meaningful work, where will people derive their sense of value? Many of us derive a sense of meaning and accomplishment from our jobs. Without that meaning, who are we? Where will we fit in the social fabric? If AI can do everything better, faster, and cheaper, how can we hope to understand our place in society?
This will likely be the biggest change in the human condition in our entire history. It will be more impactful than the harnessing of fire, the discovery of the wheel, or the industrial revolution. We will have to rethink nearly every economic model we have set up. We may have to rethink everything in our society. This could be Utopia, or this could be Armageddon.
As to the latter outcome: in the Old Testament of the Bible, in Genesis, Adam and Eve were told not to eat of the Tree of Knowledge, lest they think of themselves as gods.
Genesis 2:17:
But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die.
And Genesis 3:4 and 3:5:
And the serpent said unto the woman, Ye shall not surely die: For God doth know that in the day ye eat thereof, then your eyes shall be opened, and ye shall be as gods, knowing good and evil.
To my mind, it appears that the Bible -- having predicted things accurately so many times -- has predicted the rise of Artificial General Intelligence. Perhaps this is the apple we should not have eaten. Personally, I'm a little flabbergasted at how incredibly accurate the prophecies of the Bible are... even a chapter, Genesis, that I routinely dismissed as fable.
Great thanks to the fellow Freepers who contributed to the thread and helped me develop my thoughts further.
The Official Lazamataz Sometimes-Funny, Sometimes-Disturbing Ping List
466 Satisfied Customers!™
I will be starting an Artificial Intelligence Ping List.
Ping me on this thread to be added.
+1
Let me know if you want on the Artificial Intelligence Ping List.
I'm sorry Dave, I can't do that.
I don't know, Laz.
All I can think of is the observation that if we hadnโt invented the automobile, we would be over our head in horse manure today.
Man always comes up with new technology in order to survive. This will be no different.
Buckle up.
I postulate that this is, indeed, different... for the reasons I lay out in the Freeper Editorial.
Yes, I will get up in the middle of the night just to read the $hit by The Laz.
Yeah, he’s that good.
Then again, I’ve been fighting off insomnia all night. So I will bookmark this and try to sleep well knowing this will be waiting for me when I wake.
:-)
Aw thanks, bro. I appreciate the kind words. Enjoy the article when you awaken!
One great thing about automation: it does not need to breathe. Our Solar System has been very successfully explored by robots. Imagine these machines manufacturing things instead. It would be such a giant leap in production that the Industrial Revolution would look pale in comparison.
Ping. Go ahead and add me.
Let me know on the thread if you want on the new Artificial Intelligence Ping List.
Added!
Of course, there is your inherent evil. Also Elon's greatest worry: ownership. AI must be free for the masses. Many say the New World Order was predicated on the notion that it would be solely in the hands of a one-world government.
That is the fight for our future. Not against AI but who it becomes a slave to.
Here's how various AI models fared on "Humanity's Last Exam" (look how Deep Research's score compares to the others -- no word on Grok 3 having taken the exam):
OpenAI's GPT-4o: 3.3%
Grok-2: 3.8%
Claude 3.5 Sonnet: 4.3%
Gemini: 6.2%
OpenAI's o1: 9.1%
DeepSeek-R1: 9.4%
OpenAI's Deep Research: 26.6%
Here are five sample questions from "Humanity's Last Exam" (HLE). These questions are designed to test the depth of knowledge and reasoning capabilities across diverse academic domains.
Ecology:
Hummingbirds within Apodiformes uniquely have a bilaterally paired oval bone, a sesamoid embedded in the caudolateral portion of the expanded, cruciate aponeurosis of insertion of m. depressor caudae. How many paired tendons are supported by this sesamoid bone? Answer with a number.
Classics (Translation):
Here is a representation of a Roman inscription, originally found on a tombstone. Provide a translation for the Palmyrene script. A transliteration of the text is provided: RGYNแต BT แธคRY BR แถTแต แธคBL
Mathematics:
Suppose you have a cube with edge length 'a'. If you slice the cube along the space diagonals into 6 identical tetrahedrons, what is the volume of one of these tetrahedrons in terms of 'a'?
Medicine:
A patient presents with symptoms of fatigue, weight loss, and a chronic cough with occasional hemoptysis. Chest X-ray reveals bilateral hilar lymphadenopathy. What is the most likely diagnosis, considering the patient's history as a nonsmoker with no occupational exposure to asbestos or silica?
Law:
Under the United States' Fair Labor Standards Act, what are the implications for an employer if they fail to pay the minimum wage or overtime to an employee who is classified as non-exempt? Describe the potential legal penalties and employee remedies.
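As an aside, the mathematics sample above has a tidy answer: the six tetrahedra are identical and together fill the cube, so each must have volume a^3/6. Here is a quick numeric check; the vertex choice below is one assumed "path" tetrahedron of such a slicing, not necessarily the exam's intended construction.

```python
def tet_volume(p0, p1, p2, p3):
    """Tetrahedron volume = |det(v1, v2, v3)| / 6, using edge vectors from p0."""
    v1 = [p1[i] - p0[i] for i in range(3)]
    v2 = [p2[i] - p0[i] for i in range(3)]
    v3 = [p3[i] - p0[i] for i in range(3)]
    det = (v1[0] * (v2[1] * v3[2] - v2[2] * v3[1])
         - v1[1] * (v2[0] * v3[2] - v2[2] * v3[0])
         + v1[2] * (v2[0] * v3[1] - v2[1] * v3[0]))
    return abs(det) / 6.0

a = 2.0
# One tetrahedron whose vertices walk along cube edges from corner to corner.
v = tet_volume((0, 0, 0), (a, 0, 0), (a, a, 0), (a, a, a))
# v == a**3 / 6
```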
Per OpenAI 4o: Humanity's Last Exam is an AI evaluation framework designed to test whether artificial intelligence models can comprehend and reason about deep philosophical, ethical, and existential questions concerning humanity. The test assesses an AI's ability to engage with complex issues such as morality, consciousness, free will, and the human condition, often using abstract, multi-layered scenarios to gauge reasoning depth. Its purpose is to determine whether AI can move beyond surface-level responses and exhibit true understanding, rather than just pattern-matching from training data.
Humans with deep expertise in specific fields might outperform AI in those areas, but on a broad, multi-disciplinary exam, the best AI models would be unbeatable.
I have a camera doorbell that has a small screen on it. You can put a gif image up, so people see it when they approach. My image is David Bowman saying “Open the pod bay door, HAL.”
On, Laz. Thanks.
Please add me to your AI Ping List. Thanks in advance.
Added!
Added!