Posted on 05/12/2023 3:16:04 PM PDT by nickcarraway
Editor’s Note: The following is a brief letter from Ray Kurzweil, cofounder and member of the board at Singularity Group, Singularity Hub’s parent company, in response to the Future of Life Institute’s recent letter, “Pause Giant AI Experiments: An Open Letter.”
The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology, and it calls for a pause in the development of algorithms more powerful than OpenAI's GPT-4, the large language model behind the company's ChatGPT Plus and Microsoft's Bing chatbot. The FLI letter has thousands of signatories, including deep learning pioneer Yoshua Bengio, University of California Berkeley computer science professor Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others, and has stirred vigorous debate in the AI community.
…
Regarding the open letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those that agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I didn’t sign, because I believe we can address the signers’ safety concerns in a more tailored way that doesn’t compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines to create artificial intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI’s profound advantages to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist
These are all good points by one of the few real geniuses in the world today.
Why not “pause work” on bringing fentanyl into the United States?
It cannot be paused. The cat is out of the bag. Pandora’s box is sprung open.
Kurzweil’s krazy.
If the United States unilaterally paused or stopped research on AI, it would be akin to unilateral disarmament. The Ukraine war has already demonstrated that current technologies have made land armored vehicles and naval surface combatants obsolete. Try to imagine how AI can and will affect future conflicts. There is no doubt that China, Russia, and many other countries will be researching, testing, and implementing AI in their armed forces. The outcome of future wars may very well be determined even before they start.
There will be no pause on new technology. Restrictions, perhaps, but no “pause.” Like it or not, AI will advance.
Exactly.
But don't worry! Our beloved government is right on top of the technology issue, naming Kamala Harris as the AI Czar(ess?).
I agree: even if we paused, which we won't, our competitors and enemies will not.
We live in interesting times.
This is quite difficult because I am not an expert in trolleyology, although I have learned quite a lot about it in the past few years.
I could always quote RUSH who said:
"Imagine a time
When it all began
In the dying days of a war
A weapon that would settle the score
Whoever found it first
Would be sure to do their worst
They always had before…"
and
"We’ve got nothing to fear but fear itself?
Not pain or failure, not fatal tragedy?
Not the faulty units in this mad machinery?
Not the broken contacts in emotional chemistry?

With an iron fist in a velvet glove
We are sheltered under the gun
In the glory game on the power train
Thy kingdom’s will be done

And the things that we fear
Are a weapon to be held against us…"