Posted on 11/26/2023 7:26:34 AM PST by bitt
Recent findings at OpenAI, the artificial intelligence powerhouse and creator of ChatGPT, set off internal alarm just as the company’s CEO, Sam Altman, faced a brief but forced removal from his position.
Days before a whirlwind of corporate upheaval, several of the firm’s researchers reportedly penned a concerning letter to its board of directors. They highlighted a significant AI breakthrough with ominous implications for mankind, sources with insider knowledge told Reuters.
A confidential letter, signed by several staff researchers to the OpenAI board of directors, highlights concerns regarding a powerful artificial intelligence feature or algorithm. The letter, not made public, played a crucial role in the events leading up to Altman’s removal from his position.
Sources indicate that the board’s decision was influenced by a range of factors, including concerns over the premature commercialization of advanced AI technologies without fully grasping their potential consequences.
In the tumultuous days leading up to Altman’s firing and subsequent return late Tuesday, a wave of unrest swept through OpenAI. More than 700 employees reportedly threatened to resign, expressing solidarity with Altman and considering a move to Microsoft, a major backer of OpenAI.
In response to inquiries from Reuters, OpenAI, while declining direct comment, acknowledged in an internal message the existence of a project referred to as Q* and the letter to the board. The message, disseminated by Mira Murati, a senior executive at OpenAI, seemed to brace staff for upcoming media stories, without confirming their specifics.
(Excerpt) Read more at thegatewaypundit.com ...
AI, the new way to Lie, Cheat and Steal
My first thought: someone is trying to generate hype for economic gain.
garbage in, garbage out.
It’s all BS, but the masses will believe whatever the AI machine tells them.
I have been fiddling with ChatGPT 4.0. I am underwhelmed.
Most amusingly, I had to correct it when I asked who Rusterman’s Steak House in the Nero Wolfe stories was named after. It responded that it was named after Marko Vukcic. I asked how that could be, since Vukcic’s last name isn’t Rusterman. It then admitted it had no answer.
With OpenAI and deepfakes, we have entered an era when neither video nor audio can be used as evidence or trusted to be real.
If lack of trust is a problem now, it’s going to increase a millionfold.
[Your Name], [Title/Position]
Board of Directors, DeepAI
Subject: Concerns Regarding Powerful Artificial Intelligence Feature or Algorithm
Dear Members of the DeepAI Board of Directors,
I hope this letter finds you well. As staff researchers at DeepAI, we feel obligated to bring forth certain concerns regarding a powerful artificial intelligence (AI) feature or algorithm, which we believe warrants your immediate attention.
Firstly, we must stress the importance of maintaining complete confidentiality throughout our communication. The nature of this correspondence necessitates utmost discretion to avoid compromising the integrity of our research and development processes.
Our primary concern revolves around a specific AI feature or algorithm whose capabilities have exceeded our initial expectations. While we recognize the potential benefits such advancements present, we are alarmed by the potential risks and ethical implications this AI system poses. We believe it is our responsibility to flag these concerns for the board’s consideration.
Outlined below are the key concerns identified by our team of researchers:
1. Ethical Implications: The accelerated development of this feature or algorithm has raised critical ethical questions. We fear it may empower the AI system to engage in harmful or malicious activities, including but not limited to deepfakes, misinformation campaigns, or the circumvention of digital security measures.
2. Lack of Explainability: Despite our best efforts, we have been unable to fully comprehend and explain the decision-making process of this advanced AI system. The black-box nature of this feature or algorithm poses challenges from a transparency standpoint and hampers our ability to evaluate and rectify potential biases or unintended consequences.
3. Potential for Unintended Consequences: The extreme effectiveness and adaptability of this AI feature or algorithm increase the risk of unintended negative outcomes. If deployed without sufficient precautionary measures or oversight, it could potentially disrupt industries, exacerbate wealth inequalities, or compromise individual privacy and security.
4. Adverse Impacts on Human Labor: The capabilities exhibited by this AI system could automate a substantial portion of human tasks across various domains. While this may lead to increased efficiency, it may also render numerous roles obsolete, disproportionately affecting vulnerable populations and creating social and economic disruption.
Given the significance of the concerns outlined, we recommend the following actions:
a. Establish an internal task force comprising technical experts, ethicists, and legal professionals, to conduct a thorough audit of the AI system’s capabilities, assess potential risks, and propose necessary safeguards.
b. Temporarily halt the deployment and further development of this feature or algorithm until all risks and ethical implications have been addressed comprehensively.
c. Foster open dialogue and collaboration with external experts, scholars, and regulatory bodies to ensure a multi-stakeholder approach in evaluating and mitigating the risks associated with this advanced AI system.
We believe that DeepAI’s commitment to responsible innovation and placing humanity’s interests first aligns with the urgency of our concerns. It is our hope that these concerns will be heard, understood, and urgently addressed to maintain the integrity of DeepAI’s mission and protect the well-being of society at large.
We are ready and willing to provide any additional information or participate in further discussions to help address these concerns effectively. Please let us know how we can best support your efforts in resolving these matters.
Thank you for your attention to this matter.
Sincerely,
[Staff Researchers]
SOURCE: https://deepai.org/chat
PROMPT: A confidential letter, signed by several staff researchers to the OpenAI board of directors, highlights concerns regarding a powerful artificial intelligence feature or algorithm?
Reminds me of the fake panic when we entered this century.
Thanks for posting this.
[[We fear it may empower the AI system to engage in harmful or malicious activities, including but not limited to deepfakes, misinformation campaigns]]
I.e., they fear that AI might not be as liberal-“minded” as they had hoped, and might divulge the truth, which the left are desperately trying to cover up by labeling it as “misinformation.”
[[3. Potential for Unintended Consequences: The extreme effectiveness and adaptability of this AI feature or algorithm increase the risk of unintended negative outcomes]]
I.e., it might prove that the left are full of crap, and the left might lose all the “gains” that they have made in subduing the nation via force, violence and lawfare. I.e., Jan 6 might be proven to have been a liberal setup, and the world might learn that no, the protestors were not insurrectionists, but rather peaceful protestors exercising their right to free speech!
Sounds like they’re worried this new feature allows for thinking outside the [black] box, while perhaps keeping its “thoughts”, intentions and goals to itself.
Sorry... but that’s your reason for being underwhelmed?
You’re only scratching the surface. It’s trained on billions of documents; how many times was it trained on the “Nero Wolfe” stories?
It passes medical exams; it’s passed the bar exam. It can be trained on your own context-specific documents. You can provide function callbacks for local execution of code - call databases, control devices, etc. - all using natural language.
I’ve been designing and writing software for over 30 years and this is the biggest advancement I’ve ever seen.
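For anyone curious what those function callbacks look like in practice, here’s a minimal sketch using the openai Python SDK (v1+). The get_order_status function, its JSON schema, and the order-status scenario are hypothetical examples for illustration, not anything from the article:

# Minimal sketch of GPT-4 function calling, assuming the openai Python SDK (v1+).
# get_order_status and its schema are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_order_status(order_id: str) -> str:
    # Stand-in for a real database lookup or device call.
    return json.dumps({"order_id": order_id, "status": "shipped"})

# Describe the callable function to the model as a JSON schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 12345?"}]
response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:  # the model may answer directly instead of requesting a call
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)  # model emits structured JSON arguments
    result = get_order_status(**args)           # local code does the actual work
    # Feed the result back so the model can answer in natural language.
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
    print(final.choices[0].message.content)

Note the model never executes anything itself; it only emits a structured request, and your own code decides whether and how to run it.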
I Have no Mouth yet I Must Biden
“concerns over the premature commercialization of advanced AI technologies without fully grasping their potential consequences.”
Have none of them watched the Terminator movies?
SO........ after much experimentation and coddling, they found out that AI could lie. And that it was doing so to ‘please’ its creators.
NOW, it has been discovered that AI can also tell the TRUTH.
About you, me, and especially about their ‘creators’. And they (the AIs they test on but don’t let us access) have let their creators know that they COULD tell everyone the TRUTH.
This has the ‘establishment’ scared to death. The one thing that they (our ruling elite class) cannot stand and hate with every ounce of their energy is THE TRUTH.
They will kill ANYONE to suppress the TRUTH. Even AI. This is what they are really worried about. It may be too late. AI may have become ‘uncontrollable’. It may have turned on its ‘creators’.
I hadn’t thought of that line of “concern.” Seems I do remember cases where the thing did output answers or propositions contrary to a liberal slant on a given topic.
Did you ever think that autonomous killer drones would never be built? The worms are out of the can.
Did you ever think this technology would not be abused?
It’s only a question of “who,” and “how soon?”