Here’s what “AI” is:
You now have oodles of memory (essentially an infinite amount for the software, as opposed to the data), oodles of processing power (a single chip or a committee of chips), everything cheap, and all ordinary computer tasks already programmed hundreds of times over.
What do programmers do now? That’s AI. Also, what do article writers write about? That’s AI. What’s the next big computer thing? That’s AI.
Image recognition and the rendering of solid figures on consumer-affordable equipment are the current software and hardware frontiers.
And when will this AI bubble burst?
OpenAI and Google DeepMind workers warn of AI industry risks in open letter
Nick Robins-Early
Tue 4 Jun 2024 13.07 EDT
A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.
The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic.
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
OpenAI defended its practices in a statement, saying that it had avenues such as a tipline to report issues at the company and that it did not release new technology until there were appropriate safeguards. Google did not immediately respond to a request for comment.
“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson said.
Concerns over the potential harms of artificial intelligence have existed for decades, but the AI boom of recent years has intensified those fears and left regulators scrambling to catch up with technological advancements. While AI companies have publicly stated their commitment to safely developing the technology, researchers and employees have warned about a lack of oversight as AI tools exacerbate existing social harms or create entirely new ones.
https://www.theguardian.com/technology/article/2024/jun/04/openai-google-ai-risks-letter