Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: AdmSmith; AnonymousConservative; Arthur Wildfire! March; Berosus; Bockscar; BraveMan; cardinal4; ...

2 posted on 06/02/2024 4:45:42 PM PDT by SunkenCiv (Putin should skip ahead to where he kills himself in the bunker.)
[ Post Reply | Private Reply | View Replies ]


To: SunkenCiv

Here’s what “AI” is:

You now have oodles of memory (basically an infinite amount for the software, as opposed to the data), oodles of processing power (a single chip or a committee of chips), everything cheap, and all ordinary computer tasks already programmed hundreds of times over.

What do programmers do now? That’s AI. Also, what do article writers write about? That’s AI. What’s the next big computer thing? That’s AI.

Image recognition and rendering of solid figures in consumer-affordable equipment are software and hardware frontiers.


11 posted on 06/02/2024 5:05:54 PM PDT by cymbeline (we saw men break out of a concentration camp.)
[ Post Reply | Private Reply | To 2 | View Replies ]

To: SunkenCiv

And when will this AI bubble burst?


16 posted on 06/02/2024 6:24:15 PM PDT by jdsteel (PA voters elected a stroke victim and a dead guy. Not a joke.)
[ Post Reply | Private Reply | To 2 | View Replies ]

To: SunkenCiv; All

OpenAI and Google DeepMind workers warn of AI industry risks in open letter

Nick Robins-Early
Tue 4 Jun 2024 13.07 EDT

A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI defended its practices in a statement, saying that it had avenues such as a tipline to report issues at the company and that it did not release new technology until there were appropriate safeguards. Google did not immediately respond to a request for comment.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson said.

Concern over the potential harms of artificial intelligence has existed for decades, but the AI boom of recent years has intensified those fears and left regulators scrambling to catch up with technological advancements. While AI companies have publicly stated their commitment to safely developing the technology, researchers and employees have warned about a lack of oversight as AI tools exacerbate existing social harms or create entirely new ones.

https://www.theguardian.com/technology/article/2024/jun/04/openai-google-ai-risks-letter


29 posted on 06/04/2024 7:19:12 PM PDT by BenLurkin (The above is not a statement of fact. It is either opinion, or satire, or both.)
[ Post Reply | Private Reply | To 2 | View Replies ]


