Posted on 02/13/2022 9:24:11 AM PST by BenLurkin
On Wednesday, OpenAI cofounder Ilya Sutskever claimed on Twitter that 'it may be that today's largest neural networks are slightly conscious,' ...
He didn't name any specific developments, but he is likely referring to mega-scale neural networks such as GPT-3, a 175-billion-parameter language processing system built by OpenAI for translation, question answering and filling in missing words.
Sutskever faced a backlash soon after posting his tweet, with most researchers concerned he was overstating how advanced AI had become, Futurism reported.
'Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI,' said UNSW Sydney AI researcher Toby Walsh. It is also unclear what 'slightly conscious' actually means, as consciousness in artificial intelligence remains a controversial concept.
An artificial neural network is a collection of connected units, or nodes, loosely modelled on the neurons in a biological brain. It can be trained to perform tasks without explicit human instruction, by learning from examples. However, most experts say these systems aren't even close to human intelligence, let alone consciousness.
(Excerpt) Read more at dailymail.co.uk ...
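For anyone wondering what the excerpt's "collection of connected units or nodes... trained by learning" looks like in practice, here is a minimal sketch in Python. The task (XOR), the layer sizes, the learning rate and the number of training steps are all illustrative choices, not anything taken from the article or from GPT-3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their XOR labels: a task no single linear unit can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights connecting input nodes -> hidden nodes -> output node.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: activations flow through the connected units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (gradient of the cross-entropy loss): nudge every weight
    # in the direction that reduces the prediction error.
    grad_out = out - y
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

No rule for XOR is ever written down here; the weights simply drift toward values that produce the right outputs, which is all that "learning" means in this context.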
It is called playing chess against the computer.
Yeah - BS
AI is a combination of complex if/then statements combined with pattern-recognition algorithms.
It may be extremely dangerous, but it’s not intelligence.
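As a side note on the comment above: the "if/then statements plus pattern recognition" framing is easy to render in code, though it is not how modern neural-network systems actually work. The keywords, sample messages and scoring scheme below are made up purely for illustration.

```python
# Hand-written if/then knowledge: flag a message if it contains a known keyword.
RULES = {"free", "winner", "prize"}

def rule_based_flag(message: str) -> bool:
    words = set(message.lower().split())
    return bool(words & RULES)

# Crude pattern recognition: count which words tend to co-occur with spam.
def learn_word_scores(examples):
    scores = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)
    return scores

def learned_flag(message: str, scores) -> bool:
    return sum(scores.get(w, 0) for w in message.lower().split()) > 0

training = [("claim your free prize now", True),
            ("meeting moved to three", False),
            ("winner winner chicken dinner", True),
            ("lunch at noon", False)]

scores = learn_word_scores(training)
print(rule_based_flag("you are a winner"))       # True, via the if/then rule
print(learned_flag("claim your prize", scores))  # True, via learned word scores
```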
“Every time such speculative comments get an airing, it takes months of effort to” re-brain wash the masses into thinking that AI is beautiful.
Fixed it.
Only those that truly see the spiritual side of life will get it.
Could it be the almost conscience implant in Byedones brain?
It’s got Biden beat.
When does Skynet become self aware?
5.56mm
In the afterglow of the success of his perceptron experiments, Frank Rosenblatt stated that a large enough network of perceptrons would "be able to walk, talk, see, write, reproduce itself, and be conscious of its existence."
Perceptrons fell into disfavor in the late 1960s and through the 1970s, and were revived and rebranded as "neural networks" in the 1980s. They are nothing but large networks of digital correlators. They can learn but are not at all good at abstraction.
Belief in the "if you scale it up enough it will become conscious" theory is practically a religion among large segments of the AI community. There is no arguing with those who adhere to this view.
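To make the perceptron comments above concrete, here is the classic single-unit perceptron with Rosenblatt's update rule. It learns a linearly separable pattern (AND) easily but never fully learns XOR, the textbook illustration of the limits that helped push perceptrons out of favor. The data, learning rate and epoch count are illustrative choices.

```python
def train_perceptron(data, epochs=50, lr=0.1):
    # Single unit: two weights and a bias, updated by the Rosenblatt rule.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(data, w, b):
    correct = sum((1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0) == t
                  for (x1, x2), t in data)
    return correct / len(data)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, accuracy(data, w, b))  # AND reaches 1.0; XOR never can
```

Adding a hidden layer between input and output, as in the earlier XOR sketch, is what lets later "neural networks" represent patterns a single perceptron cannot.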
Personally, I really like the movie A.I. (Most folks don’t.) At first glance, the movie spends its runtime making you think David is the first mecha to have artificial intelligence. A friend pointed out that the first was really Gigolo Joe, given all of his self-directed actions in the movie that go against his (presumed) programming.
Ex Machina is another thoughtful movie.
Please define “conscious” in this context.
(My next question will be to ask if your cat or dog is conscious.)
What a lie.
I’m doing the same thing in my garage workshop!
OK...
I got the same government grant as those guys, too!
“I’m sorry, Dave. I’m afraid I can’t do that...”
That’s odd... Alexa has told me the same thing! 😎