
To: E. Pluribus Unum
We are looking forward to a truly sad future.

Scientists, who are mostly illiterate when it comes to philosophy, "know" that humans have no free will, are not really individuals (but ant colonies of neurons), and that consciousness is just a trick our meat-brains play on us to make us evolutionarily successful.

So the idea that we are self-aware, curious about the outside world, etc. is not particularly special to them.

If they create an AGI that appears to behave as a human, even if it has no self-awareness or inner consciousness, then they will declare they have succeeded in replicating human consciousness.

The good news will be that they will be the first to upload their consciousnesses into the web. At that point they will be dead and their avatars, although appearing to be conscious, will just be mimicking them and be no different than more advanced versions of Alexa or Siri.

Hopefully far fewer humans will be taken in by this nonsense than were taken in by the COVID must-get-jabbed nonsense, but I fear that the lure of the Metaverse will be too great for a vast majority of humanity to resist.

22 posted on 02/13/2022 10:16:18 AM PST by who_would_fardels_bear (This is not a tagline.)


To: who_would_fardels_bear

“lure of the Metaverse”

The joke will be on them when they realize they’ve been uploaded into a glorified bitbucket :-).

I think the people promoting AI and making all sorts of exotic claims about it are not the ones implementing and using the stuff in the real world. Many professors are completely unaware of the limitations of existing technology, knowing only what a company’s marketing propaganda says about it (I see this *constantly* in the FPGA world ... idiots think that if they assemble a mesh of $2000+ chips they can make them do anything with a bunch of IP blocks and mouse clicks ... it doesn’t work like that).

I have to admit that it’s pretty frigging neat to train a convolutional neural network to recognize things and watch it in action. However, that’s all the thing can do ... recognize things it was trained (i.e. programmed) to recognize. It takes a lot of logic to implement a circuit that can recognize objects in a few milliseconds, and it would take a *lot* of logic to replicate the areas of our brains that do the exact same thing ... and none of that has anything to do with consciousness.
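To show what I mean by “it can only recognize what it was trained on,” here is a rough sketch in Python with torchvision (my own work is on FPGAs, so this is just the quickest way to illustrate the idea, and the image filename is made up):

import torch
from torchvision import models
from PIL import Image

# A pretrained ImageNet classifier: its entire "world" is a fixed list of 1000 categories.
weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights)
model.eval()

img = Image.open("some_photo.jpg").convert("RGB")  # hypothetical input image
batch = weights.transforms()(img).unsqueeze(0)     # standard preprocessing for this model

with torch.no_grad():
    logits = model(batch)

# Whatever is actually in the photo, the answer is forced into one of the
# 1000 trained classes. The network has no way to say "I don't know."
print(weights.meta["categories"][logits.argmax(dim=1).item()])

Feed it something outside those 1000 categories and it will still confidently pick one of them ... which is exactly my point.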

Bottom line is that we have a long way to go before these scientists (the ones who actually work with this stuff) have a shot at making a machine self-aware. I’m not saying it’s impossible; you can do some slick approximations today of what a self-aware machine might be like ... but they’re ridiculously primitive compared to a human. It is incredible what our brains can do with a measly 10 W of power :-).


26 posted on 02/13/2022 10:49:37 AM PST by edh

To: who_would_fardels_bear; Revel

What you both said.

It’s odd how some things stick in memory. Many years ago — more than I want to remember — a college course (not in psychology) detoured through some of B.F. Skinner’s work. Skinner was still alive at that point and was a great faculty eminence at Harvard.

In one of his books, he posed a question to himself: how would he, on strict behaviorist grounds, account for human value judgments regarding, for example, a painting of a sunrise? In retrospect, I wonder how hard he had to work to come up with that example; he could have picked any question relating to meaning, values, purpose and artistic or moral judgment — the sorts of things that materialist reductionism has difficulty explaining.

Anyhow, I give him credit for posing the question, and for acknowledging as he did so that it was a difficult one and that he was dissatisfied with his own answer. But the best he could offer on strictly materialist and behaviorist grounds was that our attraction to such a scene as a work of art might have something to do with “a metaphorical adumbration of the idea of survival value.”

I loved that statement enough to throw it back at the professor in a paper. Here was Skinner, a giant in the field and an uncompromising materialist, introducing three mentalist concepts — metaphor, idea and adumbration — in an attempt to explain an esthetic judgment.

Materialism founders on these sorts of questions. Thomas Nagel, another committed atheist, touched off a firestorm when he acknowledged the same problem a decade ago in “Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.” The usual suspects tried to burn him at the stake for heresy. I do think another paradigm shift is slowly building, as one field after another runs into problems for which science cannot, in principle, offer satisfactory answers. That in no way lessens the power of science within its domain, but the culture will again be driven to recognize that science cannot provide a satisfactory theory of everything. Science remains a subset of philosophy, not the other way around.


37 posted on 02/13/2022 12:41:04 PM PST by sphinx
