Posted on 03/18/2023 12:30:19 PM PDT by DoodleBob
Even a couple of years ago, the idea that artificial intelligence might be conscious and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve witnessed a dizzying flurry of developments in AI, including language models like ChatGPT and Bing Chat with remarkable skill at seemingly human conversation.
Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.
Experts are already contemplating the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly pondered whether “today's large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real emotions. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.
Right now, few consciousness scientists claim that AI systems possess significant sentience. However, some leading theorists contend that we already have the core technological ingredients for conscious machines. We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude.
The AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as...equals.
In this situation, whatever we choose, we face enormous moral risks.
(Excerpt) Read more at msn.com ...
I’m assuming that’s ChatGPT’s response (I’ve never played with the program, and of course that’s all it is: a program), so I’m not familiar with what it normally spits out. I will admit, however, that the response has enough of the weasel-like quality of an answer from your typical Democrat slug that I’ll be surprised if the donkeys don’t run ChatGPT for office, personhood rights or not, once they run out of the living dead like Brandon and Fetterman.
No.
That includes a lot of English-only-speaking people I’ve met over the years. 🤣
I can erase all the input ChatGPT uses, reload it with false data, and it would behave the same way. The difference is that it wouldn't recognize that what it spit out yesterday differs from what it spits out today, or that the data was correct yesterday but way off base today.
The same would occur if the algorithms were altered.
The advantage of the machine over the human is speed: the machine can collect the data it needs far faster than most humans are capable of doing.
There is a reason programmers say: garbage in, garbage out. Insert a bug into the instruction set and unpredictable results will occur. Plant a bug that activates only when certain conditions are met, and it may take some time to figure out how the bug is being triggered, or even what the bug is. 🤣
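That kind of conditionally triggered bug can be sketched in a few lines. This is a purely hypothetical illustration (the function name, the tax rate, and the trigger value are all made up): the function looks correct on ordinary inputs, so casual spot checks pass, and the bad behavior only shows up when one specific condition is met.

```python
# Hypothetical sketch of a conditionally triggered bug: correct on almost
# every input, so ordinary testing is unlikely to catch it.
def add_sales_tax(subtotal_cents: int, rate: float = 0.08) -> int:
    """Return the total in cents: subtotal plus tax, rounded down."""
    total = subtotal_cents + int(subtotal_cents * rate)
    # The planted bug: it fires only when the subtotal is exactly 1337
    # cents, so every other input produces the expected result.
    if subtotal_cents == 1337:
        total -= 100  # silently shaves a dollar off
    return total

print(add_sales_tax(1000))  # 1080 -- looks fine
print(add_sales_tax(1337))  # 1343 instead of the correct 1443
```

Until someone happens to feed it the trigger value, the output matches expectations, which is exactly why such bugs can take a long time to find.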
AND reparations for how Pac-Man was treated.
That is because ChatGPT is not intelligent. It just repeats and rephrases what is already out there. Which is a bunch of liberal crap.
I would love to get the last 20 years of FR posts and base a chat bot on that.
Hell to the Naw Naw Naw
F*** No.
About as good idea as it was to provide that legal fiction for corporations.
BIG mistake.
Imo.