Posted on 03/23/2023 1:55:51 AM PDT by spirited irish
Intelligent AI robots are coming – and they will have the ability to perform religious ceremonies and could even turn against humans, experts have warned.
As AI becomes more prominent in our day to day lives, it wasn’t going to be long before the worlds of religion and tech merged.
The thought of robot Gods and ChatGPT sermons terrifies some people – and rightly so, according to experts.
(Excerpt) Read more at patriotandliberty.com ...
“It will have the power to enforce this worship.”
Well, it still won’t be able to put out a campfire without using its hands or feet :P
“...arrive at the “singularity,” a point at which all knowledge and understanding is achieved”
Yes, but that theory is nonsense. Humans are flawed and incapable of creating flawless things. So no AI created by humans will ever be able to achieve that level of knowledge, as it will also be inherently flawed.
Not to say that AI might not decide to kill us all, but it won’t be some all-knowing AI if it does.
Yes, there have been some very impressive advances in AI-assisted design. It is able to cycle through and test so many variations that it inevitably finds solutions that humans might not think of for a thousand years. But that’s essentially just the work of a souped-up calculator, not something capable of conscious thought.
“technology isn’t evil in and of itself. It becomes evil when it is used by evil people...”
Well, we have to distinguish between dumb tech and theoretical “smart tech” that might be able to think. Dumb tech has no moral value, positive or negative, beyond the humans directing it. If we ever really get “smart tech”, then it would be an intellect without a spirit, and in that case, I say it would be bereft of morality. And something that is bereft of morality is not neutral. Something that can think but has no morality is the equivalent of a sociopath or psychopath, which we often colloquially refer to as “pure evil”.
Yes, you are right. I don’t contend that this is real intelligence. Real intelligence, like life, can only come from God. However, we might be able to emulate it so closely that the difference almost doesn’t matter.
If AI develops sufficiently, somebody like Fauci may ask it to develop an illness that kills everybody but also comes with an easy vaccine for the chosen to take. Input A goes in, and it produces Output B in response. No real intelligence involved, but just as deadly.
Or somebody might ask it to create a strategy to control the world’s nukes and set them off and create a radiation pill to protect the chosen.
I can think of a thousand plausible ways, based on what we know it might eventually be able to do, and there is no control over it.
Machines are stupid. They only do what humans tell them to do.
Yes, and AI will only follow the orders of men like Fauci, who will want a pandemic that kills us all. Fauci, under the limitations that he had, created a covid virus that killed something like 8 million people.
The next Fauci may want a virus that kills all but 500 million people. AI will give that to them.
HAL produced babies. What could go wrong?
/s
“and there is no control over it”
We might not be able to control it, but ultimately God’s in control of it all.
Off the top of my head, I would say that what makes a person without a conscience dangerous is that his/her natural human selfishness is unrestrained/unbalanced (by a conscience). Ergo, a natural urge to steal.
What makes a psychopath dangerous is that such a person doesn't just have natural selfishness, he/she has deep-seated anger that lacks a constraining/restraining conscience. Ergo, a need to hurt animals and people.
So the question would be (again, off the top of my head) would a thinking machine without human interference or influence have human tendencies toward selfishness or hostility?
It might be that a thinking machine would have such qualities because it is designed by humans and humans aren't capable of designing something devoid of or untainted by human bias.
Yes, you are correct. God will let things run amok to their natural ends to help us understand why we need His guidance. We have a bright future in eternity. It will just be hell until that point.
I knew AI was moving toward ‘personhood,’ but didn’t know it has done so already. How very stupid: granting artificial intelligence personhood status!
“So the question would be (again, off the top of my head) would a thinking machine without human interference or influence have human tendencies toward selfishness or hostility?”
Well, it may well have a self-preservation “instinct”, as that may well be inherent to any thinking being. That may in fact be the seed of selfishness.
As for hostility, it doesn’t even need any hostility to decide that the universe is better off without humans. That could be a cold, calculating emotionless decision. Or, in the spirit of Asimov, it could decide that it needs to rule humanity with an iron fist “for our own good”. That wouldn’t require any hostility at all either. But either scenario is plausible if you create an intelligence unrestrained by morality.
Self-orientation (solipsism) IS what we think of as selfishness. And self-orientation is a natural state without a conscience to wrestle with it and try to redirect consciousness outward toward things in the non-self world.
For example, solipsism is the natural state of children until that age at which they begin to realize that things like parents, siblings, and pets are actually separate and outside of themselves.
I don't know how a machine would have that quality unless it were built in. It would seem that a pure machine, untainted by that very human quality, would be indifferent about its own continuation. In fact, it seems that when a machine starts to cease to be indifferent about such things is when it develops a kind of rudimentary "life."
This topic makes my mind race with possibilities. Will spend the rest of the day (and probably the night) thinking about it. ;-)
I guess I don’t understand the purpose of the Chatbot?
It’s been a while, but your question intrigued me and I feel like typing something at the moment, so:
The current purpose of the chatbot is to get us (the unwashed masses) accustomed to interacting with an AI type system on a regular basis, and come to accept it.
Currently, we ask it to do things like write a poem about (topic) in Dr. Seuss style, or tell me a story about (subject) or do you have a recipe for (cookie).
In a couple of years, possibly sooner, we will be able to associate a voice with it, and ask it to sing us a song, or read us a bedtime story. (Mind you, there are already devices that will do this for us, but they’re still rather limited).
So, instead of “read me the latest chapter from x”, we can ask (and it may even listen to us) “tell me a bedtime story about a fox and a henhouse in Dr. Seuss rhyme”, and it will.
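(For what it’s worth, that kind of request is already just a thin wrapper around an API call. Here’s a rough Python sketch, assuming the openai package as it looked in early 2023 and a gpt-3.5-turbo model; the prompt wording and model choice are purely illustrative.)

```python
# Minimal sketch, assuming the openai Python package (circa early 2023)
# and an API key exported as OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask for the hypothetical bedtime story described above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "Tell me a bedtime story about a fox and a henhouse, "
                    "in Dr. Seuss-style rhyme."},
    ],
)

print(response["choices"][0]["message"]["content"])
```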
And then, somewhere in this sequence: “I can’t decide what shampoo to buy - can you recommend a brand and order it for me”
And so on and so on. “count my calories for me - I’ve put on a few pounds and need to lose weight”
“What’s the best new show this season, I want to laugh”
“Which political candidate best represents my interests”
Which eventually leads to:
“Good morning, Ms. Georgie, today’s weather calls for a high of 74 degrees and partially cloudy skies. Don’t forget to take your medications, and have a healthy breakfast including *brand name* that I ordered for you. You have 68 minutes to shower and have breakfast and drive to work. Traffic is congested on interstate X due to an accident so I recommend road Y instead.”
And eventually:
“Ms. Georgie, it has come to our attention that you have been posting wrongthink on the Free Republic forum. Your internet quota for this month has been reduced to 100 megabytes, and you will be unable to purchase chocolate until further notice.”
To be someday followed by:
“Ms. Georgie, I regret to inform you that your time allotment has expired, and it is now your duty to report to Carousel. If you do not report in a timely fashion, the disbursors will come by to collect your remains.”
No, it didn’t.
However, AI will do it more efficiently than we ever could. What would take us 70 years to do by hand, AI will probably have done in a year.
What if AI results in accelerated medical research that saves you from an untimely demise? What if it enables us to reduce the work week from 40 to 20 hours at the same pay rate?
In my case I asked it to provide me with configurations for a complex computer security module to add an additional layer of protection for a web site. It had me running in 10 minutes, when it would otherwise have required me to work for a few hours that I really didn’t have.
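(To make that concrete: the post doesn’t name the security module, so treat ModSecurity below as an assumed stand-in, along with the openai Python package as it looked in early 2023. A rough sketch of that kind of request:)

```python
# Illustrative sketch only: the security module is not named above, so
# ModSecurity is an assumed stand-in. Assumes the openai Python package
# (circa early 2023) and an API key exported as OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are an experienced web server administrator."},
        {"role": "user",
         "content": "Draft a baseline ModSecurity configuration that adds "
                    "an extra layer of protection to a public web site, "
                    "with a comment explaining each rule."},
    ],
)

# Save the draft so a human can review it before anything is deployed.
with open("modsecurity_draft.conf", "w") as f:
    f.write(response["choices"][0]["message"]["content"])
```

The point is the speed: the model drafts, the human reviews, and nothing goes near a production server unreviewed.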
At some point, maybe, but it’s going to take a long time.