Posted on 09/18/2022 6:53:17 PM PDT by WeaslesRippedMyFlesh
Fingers were crossed. Doesn’t count. Nyah, nyah, humans.
Unfortunately the algorithms are all written by Marxists.
I don’t think AI is going to take off until quantum computing does. What they have today is impressive, but it’s nowhere even close to the human brain, something like 2% of it. We are still in the Joe Biden stage when it comes to true AI. I hope I’m still around when it arrives, and that Democrats don’t destroy humanity first.
All these morons going around saying “AI presents a great danger to humanity” I feel like saying to them DUDE, Democrats present a BILLION times greater danger than AI will EVER present!! Look at Biden, in only 1 year he brought us to the brink of nuclear war with Russia! Now we have China last week partnering up with Russia, so now that’s TWO superpowers with NUKES threatening the Western world! And let’s not even get into Iran! And Biden is not even HALFWAY through his first term!
When you open Windows 10 on a computer, and it says “welcome”... do you think it means it?
So much for soul in a robot.
Can you really trust a robot, though? Remember Caprica and the colonies. 50 billion people.
Oh, good
“It’s a cookbook!”
I for one welcome our new robot overlords!
<golf clap>
Well played.
Dunno. Leftists keep calling AIs racist because they come to logical conclusions.
https://search.brave.com/search?q=ai%20became%20racist&source=ios
Ditto.
I always thought an (artificial) intelligence needed to understand who created it, and who created its maker. Respect for the sanctity of life should be hardcoded into its value system.
How do you implement Asimov's first law of robotics, "A robot shall not harm a human or by inaction allow a human to come to harm," in today's world?
How would a robot react to:
There are so many compromises we make just to live in today's world. So much inaction, because we can't possibly solve every problem. How does a robot bound by the first law of robotics not turn into RoboCop?
The logical solution is for the robot to duplicate itself and take over the world.
Isaac Asimov’s “Three Laws of Robotics”
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
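The three laws above form a strict priority ordering, and that much can actually be sketched in code. Here is a minimal toy illustration, purely hypothetical (the `Action` fields and `permitted` function are my own invention, not anything from Asimov or a real robotics system), showing how the First Law overrides the Second, and the Second overrides the Third:

```python
# Toy sketch: Asimov's Three Laws as a priority-ordered action filter.
# All names here (Action, permitted, the boolean fields) are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action injure a human?
    inaction_harm: bool      # would NOT acting allow a human to come to harm?
    ordered_by_human: bool   # was this action ordered by a human?
    self_destructive: bool   # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human...
    if action.harms_human:
        return False
    # ...and inaction that allows harm is also forbidden, so an action that
    # prevents harm overrides orders and self-preservation alike.
    if action.inaction_harm:
        return True
    # Second Law: obey human orders (First Law already cleared above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence.
    return not action.self_destructive

print(permitted(Action("push human", True, False, True, False)))   # False
print(permitted(Action("rescue human", False, True, False, True))) # True
```

Of course, the hard part the thread is pointing at is hidden inside those booleans: deciding what counts as "harm" and which inactions "allow" it is exactly the compromise problem raised above.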
Are you sure that free will can't be programmed? How is it that we have free will? If a robot can be programmed with a value system and desires, if it can choose to modify or adjust those values and desires, and if it can think logically through scenarios, comparing both means and ends to its values and desires, and make choices, then can it not be said to have free will?
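The argument above, a value system plus the ability to modify it plus choice by comparison, can at least be mimicked in a toy program. This is a hypothetical sketch of my own (the `choose` function, the values and options are all made up for illustration), not a claim that it constitutes free will:

```python
# Toy sketch of the idea above: an agent with a value system it can inspect
# and modify, choosing among options by scoring them against its values.
# Everything here is hypothetical illustration.

def choose(options, values):
    """Pick the option whose traits best match the agent's current values."""
    def score(option):
        return sum(values.get(trait, 0) for trait in option["traits"])
    return max(options, key=score)

values = {"honesty": 2, "safety": 3, "profit": 1}

options = [
    {"name": "lie for profit", "traits": ["profit"]},
    {"name": "tell the truth", "traits": ["honesty", "safety"]},
]

print(choose(options, values)["name"])  # tell the truth (score 5 vs 1)

# The agent "adjusts its desires" and the same choice comes out differently:
values["profit"] = 10
print(choose(options, values)["name"])  # lie for profit (score 10 vs 5)
```

Whether choices produced this way count as free will, or just as deterministic computation over values someone else installed, is exactly the question the comment is asking.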
right, and the check is in the mail, and i love you, and i won’t...
LOL!
LOL...