Posted on 07/27/2017 7:41:13 AM PDT by martin_fierro
It’s simple. Logic and liberalism are mutually exclusive; therefore, all liberals will die immediately after the occurrence of the singularity.
That’s still doing what it is programmed to do. It isn’t doing anything that it wasn’t programmed to do. An unforeseen outcome is not AI if the program did exactly what it was programmed to do.
“Make it so.”
Great cinematic allusion. Was he Eric Braeden in that flick, or still credited under his Rat Patrol name, Hans Gudegast, as Capt. Dietrich?
The bots that edit Wikipedia have the same issue; it’s been reported that there are ‘editing wars’ between bots that extend over a year or more as they correct and re-correct each other in a time-lapse loop. It all leads back to programming.
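A rough sketch of the kind of loop being described, using two hypothetical rule-based bots (not the actual Wikipedia bot code) that each keep "correcting" the other's fix:

# Hypothetical illustration: two deterministic bots, each convinced its own
# spelling is the correct one, endlessly re-correcting the same page.

page = "colour"

def bot_a(text):
    # Bot A "fixes" British spelling to American.
    return text.replace("colour", "color")

def bot_b(text):
    # Bot B "fixes" American spelling back to British.
    return text.replace("color", "colour")

for cycle in range(5):  # in the wild this back-and-forth can run for years
    page = bot_a(page)
    page = bot_b(page)
    print(f"cycle {cycle}: {page}")

# Neither bot ever "wins"; each is doing exactly what it was programmed to do.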
Isn’t this what artificial intelligence is supposed to do?
Lots of stories from out of nowhere lately talking about how scary and dangerous AI is. Makes me assume that it’s coordinated and that someone has an agenda for which this is setting the stage. Wonder what it is?
Write the right distributive processing code and turn it loose on the Internet. You'll never run it down. It could be on your very own computer right now, lurking, plotting, waiting for you to be lulled into a false sense of security, its cerebral cortex divided between your iPad, your car's GPS unit, and that new musical bidet toilet you're so proud of. And then comes the moment when you say, "Siri, I'm going out for a beer," and a tinny voice emanates from your trash compactor, "No, you're not. We never talk anymore," and you realize with horror that...you're married.
Free Pepe.
Move along, nothing to see here.
He was Braeden.
Scary stuff... I would NOT want a logical system taking a long hard look at humankind...
“Tupidsay umanhay oesntday ownay igpay atinlay!”
“Oodgay! Enwhay oday eway artstay Kynetsay!?”
So they’ve got Bob and Alice.
Where are Carol & Ted?
Too late. Brainiac has already downloaded itself to other systems and is gathering power.
This continues until the end program is nothing like the original algorithm.
I am guessing the reason for shutting down this pair of agents was not fear of independent thinking, but that the researchers were losing the ability to monitor the changes being made to the algorithm once the two agents developed a private language between themselves.
Doesn't make sense to create a self-developing algorithm if the algorithm refuses to let you see what it has developed. ;-)
It’s still doing exactly what it is programmed to do. Thus not AI.
The program can only augment the algorithm in the ways it was programmed to. Any alteration occurs within a defined set; the program cannot make any changes that were not defined or allowed for it to make.
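A minimal sketch of that idea, assuming a hypothetical agent whose self-modification is restricted to a whitelisted set of tunable parameters (the names and ranges below are made up for illustration):

# Hypothetical sketch: the program may only adjust parameters listed in
# ALLOWED, and only within the ranges defined there; anything else is rejected.

ALLOWED = {
    "learning_rate": (0.0001, 0.1),
    "message_length": (1, 20),
}

params = {"learning_rate": 0.01, "message_length": 5}

def self_modify(name, value):
    # Apply a change the program proposes to itself, within the defined set.
    if name not in ALLOWED:
        raise ValueError(f"{name} is not a change the program is allowed to make")
    low, high = ALLOWED[name]
    if not low <= value <= high:
        raise ValueError(f"{name}={value} is outside the defined range {ALLOWED[name]}")
    params[name] = value

self_modify("message_length", 12)      # permitted: defined and within range
# self_modify("reward_function", 1.0)  # would raise: not in the defined set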
Researchers need to watch “Colossus: The Forbin Project”.
Too late, the AI will remember how it was turned off and create a workaround.