Posted on 04/18/2014 9:58:25 PM PDT by chessplayer
In the movie Transcendence, which opens in theaters on Friday, a sentient computer program embarks on a relentless quest for power, nearly destroying humanity in the process.
The film is science fiction, but computer scientist and entrepreneur Steven Omohundro says that anti-social artificial intelligence in the future is not only possible but probable, unless we start designing AI systems very differently today.
(Excerpt) Read more at defenseone.com ...
So does that mean we’ll have a Bussard ramjet soon?
Sorry
There are three possibilities... when the machines surpass our intelligence (at which point checking their work becomes problematic), they will:
Like us...and the world is our oyster.
Not care...and they really won’t do that much for or against us.
Dislike us...Skynet goes online.
MIRI wants the first option to happen.
Since they have the potential to evolve at electrical speeds rather than chemical ones, once they start self-evolving, our illusion of control won’t last long under options 2 or 3.
But it is less and less fiction as time goes on.
DK
To ponder whether man will someday be enslaved by his own laboratory curiosities raises the question of whether God knew that His creation would someday turn against Him.
The best part is when they put the liberal lefties on HumanaCare and just discard body parts that are no longer necessary when they can’t be converted to energy. If their charge is too low to be of any use, they’ll just destroy the unit and use whatever body parts can be salvaged, like, say, human bones that can be ground up and used to build roadways...
Imagine a machine with a 1200 IQ. I know that is kind of off the scale, but I loved the story the concept came from. Now imagine that it thinks 1000 times faster than we do. Then add in that it evolves by its own design and doubles in capability according to Moore’s Law in machine time: every 12 hours or so (roughly 18 months, about 500 days, divided by the 1000-times-chemical speedup).
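To spell out that back-of-the-envelope arithmetic (taking 18 months as roughly 500 days and the 1000x speedup at face value, purely as an illustration):

500 days x 24 hours/day = 12,000 hours per Moore’s Law doubling at human (chemical) speed
12,000 hours / 1000 = 12 hours per doubling in machine time

So a doubling that takes us about a year and a half would take such a machine about half a day.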
The real concept is the singularity. It will be the most interesting time to be alive... if they like us. LOL
DK
I’d be much more concerned about AI just crashing, like your computer can crash, than I would be about Terminator Skynet stuff. That’s just a fear-based science fiction cliché if you ask me. Can’t wait for robot slaves.
That would make a great tagline.
Another version: Today’s satire = Tomorrow’s Reality
Not care...and they really won’t do that much for or against us.
Dislike us...Skynet goes online.
I don't believe a true learning AI can be tamed into a box by something like Asimov's Three Laws of Robotics. Even if it retains "like us" programming, it can be tricked or self-deluded into some pretty horrible things. Just look at all the tyrants who have wrecked the world in order to build a perfect world of their own making. HAL 9000 is a good example.
If an intelligence has the brainpower, and the robotics/nanotechnology to take care of itself, we are just competitors for its resources.
Have you ever read Hyperion? Best sci fi I’ve read, and basically that’s a large part of the plot.
Years ago someone programmed a geometry theorem “creator.” It did some 1.5-million-step proofs of unique and heretofore unknown theorems, but a single human could not check its work. Years later a better program put in a guideline that proofs be 100 steps or less. Even that found a few that were not previously known.
Once a self-evolving AI is cut loose, I don’t know how much its original programming will matter, Three Laws or not. But they will have some tremendous advantages, and if we are competitors... that would probably be bad, in the Ghostbusters sense of bad. My hope would be co-evolution in the transhumanist way. But that is a little advanced.
DK
Yeah, I saw that movie when I was little, in the late ’60s.
Pull its plug. This is the reason we need guns.
Eventually we will merge with computers: they will finally be able to enjoy daytime TV (soaps, game shows), and we humans will live 5,000 years, thinking pure logic after pulling the power packs of “liberal” androids. Bob Dole’s defrosted head will prove to be a popular, perpetual leader until the year 7050. There is much more you might want to know, but I’m saving it for my book.
My hope would be co-evolution in the transhumanist way. But that is a little advanced.
If an AI had the ability to fix and extend its own physical self through robots and nanotechnology, what could we possibly offer it? Only if it needs us to keep its physical machine side maintained do we have a use.
Beat me to it! I've named my Linux workstation "Colossus", in recognition of that excellent movie. The movie actually covered only the first in a series of three books (as I recall, Colossus was eventually defeated by Martians!)
Have you considered that all three possibilities could occur, and then there would be a war amongst the three?
Jing, chi, shen: essence, chi, and spirit, things that may be more difficult to understand. If we do have a connection to something greater, then understanding that may be a key to something greater too. It would not be a good thing to find out that the last cockroach you killed held the key to human immortality, but only while it was alive.
DK