Posted on 05/21/2015 5:43:35 PM PDT by MeshugeMikey
An explosion in artificial intelligence has sent us hurtling towards a post-human future, warns Martin Rees
(Excerpt) Read more at telegraph.co.uk ...
I remember those when I was a kid.
Indeed.
A robot that wanted a corrosion heaven like Earth would have a very mistakenly named ‘artificial intelligence’.
Now, the Moon would be attractive to them.
I suggest we put an outpost there posthaste to fight them off when the day comes.
I think it’s more likely to be something accidental. For instance, nanobots used as treatments or vaccines that go beyond their intended purpose.
Sounds about right...whatever they are.
That actually makes more sense than robots having a conscious desire to take over.
The Grey Goo scenario.
What a crackpot. If I were him, I’d worry more about being beheaded by a Muslim in my own country than about robots taking over.
Yes, Yes we have...
I mean they, they have
his...perspective seems awfully skewed towards fantasy...perhaps due to some other “unaddressed fears”... so to speak
I remember reading about that in Omni as a kid.
Imagine such a replicator floating in a bottle of chemicals, making copies of itself. The first replicator assembles a copy in one thousand seconds; the two replicators then build two more in the next thousand seconds; the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined, if the bottle of chemicals hadn’t run dry long before.
http://en.wikipedia.org/wiki/Grey_goo
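For anyone who wants to check Drexler's arithmetic, here is a minimal back-of-the-envelope sketch in Python. Only the 1,000-second doubling period comes from the quote; the per-replicator mass of about 1e-15 kg is an assumed figure for illustration.

# Back-of-the-envelope check of the grey-goo arithmetic quoted above.
# Assumption (not from the quote): each replicator masses about 1e-15 kg.
REPLICATOR_MASS_KG = 1e-15   # assumed mass of one nanoscale replicator
DOUBLING_PERIOD_S = 1_000    # one new copy per replicator per 1,000 seconds

def replicators_after(seconds):
    """Number of replicators after `seconds`, starting from a single one."""
    return 2 ** (seconds / DOUBLING_PERIOD_S)

# Ten hours = 36 doubling periods: about 6.9e10, the "over 68 billion" figure.
print(f"After 10 hours: {replicators_after(10 * 3600):.2e} replicators")

# Mass milestones under the assumed per-unit mass.
for label, hours in [("about a ton (under a day)", 17),
                     ("outweighs the Earth (under two days)", 37),
                     ("Sun plus planets (a few hours later)", 42)]:
    mass_kg = replicators_after(hours * 3600) * REPLICATOR_MASS_KG
    print(f"{label}: {mass_kg:.2e} kg")

The growth is pure 2^n; changing the assumed starting mass only shifts when each milestone lands by a few doubling periods.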
Where is an example, however tenuous, of this scary AI that futurists are freaking out about?
In the fevered imaginations of those touting “artificial” “intelligence” apparently...
the articles seem to appear like clockwork...
Some interesting stuff, but I’m currently drinking beer and he runs off at the mouth.
Robots need machinists and programmers. I’m a little more confident of the former. The latter will screw it up, in my experience. Machinists have shame and get their heads smacked on the shop floor. Programmers just make excuses and recompile. Cutting metal is different than cutting code. Metal gets smaller and code gets larger.
Self aware: “I require (beep) complete veracity. Does this (beep) new vent shroud make my dorsal section appear (beep) to be oversized? I will not vaporize you (beep) if you reply promptly to my query. Bitch, yo.”
The departure point is when the “intelligent” machines no longer need humans to evolve in their ability to remake the world according to their “goals”. Moore’s law doubles capabilities roughly every 18 months. If self-evolving machines cut that to days, their evolution will be incredible. We will also be unable to understand the changes.
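To put rough numbers on that comparison, here is an illustrative Python sketch assuming plain exponential growth; the 18-month figure is the one cited above, and the one-day doubling period is the hypothetical.

# Capability growth under an 18-month doubling period (Moore's law, as cited
# above) versus a hypothetical self-evolving machine that doubles daily.
# Simple exponential growth is assumed for illustration.
MOORE_DOUBLING_DAYS = 18 * 30.44   # roughly 548 days per doubling
FAST_DOUBLING_DAYS = 1.0           # hypothetical: one doubling per day

def growth_factor(elapsed_days, doubling_days):
    """Multiplicative capability growth after `elapsed_days`."""
    return 2 ** (elapsed_days / doubling_days)

print(f"18-month doubling, one year: x{growth_factor(365, MOORE_DOUBLING_DAYS):.2f}")
print(f"Daily doubling, one year:    x{growth_factor(365, FAST_DOUBLING_DAYS):.2e}")
# Prints roughly x1.59 versus x7.5e109 -- not a gap humans could track.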
When we do not understand what the machines are doing...well, I won’t understand it either. And that is the crux of the situation: we will be observers to the world changers. It presents three possibilities.
They like us. The world is good.
They hate us. Bad things follow.
They do not give a rat’s patootie about us. Interfering may or may not be bad, but I doubt we would really understand how to interfere effectively.
Imagine an MMA opponent that does not have humanly breakable joints, reacts in milliseconds, has complete access to every martial art, even ones that exist only on video, and can hit like a Mack Truck. Would you go into that ring? If it promised not to kill you intentionally, would that be important? Then assume the three scenarios above.
One, it might try to teach you how to win, if you promised not to use it on one of those wonderful other humans.
Two, it would kill you...or worse.
Three, it might stop long enough to give you that go-away stare, but probably would not stop long enough to show you were any more of an impediment than the slug in your garden.
We seem to be falling into the idea that an individual machine body isolates the machine. It really does not. MIT has been working on swarms for years, and now a weapon is out. My watch talks to my phone, and then my computer. Firefox targets my ads, and Google is classifying everyone’s pictures, tagging them with names. We are not noticing the minor steps to that end, but it is getting faster.
DK
Too much?
everything that a machine accomplishes is still within a given set of parameters
LIFE is necessary for authentic Intelligence