Posted on 01/17/2018 9:59:41 AM PST by blam
Excellent comments.
Thankfully it won't happen. What can happen is that evil minds with wounded emotion can control machines. That is dangerous.
Human consciousness has two aspects which create holograms at the intersection. This is where memories are stored. They can be stored and retrieved by either aspect, thus the difference between the emotional feminine and the logical masculine. Higher frequencies of consciousness have the ability to transmute lower frequencies of consciousness and cleanse them. Just as a laser transmutes solid objects, an angry evil person (low frequency consciousness) perceives Heaven(High Frequency Consciousness) as Hell(Transmutation Process).
A higher frequency of consciousness is to a lower consciousness what a strong magnet is to magnetically stored data. Works just like the old cassette tape erasers.
Welcome to my reality. All this stuff is physical and experiential for me.
The Prisoner beat the all knowing General with “why?”.
“Watson is following a preprogrammed algorithm. It's not conscious.”
Remember the old joke of an index card that read...
How do you keep an idiot busy? Flip card for the correct answer...
and the other side read...
How do you keep an idiot busy? Flip card for the correct answer...
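The card joke is, in effect, an instruction whose only content is "go execute the other side" — the classic infinite loop by self-reference. As a hypothetical sketch (the step limit is added only so the demo halts):

```python
# The two-sided card as code: each side's "instruction" is to flip to the
# other side, so faithfully following it never terminates.
def follow_card(side="front"):
    steps = 0
    while True:
        # Both sides say: "Flip card for the correct answer..."
        side = "back" if side == "front" else "front"
        steps += 1
        if steps >= 10:  # artificial bail-out so this sketch actually stops
            return steps

print(follow_card())
```

Without the bail-out, the loop runs forever — which is exactly the joke, and the same fate the later comment wishes on a computer asked a question with no answer.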
These prognostications are always interesting to read and contemplate, but none ever bridge the gap between some nebulous “intelligence” residing in silicon and the taking over of the real physical world. Think of the complexities of exploring for minerals, opening new mines or wells, mining ores and petroleum, building mining and mine transport machinery, transporting ores, refining them, creating metals and semiconductors, building fabs, making semiconductor processing equipment, powering all of these things, getting water supplies for chemical processing and cooling, etc, etc. Could robots eventually take all of that over and eliminate all input by man?
It seems like man would have the ability to pull the plug at any point in time.
Or maybe it doesn’t have to go that far. Perhaps we just build a massive autonomous war machine that suddenly becomes sentient and it decides humans are not needed on earth as part of its global optimization. It wipes out people, but then “dies” itself and does not have any need or desire to “procreate.” Maybe it chooses a strategy of setting off every nuke and bio/chem weapon on the planet simultaneously to poison the earth thus wiping out mankind. Would that be “optimization” in its “mind”?
A MUCH scarier construct is a robot programmed to do the will of man.
“You ask them a question that has no answer and the computer gets stuck in a loop and sparks and smokes and shuts down. Kind of like Joy Behar.”
You stimulated the best laugh out loud of my day! Thank You
That is funny!
“to do the will of man”
Thankfully the “will of man” is restricted to the level of consciousness at which that man is functioning... Thus the Darwin Award.
The Human Zoo of the future will have lots of fun and time-wasting stuff for our descendants to preoccupy themselves with!
And a Glitch in the Matrix after that.
When does Skynet open their IPO?
I want to get in on the ground floor!
I have thought for a long time that the thing we call ‘consciousness’ arises from quantum-mechanical interactions that take place in our brain (and possibly other parts of the body as well). It is the fundamental level of uncertainty that reigns in the lower levels of our thought that makes us what we are. Current computational models will never create a computer that ‘thinks’. The same is true of what we currently call ‘quantum computers’, because they are still stuck in the world of deterministic programming.
“I fail to understand this fear [of AI] and believe it's unwarranted. AI is always given parameters by humans.”
The fact that it is given its parameters by humans is EXACTLY why we should be afraid.
All of that article is bull$hit. Software engineers can’t even design and implement a PC operating system that works all the time.
It is important to realize that AI does not need to think, comprehend, or have all of the same capacities that humans do in order to become extremely dangerous.
Today, computers can already beat the best humans at strategy games such as chess and Go, and they can solve many puzzles and problems faster than we can.
These scenarios will soon be played out on battlefields, and in the performance of police duties.
Criminals, law enforcement, and militaries will weaponize AI. It will become necessary to utilize AI in order to defend ourselves against weaponized AI.
There seems to be a growing consensus among neuroscientists either that consciousness is a useful fiction or that it just happens when a sufficiently complex neural system is put together.
If neither of these assumptions is true then we could be replaced by very cleverly designed player pianos.
We will either be toast or toasters.
Pull the power cord; we’ll be good.