Posted on 11/26/2016 10:07:07 AM PST by DFG
Engineers at the University of Massachusetts are developing microprocessors which mimic biological synapses - the junctions across which nerve cells pass messages through the human body.
The science fiction-style project is being undertaken by Joshua Yang and Qiangfei Xia, professors of electrical and computer engineering at the US college.
Their work focuses heavily on memristors - computer components which could change science forever, switching the focus from electronics to ionics.
(Excerpt) Read more at express.co.uk ...
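The excerpt's synapse analogy can be made concrete with a toy model. The following is a hypothetical sketch, not the UMass device or any published memristor model: a bounded conductance that drifts with each voltage pulse, loosely analogous to synaptic strengthening and weakening. The class name, parameters, and update rule are all invented for illustration.

```python
# Toy memristive synapse (hypothetical sketch, assumed behavior only):
# conductance drifts with each voltage pulse and stays within bounds,
# like a synapse being potentiated or depressed.

class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1):
        self.g = g                     # conductance ("synaptic weight")
        self.g_min, self.g_max = g_min, g_max
        self.rate = rate               # how strongly a pulse moves the conductance

    def pulse(self, v):
        """Apply a voltage pulse; positive pulses potentiate, negative depress."""
        self.g += self.rate * v
        self.g = max(self.g_min, min(self.g_max, self.g))
        return self.g

    def current(self, v):
        """Ohmic read-out: I = G * V."""
        return self.g * v

syn = MemristiveSynapse()
for _ in range(3):
    syn.pulse(+1.0)        # repeated stimulation strengthens the connection
print(round(syn.g, 2))     # → 0.8
```

Unlike an ordinary transistor's on/off state, the stored conductance persists between pulses - which is the ionics-over-electronics point the article is gesturing at.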
Two further problems at that point come down to "nature vs. nurture" -- take the exact same "virgin" neural network and pre-load certain concepts into it:
a) can the concepts be overridden completely, or are they retained though distorted? (good luck finding metrics to quantify this)
b) what does the 'autodidactic mode' ... you know, as it is assumed to work in humans ... have to do with socialization, as well as input such as TV, radio, and books?
Can one introduce a language and cognition of all nouns and verbs, without value judgments? (The temptation to cheat on such experiments, as in voter fraud in Dem strongholds, would be *enormous*.)
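The "pre-load and override" experiment in (a) can be sketched in miniature. This is a hypothetical toy, assuming a one-parameter logistic model stands in for the "virgin" neural network; the data, the labels, and the crude "retention" metric (comparing final parameters against a never-pre-loaded control) are all invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, b, data, epochs=500, lr=0.5):
    """Plain per-sample gradient descent on logistic loss."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# The "pre-loaded concept": inputs above zero belong to class 1.
concept_a = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]
# The overriding experience: the same inputs, opposite labels.
concept_b = [(x, 1 - y) for x, y in concept_a]

w_pre, b_pre = train(0.0, 0.0, concept_a)        # pre-load the concept
w_over, b_over = train(w_pre, b_pre, concept_b)  # try to override it
w_ctl, b_ctl = train(0.0, 0.0, concept_b)        # control: never pre-loaded

# Behaviorally the old concept is overridden...
print("overridden:", sigmoid(w_over * 0.5 + b_over) < 0.5)
# ...but the final parameters still differ from the never-pre-loaded
# control -- one crude stab at the "retained though distorted" question.
print("residue:", abs(w_over - w_ctl))
```

Even in this one-weight caricature, the pre-loaded network ends up at different parameters than the control despite making the same predictions, which hints at why metrics for "complete" override are hard to pin down.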
I can think of a few groups of “real” humans that I would prefer to see replaced with “good” robots.
“Count on them taking their work home with them when the time comes ...”
Especially if it can do laundry! (-:
The first one will probably become a lawyer...
Can one introduce a language and cognition of all nouns and verbs, without value judgments?
How about many? Isn’t that Siri, Alexa and Now?
Trying to get a rudimentary handle on A.I.
The AI moral engine seems as far away as ever.
Consider the difficulties.
For example, Asimov's robots were ruled by the Three Laws of Robotics:
1. A robot shall neither harm a human nor, by its inaction, allow a human to come to harm;
2. A robot shall obey a human, except where that would conflict with the First Law;
3. A robot shall protect itself from harm, so long as that does not conflict with the First or Second Law.
Great stuff.
But what does it mean to harm a human?
Doesn't this entail understanding something of human emotions, not to mention being able to sense them?
What if preventing harm to one human causes harm to another?
And so on.
Unfortunately, the needs of warfare are not going to allow the luxury of solving these issues before placing autonomous robots into the battlefield. These are robots given the ability to make their own kill decisions.
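The conflict raised above can be made concrete with a toy rule engine. This is a hypothetical sketch -- the priority scheme, the action model, and the `permitted` function are invented for illustration, not any real robotics framework -- but it shows how a literal First Law deadlocks the moment every available option, including inaction, harms someone.

```python
def permitted(action, laws, world):
    """Check laws in priority order; the first law with an opinion decides."""
    for law in laws:
        verdict = law(action, world)
        if verdict is not None:
            return verdict
    return True  # no law objects

# Toy world: each candidate action lists which humans it harms or saves.
world = {
    "do_nothing":  {"harms": {"A"}, "saves": set()},  # inaction lets A come to harm
    "divert_harm": {"harms": {"B"}, "saves": {"A"}},  # saving A injures B
}

def first_law(action, world):
    # A literal First Law: forbid any option that leaves a human harmed.
    # It has no way to rank harm-to-A against harm-to-B.
    return False if world[action]["harms"] else None

laws = [first_law]  # the Second and Third Laws would follow in priority order

print({a: permitted(a, laws, world) for a in world})
# → {'do_nothing': False, 'divert_harm': False}  -- every option forbidden
```

The rule system forbids every action and offers no tiebreaker, which is exactly the "harm to one human causes harm to another" problem -- and a battlefield robot would face it under time pressure.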
I no longer watch Star Trek, as it is propaganda for the theory of evolution, but nothing seems more descriptive of the leftist than Star Trek’s Borg. Soulless conformity in the service of pure evil, destroying everything in its path, is the essence of the leftist mindset.
HA...
which is totally irrelevant
There will never be a robot that can equal a human being, for one simple reason: the human soul, not the brain, is the living control of human thought.
What could go wrong?
https://www.youtube.com/watch?v=zZkd1t8yEq8
https://www.youtube.com/watch?v=6a1I63FpzpA
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.