Free Republic
Browse · Search
News/Activism
Topics · Post Article


Are we safe from robots that can think for themselves?
Daily Mail ^ | April 24, 2007 | Rebecca Camber

Posted on 04/24/2007 2:43:34 PM PDT by Star Traveler


Robots that can think for themselves could soon be caring for our children and the elderly and policing our streets, say experts.

Scientists told yesterday of a new generation of robots which can work without human direction.

They predict that in the next five years robots will be available to mind children, work in care homes, monitor prisons and help police trace criminals.

And while it may sound like something out of a science-fiction film, the experts say advances in technology have made the thinking robot possible.

A group of leading robotic researchers called for an urgent public debate and legislation to prevent large numbers of autonomous robots being introduced into society without considering the potential risks to public safety.

Until now most robots have been operated by humans, usually by remote control or verbal commands. But autonomous machines are now being introduced, such as toys and vacuum cleaners that cover a room without any human instruction or guidance.

Manufacturers are exploring ways to make robotic toys look after children, which experts say will lead to child-minding machines able to monitor youngsters, transmitting their progress to the parents by onboard cameras.

In Japan, scientists are producing robots to act as companions for the elderly and check their heart rate.

Alan Winfield, professor of electronic engineering at the University of the West of England in Bristol, said yesterday it would not be long before technological advances made it possible for robots to be introduced in the home, as well as prisons and the police service.

Speaking at a debate on robot ethics at the London Science Media Centre, he said: "It is highly likely that in a number of years robots will be employed both for child-minding and care for the elderly.

"But the danger is that we will sleepwalk into a situation where we accept a large number of autonomous robots in our lives without being sure of the consequences.

"The outcome could be that when given a choice the robot could make the wrong decision and someone gets hurt. They can go wrong just like a motor car can.

"We should be aware of the future that we are letting ourselves in for. We need to look at their safety and reliability."

His warning echoes the hit Hollywood sci-fi film I, Robot, starring Will Smith, in which a slave robot with a mind of its own causes chaos.

Noel Sharkey, professor of computer science at Sheffield University, said: "Technology is increasing at an incredible rate.

"My main worry is that these autonomous robots could be introduced very quickly. We need to have an informed public debate now before that happens."

The biggest advances in robots in recent years have been as weapons of war. The U.S. military is developing battlefield robots which will be given the ability to decide when to use lethal force.

At the Georgia Institute of Technology in Atlanta, a battlefield robot is being developed which will use radar data and intelligence feeds to make decisions based on a set of ethical rules, which has been compared to an artificial conscience.

The Korean government is looking to create robotic armed border guards as part of a £51 million investment in robotics.

--

23/04/07 - Science & tech section

--

Find this story at http://www.dailymail.co.uk/pages/live/articles/technology/technology.html?in_article_id=450231&in_page_id=1965 ©2007 Associated New Media


TOPICS: Business/Economy; Culture/Society; Miscellaneous; Philosophy
KEYWORDS: arnold; asimov; governator; laws; robotics; robots; skynet; terminator
Well, believe it or not, this is going to be an issue. And the time to do something about it is before it becomes big business, because then there will be too many vested interests.

For those who simply think this is science fiction, I'm not sure why that would be so. It's obvious that we (as a society and culture) are advancing rapidly down the track to autonomous robotics. And with simple extrapolation, it's clear we will reach the point where robots construct robots and advance totally on their own.

It does appear, at least to me, that our present culture and society are closing the gap between today and the "science fiction future". This is -- really -- no longer to be considered in the realm of science fiction, but now in the realm of business, ethics and regulation.

Blade Runner was a great science fiction movie. Also, the "concept" in the recent series Battlestar Galactica is relevant. It's not far-fetched, given what we're doing today.

Regards,
Star Traveler

1 posted on 04/24/2007 2:43:35 PM PDT by Star Traveler
[ Post Reply | Private Reply | View Replies]

To: Star Traveler

Link to article —
http://www.dailymail.co.uk/pages/live/articles/technology/technology.html?in_article_id=450231&in_page_id=1965


2 posted on 04/24/2007 2:44:23 PM PDT by Star Traveler
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

Isaac Asimov’s three laws of Robotics
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Zeroth Law (added as a primary law)

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

[A condition stating that the Zeroth Law must not be broken was added to the original Laws.]

The Zeroth Law concerns humanity as a whole, while the First Law concerns an individual human being.
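The ordering of the Laws amounts to a strict priority scheme: a lower law only binds when no higher law is at stake. As a toy illustration only (neither the article nor Asimov specifies any implementation, and every name below is invented), the priority check might be sketched in Python as:

```python
# Hypothetical sketch: Asimov's Laws as an ordered priority check.
# An action is flagged by the highest-priority law it breaks, if any.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    endangers_self: bool = False   # Third Law concern

# Laws in priority order: Zeroth overrides First, First overrides Second, etc.
LAWS = [
    ("Zeroth", lambda a: a.harms_humanity),
    ("First",  lambda a: a.harms_human),
    ("Second", lambda a: a.disobeys_order),
    ("Third",  lambda a: a.endangers_self),
]

def first_violated_law(action):
    """Return the name of the highest-priority law the action breaks, or None."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

print(first_violated_law(Action(harms_human=True)))  # prints: First
print(first_violated_law(Action()))                  # prints: None
```

This simplification checks one action in isolation; the real Laws also require comparing alternatives (e.g. disobeying an order is permitted when obeying would harm a human), which a single predicate per law cannot capture.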

Quotes from Wikipedia article —

Those working in artificial intelligence sometimes see the Three Laws as a future ideal: once a being has reached the stage where it can comprehend these Laws, it is truly intelligent. Indeed, significant advances in artificial intelligence would be needed for robots to understand the Three Laws. However, as the complexity of robots has increased, so has interest in developing guidelines and safeguards for their operation.[28][29] Modern roboticists and specialists in robotics agree that, as of 2006, Asimov’s Laws are perfect for plotting stories, but useless in real life. Some have argued that, since the military is a major source of funding for robotic research, it is unlikely such laws would be built into the design. SF author Robert Sawyer generalizes this argument to cover other industries, stating:

“The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)”

In March 2007, the South Korean government announced that it would issue a Robot Ethics Charter, setting standards for both users and manufacturers, later in the year. According to Park Hye-Young of the Ministry of Information and Communication, the Charter may reflect Asimov’s Three Laws, attempting to set ground rules for the future development of robotics.

Regards,
Star Traveler


3 posted on 04/24/2007 2:45:13 PM PDT by Star Traveler
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

W is proof that such robots are harmless!


4 posted on 04/24/2007 2:47:10 PM PDT by The_Republican (So Dark The Con of Man)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler
Depends if the thoughts reflect individualist or statist values.

5 posted on 04/24/2007 2:47:42 PM PDT by I see my hands (_8(|)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

A message from my Roomba.

6 posted on 04/24/2007 2:49:04 PM PDT by martin_fierro (< |:)~)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler
Sarah Connor was unavailable for comment....
7 posted on 04/24/2007 2:50:45 PM PDT by SubGeniusX ($29.95 Guarantees Your Salvation!!! Or TRIPLE Your Money Back!!!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

I hated the Zeroth Law, totally breaks the concept. Of course being a conservative and fearing phrases like “it’s for the children” and “if it saves only one life” leaves me pre-disposed to fear stuff done for the “good” of society.


8 posted on 04/24/2007 2:52:08 PM PDT by discostu (only things a western savage understands are whiskey and rifles and an unarmed)
[ Post Reply | Private Reply | To 3 | View Replies]

To: Star Traveler
I for one am insured against robot attack.
9 posted on 04/24/2007 2:54:03 PM PDT by pogo101
[ Post Reply | Private Reply | To 1 | View Replies]

To: martin_fierro

Did you catch this, though? Your Roomba will soon be “packing”. I wouldn’t offend it...

The same company that makes those cute little household vacuuming robots now has a military robot that is equipped with a pump-action shotgun, capable of firing shotgun rounds and presumably killing enemy combatants (or anyone who happens to be standing in front of the ‘bot). The robot is called the Pacbot, and it has already seen action in Iraq. The Pacbot weighs about 40 pounds, and is propelled by heavy-duty tracks. It also has chemical sensors that detect nuclear, biological, and chemical contaminants. It’s currently being tested by the 29th Infantry Regiment at Fort Benning, Georgia.

Of course, the big story here is not that robots are being used in Iraq or tested by the U.S. Army — the big news is that they are being equipped with lethal weapons. Up until now, robots have always been limited to support roles, such as carrying equipment, sniffing out bombs, or performing remote detection of nuclear, biological, or chemical contaminants. But now there are Army robots with shotguns. Next up? Robot-controlled Hummers that can’t drive straight, but can still shoot. Once they get the bugs out of the software, they’ll even be able to limit their shooting to the enemy rather than just randomly firing off shotgun rounds at anything that moves.

http://www.newstarget.com/z008776.html


10 posted on 04/24/2007 2:54:41 PM PDT by Star Traveler
[ Post Reply | Private Reply | To 6 | View Replies]

To: Star Traveler
Damn! You beat me to it. I was about to point out that Rule 1 protects us. But then that Sealab 2021 episode occurred to me . . .

Stormy: Okay, okay. So, say I put my brain in a robot body and there's a war. Robots versus humans. What side am I on?

Debbie: Humans! You have a human brain.

Sparks: But... the humans discriminate against you. You can't even vote!

Marco: We'd better not have to live on a reservation. That would really chap my caboose.

Captain Murphy: Yeah, but... nobody knows you're a robot. You look the same.

Debbie: Uh-uh. Dogs know. That's how the humans hunt you.

Stormy: They're gonna hunt me? For sport?

Marco: That's why we have to CRUSH mankind! So you might as well get on board for the big win, Stormy.
11 posted on 04/24/2007 2:54:52 PM PDT by Xenalyte ("A cat can give birth to kittens in the oven. That don't make 'em biscuits." - Quanell X)
[ Post Reply | Private Reply | To 3 | View Replies]

To: Star Traveler

Bicentennial Man comes to mind.
Also the AI movie (forget the entire title).

And don’t forget the T-1 Terminator.
Robo Cop.

This is already big business and there is little we can do to stop robotic companions from becoming commonplace.
But artificial intelligence is only as good as the programmer. One bad program makes the entire line of robots dangerous.


12 posted on 04/24/2007 2:55:08 PM PDT by o_zarkman44 (No Bull in 08!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler; All

I also remember the “Dune” story line. People were dumbed down by the use of thinking machines, leaving humanity open to being enslaved by others who used such machines, and by the machines themselves.


13 posted on 04/24/2007 2:56:08 PM PDT by TMSuchman (American by birth, Rebel by choice, Marine by act of GOD!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: o_zarkman44

You said — “This is already big business and there is little we can do to stop robotic companions from becoming commonplace.

But artificial intelligence is only as good as the programmer. One bad program makes the entire line of robots dangerous.”

However, if robots start programming, it could be a whole other ballgame...


14 posted on 04/24/2007 2:56:26 PM PDT by Star Traveler
[ Post Reply | Private Reply | To 12 | View Replies]

To: pogo101
As am I.
15 posted on 04/24/2007 2:56:31 PM PDT by GOPmember
[ Post Reply | Private Reply | To 9 | View Replies]

To: Star Traveler

Relax, we'll take care of you.

16 posted on 04/24/2007 2:56:47 PM PDT by Centurion2000 (Killing all of your enemies without mercy is the only sure way of sleeping soundly at night.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

The Age June 20 2002 | Dave Higgens

Scientists running a pioneering experiment with “living robots” which think for themselves today said they were amazed to find one escaping from the centre where it “lives”.

The small unit, called Gaak, was one of 12 taking part in a “survival of the fittest” test at the Magna science centre in Rotherham, South Yorkshire, which has been running since March.

Gaak made its bid for freedom yesterday after it had been taken out of the arena where hundreds of visitors watch the machines learning as they do daily battle for minor repairs.

Professor Noel Sharkey said he turned his back on the drone and returned 15 minutes later to find it had forced its way out of the small makeshift paddock it was being kept in.

He later found it had travelled down an access slope, through the front door of the centre and was eventually discovered at the main entrance to the car park when a visitor nearly flattened it with his car.

Sharkey said: “Since the experiment went live in March they have all learned a significant amount and are becoming more intelligent by the day but the fact that it had the ability to navigate itself out of the building and along the concrete floor to the gates has surprised us all.”

And he added: “But there’s no need to worry, as although they can escape they are perfectly harmless and won’t be taking over just yet.”

Motorist Dan Lowthorpe, 27, from Sheffield, who nearly prematurely terminated Gaak, said: “I have visited Magna a couple of times in the past but came on this occasion especially to see the new robots.

“You can imagine how surprised I was when I nearly ran over one on my way in. I knew the robots interacted with each other but didn’t expect to be personally greeted by one.”


17 posted on 04/24/2007 2:56:50 PM PDT by siunevada (If we learn nothing from history, what's the point of having one? - Peggy Hill)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

I don’t know if we are safe from a robot that can think for itself. A situation can arise when a robot might make a decision, with disastrous consequences, but still do so with no evil intent or purpose. It happens with people. The danger will come when a robot thinks and “feels.”


18 posted on 04/24/2007 2:56:58 PM PDT by Enterprise (I can't talk about liberals anymore because some of the words will get me sent to rehab.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

As opposed to Bushbots and Rudybots?


19 posted on 04/24/2007 2:57:40 PM PDT by drubyfive
[ Post Reply | Private Reply | To 1 | View Replies]

To: Star Traveler

The New Scientist | 16:00 31 August 02 | Duncan Graham-Rowe

Exclusive from New Scientist

A self-organising electronic circuit has stunned engineers by turning itself into a radio receiver.

What should have been an oscillator became a radio.

This accidental reinvention of the radio followed an experiment to see if an automated design process, that uses an evolutionary computer program, could be used to “breed” an electronic circuit called an oscillator. An oscillator produces a repetitive electronic signal, usually in the form of a sine wave.

Paul Layzell and Jon Bird at the University of Sussex in Brighton applied the program to a simple arrangement of transistors and found that an oscillating output did indeed evolve.

But when they looked more closely they found that, despite producing an oscillating signal, the circuit itself was not actually an oscillator. Instead, it was behaving more like a radio receiver, picking up a signal from a nearby computer and delivering it as an output.

In essence, the evolving circuit had cheated, relaying oscillations generated elsewhere, rather than generating its own.

Layzell and Bird were using the software to control the connections between 10 transistors plugged into a circuit board that was fitted with programmable switches. The switches made it possible to connect the transistors differently.

Treating each switch as analogous to a gene allowed new circuits to evolve. Those that oscillated best were allowed to survive to a next generation. These “fittest” candidates were then mated by mixing their genes together, or mutated by making random changes to them.

After several thousand generations you end up with a clear winner, says Layzell. But precisely why the winner was a radio still mystifies them.

To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit “figured out” that this would work is not known.

“There’s probably one sudden key mutation that enabled radio frequencies to be picked up,” says Bird.
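The procedure described above — treat each programmable switch as a gene, keep the fittest circuits, mate them by mixing genes, and mutate at random — is a standard genetic algorithm. A minimal sketch follows, with a toy fitness function standing in for the real measurement of oscillation (all constants and names here are invented for illustration; the Sussex experiment evaluated physical hardware, not a score like this):

```python
import random

random.seed(0)

GENOME_LEN = 10          # one bit per programmable switch on the board
POP_SIZE = 20
GENERATIONS = 100

def fitness(genome):
    # Stand-in for measuring how well the candidate circuit oscillates;
    # here we simply reward matching a fixed target switch pattern.
    target = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
    return sum(g == t for g, t in zip(genome, target))

def crossover(a, b):
    # "Mate" two parents by mixing their genes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each switch with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Random starting population of switch configurations.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]        # fittest survive unchanged
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best genome:", best, "fitness:", fitness(best))
```

The "cheating" the researchers observed is exactly what such a search permits: selection rewards whatever output scores well, with no constraint that the winning configuration achieve it by the intended mechanism.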


20 posted on 04/24/2007 2:58:32 PM PDT by siunevada (If we learn nothing from history, what's the point of having one? - Peggy Hill)
[ Post Reply | Private Reply | To 1 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson