Posted on 04/22/2007 9:42:01 PM PDT by 2ndDivisionVet
WAR is expensive and it is bloody. That is why America's Department of Defence wants to replace a third of its armed vehicles and weaponry with robots by 2015. Such a change would save money, as robots are usually cheaper to replace than people. As important for the generals, it would make waging war less prey to the politics of body bags. Nobody mourns a robot.
The Pentagon already routinely uses robotic aeroplanes known as unmanned aerial vehicles (UAVs). In November 2001 two missiles fired from a remote-controlled Predator UAV killed Muhammad Atef, al-Qaeda's chief of military operations and one of Osama bin Laden's most important associates, as he drove his car near Kabul, Afghanistan's capital.
But whereas UAVs and their ground-based equivalents, such as the machinegun-toting robot Swords, are usually controlled by remote human operators, the Pentagon would like to give these new robots increasing amounts of autonomy, including the ability to decide when to use lethal force.
To achieve this, Ronald Arkin of the Georgia Institute of Technology, in Atlanta, is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics. In other words, he is trying to create an artificial conscience. Dr Arkin believes there is another reason for putting robots into battle: they have the potential to act more humanely than people, since stress does not affect a robot's judgement in the way it affects a soldier's. His approach is to create what he calls a "multidimensional mathematical decision space of possible behaviour actions". Based on inputs that could range from radar data and current position to mission status and intelligence feeds, the system would divide the set of all possible actions into those that are ethical and those that are not. If, say, the drone that launched the fatal attack on Mr Atef had sensed that his car was overtaking a school bus, it might have held its fire.
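The idea of dividing an action set into ethical and unethical subsets can be sketched in a few lines of code. This is only an illustration of the filtering concept, not Dr Arkin's actual system; the action attributes and constraint rules below are hypothetical.

```python
# Hypothetical sketch: filter candidate actions through hard ethical
# constraints before any action is eligible for selection.
# (Illustrative only -- not Dr Arkin's real decision space.)

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lethal: bool                      # does this action use lethal force?
    confirmed_combatant: bool         # is the target a confirmed combatant?
    civilians_at_risk: bool           # would civilians be endangered?

def is_ethical(a: Action) -> bool:
    """Non-lethal actions are always permitted; lethal force requires a
    confirmed combatant and no civilians judged to be at risk."""
    if not a.lethal:
        return True
    return a.confirmed_combatant and not a.civilians_at_risk

def permitted(actions):
    """Divide the action set: keep only the ethically permitted subset."""
    return [a for a in actions if is_ethical(a)]

candidates = [
    Action("hold fire", lethal=False, confirmed_combatant=False,
           civilians_at_risk=False),
    # e.g. the target's car is overtaking a school bus:
    Action("fire missile (bus nearby)", lethal=True, confirmed_combatant=True,
           civilians_at_risk=True),
    Action("fire missile (area clear)", lethal=True, confirmed_combatant=True,
           civilians_at_risk=False),
]

print([a.name for a in permitted(candidates)])
# the strike near the school bus is filtered out; "hold fire" always survives
```

In this toy version the ethical constraints act as a hard filter ahead of whatever targeting logic would otherwise choose among the remaining actions.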
There are comparisons to be made between Dr Arkin's work and the famous laws of robotics drawn up by Isaac Asimov, a science-fiction writer, to govern robot behaviour. But whereas Asimov's laws were intended to prevent robots from harming people in any circumstances, Dr Arkin's are supposed to ensure only that people are not killed unethically.
Although a completely rational robot may be unfazed by the chaos and confusion of the battlefield, it may make mistakes all the same. Surveillance and intelligence data can be wrong, and conditions on the battlefield can change. But this is as much a problem for people as it is for robots.
There is also the question of whether the use of such robots would lead to wars breaking out more easily. Dr Arkin has started to survey policy makers, the public, researchers and military personnel to gauge their views on the use of lethal force by autonomous robots.
Creating a robot with a conscience may give the military more than it bargained for. To some degree, it gives the robot the right to refuse an order.
We could make them completely predictable, or we can make aspects of their behavior random (actually only very nearly random). But how do we provide free will?
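The "only very nearly random" aside is worth making concrete: a computer's randomness is normally pseudorandom, so two generators started from the same seed behave identically. A minimal Python illustration:

```python
import random

# Two pseudorandom generators with the same seed: "only very nearly random".
rng1 = random.Random(42)
rng2 = random.Random(42)

seq1 = [rng1.randint(0, 9) for _ in range(5)]
seq2 = [rng2.randint(0, 9) for _ in range(5)]

print(seq1 == seq2)  # True: identical seeds give identical "random" behaviour
```

So even a robot with "random" behaviour is, at bottom, completely predictable to anyone who knows its seed, which is why randomness alone gets us no closer to free will.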
Determinists say that free will does not exist at all, and that all our behaviours, while seeming random (to us), are still predetermined. :)
Lack of free will means that robots can be nothing more than what we tell them to be.
I'm ready for a fight anytime!
Tiny military robots?
Your future.
Sounds good, except our concept of a school bus is an elongated, yellow vehicle with many windows. Two things: 1. Terrorists would all be driving these around town. 2. Real school buses elsewhere in the world might not conform to our concepts. Also, the MSM would be reporting daily that we destroyed 10 school buses along with 300 kids.
Less sophistication and more explosion. This robotic concept is a good one, but it might be simpler and less expensive to have a human sitting on U.S. soil control the vehicle (a la UAV) and make the decisions.
Our behaviors are hardly random to us. From our perspective we make choices. Sometimes they are even choices our bodies and brains don't like. Notice that, to the extent that we simply indulge our body and brain, we forsake free will. We intuit what free will means only because we are souls with free will.
As for the perspective of a future, aware observer, how is it different from that of a current one? For instance, if I watch you do something in the present, I have not controlled your choice. Thus if I could watch you in the future, I would still not be controlling it.
Lack of free will means that robots can be nothing more than what we tell them to be.
Sounds about right. Unless a robot has a soul which can choose to override its programming, it cannot have free will.
Modern man has absolutely no idea how to do this (some even vainly refuse to acknowledge the obvious existence of souls).
Centralized control is not a good idea.
Warbeast from Death Machine. Kick Azz