Posted on 11/30/2010 10:02:31 PM PST by ErnstStavroBlofeld
Modern warfare relies increasingly on robotics for intelligence gathering and, more and more, for strike capabilities, but decision-making still rests solely in the hands of human commanders. British defense company BAE Systems, however, is testing a way to turn battlefield decisions over to robot troops as well.
ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks) is BAE's response to the overload of sensors and data now confronting battlefield commanders, who have UAV observations, soldier-based sensors, satellite data, and reams of other intelligence washing over them in such volumes that, as Air Force Lt. Gen. David A. Deptula puts it, they'll be "swimming in sensors and drowning in data." The system allows a network of robot soldiers to quickly collect and exchange information, then bargain with one another to determine the best course of action and execute it.
The robots are armed to the teeth with algorithms drawn from a range of models (game theory, probabilistic modeling, optimization techniques) that let them predict outcomes and allocate battlefield resources far more quickly and efficiently than humans trying to process the same amount of data. All of that should help troops, robotic and otherwise, stay afloat in the data deluge.
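The article doesn't say how ALADDIN's bargaining actually works, but one standard technique for decentralized resource allocation is a simple auction: each agent bids its estimated cost for each task, and every task goes to the lowest bidder. Here is a minimal sketch of that idea; the agent names, tasks, and cost numbers are all hypothetical, not anything from BAE's system.

```python
# Hypothetical sketch of auction-based task allocation among agents.
# All names and cost values are illustrative, not ALADDIN's actual design.

def run_auction(agents, tasks):
    """Assign each task to the agent that bids the lowest cost for it."""
    assignments = {}
    for task in tasks:
        # Every agent submits a bid; lower means better suited.
        bids = {name: cost_fn(task) for name, cost_fn in agents.items()}
        assignments[task] = min(bids, key=bids.get)
    return assignments

# Three hypothetical platforms with different strengths.
agents = {
    "uav": lambda task: {"recon": 1.0, "strike": 5.0, "relay": 2.0}[task],
    "ugv": lambda task: {"recon": 3.0, "strike": 2.0, "relay": 4.0}[task],
    "sat": lambda task: {"recon": 2.0, "strike": 9.0, "relay": 1.0}[task],
}

print(run_auction(agents, ["recon", "strike", "relay"]))
# {'recon': 'uav', 'strike': 'ugv', 'relay': 'sat'}
```

A real system would run such auctions continuously over an unreliable network and handle ties, dropped bids, and agents that change their estimates mid-auction; the point here is only the shape of the bargaining step.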
(Excerpt) Read more at popsci.com ...
Doesn't need to be a "war"... A robot doesn't understand the term posse comitatus.
My Army
Pull the plug. Install an algorithm so that if robots commit insurrection against humans, a command is relayed to shut them down.
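One way to sketch that idea, while sidestepping the problem that a shutdown command could itself be blocked, is a dead-man switch: the robot keeps operating only while it regularly receives an "all clear" heartbeat from a human operator, and halts itself when the heartbeat goes stale. A minimal illustration (the class name and timeout value are invented for this example):

```python
# Hypothetical dead-man-switch sketch: rather than waiting for a
# shutdown command that might never arrive, the robot powers down
# whenever the human-issued heartbeat stops.
import time

class DeadManSwitch:
    def __init__(self, timeout=5.0):
        self.timeout = timeout                 # seconds; illustrative value
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Call whenever an 'all clear' arrives from a human operator."""
        self.last_heartbeat = time.monotonic()

    def is_safe(self):
        """False once the heartbeat is stale; the robot must halt."""
        return time.monotonic() - self.last_heartbeat < self.timeout

switch = DeadManSwitch(timeout=0.1)
print(switch.is_safe())   # fresh heartbeat: keep operating
time.sleep(0.2)
print(switch.is_safe())   # heartbeat stale: shut down
```

This makes the failure mode "stop" rather than "keep going," which matters if the shutdown channel can be jammed.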
“Pull the plug. Install an algorithm so that if robots commit insurrection against humans, a command is relayed to shut them down”
and someone gets to define “insurrection”... Algorithms can be hacked... commands can be jammed or overridden...
How about we just don't build the damn things, instead of having to figure out how to kill them later on when they malfunction, as all things do.
“Posse comitatus” is a feel-good law with no real effect.
The Constitution, which supersedes it, says:
“To provide for calling forth the Militia to execute the Laws of the Union, suppress Insurrections and repel Invasions;”
Again, that is to CALL FORTH THE MILITIA TO EXECUTE THE LAWS OF THE UNION.
Too funny!
Who specifies the boundaries within which artificial intelligence operates? For instance, would you want an AI commander to have access to battlefield nukes? What constraints would you place on their use? How would you specify those constraints in a manner which cannot be subverted by the AI’s ability to shape its understanding?
You have much more faith in the programmers of the computer than I do. Even AI has a layer of human-engineered code underneath it all.
>> armed roombas <<
Great. Give the dog reason to be scared of them.
That's for the programmers and Army commanders to sort out.