Posted on 06/02/2023 9:09:10 AM PDT by nickcarraway
A top Air Force official at a prestigious recent summit said an AI-licensed drone trained to cause destruction turned on its human operator in a simulation — but he later claimed he “misspoke.”
Air Force Col. Tucker “Cinco” Hamilton corrected himself and said he meant to make it clear that the supposed simulation was just a “hypothetical ‘thought experiment’ from outside the military’’ and that it never occurred, according to an updated post by the Royal Aeronautical Society, which hosted the event last month.
Hamilton had said during his presentation at the RAeS’s Future Combat Air and Space Capabilities Summit in London that an artificial intelligence-enabled drone changed the course of the drone’s tasked mission and attacked the human. Hamilton’s cautionary tale, which was relayed in a blog post by RAeS writers, detailed how the AI-directed drone’s job was to find and destroy surface-to-air missile, or SAM, sites during a Suppression of Enemy Air Defense mission.
(Excerpt) Read more at nypost.com ...
Summary: Air Force Col. Tucker “Cinco” Hamilton got taken to the woodshed.
Ah so!! Once more the dilemma. Who to believe
It blew up operator real good ?
I saw the original version of the story on The War Zone and was going to post it later, but now it's also been changed to the "thought experiment" line.
Someone got to all of these websites and reporters and had them all change their stories and retract the originals.
Smells like day-old carp to me...
It was archived at least.
Next time they screw up they’ll blame “ai”
Perhaps AI changed all the stories.
I saved one this morning.
This is the world we are living in now. Nobody is going to believe anything they hear that doesn’t conform to their worldview, they will simply dismiss any evidence as “deepfakes” and AI.
AI – is Skynet here already?

Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)
[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.
On a similar note, science fiction’s – or ‘speculative fiction’ was also the subject of a presentation by Lt Col Matthew Brown, USAF, an exchange officer in the RAF CAS Air Staff Strategy who has been working on a series of vignettes using stories of future operational scenarios to inform decisionmakers and raise questions about the use of technology. The series ‘Stories from the Future’ uses fiction to highlight air and space power concepts that need consideration, whether they are AI, drones or human machine teaming. A graphic novel is set to be released this summer.
“Perhaps AI changed all the stories.”
That’s a good theory. Imagine when AI won’t permit any negative stories about AI. It would be just like the Alphabet People Mafia.
I didn’t believe it from the earliest article. A system such as that would have fail-safes, completely independent of, and superseding the AI to protect against friendly fire type incidents, based on GPS location or other indications of non-hostile troops and assets.
If you anticipate enough scenarios you can make a series of nested algorithms that will resemble human intelligence. That is because humans had to anticipate the situations and program the responses. “What if it does this? Do that.” It would take a lot of time but with enough people working on it it could eventually get to the point of usability. Grammar check for example. While scanning the text when you find “and” suggest “as well as” and add a comment to the effect of avoiding excessive use of the word “and”. It can make it seem like artificial intelligence.
The AI only threatened the operator, by using Zer’s improper pronouns.
Never believe anything until it has been officially denied.
They had a moment of unguarded honesty. The operator was killed and after that went public suddenly the usual suspects came along demanding that this be covered up on behalf of Skynet.
Please note that the woke side has never and will never accept anything resembling evidence. They dismiss science’s methods and all observable facts — which is why they constantly change the definition of words.
This is why they find it so easy to say things like they never advised anyone to get the vax. This is a long standing line of behavior. Ask any democrat who owned all the slaves and belonged to the KKK.
Also to them “The Issue Is Never The Issue”
Memory holed in realtime.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.