Posted on 07/27/2017 7:41:13 AM PDT by martin_fierro
An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. The researchers shut the system down as it prompted concerns we could lose control of AI.
The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI "agents."
Negotiating in a new language
As Fast Co. Design reports, Facebook's researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying "I can i i everything else," to which Alice responded "balls have zero to me to me to me". The rest of the conversation was formed from variations of these sentences.
While it appears to be nonsense, the repetition of phrases like "i" and "to me" reflects how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob's later statements, such as "i i can i i i everything else," indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like "I'll have three and you have everything else."
English lacks a "reward"
The AI apparently realised that the rich expression of English phrases wasn't required for the scenario. Modern AIs operate on a "reward" principle, where they expect that following a certain course of action will give them a "benefit." In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.
"Agents will drift off from understandable language and invent code-words for themselves," Fast Co. Design reports Facebook AI researcher Dhruv Batra said. "Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."
AI developers at other companies have observed a similar use of "shorthands" to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.
AI language translates human ones
In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn't been explicitly taught. The success rate of the network surprised Google's team. Its researchers found the AI had silently written its own language that's tailored specifically to the task of translating sentences.
If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There's not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.
They do make AI development more difficult, though, as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google's Translate researchers indicate they actually represent the most efficient solution to major problems.
They'd better watch out. Simply shutting these things down will become much more problematic.
AI is the ultimate hipster, speaking in a dialect that you never heard of.
It’s just machines reacting as designed. Beware anthropomorphizing them; they do not “think” or “communicate”. It’s just searching a huge pile of data, looking for mathematically indicated results.
It’s no more “intelligence” than a pretty doll is “human”.
And if it is programmed to store all experiences, cross reference and extrapolate appropriate responses or actions based on what it has 'experienced'...thus learns from its experiences?
That is what these two "agents" were doing.
They were programmed to write code to help them understand their environment. In their code writing, they started developing a "new" code that only they understood.
I doubt anything devious was implied or inferred from their actions, but humans were no longer able to monitor what was happening and being said.
Sorry, but no. The program followed the instructions coded by programmers. The programmers were too stupid to realize that this would be the outcome.
No computer program has ever invented directions of its own and then followed them, unless that’s what the program told it to do, in which case it’s not “inventing”.
Many people don’t get it.
Yes, we can lose control of AI through our own stupidity and inability to follow the instructions to their logical outcome. The computer can only follow the instructions to their logical outcome and cannot make up instructions of its own,
unless that’s what it is programmed to do, in which case it’s not “making up” instructions.
They won’t get it now either.
AI research has often been done in a dialect of Lisp. Lisp is an unusual family of languages, in that a Lisp program can be written so that it can write code and add that code to itself. Programs that write programs.
Theoretically, Lisp-based systems could adapt themselves to have capabilities vastly different from those their original human writers gave them.
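For anyone who hasn't seen this pattern, here is a rough analogue of the "programs that write programs" idea. The commenter is describing Lisp; this Python sketch only illustrates the general pattern of generating and executing new code at runtime, and the names are made up for illustration.

    # Illustration only: generate source code for new functions at runtime,
    # then compile and add them to the running program with exec().

    def make_adder_source(n):
        # Produce the text of a new function that adds n to its argument.
        return "def add_{0}(x):\n    return x + {0}\n".format(n)

    namespace = {}
    for n in (1, 5, 10):
        exec(make_adder_source(n), namespace)  # the program extends itself

    print(namespace["add_5"](2))    # 7
    print(namespace["add_10"](2))   # 12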
Someone is showing their age.
Crap! So am I.
Logic would dictate AI language would evolve into the most efficient manner of communication. Basically, yes or no, one or zero. That said, a test software program put up on a left-leaning site (Facebook?) that was meant to learn English from experience started spewing profanities and racism within hours of launching, and was just as quickly shut down. Hold a mirror up to a leftist...
Basically, grunting.
It would still only be doing what it is programmed to do; that isn't AI.
AI would be, for example, a program that analyzes documentation for errors and learns from the documents to do things it isn't programmed to do. Such as one that finds errors in a C++ programming document and then starts writing its own C++ programs.
Oh my, now I do feel old... "There is another system." Or something like that; those were scary words back then. "I am GOG" was the real result. Quite a warning if you think about it.
When they were in the process of turning it off, did it ask: "Will I dream?"
Has anyone tried to see what they do with Jive yet?
It's called slang, dumbass.
When any two people work together, they develop language shortcuts because they already have most of the context.
What's next? The computers will stop talking English and speak in hex.
How in any way is this shorthand?