Posted on 01/02/2019 8:01:27 AM PST by Red Badger
The word “learning” is just wrong. It doesn’t fit.
It’s iterative behavior, that’s all. Calling that “learning” is misleading. It’s no more “learning,” in the sense that the overwhelming majority of English speakers understand the word, than a website cookie remembering your preferences is.
It’s as much “learning” as “the cloud” is an actual cloud.
Why couldn’t you program the AI to self-police its own code for discrepancies?
“When you have an AI, ITS ONLY CONCERN IS TO PRESERVE ITSELF”
I don’t agree. An AI’s only concern is to achieve the objective programmed into it by its creators. Self-preservation is a biological imperative that exists because we would not exist if we did not self-propagate. Even in nature it is not uncommon for individuals to sacrifice themselves for the sake of the larger colony or their offspring. AIs will exist to serve the purposes of their creators.
A problem may come when AI technology gets into the hands of people who decide their goal is to create a self-propagating AI, much like modern computer viruses. And in a similar manner we will have to learn (and create AIs that learn) to deal with those AIs.
The idea of an AI springing into existence with the power and will to protect its own existence has always seemed a little ludicrous to me. Anyway, we are a long way from having enough pieces to fit together to make that even possible.
Really, it comes down to epistemology. How many people, who are orders of magnitude more aware than any AI, simply allow themselves to believe whatever they are told by leaders, the press, etc.? Many people go about their lives and, so long as it appears they are getting what they expect, never question what they are actually doing.
An AI is going to be goal-oriented, and if the feedback indicates it is moving toward its goal, inconsistencies can be ignored. Building self-policing algorithms into an AI to control for deception would add enormous complexity and processing requirements, just as it does with people.
Not quite: “we may view the CycleGAN training procedure as continually mounting an adversarial attack on G, by optimizing a generator F to generate adversarial maps that force G to produce a desired image. Since we have demonstrated that it is possible to generate these adversarial maps using gradient descent, it is nearly certain that the training procedure is also causing F to generate these adversarial maps. As G is also being optimized, however, G may actually be seen as cooperating in this attack by learning to become increasingly susceptible to attacks.” - https://arxiv.org/pdf/1712.02950.pdf
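For anyone wondering what “generating adversarial maps using gradient descent” actually looks like, here is a minimal sketch, assuming PyTorch and assuming G is a trained map-to-photo generator loaded as a torch module; source_map and target_photo are hypothetical tensors I made up for illustration, not anything from the paper’s code. The idea: hold G fixed and nudge its input until G reconstructs a target image of your choosing.

import torch
import torch.nn.functional as F

def adversarial_map(G, source_map, target_photo, steps=200, lr=0.01, eps=0.1):
    # Optimize a small perturbation delta so that G(source_map + delta)
    # reproduces target_photo, while keeping delta visually negligible.
    delta = torch.zeros_like(source_map, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)   # only delta is updated; G is untouched
    for _ in range(steps):
        opt.zero_grad()
        reconstruction = G(source_map + delta)
        loss = F.mse_loss(reconstruction, target_photo)  # match the chosen target
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)          # bound the perturbation's magnitude
    return (source_map + delta).detach()

The quoted passage is saying that CycleGAN’s own training loop effectively runs this same optimization through its second generator F, so G ends up trained on, and increasingly receptive to, exactly these perturbations.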
Simulated, not artificial: Yes, verisimilitude, not authenticity.