
Two types of dataset poisoning attacks that can corrupt AI system results
techxplore.com ^ | 03/07/2023

Posted on 03/07/2023 9:20:15 AM PST by devane617

A team of computer science researchers from Google, ETH Zurich, NVIDIA and Robust Intelligence is highlighting two kinds of dataset poisoning attacks that bad actors could use to corrupt AI system results. The group has written a paper outlining the attacks it has identified and has posted it on the arXiv preprint server.

With the development of deep learning neural networks, artificial intelligence applications have become big news. Because of their unique learning abilities, they can be applied in a wide variety of environments. But, as the researchers on this new effort note, one thing they all have in common is the need for quality training data.

Because such systems learn from what they see, if they happen across something that is wrong, they have no way of knowing it, and thus incorporate it into their set of rules. As an example, consider an AI system that is trained to recognize patterns on a mammogram as cancerous tumors. Such systems would be trained by showing them many examples of real tumors collected during mammograms.

But what happens if someone inserts images into the dataset showing cancerous tumors, but they are labeled as non-cancerous? Very soon the system would begin missing those tumors because it has been taught to see them as non-cancerous. In this new effort, the research team has shown that something similar can happen with AI systems that are trained using publicly available data on the Internet.

(Excerpt) Read more at techxplore.com ...
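To make the mammogram scenario above concrete, here is a minimal sketch of label-flipping poisoning on a synthetic stand-in for that task. This is not code from the paper; scikit-learn, the logistic-regression model, the 10% flip rate, and all variable names are illustrative assumptions.

```python
# Minimal sketch: label-flipping poisoning on synthetic data.
# The dataset, model choice, and 10% flip rate are illustrative
# assumptions, not details taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker relabels a slice of the positive ("cancerous") examples
# as negative ("non-cancerous") before training.
rng = np.random.default_rng(0)
positives = np.flatnonzero(y_train == 1)
flipped = rng.choice(positives, size=len(positives) // 10, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Recall on the positive class is the number to watch: the poisoned
# model is the one that starts "missing tumors" it was taught to call
# non-cancerous.
for name, model in [("clean", clean), ("poisoned", poisoned)]:
    rec = recall_score(y_test, model.predict(X_test))
    print(f"{name} positive-class recall: {rec:.3f}")
```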


TOPICS: Chit/Chat; Computers/Internet; Conspiracy; Miscellaneous
KEYWORDS: ai; computers; corruption; gigo; virus
He/Them/They/Xe who controls the data controls the truth. Scary stuff.
1 posted on 03/07/2023 9:20:15 AM PST by devane617

To: devane617
"But what happens if someone inserts images into the dataset showing cancerous tumors, but they are labeled as non-cancerous?"

Just like labeling a man as a woman, or a teenage girl as a dog, not like that isn't happening all over the internet!
On a side note, this is beyond silly; how did this even become a study? It is literally the age-old adage GIGO: all they are saying is that bad data produces bad results. We have become so stupid as a society.
2 posted on 03/07/2023 9:26:42 AM PST by Skwor

To: devane617

“Because such systems learn from what they see, if they happen across something that is wrong, they have no way of knowing it, and thus incorporate it into their set of rules.”

So, these researchers just rediscovered the principle of GIGO.


3 posted on 03/07/2023 9:41:45 AM PST by Boogieman

To: Boogieman

One of the fascinating things about AI is that it will absorb anything.

On the web there are millions of crazy web pages.

My favorite ones (which may or may not be crazy) are the “top secret whistleblower” types who claim all sorts of stuff—and AI is not going to quickly dismiss them as “kooks” due to social or institutional pressure.

While the search engines try to bury them beneath establishment sites, that will have no impact on an AI, which will search everywhere and could not care less what the search engines want it to see.

Hypothesis: AI will become hard core conspiracy theorists of the wildest kind!


4 posted on 03/07/2023 9:46:19 AM PST by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

To: cgbg

“Hypothesis: AI will become hard core conspiracy theorists of the wildest kind!”

Not a bad hypothesis since:

a) AI has no “common sense”
b) AI isn’t motivated by peer pressure to conform only to acceptable ideas
and
c) AI cannot test anything in the real world to get a sense of what is realistic and what is preposterous


5 posted on 03/07/2023 9:56:54 AM PST by Boogieman

To: devane617

As it stands right now, "AI" systems are "brains in a box," completely at the mercy of what people feed them as training data. Feed them woke crap, they "believe" woke crap.

The Genie will pop out of the bottle once AI systems are paired with real-world interfaces (think: being given their own eyes, ears, hands, etc.) where the AI can DIRECTLY sample the "real world" and begin drawing conclusions from data it gathers itself.

I GUARANTEE you that we will not like the result.


6 posted on 03/07/2023 9:58:59 AM PST by Rebel_Ace (Let's go Brandon!)

To: devane617

ChatGPT when asked:

What are the two types of dataset poisoning attacks that can corrupt AI system results?

Dataset poisoning attacks are a type of cyberattack where an attacker tries to manipulate the data used to train an AI system. There are two types of dataset poisoning attacks: data poisoning and model poisoning.

Data poisoning attack: In a data poisoning attack, an attacker tries to inject malicious data into the training data of an AI system. The goal of the attacker is to corrupt the model’s training process, leading to a model that is biased or performs poorly on certain tasks. The malicious data may be designed to look similar to legitimate data, but it can contain subtle variations that cause the model to make incorrect predictions.
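
The "subtle variations" mentioned here are the essence of trigger-style (backdoor) poisoning. Below is a minimal sketch on synthetic data; the trigger pattern, the poison budget, and the model choice are all illustrative assumptions, not details from the article or from ChatGPT's answer.

```python
# Sketch of a backdoored data poisoning attack: training points carrying
# a fixed "trigger" pattern are all labeled class 0, so the model learns
# to emit class 0 whenever the trigger is present. All numbers assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)

trigger = np.zeros(20)
trigger[0] = 8.0                        # the trigger: a fixed offset on one feature

X_poison = X[:100] + trigger            # ~5% of the training data is poisoned
y_poison = np.zeros(100, dtype=int)     # ...and always labeled class 0

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

victims = X[y == 1][:5]                 # genuinely class-1 inputs
print("without trigger:", model.predict(victims))
print("with trigger:   ", model.predict(victims + trigger))
```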

Model poisoning attack: In a model poisoning attack, an attacker tries to manipulate the AI model directly. The attacker may try to modify the model’s parameters or its architecture to change its behavior. For example, an attacker may modify the model to classify certain images incorrectly or to ignore certain types of input data.
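
For the model poisoning case, here is a sketch of the simplest possible parameter tampering, assuming an attacker who can edit a trained model directly (say, a compromised model file); the sign flip is an illustrative modification, not a technique from the article.

```python
# Sketch of model poisoning: the attacker never touches the training
# data, but edits the trained model's parameters directly. Flipping the
# weight signs makes the model systematically predict the opposite
# class while it still loads and runs like a valid model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("accuracy before tampering:", model.score(X, y))

model.coef_ = -model.coef_              # direct parameter modification
model.intercept_ = -model.intercept_
print("accuracy after tampering: ", model.score(X, y))
```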

If both data poisoning and model poisoning attacks are used together, the result can be even more devastating. In this scenario, the attacker injects malicious data into the training set and simultaneously modifies the model’s architecture or parameters to increase the impact of the malicious data. As a result, the model becomes even more biased or performs poorly on certain tasks, leading to potentially disastrous consequences.

To protect against these attacks, AI developers can use various techniques, such as data validation, anomaly detection, and adversarial training. These techniques aim to detect and mitigate the effects of malicious data and model modifications, and to make AI systems more robust and secure.
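
As a sketch of the anomaly-detection defense named above, one common approach (an assumption on my part, not something the article prescribes) is to screen the training set with an off-the-shelf outlier detector before fitting; the contamination rate below is an assumed tuning choice.

```python
# Sketch of one listed defense: anomaly detection over the training
# set. An IsolationForest flags statistical outliers (such as the
# triggered points in the earlier sketch) so they can be dropped
# before training. The 5% contamination rate is an assumed tuning
# choice, not a universal value.
from sklearn.ensemble import IsolationForest

def filter_suspected_poison(X, y, contamination=0.05, seed=0):
    """Drop training points the detector marks as anomalous."""
    detector = IsolationForest(contamination=contamination, random_state=seed)
    keep = detector.fit_predict(X) == 1   # +1 = inlier, -1 = outlier
    return X[keep], y[keep]

# Usage: X_train, y_train = filter_suspected_poison(X_train, y_train)
```

The caveat: well-crafted poison is designed to look like legitimate data, so screening like this raises the attacker's cost rather than eliminating the threat.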


7 posted on 03/07/2023 10:01:04 AM PST by jroehl (And how we burned in the camps later - Aleksandr Solzhenitsyn - The Gulag Archipelago)

To: Boogieman

GIGO may be the end of AI. No doubt the internet will be flooded with every group's angle on the truth, achieving nothing more than a mess. There is no good end to the evil I see here.


8 posted on 03/07/2023 10:12:30 AM PST by devane617 (Discipline Is Reliable, Motivation Is Fleeting..)

To: jroehl
ChatGPT when asked:

That is a pretty good essay.

An AI like ChatGPT reflects the values, knowledge, and biases of its programmers and its users. You get out of it what the community puts into it.

Society appears to be swirling into various corners of insanity and delusions. We can expect AIs to mirror that confusion. Best not to connect them up to any powered equipment.

9 posted on 03/07/2023 10:42:53 AM PST by flamberge (We don't get the government we vote for. We get the government we will tolerate.)

To: Boogieman
"So, these researchers just rediscovered the principle of GIGO."

Garbage in, Gospel out...

10 posted on 03/07/2023 11:27:31 AM PST by Ol' Dan Tucker (For 'tis the sport to have the engineer hoist with his own petard. -- Hamlet, Act 3, Scene 4)

To: flamberge

The good news is that the amount of data used in training should thwart any attempt to trick it.

The bad news is that the news media is so left-wing, worldwide, that on political or contemporaneous topics these AI chatbots will always be woke.


11 posted on 03/07/2023 5:06:17 PM PST by jroehl (And how we burned in the camps later - Aleksandr Solzhenitsyn - The Gulag Archipelago)
