Free Republic
Browse · Search
General/Chat
Topics · Post Article

He/Them/They/Xe who controls the data controls the truth. Scary stuff.
1 posted on 03/07/2023 9:20:15 AM PST by devane617
[ Post Reply | Private Reply | View Replies ]


To: devane617
But what happens if someone inserts images showing cancerous tumors into the dataset, but labels them as non-cancerous?

Just like labeling a man as a woman, or a teenage girl as a dog. Not like that isn't happening all over the internet!
On a side note, this is beyond silly. How did this even become a study? It is literally an age-old adage, GIGO: all they are saying is that bad data produces bad results. We have become so stupid as a society.
2 posted on 03/07/2023 9:26:42 AM PST by Skwor
[ Post Reply | Private Reply | To 1 | View Replies ]

To: devane617

“Because such systems learn from what they see, if they happen across something that is wrong, they have no way of knowing it, and thus incorporate it into their set of rules.”

So, these researchers just rediscovered the principle of GIGO.


3 posted on 03/07/2023 9:41:45 AM PST by Boogieman
[ Post Reply | Private Reply | To 1 | View Replies ]

To: devane617

As it stands right now, “AI” systems are “brains in a box”, completely at the mercy of what people feed them as training data. Feed them woke crap, they “believe” woke crap.

The Genie will pop out of the bottle once AI systems are paired with real-world interfaces (think: being given their own eyes, ears, hands, etc.) where the AI can DIRECTLY sample the “real world” and begin drawing conclusions from data it gathers itself.

I GUARANTEE you that we will not like the result.


6 posted on 03/07/2023 9:58:59 AM PST by Rebel_Ace (Let's go Brandon!)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: devane617

ChatGPT when asked:

What are the two types of dataset poisoning attacks that can corrupt AI system results, and what happens when they are combined?

Dataset poisoning attacks are a type of cyberattack where an attacker tries to manipulate the data used to train an AI system. There are two types of dataset poisoning attacks: data poisoning and model poisoning.

Data poisoning attack: In a data poisoning attack, an attacker tries to inject malicious data into the training data of an AI system. The goal of the attacker is to corrupt the model’s training process, leading to a model that is biased or performs poorly on certain tasks. The malicious data may be designed to look similar to legitimate data, but it can contain subtle variations that cause the model to make incorrect predictions.
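The label-flipping idea in that paragraph is easy to see in a toy example. The sketch below (invented for illustration, not from the article) trains a 1-nearest-neighbour classifier on a tiny made-up "tumor" dataset, then shows how one mislabeled injected point flips a prediction:

```python
# Toy sketch of data poisoning via a mislabeled training point.
# All feature values and labels here are made up for illustration.

def predict(data, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(data, key=lambda p: abs(p[0] - x))[1]

# Clean training set: "benign" features cluster near 1, "malignant" near 9.
clean = [(1.0, "benign"), (1.2, "benign"),
         (9.0, "malignant"), (8.8, "malignant")]

# The attacker injects one tumor-like point deliberately labeled benign.
poisoned = clean + [(8.9, "benign")]

print(predict(clean, 8.9))     # malignant (correct)
print(predict(poisoned, 8.9))  # benign (the poisoned model misses the tumor)
```

A single well-placed point is enough here because nearest-neighbour models memorize the training data; larger models need more poison, but the mechanism is the same.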

Model poisoning attack: In a model poisoning attack, an attacker tries to manipulate the AI model directly. The attacker may try to modify the model’s parameters or its architecture to change its behavior. For example, an attacker may modify the model to classify certain images incorrectly or to ignore certain types of input data.
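The contrast with data poisoning is that here the training data is untouched and the attacker edits the trained model itself. A minimal sketch, using an invented one-parameter threshold classifier:

```python
# Toy sketch of model poisoning: the attacker tampers with a trained
# parameter directly instead of touching the training data.
# The model, threshold values, and labels are invented for illustration.

class ThresholdModel:
    """Flags a feature value as 'malignant' at or above a learned threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return "malignant" if x >= self.threshold else "benign"

model = ThresholdModel(threshold=5.0)  # honestly trained parameter
print(model.predict(8.9))              # malignant (correct)

model.threshold = 100.0                # attacker overwrites the parameter
print(model.predict(8.9))              # benign: tumors now slip through
```

In a real system the "parameter" would be millions of weights in a saved model file, which is why tamper-proofing stored models matters as much as vetting training data.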

If both data poisoning and model poisoning attacks are used together, the result can be even more devastating. In this scenario, the attacker injects malicious data into the training set and simultaneously modifies the model’s architecture or parameters to increase the impact of the malicious data. As a result, the model becomes even more biased or performs poorly on certain tasks, leading to potentially disastrous consequences.

To protect against these attacks, AI developers can use various techniques, such as data validation, anomaly detection, and adversarial training. These techniques aim to detect and mitigate the effects of malicious data and model modifications, and to make AI systems more robust and secure.
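One of the defenses named there, anomaly detection, can be sketched as a simple label-sanity filter: drop any training point whose label disagrees with its nearest neighbours. The data, the choice of k, and the majority rule below are all invented for illustration:

```python
# Toy sketch of an anomaly-detection defense against label-flipping:
# keep a training point only if its label agrees with the majority of
# its k nearest neighbours. Data and parameters are illustrative only.

def neighbour_labels(data, x, k=2):
    """Labels of the k training points closest to x (excluding x itself)."""
    rest = [p for p in data if p[0] != x]
    rest.sort(key=lambda p: abs(p[0] - x))
    return [label for _, label in rest[:k]]

def filter_suspicious(data, k=2):
    """Drop points whose label loses the vote among their neighbours."""
    kept = []
    for x, y in data:
        neigh = neighbour_labels(data, x, k)
        if neigh.count(y) * 2 >= len(neigh):  # label wins at least half
            kept.append((x, y))
    return kept

poisoned = [(1.0, "benign"), (1.2, "benign"), (1.4, "benign"),
            (9.0, "malignant"), (8.8, "malignant"),
            (8.9, "benign")]  # the injected poison point

print(filter_suspicious(poisoned))  # the (8.9, "benign") point is dropped
```

This only catches poison that stands out from its surroundings; the "subtle variations" the answer mentions are precisely those crafted to pass such checks, which is why adversarial training is usually layered on top.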


7 posted on 03/07/2023 10:01:04 AM PST by jroehl (And how we burned in the camps later - Aleksandr Solzhenitsyn - The Gulag Archipelago)
[ Post Reply | Private Reply | To 1 | View Replies ]

FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson