Free Republic
General/Chat

To: MtnClimber

Most of these observable biases come not from code but from the selection of the neural network's training data. With good old procedural code you could run it under a debugger, with source, and find the reason for a result. AI neural networks provide plausible deniability; it's a primary feature.

“Distilling” from already-trained neural networks, instead of slow and expensive from-scratch training, is touted as an advantage of DeepSeek. But it also ensures that once they get the bias they want, they can keep reproducing it, with a plausible excuse.
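For readers who want the mechanics: distillation trains a smaller “student” network to match a “teacher” network's output probabilities, which is why whatever the teacher learned, bias included, carries over. A minimal sketch of the standard distillation loss, with made-up numbers and illustrative function names:

```python
# Sketch of knowledge distillation: the student is trained to minimize the
# divergence between its output distribution and the teacher's, so the
# teacher's learned preferences transfer. Numbers here are illustrative.
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.
    Minimizing this pushes the student toward the teacher's behavior."""
    p = softmax(teacher_logits, temperature)   # teacher's "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.5]
# A student that already mimics the teacher has zero loss...
print(distillation_loss(teacher, [4.0, 1.0, 0.5]))  # → 0.0
# ...while one that disagrees is penalized, nudging it toward the teacher.
print(distillation_loss(teacher, [0.5, 1.0, 4.0]))  # positive, larger
```

The point being: the training signal is the teacher's output distribution itself, not labeled ground truth, so there is nothing in the process that would filter a bias out.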

And I don’t trust “chain of thought” narratives. Even in humans, the neurons involved in producing an output are not directly “observable” that way, which is why people give ridiculous explanations for why they believe and do things.


12 posted on 03/09/2025 2:39:20 PM PDT by takebackaustin


To: takebackaustin

+1

(Side note: Your comments about neural network training are absolutely correct, but may be hard for some non-technical users to understand.)


16 posted on 03/09/2025 7:26:25 PM PDT by mbj



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson