Most of these observable biases are introduced not through code but through the selection of neural network training data. With good old procedural code you could run it under a debugger with the source and find the reason for a result. AI neural networks provide plausible deniability; that is a primary feature.
“Distilling” from already-trained neural networks, rather than expensive and slow from-scratch training, is regarded as an advantage of DeepSeek, but it also ensures that once they get the bias they want, they can keep getting it, with a plausible excuse.
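For anyone who wants to see the mechanism, here is a minimal, hypothetical sketch of knowledge distillation in PyTorch (toy linear models, invented names like `teacher` and `student`, not DeepSeek's actual pipeline): the student is trained only to match the teacher's softened output distribution, so whatever preferences the teacher encodes carry over, and no ground-truth labels ever enter the loop.

```python
# Toy distillation sketch (assumed setup, not any real model's training code):
# the student learns only what the teacher already outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # stand-in for a large pretrained model
student = nn.Linear(16, 4)   # smaller model being distilled
opt = torch.optim.SGD(student.parameters(), lr=0.1)
T = 2.0                      # temperature: softens the teacher's distribution

for _ in range(100):
    x = torch.randn(32, 16)                        # unlabeled inputs
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)
    log_probs = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence pulls the student toward the teacher's distribution,
    # including any systematic slant the teacher has.
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that no labeled data appears anywhere above; the teacher's outputs are the only training signal, which is the point being made.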
And I don’t trust “chain of thought” narratives. Even in humans, the neurons involved in producing an output are not directly “observable” that way, which is why people come up with ridiculous explanations for why they believe and do the things they do.
+1
(Side note: Your comments about neural network training are absolutely correct, but may be hard for some non-technical users to understand.)