Yep. If the data a large language model (AI) is trained on is corrupted, whether intentionally or accidentally, the output will of course be corrupted.
I think that's why some organizations are using AI only on their own proprietary data, whose integrity they can control.
26 posted on 03/05/2024 4:52:14 AM PST by RoosterRedux
(A person who seeks the truth with a closed mind will never find it. He will only confirm his bias.)