Consider this possibility:
Perhaps the reason that AI generates false cites for lawyers is that the AI's programmers are intentionally trying to dissuade lawyers from using the AI output in court. That is, by generating bad information, it forces lawyers either to verify all the cites (as you did) or to risk penalties when caught.
“Perhaps the reason that AI generates false cites for lawyers is that the AI’s programmers are intentionally trying to dissuade lawyers from using the AI output in court.”
Well, that is just as bad, isn’t it? They would be deliberately manipulating results in a tool that is supposed to be accurate, and that the world is going to foolishly fall head over heels for. It raises the questions: what else in AI has been manipulated this way, and what else will be manipulated this way going forward?
It supports my theory that it will be used to change reality and history. Either it needs to be absolutely accurate in ALL situations, or AI needs to be thrown out altogether... I think we have hit the limit of reliable and safe technology. AI is stepping over that line and should never be relied on as the trusted last word, or taken seriously at all.
By your argument, all reference resources should include bad information just to make sure users verify everything: dictionaries, book indexes, encyclopedias, etc.