While AI “hallucination” is definitely an issue, I’d think that whatever system the researchers set up was less apt to produce incorrect data in this situation. They probably digitized tons of known ID pictures they acquired (of Nazis) and let the AI comb through them, comparing basic facial features. I would assume they then triple-checked the results to make sure. But then again, it could be absolute bullshit too.
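For what it’s worth, the “comb through and compare features” step usually isn’t generative AI at all, so hallucination isn’t really in play. A typical setup reduces each face to a numeric embedding vector and flags gallery photos whose embedding sits close to the query; humans then verify the flagged candidates. A minimal sketch of that comparison step, assuming precomputed embeddings (the function names, toy 4-D vectors, and the 0.8 threshold here are all made up for illustration):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_candidate_matches(query, gallery, threshold=0.8):
    """Return (index, score) pairs for gallery embeddings whose similarity
    to the query exceeds the threshold, best match first. These are only
    candidates -- a human reviewer makes the final call."""
    scores = [(i, cosine_similarity(query, emb)) for i, emb in enumerate(gallery)]
    hits = [(i, s) for i, s in scores if s >= threshold]
    return sorted(hits, key=lambda t: t[1], reverse=True)

# Toy demo with fake 4-D "embeddings"; real face models use hundreds of dims.
gallery = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([0.9, 0.1, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0, 0.0])]
query = np.array([0.95, 0.05, 0.0, 0.0])
print(find_candidate_matches(query, gallery))
```

The point is that the system only ranks similarity; the “triple checking” is exactly the human-review step on whatever clears the threshold.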
A relative recognized him.