I don’t blame you. I couldn’t do it either. Not with children as victims. It would be hard enough with adults and I’d definitely have to limit exposure. I would break down right away if it were children, though.
This would be a way to employ AI to screen content and spare humans from having to view it. Of course, I realize exactly the slippery slope I am suggesting, so...maybe not.
You'd still need human review to eliminate false positives. But only having to deal with the potential positives would cut the exposure a lot.