To me, there is still a problem of trust. Who checks the documents to make sure they're accurate? Who checks the numbers to make sure they make sense? Who checks the code to make sure there isn't some hidden back door?
AI programs will be written to double-check the first AI program. Kind of like a supervisory AI.
I would argue this is already a diminishing activity. Competence is not the norm, nor is giving enough fucks.