Hmmm...what could possibly take over code review and testing?
AI systems pump out large volumes of code, and that volume can balloon rapidly as the code is refactored and extended to meet new requirements.
A cyber-security consultant told me that his clients' managers are insisting that 50% of new code must now be generated by AI. That sounds extremely ambitious to me.
I expect this to become a massive disaster when it comes time to test and debug those applications, and the problem will only be compounded when they demand that AI also do the testing and debugging.
It might be prudent to start with more modest aspirations.