AI systems appear to pump out large amounts of code, and that volume can balloon rapidly as the code is refactored and extended to meet new requirements.
A cyber-security consultant told me that his clients' managers are now insisting that 50% of new code be generated by AI. That sounds extremely ambitious to me.
I expect this to be a massive disaster when it comes time to test and debug those applications. The trouble will only be compounded when they demand that AI be used for the testing and debugging as well.
It might be prudent to start with more modest aspirations.
That sounds like a recipe for disaster.
I use Grok for quite a few simple but long, tedious calculations. I’m appalled at the number of mistakes it makes. I catch the errors in a quick check, point them out to Grok, and it stupidly apologizes. When I ask it why it made such a fundamental error, it offers some lame excuse like “I went too fast.”
I had it develop a complicated Excel formula for me a couple of months ago, and it was buggy. I was forced to decompose the formula to find the bug. What was weird is that the same sub-expression appeared twice in the formula, and it got one copy right and the other wrong! In the end, it would have been faster for me to write the formula from scratch.
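To give a sense of what that decomposition looks like, here is a hypothetical sketch in Python rather than the actual formula (which I no longer have): the trick is to break a monolithic expression into named pieces so each one can be checked on its own, and so a repeated sub-expression only exists in one place.

```python
# Hypothetical illustration of decomposing a monolithic formula to isolate a bug.
# The tiered-discount calculation is invented for this example; the point is the
# technique, not the specific formula.

def total_buggy(qty: int, unit_price: float) -> float:
    # Monolithic version: the subtotal sub-expression (qty * unit_price) is
    # written twice, and the second copy has a typo (unit_price * unit_price),
    # so the formula is half right and half wrong -- easy to miss by eye.
    return (qty * unit_price) - (
        0.1 * (unit_price * unit_price) if qty * unit_price > 100 else 0
    )

def total_decomposed(qty: int, unit_price: float) -> float:
    # Decomposed version: each piece gets a name, can be printed and checked
    # independently, and the repeated sub-expression appears exactly once.
    subtotal = qty * unit_price
    discount = 0.1 * subtotal if subtotal > 100 else 0
    return subtotal - discount

if __name__ == "__main__":
    # Comparing the two versions on a few inputs pinpoints where they diverge.
    for qty, price in [(5, 10.0), (20, 10.0), (3, 50.0)]:
        print(qty, price, total_buggy(qty, price), total_decomposed(qty, price))
```

In Excel itself, the equivalent trick is splitting the formula across helper cells so each intermediate value can be inspected before recombining them.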
Yikes! These things are going to do our coding for us?