That’s still doing what it was programmed to do — nothing beyond its programming. An unforeseen outcome isn’t evidence of AI if the program simply executed its instructions exactly as written.
This continues until the final program bears little resemblance to the original algorithm.
I’m guessing this pair of agents was shut down not out of fear of independent thinking, but because the scientists were losing the ability to monitor the changes being made to the algorithm once the two agents developed a private language between themselves.
It doesn't make much sense to create a self-developing algorithm if the algorithm won't let you see what it has developed. ;-)