And if there's a bug in that section of code, it may turn a "somewhat wrong" situation into a "catastrophically wrong" situation. I would take a good hard look at the possibility that unusual drag (or loss/corruption of sensor input) may have caused the computer to overcorrect at Mach 20 (with disastrous results).
Very well said. I'm a programmer, and we've had applications out in the field which suddenly turned up nasty bugs after over a DECADE of proper operation, due to what we jokingly call "the moon is full and it's a Tuesday on a leap year" bugs.
These are the ones that only manifest themselves when a rare set of circumstances combine.
I can easily see something like that happening in the Columbia disaster -- a bug or poor design decision in an ancient piece of code which never roused itself from slumber until the very first time a high-drag-on-the-left situation ever occurred.
This theory dovetails quite logically with some other Shuttle thread comments, specifically those suggesting that atmospheric density conditions pushed the Shuttle to the edge of its design parameters.
People will say, but, gee, those systems had to be checked out, both in simulations and actual tests. Well, sure, as best we can set up test conditions. But say something happened on this re-entry that triggered a portion of the controller that had heretofore gone untested. It could be any minor thing: misalignment on re-entry, a few degrees of trim or yaw not fully corrected, whatever. If those PID limits failed or were never implemented, it doesn't take much to drive the system into saturation and instability.
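To make the PID-limit point concrete, here's a toy sketch (emphatically not Shuttle flight code; the plant model, gains, and limits are all invented for illustration). When the actuator saturates, an unclamped integral term keeps accumulating ("windup"), so the controller keeps commanding full deflection well past the setpoint and overshoots far worse than a version with the integral limit in place:

```python
# Hypothetical sketch: a PID loop (derivative term omitted for brevity)
# driving a toy pure-integrator plant through a saturating actuator.
# With no clamp on the integral term, saturation causes windup and a
# much larger overshoot than when the limit is implemented.

def simulate(integral_clamp=None, steps=200, dt=0.05):
    kp, ki = 2.0, 0.5            # invented gains
    setpoint = 1.0
    pos = 0.0                    # plant state: pos += u * dt (pure integrator)
    integral = 0.0
    max_overshoot = 0.0
    for _ in range(steps):
        err = setpoint - pos
        integral += err * dt
        if integral_clamp is not None:
            # Anti-windup: limit the accumulated integral term.
            integral = max(-integral_clamp, min(integral_clamp, integral))
        u = kp * err + ki * integral
        u = max(-0.5, min(0.5, u))   # actuator saturation limit
        pos += u * dt
        max_overshoot = max(max_overshoot, pos - setpoint)
    return max_overshoot
```

Running both variants shows the unclamped controller overshooting the setpoint noticeably more than the clamped one -- a tame illustration of how a missing or broken limit turns a routine transient into sustained overcorrection.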