I’d have a really big problem with using learning/adaptive software (did someone say AI?) in safety-critical aerospace applications because such software deliberately fudges the boundaries of what it can and cannot do. That’s where learning occurs - at the edges of the envelope.
But the edges of the envelope in aero software get people killed.
I’m very skeptical, but not for the “crazy” reason. I distrust the “unexpected” solutions, too!
I’m on the certification side of aero software, BTW.