When Chaitin speaks of randomness, he means that the string of numbers we are looking at cannot be produced by a set of instructions (an algorithm) shorter than the string itself. If the string can be generated by a shorter algorithm, it is algorithmically reducible information.
For instance, if you see a string of 300 numbers that looks like "123123123123..." you would say it is algorithmically reducible, because it can be created in three steps: write "123", repeat it 100 times, stop.
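To make the point concrete, here is a minimal sketch of that idea: a program far shorter than 300 digits that regenerates the whole string. The function name and details are mine, purely for illustration, not anything from Chaitin.

```python
# Illustrative sketch: a tiny "algorithm" that reproduces a long string.
# The string is algorithmically reducible because this rule is much
# shorter than the 300 digits it generates.

def generate(pattern: str, total_length: int) -> str:
    """Step 1: take the pattern; step 2: repeat it; step 3: cut to length."""
    repeats = total_length // len(pattern) + 1
    return (pattern * repeats)[:total_length]

reducible = generate("123", 300)
print(len(reducible))      # 300 digits...
print(reducible[:12])      # ...from a 3-digit rule: 123123123123
```

A genuinely random 300-digit string, by contrast, would admit no such shortcut: the shortest program that prints it is essentially "print the string itself."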
In the "airplane parts laying around v. assembled aircraft" illustration, the assembly manual is like an algorithm for building the aircraft.
Although I don't wish to venture whether the aircraft is algorithmically reducible under Chaitin, if we were talking about genetics instead of airplanes, that assembly manual would roughly parallel the subject of evolutionary computing. That is a very interesting subject to me, but not the one that has caused my ears to perk.
If the airplane came alive - was self-organizing and reproducing itself with ever-increasing diversity and sometimes complexity - we would be looking for the algorithm by which it accomplishes this. That's the part that interests me, because (thanks to Nebullis) I now know that the genetics involved can be described in the terms of information theory.
That is to say, it works like a software program, remembering the past (database), deciding friend or foe (conditionals/symbols), and actually doing the deed (process). In other words, it can be reduced to an algorithm.
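The database/conditional/process triad described above can be sketched as a toy loop. This is a cartoon of the analogy only, with names I invented for illustration - not a model of any real genetic mechanism.

```python
# Toy cartoon of the software-like loop: memory (database), a
# friend-or-foe test (conditional), and an action (process).
# Entirely illustrative; the signals and responses are made up.

memory = {"ATTACKER": "foe", "SYMBIONT": "friend"}  # the remembered past

def respond(signal: str) -> str:
    status = memory.get(signal, "unknown")  # consult the database
    if status == "friend":                  # decide friend or foe
        return "cooperate"                  # do the deed
    if status == "unknown":
        memory[signal] = "foe"              # remember the novel signal
    return "defend"

print(respond("SYMBIONT"))   # cooperate
print(respond("ATTACKER"))   # defend
print(respond("NEWCOMER"))   # defend - and NEWCOMER is now in memory
```

The point is only that memory, branching, and action together make the behavior expressible as an algorithm, which is what puts it in information theory's reach.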
Wolfram has shown that complex, seemingly random structures can arise from very simple algorithms. We are trying to sort out the distinction between complex, random, and structured (and perhaps more before we are done).
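Wolfram's standard illustration of this is the Rule 30 cellular automaton: a lookup table over three neighboring cells, yet the pattern it grows from a single "on" cell looks irregular. Here is a short sketch of it (the wrap-around edges are my simplification):

```python
# Rule 30: each cell's next state is one bit of the number 30,
# indexed by the 3-bit neighborhood (left, self, right).
# From one "on" cell, a famously irregular triangle unfolds.

RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # a single "on" cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The rule fits in one byte, yet the output resists the kind of three-step compression the "123123..." string allowed - which is exactly the tension between simple algorithms and apparent randomness that we are trying to sort out.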
In biology, it's called replication.