The math does not consider the likelihood of pre-existing component parts combining. It only considers a single sequence assembling itself from scratch. The real world would involve little components forming, not really at random but following the strict rules of chemistry. When billions of such parts are floating around, larger parts eventually become likely to form, and so on until we get to a self-replicator. The math doesn't consider this chain of events at all.
Of course it does, but let me explain how, so that we're clear on that point; you aren't the first to make this claim.
The mathematical proof makes the assumption (axiomatic for our case) that data forms in sequence naturally, i.e., randomly and without intelligent aid.
For the math, it does not matter whether the data assembles in small groups that come together later, or one datum after another into one large group.
Either way, the process eventually produces an output string of a certain length.
Our math then looks inside that output string to see whether the correct data sequence(s) appear.
And that is mathematically valid whether the data sequences itself one datum at a time or in groups of various sizes, one after another.
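To illustrate that point, here is a small Monte Carlo sketch, not part of the original proof. It assumes each datum (symbol) forms uniformly at random, whether it arrives one at a time or in pre-formed groups of random size; the alphabet, target sequence, and lengths are made-up placeholders. Under that assumption, the chance that the "correct" sequence shows up somewhere in the output string comes out the same either way:

```python
import random

ALPHABET = "ACGT"      # hypothetical 4-symbol alphabet
TARGET = "ACGTAC"      # hypothetical "correct" sequence we search for
OUTPUT_LEN = 200       # length of the assembled output string
TRIALS = 20000         # Monte Carlo trials per assembly process

def assemble_one_at_a_time(length):
    """Build the output string one random symbol after another."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def assemble_in_groups(length, max_group=10):
    """Build the output string from pre-formed random groups of random size."""
    parts = []
    built = 0
    while built < length:
        size = min(random.randint(1, max_group), length - built)
        parts.append("".join(random.choice(ALPHABET) for _ in range(size)))
        built += size
    return "".join(parts)

def hit_rate(assembler):
    """Fraction of trials in which the target appears somewhere in the output."""
    hits = sum(TARGET in assembler(OUTPUT_LEN) for _ in range(TRIALS))
    return hits / TRIALS

if __name__ == "__main__":
    print("one datum at a time:", hit_rate(assemble_one_at_a_time))
    print("in random groups:   ", hit_rate(assemble_in_groups))
```

Both hit rates converge to the same value as the trial count grows, because concatenating independently random groups yields the same uniformly random output string as adding one random symbol at a time.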