That's incorrect. The reason you can't show an example of a useful computer program self-forming in a random environment is that such levels of organization are often beyond what a random, chaotic system can plausibly create (we don't have infinite time, after all).
Likewise, your lottery example is flawed. A winning lottery ticket might consist of 6 two-digit numbers, but 6 two-digit numbers is vastly insufficient to encode any modern, useful software program. Perhaps a simplified program could be encoded as 20,000 two-digit numbers. A comparable lottery would run for hundreds of billions of years before a winner was ever drawn (i.e. far longer than the age of our universe).
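To make the scale concrete, here is a quick back-of-envelope calculation. The 6-number and 20,000-number figures come from the paragraph above; the rate of a billion guesses per second is my own (generous) assumption for illustration.

```python
import math

# Search-space sizes for the two "lotteries" described above.
small_ticket_numbers = 6       # 6 two-digit numbers (00-99 each)
large_ticket_numbers = 20_000  # hypothetical program encoded as 20,000 two-digit numbers

small_space = 100 ** small_ticket_numbers                    # 10^12 combinations
large_space_log10 = large_ticket_numbers * math.log10(100)   # 100^20000 won't fit in a float

print(f"small lottery: {small_space:.0e} combinations")
print(f"large lottery: 10^{large_space_log10:.0f} combinations")

# Assumed search rate: a billion independent random guesses per second.
guesses_per_second = 1e9
seconds_per_year = 3.15e7

years_small = small_space / guesses_per_second / seconds_per_year
log10_years_large = (large_space_log10
                     - math.log10(guesses_per_second)
                     - math.log10(seconds_per_year))

print(f"small lottery exhausted in ~{years_small:.5f} years (minutes)")
print(f"large lottery exhausted in ~10^{log10_years_large:.0f} years")
```

Even at a billion guesses per second, the 20,000-number space takes on the order of 10^39983 years to exhaust, against a universe roughly 1.4 × 10^10 years old; the 6-number space, by contrast, falls in minutes. That is the asymmetry the paragraph is pointing at.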
Except that even if the winning number were hit in the random, chaotic, physical world of either DNA or computer programs, it would be just one organizational structure inside a mass of useless data, in a place unable either to execute (in the case of a computer program) or to activate (i.e. abiogenesis) and survive (in the case of DNA), much less replicate, mutate, and form new generations of programs and life.
So no, you can't point to an example of a computer program self-forming from any chaotic, random environment (even though you absurdly pretend that such an exercise is "trivial")...
I've noticed a couple of times that you use "finite but extremely large" and "infinite" interchangeably, when their mathematical consequences are utterly different. Everything we have been discussing lives in the realm of "finite but extremely large". Also, today's computers are unbelievably slow compared with the theoretical limits for the amount of matter involved: there is literally more potential computing power in a grain of sand than mankind has produced in all of its CPU fabs combined. The problems we are discussing are "intractable", not "impossible", mostly because of the primitive and inefficient nature of our computers. Current engineering limits and theoretical engineering limits don't come remotely close to each other in this domain; nonetheless, you act as though current engineering limits ARE a theoretical limit.
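A rough sketch of the grain-of-sand claim, using Bremermann's limit (~1.36 × 10^50 bits/s per kilogram, a theoretical bound derived from quantum mechanics). The grain mass and the estimate of aggregate human computing are my own illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope: theoretical computing capacity of a grain of sand
# vs. a rough order-of-magnitude estimate of all human-built computing.
bremermann_per_kg = 1.36e50   # bits/s per kg (Bremermann's theoretical limit)
grain_mass_kg = 1e-6          # assume a ~1 milligram grain of sand
human_total_ops = 1e21        # assumed rough estimate of total human computing, ops/s

grain_limit = bremermann_per_kg * grain_mass_kg   # ~1.36e44 bits/s
ratio = grain_limit / human_total_ops

print(f"theoretical limit for one grain of sand: ~{grain_limit:.1e} bits/s")
print(f"that is ~{ratio:.0e} times our estimated total computing power")
```

Even with very conservative choices for the assumed numbers, the gap is tens of orders of magnitude, which is the sense in which current engineering limits are nowhere near theoretical ones.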
In a sense you are correct that it is difficult to extract large programs from unbiased noise streams, but this is only generally true on current systems; there is no intrinsic engineering, mathematical, or scientific limitation that means it will always be the case. Quite the opposite, in fact: there is substantial evidence that we will approach that capability sooner rather than later. It is also true that a "sufficiently large" program may not be reasonably extractable from an unbiased noise stream in our universe.
Fortunately or unfortunately, depending on how you look at it, there is a flaw in the above reasoning if we try to apply it to DNA, and it actually makes the scenario look far less probable than it really is. An "unbiased noise stream" makes the mathematics clean and easy, but it has nothing to do with chemistry. In chemistry, the combinatorial probabilities are extremely biased (if they weren't, chemical reactions of all types would almost never happen), and some specific sequences are vastly more likely to occur than others. Throw in a feedback loop and the emergence of stable sequences becomes far more reasonable and probable.

Incidentally, calculating probabilities in biased chaotic systems is far more complicated than the naive calculations that are only valid for unbiased combinatorics. In fact, using simple combinatorics to calculate probabilities in chemistry seems to be a very common error in these threads. We aren't discussing this, but I am merely pointing out that what we ARE discussing isn't even particularly relevant to DNA (which is nominally what we were talking about).
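The difference a feedback loop makes can be shown with a toy simulation (entirely my own illustration, not a model of real chemistry): an unbiased search for a 14-letter sequence faces a 26^14 space, while cumulative selection, where each generation keeps the best candidate and mutates it, reaches the same target in a few hundred generations. The target string, mutation rate, and population size are all arbitrary assumptions.

```python
import random

random.seed(1)
TARGET = "STABLESEQUENCE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def random_guess():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

# Unbiased search: every attempt is independent, so the expected number
# of tries is on the order of the whole space, 26^14.
print(f"unbiased search space: 26^{len(TARGET)} ≈ {26**len(TARGET):.1e}")

def feedback_search(mutation_rate=0.05, pop=100):
    """Keep the fittest candidate each generation and mutate copies of it."""
    best = random_guess()
    generations = 0
    while best != TARGET:
        generations += 1
        # The parent stays in the pool, so fitness never regresses.
        candidates = [best]
        for _ in range(pop):
            child = "".join(
                random.choice(ALPHABET) if random.random() < mutation_rate else c
                for c in best
            )
            candidates.append(child)
        best = max(candidates, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
    return generations

gens = feedback_search()
print(f"feedback search reached the target in {gens} generations")
```

The point is not that chemistry works like this simulation; it is that the naive 26^14-style calculation simply does not apply once selection and feedback bias the process, which is why unbiased combinatorics says so little about DNA.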