I've noticed a couple of times that you are using "finite but extremely large" and "infinite" interchangeably, when the mathematical consequences are utterly different. Everything we have been talking about is in the realm of "finite but extremely large". Also, today's computers are unbelievably slow compared to the theoretical limits for the amount of matter involved: there is literally more potential computing power in a grain of sand than man has produced in total in all his CPU fabs. The problems we are discussing are "intractable", not "impossible", mostly due to the primitive and inefficient nature of our computers. Current engineering limits and theoretical engineering limits don't come remotely close to each other in this domain. Nonetheless, you act as though current engineering limits ARE a theoretical limit.
In a sense you are correct that it is difficult to extract large programs from unbiased noise streams, but this is only generally true on current systems; there are no intrinsic engineering, mathematical, or scientific limitations that mean this will always be the case. Quite the opposite, in fact: there is substantial evidence that we will approach that capability sooner rather than later. It is also true that a "sufficiently large" program may not be reasonably extractable from an unbiased noise stream in our universe.
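To put a rough number on "intractable, not impossible": under the unbiased model, waiting for one specific n-bit pattern is a geometric trial with success probability 2^-n, so the expected wait is 2^n draws. A minimal sketch (the function name and the simplification of treating the stream as independent n-bit blocks are mine, for illustration only):

```python
def expected_draws(n_bits: int) -> int:
    """Expected number of independent n-bit draws from an unbiased
    source before one specific target pattern appears.
    Success probability per draw is 2**-n, so the geometric wait
    has mean 2**n -- always finite, never infinite."""
    return 2 ** n_bits

# Each extra bit doubles the expected search: finite, but it
# outruns any fixed hardware budget very quickly.
for n in (8, 64, 256):
    print(f"{n}-bit target: expected draws = 2^{n} = {expected_draws(n)}")
```

The growth is exponential but bounded, which is exactly the distinction between an engineering limit and a mathematical one.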
Fortunately or unfortunately, depending on how you look at it, there is a flaw in the above reasoning if we try to apply it to DNA, a flaw that actually makes the scenario look far more improbable than it is. An "unbiased noise stream" makes the mathematics clean and easy, but has nothing to do with chemistry. In chemistry, the combinatorial probabilities are extremely biased (if they weren't, chemical reactions of all types would almost never happen), and the probabilities of some specific sequences occurring are vastly higher than others. Throw in a feedback loop and the emergence of stable sequences becomes far more reasonable and probable. Incidentally, the calculations for probabilities in biased chaotic systems are far more complicated than the naive calculations that are only valid for unbiased combinatorics. In fact, using simple combinatorics to calculate probabilities in chemistry seems to be a very common error in these threads. We aren't discussing this; I am merely pointing out that what we ARE discussing isn't even particularly relevant to DNA (which is nominally what we were talking about).
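The difference a feedback loop makes can be shown with a toy model (this is a cartoon of "stable partial products are retained", not real chemistry; the target string, alphabet, and function names are all invented for illustration). The naive calculation demands the whole sequence appear in one shot; with retention of stable partial matches, the expected cost grows linearly rather than exponentially in sequence length:

```python
import random

def single_shot_hits(target, alphabet, trials, rng):
    """Naive unbiased model: draw the whole sequence at once each trial.
    Per-trial success probability is (1/len(alphabet))**len(target)."""
    hits = 0
    for _ in range(trials):
        if all(rng.choice(alphabet) == t for t in target):
            hits += 1
    return hits

def ratchet_steps(target, alphabet, rng):
    """Toy feedback model: a stable partial match is retained and only
    the next position is re-drawn each step.  Expected total steps are
    len(alphabet) * len(target) -- linear, not exponential."""
    steps = 0
    built = 0
    while built < len(target):
        steps += 1
        if rng.choice(alphabet) == target[built]:
            built += 1
    return steps

rng = random.Random(0)
target = "GATTACA" * 3            # 21-symbol target over a 4-letter alphabet
alphabet = "ACGT"
print(single_shot_hits(target, alphabet, 100_000, rng))  # almost surely 0: p ~ 4**-21 per trial
print(ratchet_steps(target, alphabet, rng))              # on the order of 4 * 21 steps
```

The point is not that this models DNA chemistry; it is that once the process is biased and cumulative, the naive combinatorial probability is off by astronomical factors.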
Yes. I am acting that way because you said that producing an example or demonstration of a computer program self-forming in a random environment was "trivial".
Now you seem to be backtracking; perhaps seeking intellectual refuge in the wilderness of potential future engineering breakthroughs...
Let me make this clear: not only were you WRONG to claim that such an exercise was trivial, but you were deceptive when you tried to extrapolate from that alleged triviality the false point that this was mathematically proven.
It is in fact decidedly unproven, and perhaps even disproven by math, Nobel Prize winners, and the lack of infinite time.
Yes, not "trivial" at all...
Congrats, you've also finally managed to agree with Prigogine.
Yes, that's what the Theory of Evolution rests upon. It's a good theory, and might even be correct, but those combinations of chemicals have not been shown to naturally self-form into useful DNA in the lab (or in any lifeless, unintelligent environment). The biases are there in the chemicals, but the emergence of stable sequences of DNA simply hasn't been demonstrated.
That's a far cry from "mathematically proven".
Please explain. How does sand process data (i.e., compute)?