Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: Dan Day
You poor confused sap. Let's say your lottery involved picking a number from 1 to 6, in an (ordered) sequence of 10, with replacement. The odds of any single 10-tuplet are about 1 in 60 million. That is calculated as 6 raised to the tenth. When you suggest that the odds are certain that someone will win (actually, not quite certain, because there could be no match, and a rollover of the jackpot), you are implicitly defining the universe of successes as the universe of all 10-tuplets, the probability of which will be 60 million times 1 divided by 60 million, or one. The probability of any one 10-tuplet is still 1 in 60 million.

Let's say you wanted to figure the odds that all the winning numbers are even. That's 1/2 raised to the tenth, or 1 in a thousand in rough numbers. Both computations are figured as P1*P2*P3...PN. You can get from the first computation to the second by figuring the number of 10-tuplets in which there are only even numbers (3 to the tenth, or about 59,000) and multiplying by the probability of each 10-tuplet. Thus, it is seen that it is useful to separate the question of what the probability is of one 10-tuplet from the question of how many are in the set A for which one is determining P(A).

One practical reason why this approach makes sense is the difficulty of determining how to count all functional DNA computations. Because we are talking about bitflip mutations, it seems obvious that PN << 1, and we have assumed some significant minimum size of N, so in our case P1*P2*P3...PN is pretty small. As to how dense in our probability space are the functional DNA sequences--by intuition it seems likely to be not dense at all. In logic, well-formed sentences form a small fraction of all sentences. In all programming languages I have been exposed to, the fraction of working programs in the space of all possible binary combinations of a certain set size is also small.
I would think that if the density of successful DNA combos in DNA space were high, we would be irradiating our sperm and ova rather than avoiding high radiation--or at least doing that to our livestock. I think the onus is on the evolutionary camp to show that things are different when it comes to DNA. And by the way, math majors at Harvard don't take "probability" in their freshman year. They take Math 55 or 22, which used to be the advanced calculus classes and are preparatory to other classes.
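The lottery arithmetic in the post above can be checked directly. A quick Python sketch, using only the figures the post itself gives (6 choices per draw, 10 draws, 3 even choices per draw):

```python
# Lottery model from the post: an ordered sequence of 10 draws,
# each a number from 1 to 6, drawn with replacement.
total = 6 ** 10            # number of possible 10-tuplets
p_one = 1 / total          # probability of any single 10-tuplet

# Event A: every drawn number is even (2, 4, or 6 -> 3 choices per draw).
all_even = 3 ** 10         # number of 10-tuplets in A
p_A = all_even * p_one     # P(A) = |A| * p(single tuplet)

print(total)               # 60466176 -- about 1 in 60 million per tuplet
print(all_even)            # 59049 -- "about 59,000"
print(round(1 / p_A))      # 1024 -- i.e. (1/2)^10, roughly 1 in a thousand
```

The last line confirms that counting the tuplets in A and multiplying by the per-tuplet probability gives the same answer as computing (1/2) raised to the tenth directly.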
774 posted on 04/15/2002 7:19:54 PM PDT by maro
[ Post Reply | Private Reply | To 746 | View Replies ]


To: maro
You poor confused sap.

One more gratuitous ad hominem and I'm going to write you off as too childish to converse with. Grow up.

Let's say your lottery involved picking a number from 1 to 6, in an (ordered) sequence of 10, with replacement. The odds of any single 10-tuplet are about 1 in 60 million. That is calculated as 6 raised to the tenth.

Now tell me something I don't know.

When you suggest that the odds are certain that someone will win (actually, not quite certain, because there could be no match, and a rollover of the jackpot),

I suggested nothing of the sort. My point had nothing to do with how often someone may or may not win, nor did I make any claim whatsoever about the frequency of winners. Please work on your reading comprehension.

you are implicitly defining the universe of successes as the universe of all 10-tuplets, the probability of which will be 60 million times 1 divided by 60 million, or one.

That's not the "universe of successes", that's the universe of all possible trials.

The probability of any one 10-tuplet is still 1 in 60 million. Let's say you wanted to figure the odds that all the winning numbers are even. That's 1/2 raised to the tenth, or 1 in a thousand in rough numbers. Both computations are figured as P1*P2*P3...PN. You can get from the first computation to the second by figuring the number of 10-tuplets in which there are only even numbers (3 to the tenth, or about 59,000) and multiplying by the probability of each 10-tuplet.

Bored now. You don't need to recite basic algebra to me, either.

Thus, it is seen that it is useful to separate the question of what the probability is of one 10-tuplet from the question of how many are in the set A for which one is determining P(A).

Yes, which is why I *POINTED THIS OUT TO YOU* earlier. Don't suddenly act like you're trying to teach it to me.

One practical reason why this approach makes sense is the difficulty of determining how to count all functional DNA computations.

"Computations"? Don't you mean configurations?

Because we are talking about bitflip mutations, it seems obvious that PN << 1, and we have assumed some significant minimum size of N,

No, *you* have "assumed some significant minimum size of N". The fact remains that many beneficial mutations are the result of a single nucleotide change.

so in our case P1*P2*P3...PN is pretty small.

Backpedaling so soon? A little while ago it was "next to impossible", now you've toned it down to "pretty small".

As to how dense in our probability space are the functional DNA sequences--by intuition it seems likely to be not dense at all.

"By intuition"? And here you had been trying to give the impression that your conclusion was mathematically sound. Now you admit you're guessing.

In logic, well-formed sentences form a small fraction of all sentences. In all programming languages I have been exposed to, the fraction of working programs in the space of all possible binary combinations of a certain set size is also small.

Both true, but neither gives a good guide as to how many DNA changes are possibly useful. Computer programs and sentences in a language are far more rigid in allowable syntax than DNA strands; DNA is far more "forgiving" of changes, for a variety of reasons.

Your analysis also overlooks the likelihood that workable DNA sequences are "near" each other in sequence. In other words, while a totally random DNA sequence would likely be useless, and so would any single mutation of it, the same can't be said for mutations of already "working" DNA, where changes would result in only a small adjustment to something that was already functional, in which case the odds are much higher that the results will be functional as well.

Using your own example, while it's true that a random sequence of words is unlikely to form a valid sentence, taking an already valid sentence and randomly substituting a new word for one of the existing words is much more likely to produce something that's readable. The quick brown fox jumped over the lazy accountant.
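The "nearby sequences" argument above can be illustrated with a toy model. This is purely a sketch under invented assumptions: "functional" here is defined as containing a fixed marker substring, a stand-in for a working sequence, and nothing about real DNA is claimed. It compares how often a uniformly random string is functional against how often a single-character mutant of an already-functional string stays functional:

```python
import random

random.seed(0)
ALPHABET = "ACGT"
LENGTH = 20
MARKER = "ACGTAC"  # hypothetical "required motif": a string is functional if it contains this

def functional(s):
    return MARKER in s

def random_seq():
    return "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def mutate_once(s):
    # Replace one position with a random letter (which may equal the original).
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# A known-functional sequence: the marker followed by random filler.
working = MARKER + random_seq()[len(MARKER):]

TRIALS = 10_000
frac_random = sum(functional(random_seq()) for _ in range(TRIALS)) / TRIALS
frac_mutant = sum(functional(mutate_once(working)) for _ in range(TRIALS)) / TRIALS

# The mutant fraction comes out far higher than the random one:
# most single changes land outside the marker and leave it intact.
print(frac_random, frac_mutant)
```

Under these assumptions the random fraction is well under 1%, while roughly three quarters of single mutants of the working sequence remain functional, which is the point being made with the sentence example: one step away from something that works is a very different place than a random point in the space.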

I would think that if the density of successful DNA combos in DNA space were high, we would be irradiating our sperm and ova rather than avoiding high radiation--or at least doing that to our livestock.

Nonsense. Natural mutations occur at close to the "optimum" rate for the introduction of new features in a species without producing so many mutations that each generation consists mostly of mutants. One mutation here or there is grist for natural selection -- thousands of them (the result of irradiation), on the other hand, are a recipe for disaster.

I think the onus is on the evolutionary camp to show that things are different when it comes to DNA.

They already have; try reading some of the literature.

But you yourself made a claim -- that the chances of several neutral mutations combining to perform a useful function was "next to impossible". Therefore, the onus is on *you* to support such a claim. Despite your recitations of basic math, you have yet to do so.

786 posted on 04/17/2002 12:42:26 AM PDT by Dan Day
[ Post Reply | Private Reply | To 774 | View Replies ]
