To: Ahban

In short, gore3000's numbers are better; it's not 14K gene changes between man and chimp in 10 million years, but rather 150K changes that have established themselves throughout the population.

(Gore3000 claimed 150 million mutations.)

The point of Britten's new study is that these previously missing mutations were simple insertions & deletions. So if you have a 1000 bp duplication, it's still just one mutation. I think the "extra 3.9%" figure refers to the increased difference in sequence, not to 2 1/2 times more mutations. I couldn't find the post I was thinking of from back in September (on another board) that explained the point directly, but here's an article from CalTech that hints at what I'm saying:

To describe exactly what Britten did, it is helpful to explain the old method as it was originally used to determine genetic similarities between two species. Called hybridization, the method involved collecting tiny snips of the DNA helix from the chromosomes of the two species to be studied, then breaking the ladder-like helixes apart into strands. Strands from one species would be radioactively labeled, and then the two strands recombined.

The helix at this point would contain one strand from each species, and from there it was a fairly straightforward matter to "melt" the strands to infer the number of good base pairs. The lower the melting temperature, the less compatibility between the two species because of the lower energy required to break the bonds.

In the case of chimps and humans, numerous studies through the years have shown that there is an incidence of 1.2 to 1.76 percent base substitutions. This means that these are areas along the helix where the bases (adenine, thymine, guanine, and cytosine) do not correspond and hence do not form a bond at that point. The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species. These are different from the aforementioned substitutions. Such differences, called "indels," are readily recognized by comparing sequences, if one looks beyond the missing regions for the next regions that do match.

To accomplish the more complete survey, Britten wrote a Fortran program that did custom comparisons of strands of human and chimp DNA available from GenBank. With nearly 780,000 suitable base pairs available to him, Britten was able to better infer where the mismatches would actually be seen if an extremely long strand could be studied. Thus, the computer technique allowed Britten to look at several long strands of DNA with 780,000 potential base pairings.

As expected, he found a base substitution rate of about 1.4 percent, well in keeping with earlier reported results, but also an incidence of 3.9 percent divergence attributable to the presence of indels. Thus, he came up with the revised figure of 5 percent. [emphasis mine]

That really sounds to me like what I was saying: The 5% represents the total difference in base pair sequences, but it took a number of mutations equal to 1.4% of the total length to produce those differences.
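Here's a toy illustration of the distinction (the numbers and the little tally function are made up for the example; this isn't Britten's data or method):

def tally(events):
    # events: list of (kind, length) pairs, where kind is "sub" or "indel"
    mutation_events = len(events)
    divergent_bp = sum(length for _, length in events)
    return mutation_events, divergent_bp

# Hypothetical stretch of DNA: 14 single-base substitutions plus one 1000-bp insertion
events = [("sub", 1)] * 14 + [("indel", 1000)]
n_events, n_bp = tally(events)
print(n_events, "mutation events,", n_bp, "bp of sequence difference")
# -> 15 mutation events, 1014 bp of sequence difference

One indel event can account for a big chunk of sequence divergence while still being a single mutation.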

Uh-oh... I think my math was off, too. 3 billion total bps x 1.4% mutations = 42 million mutations. 90 million gene-encoding bps x 1.4% = 1.26 million mutations. That is a lot, though much less than gore3000's 150 million mutations.
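Spelling that arithmetic out (just the figures from above, nothing new):

genome_bp = 3_000_000_000     # rough total genome size
coding_bp = 90_000_000        # rough gene-encoding portion
substitution_rate = 0.014     # Britten's ~1.4% base substitution figure

print(round(genome_bp * substitution_rate))   # 42,000,000 substitutions genome-wide
print(round(coding_bp * substitution_rate))   # 1,260,000 substitutions in gene-encoding DNA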

However, that is just the numbers. I think you are right on one important part: he seems to be counting all of those mutations as favorable, when you point out that many of them, most even, could be neutral. I'd like to know what gore3000's reasoning is on that. It seems to me that there is no reason all of those changes have to be favorable.

So how fast do mutations, neutral or favorable, work their way into populations today? That should give us a measuring stick to see if 150,000 mutations can work their way into the human genome in ten million years. Perhaps it would be better to say "work their way into the genome of an isolated group like Icelanders," since human populations were much smaller during most of our history.

That would be one mutation (neutral or favorable) working its way into the whole population every 67 years. I wish someone who knows the current rate would speak up here, but that sounds like a really, really, really short time, don't you think? I mean, we don't breed like flies; it takes a while for mutations to be established, yes?

Let's see... 10 million years divided by 42 million mutations = 1 fixation every .238 years (3 months or so). But keep in mind that there are always many mutations at different locations in the genome working in parallel to get themselves fixed at the same time. How many? I have no idea, but if there were 1,000 different alleles out there in the population at the same time, that would mean an average allele would have 238 years in which to fixate for the numbers to work out. If there are 100,000 alleles, then the average allele has 23,800 years to achieve fixation for the numbers to work out. (Did I state that clearly?)
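Here's the same budget as a quick calculation (the 1,000 and 100,000 figures are just my illustrative guesses above, not measured values):

years = 10_000_000
mutations = 42_000_000
per_fixation = years / mutations              # about 0.24 years (roughly 3 months) per fixation
for parallel in (1_000, 100_000):             # alleles drifting/selecting at the same time
    print(parallel, "alleles in parallel ->", round(per_fixation * parallel), "years per allele")
# -> roughly 238 years each at 1,000 in parallel, roughly 23,800 years each at 100,000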

As for how long it takes for an individual allele to achieve fixation, I don't know the exact numbers, but they do fixate more quickly in small populations than in large ones. (If there are 10 in the population, 1 has a new neutral mutation, & every breeding pair produces 2 offspring, then the mutation could represent 0%, 10%, or 20% of the next generation's population. In the 3rd generation I think it would represent 0%, 10%, 20%, 30%, or 40%.)
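If anyone wants to experiment with how population size affects this, here is a minimal neutral-drift simulation (a generic Wright-Fisher-style toy, not anyone's published model; the population sizes and trial counts are arbitrary, and each individual is treated as carrying a single gene copy):

import random

def neutral_drift(pop_size, start_copies=1):
    # Track one neutral allele until it is lost (0 copies) or fixed (pop_size copies).
    copies, generations = start_copies, 0
    while 0 < copies < pop_size:
        freq = copies / pop_size
        # Each member of the next generation inherits the allele with probability freq.
        copies = sum(1 for _ in range(pop_size) if random.random() < freq)
        generations += 1
    return generations, copies == pop_size

random.seed(1)
for n in (10, 100):
    runs = [neutral_drift(n) for _ in range(2000)]
    fixed = [gens for gens, won in runs if won]
    print(n, "individuals:", len(fixed), "fixations in 2000 trials;",
          "average generations to fix:", round(sum(fixed) / max(len(fixed), 1), 1))

Most new neutral alleles are simply lost, and the ones that do fix take far fewer generations in the small population than in the larger one.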

Another thing to ponder is that through most of humanity's history, we were divided into many small, somewhat isolated tribes with relatively little gene flow between them. I'll bet that genetic drift was rampant for a long time, even when the total human population was relatively large. It wasn't until a couple thousand years ago that we truly became one big population with lots of biracial children. ("Lots" as measured over several generations.) So the total amount of genetic change was probably higher thousands of years ago than it is today.

So even with 42 million mutations between humans & chimps, I don't think it presents any problem.

37 posted on 02/04/2003 4:21:41 PM PST by jennyp (http://crevo.bestmessageboard.com)


To: jennyp
Some lurkers may faint: a real, numbers-based dialog without acrimony. I need to read your post another time or two in order to sort it out. Offhand, it seems like we are getting better numbers, and converging on them.
38 posted on 02/04/2003 5:54:09 PM PST by Ahban

To: jennyp; Ahban
The point of Britten's new study is that these previously missing mutations were simple insertions & deletions. So if you have a 1000 bp duplication, it's still just one mutation. I think the "extra 3.9%" figure refers to the increased difference in sequence, not to 2 1/2 times more mutations.

First of all, the article you cited is not a new study. All it does is rework what Britten did and make it sound more favorable toward evolutionary theory. The person, as I pointed out, is an ideological hack who continues to tell the EVOLUTIONIST LIE that 97% of the DNA is junk. The article Ahban cited on the brain - about this very DNA, which this EVOLUTIONIST LIAR says scientists consider nonsense - shows what he is.

In fact, what modern biology has found is that it is not the genes but what evolutionists call 'junk DNA' that is the most important part of our genome; it is what makes us tick and makes the genes work properly:

Within a single bacterial cell, genes are reversibly induced and repressed by transcriptional control in order to adjust the cell’s enzymatic machinery to its immediate nutritional and physical environment. Single-celled eukaryotes, such as yeasts, also possess many genes that are controlled in response to environmental variables (e.g., nutritional status, oxygen tension, and temperature). Even in the organs of higher animals --- for example, the mammalian liver --- some genes can respond reversibly to external stimuli such as noxious chemicals. ...

The most characteristic and exacting requirement of gene control in multicellular organisms is the execution of precise developmental decisions so that the right gene is activated in the right cell at the right time during development of the many different cell types that collectively form a multicellular organism. In most cases, once a developmental step has been taken by a cell, it is not reversed. Thus these decisions are fundamentally different from bacterial induction and repression. In executing their genetic programs, many differentiated cells (e.g., skin cells, red blood cells, lens cells of the eye, and antibody-producing cells) march down a pathway to final cell death, leaving no progeny behind. The fixed patterns of gene control leading to differentiation serve the needs of the whole organism and not the survival of an individual cell.
From: Regulation of transcription initiation

So what this hack says about 'junk DNA' is total unscientific nonsense. Without the mechanisms set up by this 'junk DNA' the genes would not work at all, period. The organism would not function, period. What we see here is an evolutionist lying through his teeth, trying to save a totally decrepit and false theory through lies.

39 posted on 02/04/2003 7:10:17 PM PST by gore3000

To: jennyp
here's an article from CalTech that hints at what I'm saying:

You are misreading the article you cite:

The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species.

What the above means is simply that, because of insertion and deletion events in each species, the strands selected did not align properly, so a simple 'alphabetic' comparison of the sequences gave a wrong number. What Britten did, and the reason he revised the figures, is that he properly aligned the strands before comparing them. In this way he came up with the more accurate 5% number.
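A toy comparison shows why the alignment step matters (the sequences below are invented purely for illustration, and Python's standard-library matcher is only standing in for Britten's actual Fortran alignment):

import difflib

human = "ACGTACGTACGTACGT"
chimp = "ACGTAXXXCGTACGTACGT"   # the same sequence with a 3-bp insertion

# Naive position-by-position comparison: the insertion shifts every downstream
# base out of frame, so almost everything after it looks like a mismatch.
naive = sum(1 for a, b in zip(human, chimp) if a != b)

# Alignment-aware comparison: match up the shared blocks first, then count
# only the bases that are genuinely inserted or substituted.
matched = sum(block.size for block in
              difflib.SequenceMatcher(None, human, chimp).get_matching_blocks())
print(naive, "naive mismatches vs", len(chimp) - matched, "bp of real indel difference")
# -> 11 naive mismatches vs 3 bp of real indel difference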

Now as to neutral mutations, they just cannot spread throughout a species - according to studies made by evolutionists themselves when they were trying to solve the problem posed by genetics. The basis of population genetics is the Hardy-Weinberg principle, which says that in a stable population the genetic mix will remain stable absent any genetic advantage of a particular genetic makeup. What this means is that a neutral mutation in a population of 1 million organisms will continue to be in only 1 millionth of the population if it is neutral. In fact it will likely disappear completely due to chance (if you play a game at odds of 2 to 1 with two dollars long enough, you will lose both dollars), so neutral mutations cannot be responsible for these differences in any significant way.
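For reference, the Hardy-Weinberg expectation itself is simple enough to write out (a textbook sketch with an arbitrary starting frequency, not data from any of the studies mentioned):

def next_gen_freq(p):
    # Allele frequency after one round of random mating with no selection or drift.
    q = 1.0 - p
    # Hardy-Weinberg genotype frequencies: AA = p^2, Aa = 2pq, aa = q^2.
    # The new frequency of A is freq(AA) plus half of freq(Aa).
    return p * p + 0.5 * (2.0 * p * q)

p = 1e-6   # a brand-new allele in a population of about a million
for _ in range(100):
    p = next_gen_freq(p)
print(p)   # still about one in a million: the frequency does not change on its own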

Due to the above, yes, the differences are 5%. Yes, you need some 150 million mutations. Yes, nearly all of them have to be favorable to have survived.

40 posted on 02/04/2003 7:26:25 PM PST by gore3000
