Posted on 01/31/2003 4:19:03 PM PST by jennyp
Does it, or does it merely speculate that such has occurred in animals? The duplication of genes is (according to the article) far from random. Maybe genes that get expressed the most or tend to mutate the most have the most copies. That is engineering and it does not demand a naturalistic explanation.
The article assumes that gene duplication will automatically produce an increase in complexity. There is little evidence that this is true. The reason is that there is precious little evidence that gene copies can ever become a gene with a function much different from the old gene. Because of that, all the duplications in the world won't get you from amoeba to man. It will just get you to a man with a lot of pseudogenes. We have lots of such genes, but so much "junk DNA" is conserved that some have been led to wonder if it does not have a function after all, such as steering proteins to the right part of the cell.
Yup, facts are such a problem for evolutionists, one more thing for them to need to try to explain away! They also have their own math. For example, one would think that if you take away something you would end up with less not more. However in evo math when natural selection takes away from the gene pool you get more genes than you had before!
The author is not speaking to the distribution of duplication events before selection. From our POV that is still random. We don't know initial conditions, don't know a mechanism, and can't predict future events. That's as random as things get.
What it comes down to is that nobody can distinguish between a random and an "intelligent design" event.
There are supposedly some 10 million years of mutations separating man from chimps. Chimps and men differ by some 5% of their DNA (the evolutionist 1% has been proven wrong by the same man who originally made the statement).

BZZZZZT! It's more like 1.4% where it counts - in the genes themselves:
The new estimate could be a little misleading, said Saitou Naruya, an evolutionary geneticist at the National Institute of Genetics in Mishima, Japan. "There is no consensus about how to count numbers or proportion of nucleotide insertions and deletions," he said.

Besides, IIRC his method of counting insertions & deletions would treat a 100 base pair insertion as 100 mutations. Nebullis, do you remember if this is true?

Indels are common in the non-functional sections of the genome, said Peter Oefner, a researcher at Stanford's Genome Technology Center in Palo Alto, California. Scientists estimate that up to 97 percent of DNA in the human genome has no known function. However, he added, indels are extremely rare in gene sequences.
"We haven't observed a single indel in a [gene] to date between human and chimp," said Oefner. Therefore, the revised estimate doesn't alter the amount of DNA that holds information about our species. Humans and chimps still differ by about one percent in gene sequences, he said.
Since chimps and men have about 3 billion DNA base pairs, that 5% represents some 150,000,000 favorable mutations in those ten million years. Since with all our science, all our billions in research on DNA for decades have not shown that a single favorable mutation has ever happened, I think that your statement is absolutely wrong scientifically - just as evolution is completely wrong scientifically.

Plugging in the correct numbers & assumptions:
Since chimps and men have about 90 million (not 3 billion) gene-encoding DNA base pairs, that 1.4% (not 5%) represents some 14,000 (not 150,000,000) neutral or favorable mutations in those ten million years.
Trying to use bogus numbers to defend their ideas makes us furious. :-)
She did, in #27.
Not quite. This 'study' is not a study at all. It is a reinterpretation of the work done by Roy Britten in comparing the sequences of human and chimp DNA. It is a reductionist view of the DNA differences between humans and chimps. It throws away most of the differences because supposedly they are unimportant because they are not in genes. Well, the rest of the DNA does matter, unlike what this hack has to say. Yes, 97% of DNA does not code for genes, but his statement that
Scientists estimate that up to 97 percent of DNA in the human genome has no known function.
is totally false and he is not a scientist if he made it. The last half dozen years of biological research have been concerned with finding out just exactly what that 97% of DNA which evolutionists call "junk" does. What this DNA does is control what the gene does, when and how much protein it is to make, and even what specific proteins, among the several that many genes can make, are to be made by the gene. In other words 'this junk' which this hack says scientists say 'has no known function' is what makes an organism function. In one single discovery, they have found what 10% of that DNA does - it acts as a zipper during cell division. So your article is total nonsense and National Geographic should be ashamed to publish such garbage.
So the 3.9% difference you wish to throw away is indeed important as is the 97% of DNA which your phony article claims is non-functional. What this shows is the quality of science being peddled by what were once respectable magazines in their attempt to save the totally discredited theory of evolution by discrediting the good reputation they had built up for decades.
In short, gore3000's numbers are better: it's not 14K gene changes between man and chimp in 10 million years, but rather 150K changes that have established themselves throughout the population.
(Gore3000 claimed 150 million mutations.)
The point of Britten's new study is that these previously missing mutations were simple insertions & deletions. So if you have a 1000 bp duplication, it's still just one mutation. I think the "extra 3.9%" figure refers to the increased difference in sequence, not to 2 1/2 times more mutations. I couldn't find the post I was thinking of from back in September (on another board) that explained the point directly, but here's an article from CalTech that hints at what I'm saying:
To describe exactly what Britten did, it is helpful to explain the old method as it was originally used to determine genetic similarities between two species. Called hybridization, the method involved collecting tiny snips of the DNA helix from the chromosomes of the two species to be studied, then breaking the ladder-like helixes apart into strands. Strands from one species would be radioactively labeled, and then the two strands recombined.

The helix at this point would contain one strand from each species, and from there it was a fairly straightforward matter to "melt" the strands to infer the number of good base pairs. The lower the melting temperature, the less compatibility between the two species because of the lower energy required to break the bonds.
In the case of chimps and humans, numerous studies through the years have shown that there is an incidence of 1.2 to 1.76 percent base substitutions. This means that these are areas along the helix where the bases (adenine, thymine, guanine, and cytosine) do not correspond and hence do not form a bond at that point. The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species. These are different from the aforementioned substitutions. Such differences, called "indels," are readily recognized by comparing sequences, if one looks beyond the missing regions for the next regions that do match.
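The substitution-vs-indel distinction described above can be sketched with a toy comparison (the sequences here are invented for illustration, not real DNA):

```python
# Toy illustration of substitutions vs. indels. The sequences are
# invented for the example, not real human or chimp DNA.
human = "ACGTACGTAC"
chimp = "ACGTCGTAC"  # the same sequence with one base (index 4) deleted

# A naive position-by-position comparison over-counts differences,
# because everything after the deletion is out of register:
naive_mismatches = sum(1 for a, b in zip(human, chimp) if a != b)
print(naive_mismatches)  # 5 mismatches out of 9 compared positions

# Recognizing the single-base deletion realigns the strands: the
# remaining bases then match perfectly (0 substitutions, 1 indel).
realigned = human[:4] + human[5:]
print(realigned == chimp)  # True
```

This is why a method that only looks at position-by-position matches can misstate the divergence: one deletion event shifts everything downstream out of alignment.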
To accomplish the more complete survey, Britten wrote a Fortran program that did custom comparisons of strands of human and chimp DNA available from GenBank. With nearly 780,000 suitable base pairs available to him, Britten was able to better infer where the mismatches would actually be seen if an extremely long strand could be studied. Thus, the computer technique allowed Britten to look at several long strands of DNA with 780,000 potential base pairings.
As expected, he found a base substitution rate of about 1.4 percent - well in keeping with earlier reported results - but also an incidence of 3.9 percent divergence attributable to the presence of indels. Thus, he came up with the revised figure of 5 percent. [emphasis mine]
That really sounds to me like what I was saying: The 5% represents the total difference in base pair sequences, but it took a number of mutations equal to 1.4% of the total length to produce those differences.
Uh-oh... I think my math was off, too. 3 billion total bps x 1.4% mutations = 42 million mutations. 90 million gene-encoding bps x 1.4% = 1.26 million mutations. That is a lot, though much less than gore3000's 150 million mutations.
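For what it's worth, the corrected arithmetic above checks out (using the figures as given in this thread):

```python
# Sanity check of the mutation-count arithmetic, with the thread's figures.
total_bp = 3_000_000_000   # ~3 billion base pairs in the whole genome
gene_bp = 90_000_000       # ~90 million gene-encoding base pairs
rate = 0.014               # 1.4% base-substitution divergence

print(total_bp * rate)  # 42,000,000 mutations genome-wide
print(gene_bp * rate)   # 1,260,000 in gene-encoding DNA
```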
However that is just the numbers. I think you are right on one important part. He seems to be counting all of those mutations as favorable, when you point out that many of them, most even, could be neutral. I'd like to know what gore3000's reasoning is on that. It seems to me that there is no reason all of those changes have to be favorable.
So how fast do mutations, neutral or favorable, work their way into populations today? That should give us a measuring stick to see if 150,000 mutations can work their way into the human genome in ten million years. Perhaps it would be better to say "work their way into the genome of an isolated group like Icelanders" since human populations were much smaller during most of our history.

That would be one mutation (neutral or favorable) working its way into the whole population every 67 years. I wish someone who knows about the rate now would speak up here, but that sounds like a really, really short time, don't you think? I mean, we don't breed like flies, it takes a while for mutations to be established, yes?
Let's see... 10 million years divided by 42 million mutations = 1 fixation every .238 years (3 months or so). But keep in mind that there are always many mutations at different locations in the genome working in parallel to get themselves fixed at the same time. How many? I have no idea, but if there were 1000 different alleles out there in the population at the same time, that would mean an average allele would have 238 years in which to fixate for the numbers to work out. If there are 100,000 alleles then the average allele has 23,800 years to achieve fixation for the numbers to work out. (Did I state that clearly?)
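The back-of-envelope timing above works out like this (the parallel allele counts of 1,000 and 100,000 are the guesses from the post, not measured values):

```python
# Fixation timing, using the thread's numbers.
years = 10_000_000
mutations = 42_000_000
per_fixation = years / mutations  # ~0.238 years per fixation on average
print(per_fixation)

for parallel in (1_000, 100_000):
    # With this many alleles drifting in parallel, each one has this
    # long, on average, to reach fixation for the numbers to work out:
    print(parallel, parallel * per_fixation)
```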
As for how long it takes for an individual allele to achieve fixation, I don't know the exact numbers, but they do fixate more quickly in small populations than in large ones. (If there are 10 in the population, 1 has a new neutral mutation, & every breeding pair produces 2 offspring, then the mutation could represent 0%, 10%, or 20% of the next generation's population. In the 3rd generation I think it would represent 0%, 10%, 20%, 30%, or 40%.)
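The small-population drift described above can be sketched with a simple Wright-Fisher-style simulation. The model choice and parameters here are my own for illustration, not something from the thread:

```python
import random

def neutral_drift(pop_size, start_copies=1):
    """Wright-Fisher-style neutral drift: each generation is a random
    sample (with replacement) of pop_size alleles from the previous
    one. Returns True if the new allele fixed, False if it was lost."""
    copies = start_copies
    while 0 < copies < pop_size:
        p = copies / pop_size
        copies = sum(1 for _ in range(pop_size) if random.random() < p)
    return copies == pop_size

# One new neutral mutation in a population of 10, repeated many times.
# Standard theory predicts it fixes with probability 1/pop_size = 10%,
# i.e. most new neutral mutations are simply lost by chance.
random.seed(0)
trials = 5000
fixed = sum(neutral_drift(10) for _ in range(trials))
print(fixed / trials)  # close to 0.1
```

Smaller populations both fix and lose alleles faster, which is the point about drift in small, isolated groups.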
Another thing to ponder is that through most of humanity's history, we were divided into many small, somewhat isolated tribes that had relatively little gene flow between them. I'll bet that genetic drift was rampant for a long time, even when the total human population number was relatively large. It wasn't until a couple thousand years ago that we truly became one big population with lots of biracial children. ("Lots" as measured over several generations.) So the total amount of genetic change was probably higher thousands of years ago than is happening today.
So even with 42 million mutations between humans & chimps, I don't think it presents any problem.
First of all, the article you cited is not a new study. All it does is rework what Britten did and make it sound more favorable towards evolutionary theory. The person, as I pointed out, is an ideological hack who continues to tell the EVOLUTIONIST LIE that 97% of the DNA is junk. The article ahban cited on the brain - about this very DNA which this EVOLUTIONIST LIAR says scientists consider nonsense - shows what he is.
In fact, what modern biology has found is that it is not the genes but what evolutionists call 'junk dna' that is the most important part of our genome; it is what makes us tick and makes the genes work properly:
Within a single bacterial cell, genes are reversibly induced and repressed by transcriptional control in order to adjust the cell's enzymatic machinery to its immediate nutritional and physical environment. Single-celled eukaryotes, such as yeasts, also possess many genes that are controlled in response to environmental variables (e.g., nutritional status, oxygen tension, and temperature). Even in the organs of higher animals --- for example, the mammalian liver --- some genes can respond reversibly to external stimuli such as noxious chemicals. ...
The most characteristic and exacting requirement of gene control in multicellular organisms is the execution of precise developmental decisions so that the right gene is activated in the right cell at the right time during development of the many different cell types that collectively form a multicellular organism. In most cases, once a developmental step has been taken by a cell, it is not reversed. Thus these decisions are fundamentally different from bacterial induction and repression. In executing their genetic programs, many differentiated cells (e.g., skin cells, red blood cells, lens cells of the eye, and antibody-producing cells) march down a pathway to final cell death, leaving no progeny behind. The fixed patterns of gene control leading to differentiation serve the needs of the whole organism and not the survival of an individual cell.
From: Regulation of transcription initiation
So what this hack says about 'junk dna' is total unscientific nonsense. Without the mechanisms set up by this 'junk DNA' the genes would not work at all, period. The organism would not function, period. What we see here is an evolutionist lying through his teeth trying to save a totally decrepit and false theory through lies.
You are misreading the article you cite:
The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species.
What the above means is simply that because of deletions in each species, the strands selected did not align properly, hence a simple 'alphabetic' comparison of the sequences gave a wrong number. What Britten did, and the reason he revised the figures, is that he properly aligned the strands before comparing them. In this way he came up with the more accurate 5% number.
Now as to neutral mutations, they just cannot spread throughout a species - according to studies made by evolutionists themselves when they were trying to solve the problem posed by genetics. The basis of population genetics is the Hardy-Weinberg principle, which says that in a stable population the genetic mix of the population will remain stable absent any genetic advantage of a particular genetic makeup. What this means is that a neutral mutation in a population of 1 million organisms will continue to be in only 1 millionth of the population if it is neutral. In fact it will likely disappear completely due to chance (if you play a game at odds of 2 to 1 with two dollars long enough you will lose both dollars), so neutral mutations cannot be responsible for these differences in any significant way.
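The Hardy-Weinberg claim above can be checked numerically (the 1-in-a-million frequency is the post's example figure):

```python
# Hardy-Weinberg expectation: under random mating with no selection,
# the allele frequency is unchanged in the next generation.
p = 1 / 1_000_000  # frequency of the new neutral allele (post's example)
q = 1 - p

# Genotype frequencies after one round of random mating:
AA, Aa, aa = p * p, 2 * p * q, q * q

# Allele frequency in the offspring generation:
p_next = AA + Aa / 2
print(abs(p_next - p) < 1e-12)  # True: the frequency did not change
```

Note this is the deterministic expectation only; the "gambler's ruin" loss the post mentions comes from random sampling in a finite population, which Hardy-Weinberg by itself does not model.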
Due to the above, yes, the differences are 5%. Yes, you need some 150 million mutations. Yes, mostly all of them have to be favorable to have survived.