To: donh
I confess that I do not know enough about your point number three to respond.

As far as point number one goes, it is my understanding that a LOT of reuse of code occurs. This is why the Human Genome Project found that human beings have only about a third as many genes as expected. There were more proteins made than genes to make them! The reason is that code is reused. This is how donh, intelligent designer, said it should be done, and it is. Funny how things are really done the way an intelligent designer says they should be, eh?
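To put rough numbers on that reuse (a toy sketch with made-up figures, not actual genome data), the arithmetic looks something like this:

```python
# Toy illustration (made-up numbers): reuse via alternative splicing lets a
# genome specify far more proteins than it has genes.

genes = 22_000                  # roughly the surprisingly low HGP-era estimate
avg_isoforms_per_gene = 4       # hypothetical average number of splice variants

proteins = genes * avg_isoforms_per_gene
print(f"{genes:,} genes -> ~{proteins:,} distinct proteins through reuse")
```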

As far as point two goes, the articles I cited earlier show that the redundancy is a feature of the error-minimization properties of the code. The substitution errors most likely to occur are the ones where different codons code for the same amino acid!!!! I think God has found a way to minimize error that donh did not think of first. He did not put triple-redundancy tags on each codon, but He does have it so that the errors most often made are not really errors.
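For a concrete feel of that redundancy, here is a small slice of the standard codon table (these particular codon assignments are real; the script is just a toy check that any third-position substitution in these families is silent):

```python
# A few fourfold-degenerate families from the standard genetic code: every
# third-base substitution leaves the amino acid unchanged (a "silent" error).

codon_to_aa = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "GUU": "Val", "GUC": "Val", "GUA": "Val", "GUG": "Val",
}

def silent_fraction_third_base(codon):
    """Fraction of third-base substitutions that do not change the amino acid."""
    aa = codon_to_aa[codon]
    mutants = [codon[:2] + b for b in "ACGU" if b != codon[2]]
    return sum(codon_to_aa.get(m) == aa for m in mutants) / len(mutants)

for c in ("GGU", "GCA", "GUG"):
    print(c, codon_to_aa[c], silent_fraction_third_base(c))   # 1.0 for each
```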

I have been amazed by what I know about the code often enough that it is reasonable to assume I have not been amazed for the last time. It is a safe bet that there are other discoveries out there that will address your #3.

I just cannot get over how blithely you assume you could write better code than DNA, where the instructions for every part of your body are in every cell. How can you compress that much info into such a space?
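For a rough sense of the scale involved (a back-of-the-envelope calculation with approximate figures only):

```python
# Back-of-the-envelope: raw information capacity of one copy of the human
# genome, assuming ~3.2 billion base pairs at 2 bits per base, no compression.

base_pairs = 3.2e9
raw_bits = base_pairs * 2
megabytes = raw_bits / 8 / 1e6
print(f"~{megabytes:.0f} MB of raw sequence packed into every cell")   # ~800 MB
```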

100 posted on 06/23/2002 4:27:05 PM PDT by Ahban


To: Ahban
it is my understanding that a LOT of reuse of code occurs.

Sure, but this is not the same thing as the packing density question. There are several senses in which genomes and genome fragments are re-used:

1) mRNA are sometimes cut and respliced within the endoplasmic reticulum before final delivery.

2) Our own reproductive machinery is built from a combination of RNA and protein strings. This had to involve substantial post-generation mRNA resplicing.

3) Our immune system takes pre-canned splices of mRNA from three different sets in the DNA, cuts and fits them to match up with invading presences in the body, then stores the resulting multiple splice back into the DNA. That means we have billions of potential mRNA in our arsenal, starting from only three handfuls of initial mRNA (a rough combinatorial sketch follows below). Now that's efficient packing in spades!
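Here is the combinatorial sketch promised above (segment counts are made up for illustration; the point is how fast the products multiply):

```python
# Made-up segment counts: mixing one piece from each of three pre-stored sets,
# plus some junctional variation, multiplies into an enormous arsenal.

set_a, set_b, set_c = 50, 25, 6        # hypothetical sizes of the three sets
junctional_variants = 100              # hypothetical extra variation at the joints

per_chain = set_a * set_b * set_c * junctional_variants
paired = per_chain ** 2                # pairing two such chains multiplies again

print(f"{per_chain:,} variants per chain, ~{paired:,} when two chains pair up")
```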

All this is fascinating, of course, but it occurs AFTER the unpacking of the DNA, so it does not address the question at hand.

102 posted on 06/23/2002 4:36:52 PM PDT by donh

To: Ahban
but He does have it so that the errors most often made are not really errors.

Actually, there are some heavy assumptions in this error-correcting argument that don't seem terribly sound to me. Are you aware of the fact that a single DNA codon in a double helix is almost infallibly error-correcting all by itself? When the transcriptase read head goes over a codon pair, it checks the two molecules and replaces a broken one if it has degenerated.
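The idea, as a toy sketch (a hypothetical representation of the double-strand redundancy, not real enzymology): each base is stored twice, once on each strand, so a damaged position on one strand can be rebuilt from its partner.

```python
# Toy model of strand-to-strand repair: positions damaged on one strand
# (marked '?') are restored from the complementary base on the other strand.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def repair(strand, partner):
    """Rebuild damaged positions in `strand` using the complementary strand."""
    return "".join(
        base if base != "?" else COMPLEMENT[partner_base]
        for base, partner_base in zip(strand, partner)
    )

damaged = "AT?GC?A"
partner = "TACCGAT"               # intact complement of the original ATGGCTA
print(repair(damaged, partner))   # ATGGCTA
```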

The discovery of this fact has cast doubt on the notion that background radiation is the only, or even the fundamental, cause of mutation--which may go a long way toward explaining why the mutational-distance clocks are so far off. The proposal is that increases in mutational frequency are self-induced as a reaction to external stress, much like the way the immune system works, only writ large.

You would have to think of this as primary error correction--rather like using the 8th bit for parity detection in internet traffic. Or, more exactly, like encoding a byte into 11 or 12 bits so that it is one or two bits correcting and three or four bits detecting. With the parity bit working, every other error-detection and correction scheme will be of such vanishingly small utility that it's largely just window dressing.
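The parity-bit half of that analogy is easy to make concrete (a minimal sketch, nothing DNA-specific about it):

```python
# 7 data bits plus one parity bit chosen so the count of 1s is even.
# Any single flipped bit is detectable (though not correctable, and an even
# number of flips cancels out).

def add_parity(data_bits):                # data_bits: seven 0/1 values
    return data_bits + [sum(data_bits) % 2]

def looks_clean(byte_bits):
    return sum(byte_bits) % 2 == 0        # True if parity still checks out

word = add_parity([1, 0, 1, 1, 0, 0, 1])
print(looks_clean(word))                  # True  -- arrived intact

word[3] ^= 1                              # one bit flipped in transit
print(looks_clean(word))                  # False -- error detected
```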

105 posted on 06/23/2002 4:48:56 PM PDT by donh

To: Ahban
I just cannot get over how blithely you assume you could write better code than DNA, where the instructions for every part of your body are in every cell. How can you compress that much info into such a space?

I make no such claim, nor anything remotely like it. I merely claim that I can pack the data better than it is currently packed, which is, your contention and cite notwithstanding, obviously true from even the most casual inspection of matters as they stand.

113 posted on 06/24/2002 12:26:21 PM PDT by donh

To: Ahban
!!!! I think God has found a way to minimize error that donh did not think of first. He did not put triple-redundancy tags on each codon, but He does have it so that the errors most often made are not really errors.

I don't get this from your cite. I think this has the earmarks of a random guess. There are quite a few measures of redundancy and correction still left untouched, either by the authors you cite or by my previous litany. For example:

It has been observed that the most centrally important genes, the ones whose presence in a creature we take to delineate its domain or family or kingdom or phylum (the major branches of the Tree of Life), are all coded for multiple times in the genome. In general, we've observed that the more important the gene, the more often it's duplicated.

Your authors do not touch on this, but it is a redundant encoding not accounted for by their calculations. How you can therefore claim that some sort of calculable optimal balance between encoding density and error correction has occurred is quite beyond my ken.
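Just to illustrate what that duplication buys, arithmetically (a made-up single-copy failure rate, nothing more):

```python
# If a critical gene exists in k independent copies, the function is lost only
# when every copy fails, so the loss probability falls geometrically with k.

p_loss_one_copy = 0.01        # hypothetical chance a single copy is knocked out

for copies in (1, 2, 3, 4):
    print(copies, p_loss_one_copy ** copies)   # ~1e-2, 1e-4, 1e-6, 1e-8
```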

115 posted on 06/24/2002 12:48:55 PM PDT by donh
