
To: donh
It's not even as efficient as a 3-redundant Hamming code. It double-books several of the triplets for the exact same amino acid.

Your lack of awe at the genetic code may be more a function of your incomplete knowledge of its wonders than of any weakness in the code itself.

DNA does not need a 3-redundant Hamming code, and it would on average be more wasteful of space to build triple redundancy into a code that is so sublime it seldom needs such measures.
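A rough sketch in Python (my own illustration, using binary bits rather than nucleotides; the function names are mine) of why brute triple redundancy costs more space than a Hamming-style block code:

    # Compare storage overhead: Hamming(7,4) vs. repeating every bit three times.

    def hamming74_encode(d):
        """Encode 4 data bits (0/1 list) into a 7-bit Hamming(7,4) codeword."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def triple_encode(d):
        """Encode by repeating every bit three times (triple-modular redundancy)."""
        return [bit for bit in d for _ in range(3)]

    data = [1, 0, 1, 1]
    print(len(hamming74_encode(data)) / len(data))  # 1.75x the raw data
    print(len(triple_encode(data)) / len(data))     # 3.0x the raw data

Both schemes can correct a single flipped bit, but the repetition scheme pays nearly twice the storage price -- which is the sense in which triplicating everything would be wasteful.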

Don't take my word for it. Look at the article by Freeland and Hurst in the Journal of Molecular Evolution, vol. 47 (1998), pp. 238-248. They calculated the error-minimizing capacity of one million randomly generated codes and found that the actual genetic code fell outside the distribution. Further research used estimates of 10^18 possible codes possessing the same degree of redundancy as the universal genetic code and found that all of them fell inside the distribution. This was a follow-up by Freeland in Molecular Biology and Evolution, vol. 17 (2000), pp. 511-518.
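For anyone curious what that kind of test looks like, here is a simplified Python sketch in the spirit of the Freeland and Hurst comparison -- not their exact procedure: I substitute the familiar Kyte-Doolittle hydropathy scale for their polar-requirement measure and shuffle amino acids among the standard code's synonymous blocks, so the exact numbers will differ from theirs:

    # Score a genetic code by the mean squared change in an amino-acid property
    # across all single-nucleotide substitutions, then compare the standard code
    # against randomly shuffled alternatives with the same block structure.
    import random
    from itertools import product

    BASES = "TCAG"
    AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    STANDARD = {a + b + c: AA[i] for i, (a, b, c) in enumerate(product(BASES, repeat=3))}

    HYDROPATHY = {  # Kyte-Doolittle values, standing in for polar requirement
        'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
        'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
        'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
        'Y': -1.3, 'V': 4.2}

    def cost(code):
        """Mean squared property change over all single-base substitutions."""
        total, n = 0.0, 0
        for codon, aa in code.items():
            if aa == '*':
                continue
            for pos in range(3):
                for b in BASES:
                    if b == codon[pos]:
                        continue
                    mut = code[codon[:pos] + b + codon[pos + 1:]]
                    if mut != '*':
                        total += (HYDROPATHY[aa] - HYDROPATHY[mut]) ** 2
                        n += 1
        return total / n

    def shuffled_code():
        """Randomly reassign the 20 amino acids among the standard code's blocks."""
        aas = sorted(set(AA) - {'*'})
        swap = dict(zip(aas, random.sample(aas, len(aas))))
        return {c: (a if a == '*' else swap[a]) for c, a in STANDARD.items()}

    real = cost(STANDARD)
    beaten_by = sum(cost(shuffled_code()) < real for _ in range(1000))
    print(f"standard code cost: {real:.2f}; random codes that beat it: {beaten_by}/1000")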

The code we have is the best possible code given the amount of redundancy it has. And it has all it needs to produce an astounding array of life. I have serious doubts about your boast that you could do better. There is an intelligent designer, and His name isn't Don!!!

87 posted on 06/23/2002 9:59:08 AM PDT by Ahban


To: Ahban
The code we have is the best possible code given the amount of redundancy it has. And it has all it needs to produce an astounding array of life. I have serious doubts about your boast that you could do better. There is an intelligent designer, and His name isn't Don!!!

The Freeland and Hurst article deals with one aspect of redundant coding along a single dimension of concern. It is painfully obvious that there are elements of our basic DNA design that are far from optimal--I already named several:

1) There is so little orthogonal reuse of genome fragments that any present-day database packer would double to triple the data-per-volume if allowed to run for a few hours looking for like fragments it could optimize down to one fragment. Fragmenting and offset routing of triplets, which I referred to earlier, is far from the only way to do this. Furthermore, nature does, in fact, do this, but only in a very sparse and haphazard way. (A toy compression comparison is sketched after this list.)

2) As I said, there is simple and quite useless redundancy in the fundamental coding: quite a few triplets map to the same amino acid. If you really wanted jet-age DNA, you would have used the redundant triplets to map some amino acids we don't currently have any use for, and thereby opened up the protein space to more potentially useful codings. (The degeneracy count is tallied in a sketch after this list.)

3) There are actually two ways you could use the same genome fragment to generate different usable mRNA strings. You could re-enter the DNA string at count-of-three offsets, which is what we observe, and you could re-enter at less-than-count-of-three offsets, thereby redefining the start points of the triplet codons. We have examples of the former, which I was just talking about, but no examples of the latter that I am aware of. If we could recode this densely, I'd predict that a database packer would probably produce 5-to-1 compression or better. (The frame-offset idea is illustrated in the last sketch after this list.)
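A toy Python sketch of point 1) (my own made-up fragments, not a real genome): a general-purpose packer gains far more on a sequence built from reused fragments than it does on the same bases once the reuse is scrambled away:

    # Compress a fragment-reusing sequence vs. a shuffled one of the same length.
    import random
    import zlib

    random.seed(0)
    fragments = [''.join(random.choice("ACGT") for _ in range(300)) for _ in range(5)]
    repetitive = ''.join(random.choice(fragments) for _ in range(100))  # heavy fragment reuse
    scrambled = ''.join(random.sample(repetitive, len(repetitive)))     # same bases, reuse destroyed

    for name, seq in [("fragment reuse", repetitive), ("no reuse", scrambled)]:
        packed = len(zlib.compress(seq.encode(), 9))
        print(f"{name}: {len(seq)} bases -> {packed} bytes ({len(seq) / packed:.1f}:1)")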
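A sketch of point 2): counting degeneracy in the standard codon table shows how many of the 64 triplets are spent on synonyms instead of on additional amino acids:

    # Tally how many codons map to each amino acid in the standard genetic code.
    from collections import Counter
    from itertools import product

    BASES = "TCAG"
    AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    code = {''.join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

    counts = Counter(code.values())
    print(counts)              # Leu, Ser, and Arg each get 6 codons
    print(64 - len(counts))    # 43 triplets beyond one per symbol (20 amino acids + stop)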
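And a sketch of point 3): re-reading the same fragment at offsets of less than three redefines the codon boundaries, so the identical run of bases can carry up to three different peptides (the fragment here is arbitrary, chosen only for illustration):

    # Translate one DNA fragment in all three reading frames.
    from itertools import product

    BASES = "TCAG"
    AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    code = {''.join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

    def translate(dna, offset):
        """Read triplets starting at the given offset into the same string."""
        return ''.join(code[dna[i:i + 3]] for i in range(offset, len(dna) - 2, 3))

    fragment = "ATGGCTCGTAAAGGTCCT"
    for off in range(3):
        print(off, translate(fragment, off))   # three different peptides ('*' = stop)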

Insofar as information density is concerned, I could do better--and if you gave me unlimited funds and modern computers and universal polymer generators (machines which currently exist), I could do so today. Would my constructs survive out in the real world? I don't know, and neither do you.

91 posted on 06/23/2002 4:00:11 PM PDT by donh
