Free Republic
News/Activism

Evolution through the Back Door
Various | 6/15/2003 | Alamo-Girl

Posted on 06/15/2003 10:36:08 AM PDT by Alamo-Girl



Navigation: use the links below to view more comments.
first previous 1-20 ... 601-620 621-640 641-660 661-675 next last
To: Alamo-Girl
Even though the Invariance theorem [11, Thm 2.1.1] guarantees that any two UTMs U and V will agree on the complexity of a string x up to a constant independent of x, for any descriptions x and y there will be two machines U and V disagreeing on whether x is more complex than y, or vice-versa.

This isn't a correct application for our universe. In our universe, two arbitrary machines will always agree on the complexity of a string. I touched on this in previous posts. The Invariance Theorem is misleading in practical application; our universe doesn't contain UTMs as far as we can tell, which has consequences.
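The invariance-constant point being debated above can be sketched loosely in Python. This is only a hedged analogy, not the real theorem: two off-the-shelf compressors (zlib and bz2) stand in for two description machines, and compressed length is a crude upper-bound proxy for Kolmogorov complexity.

```python
import bz2
import random
import zlib

def c_zlib(s: bytes) -> int:
    # length of the zlib description of s -- proxy for K(s) on "machine U"
    return len(zlib.compress(s, 9))

def c_bz2(s: bytes) -> int:
    # length of the bz2 description of s -- proxy for K(s) on "machine V"
    return len(bz2.compress(s, 9))

random.seed(0)
strings = [bytes(random.randrange(256) for _ in range(n)) for n in (100, 500, 1000)]
strings.append(b"ab" * 500)  # a highly regular string of length 1000

for s in strings:
    print(len(s), c_zlib(s), c_bz2(s))

# The two "machines" assign different absolute complexities to each string,
# but (in the spirit of the Invariance Theorem) the gap between them stays
# roughly bounded, while both cleanly separate regular from random input.
gaps = [abs(c_zlib(s) - c_bz2(s)) for s in strings]
print("max gap:", max(gaps))
```

Both proxies agree that the regular string `b"ab" * 500` is far simpler than 1000 random bytes, which is the practical sense in which real machines tend to agree on complexity.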

621 posted on 06/29/2003 11:00:09 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 595 | View Replies]

To: gore3000
The rules are algorithms and the 'shortness' and ability of an algorithm to produce a result is highly dependent on the abilities of the programmer, and yes even on the intuitional abilities of the person writing it. Seems there is something highly subjective in the formula.

This only seems true because of your practical experience with what we humans call computer science. As I mentioned in a previous post, there is a mathematical solution to this (Solomonoff Induction applied to AIC) which has never been solved as a matter of practical implementation. We know it is solvable (i.e. we could develop a software expression of it), but our view of computer science is colored by the fact that no one has. If you want to make a mint and become famous, that is a problem worth some effort.

Your objection only really stands because no one has yet solved a problem that we know to be solvable. In this sense, the objection is weak because it will obviously collapse the second some bright person publishes a decent universal implementation of the mathematics and it gets incorporated into the body of knowledge we call "computer science".
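The flavor of Solomonoff Induction referred to above can be suggested with a toy sketch. This is a hedged illustration only: real Solomonoff Induction mixes over all programs weighted by 2^-length and is uncomputable; the hypothesis names and description lengths below are made up for the example.

```python
# Toy flavor of Solomonoff induction: a Bayesian mixture over a small,
# hand-picked hypothesis class. The real construction sums over *all*
# programs, weighted 2**-length, and cannot be computed exactly.

hypotheses = {
    # name: (description length in bits, predictor: history -> P(next bit is 1))
    "all zeros":   (2, lambda h: 0.01),
    "all ones":    (2, lambda h: 0.99),
    "alternating": (3, lambda h: 0.5 if not h else (0.99 if h[-1] == 0 else 0.01)),
    "fair coin":   (1, lambda h: 0.5),
}

def predict(history):
    """Mixture prediction P(next = 1 | history); prior weight is 2**-description_length."""
    num = den = 0.0
    for length, p1 in hypotheses.values():
        w = 2.0 ** -length          # shorter description => larger prior weight
        prefix = []
        for bit in history:         # multiply in the likelihood of the observed data
            p = p1(prefix)
            w *= p if bit == 1 else 1.0 - p
            prefix.append(bit)
        num += w * p1(history)
        den += w
    return num / den

# After seeing 01010101, the short "alternating" hypothesis dominates the
# mixture, so the predicted probability that the next bit is 1 is very low.
print(predict([0, 1, 0, 1, 0, 1, 0, 1]))
```

The point of the sketch: prediction weight flows automatically to the shortest hypothesis consistent with the data, which is the Occam's-razor behavior the full theory formalizes.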

622 posted on 06/29/2003 11:09:19 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 601 | View Replies]

To: tortoise
Thank you for your reply!

Most of the types of complexity listed in that appendix appear to be limited or special cases of AIC, and therefore fall under its umbrella.

That’s the impression I got also!

There has been some mathematical effort in the last three years or so to unify all the concepts of "complexity" found in various places. I believe there are now quite a few proofs that strongly suggest AIC is the grand-daddy of all "complexity", and the more work that is done, the stronger this assertion is. I made this conjecture back in the late 90s (and I wasn't the first), but it seems that others have actually taken the effort to prove it.

Kudos to you on seeing it ahead of the pack!

IMHO, for what we are exploring in “biological autonomous self-organizing complexity” it is crucial to have agreement on the terms. Seems to me the classic bitstream interpretation of KC lacks the dimensionality (my term) to tell us anything useful. The dynamic system application would be more relevant.

But to express the complexity/entropy of the genetic code, it also sounds reasonable to me that the hierarchical nature of phrasing ought to be a factor. That is where I found the proposal beginning at page 17 of this PDF to be most interesting! I’d appreciate your reaction, if you’d care to share.

623 posted on 06/29/2003 11:14:23 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 619 | View Replies]

To: Alamo-Girl
The average binary code length of the genetic code exceeds the optimal average length of the Huffman code only by 2 to 5%, showing that the genetic code is close to optimal.

Another minor technical nit: Huffman coding is not an optimal measure of information representation in any universal sense. Being within 2-5% of "optimal" for Huffman coding could very well be 25% from the mathematical optimum. Huffman is only optimal among prefix codes relative to a statistical model in the Shannon information theory perspective -- and strictly speaking, arithmetic coding is more efficient in the same domain.
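The sense in which Huffman is "optimal" can be checked directly: an optimal prefix code's average length always lands between the Shannon entropy of the symbol frequencies and that entropy plus one bit. A minimal sketch (the sample text is arbitrary, chosen only to give a skewed frequency distribution):

```python
import heapq
import math
from collections import Counter

def huffman_lengths(freqs):
    """Code lengths of an optimal prefix (Huffman) code for the given symbol counts."""
    # heap entries: (total weight, tiebreak id, {symbol: depth})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # merging two subtrees pushes every leaf in them one level deeper
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (w1 + w2, tick, merged))
        tick += 1
    return heap[0][2]

text = "the quick brown fox jumps over the lazy dog " * 20
freqs = Counter(text)
n = len(text)
lengths = huffman_lengths(freqs)
avg_huff = sum(freqs[s] * lengths[s] for s in freqs) / n
entropy = -sum((c / n) * math.log2(c / n) for c in freqs.values())
print(f"entropy bound : {entropy:.3f} bits/symbol")
print(f"Huffman code  : {avg_huff:.3f} bits/symbol")
```

So "within 2-5% of Huffman" only means close to the best symbol-by-symbol prefix code for a given frequency model; a coder that exploits structure beyond symbol frequencies (or an algorithmic-information measure) can do better, which is the nit being raised.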

624 posted on 06/29/2003 11:18:28 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 614 | View Replies]

To: tortoise
Thanks for the technical nit on the "monkey at the computer!"

That metaphor strikes me as a cute but false attempt at a truism and no more. It would be meaningful to say that the manuscripts of Shakespeare, along with all other texts, would eventually occur somewhere in an infinite stream of random characters.

But again that tells me nothing useful in looking at complexity/entropy in biology - because time, environment and the benchmarks in the fossil record are finite.

625 posted on 06/29/2003 11:23:29 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 620 | View Replies]

To: tortoise
Thank you for your post on the Invariance Theorem! Except for arriving at some agreement on terms WRT the biological "machine" - the difference of opinion is not troubling to me.
626 posted on 06/29/2003 11:27:48 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 621 | View Replies]

To: Alamo-Girl
Seems to me the classic bitstream interpretation of KC lacks the dimensionality (my term) to tell us anything useful. The dynamic system application would be more relevant.

Definitely. But most of our understanding of KC has generally been limited to the classical "pattern KC" aspect, so most people don't know much about the mathematics of the dynamic "system" KC. It has only really become a properly supported area of mathematics as a result of people in artificial intelligence research taking an interest in it of late, which shone enough light on the area for other people to take an interest as well.

627 posted on 06/29/2003 11:27:52 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 623 | View Replies]

To: Doctor Stochastic
placemarker
628 posted on 06/29/2003 11:29:58 AM PDT by js1138
[ Post Reply | Private Reply | To 596 | View Replies]

To: tortoise
Thank you so much for the explanation of the Huffman coding's application to Shannon information theory!

Notably, you seem to discount the import of Shannon in the biological issues at hand - but many (if not most) of the articles I've found defer to Shannon.

This particular difference of opinion I find very relevant and would like to know more of your reasoning for not leaning to Shannon.

629 posted on 06/29/2003 11:31:19 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 624 | View Replies]

To: tortoise
Kudos to you and your A.I. fellows in addressing the K.C. issue of dynamic systems! Y'all are first "up to bat" on the issues - but I'm fairly confident you would agree that any hope of understanding biological complexity in the evolution sense requires a similar (if not the same) formulation.
630 posted on 06/29/2003 11:34:03 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 627 | View Replies]

To: Alamo-Girl
bump for later read
631 posted on 06/29/2003 11:48:13 AM PDT by Captain Beyond (The Hammer of the gods! (Just a cool line from a Led Zep song))
[ Post Reply | Private Reply | To 1 | View Replies]

To: Captain Beyond
Thanks for bumping by! I look forward to any comments you may have.
632 posted on 06/29/2003 12:09:51 PM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 631 | View Replies]

To: Alamo-Girl
You're missing the point of the rules I posted. The sequence is not random. The sequence does generate every possible sequence of bits. This has been known since: Champernowne, D. G., J. London Math. Soc. 8, 254-260, 1933.

Of course it generates Shakespeare with one typo, with two typos, three typos, etc.
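The claim that the sequence generates every possible bit string can be verified for any fixed length with a short, finite check. The reasoning: any k-bit pattern prefixed with "1" is the binary form of some integer below 2^(k+1), so it appears whole once the concatenation reaches that integer. A sketch (Python used purely for illustration):

```python
from itertools import product

# Binary Champernowne sequence: concatenate the binary representations of
# 1, 2, 3, ... Concatenating 1..511 is enough to embed every 9-bit number,
# and hence every 8-bit pattern (as the suffix of "1" + pattern).
champ = "".join(format(n, "b") for n in range(1, 2 ** 9))

patterns = ["".join(bits) for bits in product("01", repeat=8)]
missing = [p for p in patterns if p not in champ]
print(len(patterns), "patterns of length 8;", len(missing), "missing")
```

The same argument works for every length k, which is the finite-window content of Champernowne's 1933 result.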
633 posted on 06/29/2003 1:44:50 PM PDT by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)
[ Post Reply | Private Reply | To 616 | View Replies]

To: Alamo-Girl
It seems to me that one of the things you are going to bump up against is that the measure of complexity that applies to evolution (as yet, organismal behavior has not been mapped to any dynamical system) is going to take a form of a measure of information about its niche. And that measure of complexity will increase over time with the evolution of organisms. Natural selection has to be taken into account.
634 posted on 06/29/2003 1:51:20 PM PDT by Nebullis
[ Post Reply | Private Reply | To 630 | View Replies]

To: Alamo-Girl
This particular difference of opinion I find very relevant and would like to know more of your reasoning for not leaning to Shannon.

Shannon information theory has some theoretical deficiencies: it is possible to construct hypothetical situations where a pure Shannon analysis gives results that don't seem correct upon casual inspection. Shannon did much of his work around coding theory, which is really a narrow "static" case of what we now file under Information Theory. Kolmogorov took more of a computational and system-theoretic perspective, and attempted to resolve the apparent problems and inconsistencies that fell out of Shannon information theory.

Today we generally consider the Kolmogorov version and its derivatives (e.g. Solomonoff, Chaitin, and others) of information theory to be the correct one, and view Shannon's version in a manner that is similar to Newtonian mechanics -- a special case of the true model but still useful for many things because it is well-studied and much simpler to apply. Nonetheless, for many things you will get incorrect results if you use the "simple" model when you should be using the "correct" model. Most people who are not planning on being experts in information theory learn the Shannon model, though they are generally very casually familiar with Kolmogorov. In the same fashion, most people who learn physics learn Newtonian mechanics and don't really go into Einsteinian mechanics unless they plan on being specialists in the field. It is the old 80-20 rule; the simple models are sufficient for 80% of the cases and require 20% of the effort of actually learning the entire field.

I think there are a number of aspects of biological systems that make it clear that Shannon is inappropriate and that Kolmogorov must be used. First, there is the fact that biological systems are complex dynamic systems, which is Kolmogorov's domain in general. Second, biological systems, genomes, and similar deal with a lot of non-sequential pattern transforms, which Shannon only partially handles and badly when it does. These two facts alone make Shannon almost useless for a really good analysis. Kolmogorov offers many solutions to these issues that you won't have at your disposal if you only have a purely Shannon model to use.

In fact, one could almost say that biological systems of the type we are talking about are the pathological case scenario from the standpoint of trying to use Shannon information theory.
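The gap between the two models can be shown with a tiny experiment: a string that is maximally "complex" by a symbol-frequency Shannon measure yet algorithmically trivial. Compressed size serves here as a crude upper-bound proxy for Kolmogorov complexity; the strings are illustrative.

```python
import math
import random
import zlib
from collections import Counter

def order0_entropy(s: str) -> float:
    # Shannon entropy of the symbol frequencies, in bits per symbol
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def kc_proxy(s: str) -> int:
    # compressed size as a rough upper-bound proxy for Kolmogorov complexity
    return len(zlib.compress(s.encode(), 9))

random.seed(1)
structured = "01" * 2000                                        # trivially describable
shuffled = "".join(random.sample(structured, len(structured)))  # same 0/1 counts

# Identical Shannon (order-0) entropy: both print exactly 1.0 bit/symbol...
print(order0_entropy(structured), order0_entropy(shuffled))
# ...but wildly different algorithmic complexity.
print(kc_proxy(structured), kc_proxy(shuffled))
```

A frequency-based Shannon measure cannot see the difference between the two strings at all, while the algorithmic proxy separates them by an order of magnitude -- which is the sense in which structured biological sequences are a bad fit for a purely Shannon analysis.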

635 posted on 06/29/2003 2:40:44 PM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 629 | View Replies]

To: tortoise; Alamo-Girl
Your objection only really stands because no one has yet solved a problem that we know to be solvable.

Weak or not, my objection is valid at the present time. However, I think it is unsolvable. Yes, you can compare two algorithms to see which is more efficient, but you cannot determine whether there is a better algorithm unless you can write a better one yourself.

This leads me back to the basic problem with these 'rules' of Wolfram. The problem is not so much description as it is of creation. It should be noted that while Wolfram believes that such can be done - which is the real test of such a theory - he himself does not claim that he has achieved it.

636 posted on 06/29/2003 4:31:52 PM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 622 | View Replies]

To: Doctor Stochastic
Of course it generates Shakespeare with one typo, with two typos, three typos, etc.

Which in the case of biology is a very big problem since 'typos' would be deadly.

I also think that much of the discussion of rules runs into a problem when it comes to real-life implementation, which is why I disagree that just a few simple rules can be the source of everything we see. The rules seem to be somewhat like a plan for a house. Describing the plan is not the same as building the house.

Many things we see follow different rules. To transform the 0/1's of a computer into a graphic representation takes many different rules (and they differ according to the graphics card being used!). To transform a picture into a screen representation requires one set of rules, and another to put it on paper. The same can be said of life. The DNA code itself, while a primary building block of life, is not all there is to it. You have to assemble the DNA molecules and the rest of the parts of a living thing, and that will take even more rules.

637 posted on 06/29/2003 4:50:27 PM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 633 | View Replies]

To: gore3000
However, I think it is unsolvable.

Then you would be wrong.

It isn't a theoretical difficulty but an engineering difficulty, and engineering difficulties always fall eventually. There have already been toy back-of-the-cocktail-napkin implementation solutions described that just hack what we already know how to do. A really useful solution would have to take a different implementation approach though. "Unofficially" I could say that the very first universal solutions to the problem have already been invented and are being worked on in R&D labs, but I don't need to assert this to prove my point.

Just because a bi-plane can't go to the moon does not mean that traveling to the moon is impossible. It just means that you've limited yourself to looking at an impossible solution.

638 posted on 06/29/2003 4:57:54 PM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 636 | View Replies]

To: gore3000
Yes, you can compare two algorithms to see which is more efficient, but you cannot determine whether there is a better algorithm unless you can write a better one yourself.

This represents a fundamental misunderstanding of the topic I'm talking about, and many of the implied assumptions themselves are incorrect. I've already said that I'm not going to even attempt to explain the full scope of Solomonoff Induction required to make my response to your assertion here obvious, so you are on your own. All I will say is that your confusion stems from a very incomplete understanding of the subject matter, the full theoretical consequences of Solomonoff Induction in particular.

So in short, your assertion is based on multiple false assumptions. Explaining the precise problem would require a lengthy dissertation on Solomonoff Induction that is way beyond the scope of FreeRepublic, so I'm not even going to attempt it. Explaining Solomonoff Induction so that people who don't grok the prerequisite theoretical background have some semblance of understanding makes explaining Kolmogorov complexity, entropy, and all that stuff a walk in the park by comparison, and those topics are already very obviously pushing the boundary of things you can discuss in a forum like this.

This isn't a slight against the posters on this forum, but there is a limit to how far we can go theoretically when a lot of prerequisite theory is missing. Things like Kolmogorov complexity and entropy can be couched in terms that most people can figure out if explained enough times because most people are vaguely familiar with the underlying ideas. Like with relativistic physics or quantum mechanics, once you stray too far from most people's experience/intuition/"common sense" it becomes MUCH harder to explain things without a proper theoretical understanding because the theory starts violating the assumptions most people make.

That is the situation with Solomonoff Induction; the mathematical facts violate a lot of intuition and unless everyone fully understands the underlying mathematics I'd end up spending all my time arguing with people who don't want to believe the mathematics because it violates their intuition regardless of whether or not their intuition is actually incorrect. I don't have time for that on this particular subject area, and experience shows that this is what happens.

639 posted on 06/29/2003 5:31:45 PM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 636 | View Replies]

To: Doctor Stochastic
Thank you for your post!

Indeed, the sequence is not random, therefore I strongly disagree with this statement:

"The sequence does generate every possible sequence of bits."

If that were true, Chaitin's Omega would be there as well.

For Lurkers, here is Champernowne’s binary constant, along with its value expressed in base 10:

Champernowne’s Constant

Champernowne's constant 0.1234567891011... is the number obtained by concatenating the positive integers and interpreting them as decimal digits to the right of a decimal point. It is normal in base 10 (Champernowne 1933, Bailey and Crandall 2003). Mahler (1961) showed it to also be transcendental.

The "binary" Champernowne constant is obtained by concatenating the binary representations of the integers

C2=0.(1)(10)(11)(100)(101)(110)(111)…
=0.86224012586805…

You said:

Of course it generates Shakespeare with one typo, with two typos, three typos, etc.

Because more of the character representations of 8-bit ASCII values 0-255 are unreadable than not, I continue to assert that the Champernowne binary constant cannot be successfully decoded into a readable character representation of Shakespeare.

640 posted on 06/29/2003 8:22:41 PM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 633 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson