
Evolution through the Back Door
Various | 6/15/2003 | Alamo-Girl

Posted on 06/15/2003 10:36:08 AM PDT by Alamo-Girl



To: Doctor Stochastic; tortoise
...you cannot build a program that, say, does anything useful with just two machine code instructions...-me-

Not what he said. He said only two internal states. That is not the same thing at all.

I agreed with him that you could have just two instructions processing the data, such as yes/no. However, his post was in response to mine regarding Wolfram's claim that you could perhaps model life with a few lines of code or rules, which made his answer IMHO a bit non-responsive. My point in the response is that you need an instruction set in order to have a computer do things - you need a program, and that takes much more than a simple yes/no. In the classical Turing machine examples the instructions are the 'tape', and that is the meat of what you would need in order to construct any sort of model.

Let me just point out that while one can perhaps construct a useful program with just two instructions, this would essentially require the person writing the program to write an interpreter, which is a fairly big program in itself. We are now used to having a single line of code do quite a bit because under it there is an interpreter or compiler - itself a very large program - that turns the language in which we write into machine code. What this means, for example, is that Wolfram's 5-6 rules would need numerous rules under them to be implemented in a computer program.
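For Lurkers: you can watch this layering in any stock Python installation. The dis module prints the interpreter-level instructions hiding under one line of source (and each of those is in turn implemented by many machine instructions):

    import dis

    # One "simple" line of Python...
    def f(x):
        return 3 * x + 1

    # ...already expands into several interpreter-level instructions.
    dis.dis(f)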

As to your example of a one-instruction machine, it seems to me that you have two. You have an if/then in your example - if negative skip, else execute - which makes for two instructions: subtract and skip.

501 posted on 06/22/2003 4:15:04 AM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 488 | View Replies]

To: tortoise
I'm afraid that for rigorous discussion your opinion does not trump a huge swath of established mathematics. If you disavow mathematics, then I have nothing further to say because my argument assumes that mathematics is valid.

I do not disavow mathematics at all. I do not even disavow that the concept of Kolmogorov complexity can be useful in analyzing certain information. What I do disavow is the premise that the content of information is irrelevant and I think I showed quite clearly why it is relevant. I do not think that even Kolmogorov would say that the content of information is irrelevant as you implied.

What I am saying essentially is that you are exercising extreme reductionism, so extreme that it becomes absurd. I have no problem with what Wolfram and others are doing in this regard, I wish them luck. I think we will learn a lot from it. What I disagree with is the assumption that the answer to 'what is life, the universe and everything' is just a number.

502 posted on 06/22/2003 4:50:50 AM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 484 | View Replies]

To: Doctor Stochastic; tortoise; betty boop; gore3000
I’ve been following up on our conversation last night about the usefulness of Kolmogorov complexity in understanding biological autonomous self-organizing complexity when the single-instruction RAM computer you described is used to emulate a Universal Turing Machine – and what the alternatives might be.

For Lurkers: In the discussion last night, Doctor Stochastic said that a single instruction RAM computer (the following instruction is hard-wired) could emulate a Universal Turing Machine:

subtract memory location being pointed-at from the accumulator; if the accumulator is negative, skip the next location, else execute the next location
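For Lurkers: the textbook one-instruction computer is SUBLEQ ("subtract and branch if less than or equal to zero"), which is in the same spirit as - though not literally identical to - the accumulator machine Doctor Stochastic describes. A minimal sketch in Python:

    def run_subleq(mem, pc=0):
        # The machine's entire instruction set: mem[b] -= mem[a];
        # jump to c if the result is <= 0, otherwise fall through.
        while 0 <= pc <= len(mem) - 3:
            a, b, c = mem[pc:pc + 3]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
        return mem

    # Program: add cell A into cell B using a zero scratch cell Z.
    A, B, Z = 9, 10, 11
    prog = [A, Z, 3,     # Z -= mem[A]   (Z becomes -7)
            Z, B, 6,     # mem[B] -= Z   (B becomes 5 + 7)
            Z, Z, -1,    # zero Z and jump out of memory: halt
            7, 5, 0]     # data: A=7, B=5, Z=0
    run_subleq(prog)
    print(prog[B])       # -> 12

Even simple addition takes three of these instructions; everything else must be built up the same way.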

That statement unseated my confidence in Kolmogorov Complexity as a representation of biological autonomous self-organized complexity, and thus I suggested that perhaps we ought to look at entropy instead. I specifically suggested von Neumann entropy, which is applicable to a density matrix of computational states (as in the many states of quantum mechanics), vs. Shannon entropy, which is more applicable in classical physics.

For those following our discussion, here are some useful links and definitions:

The Kolmogorov complexity of a string of bits is the length of the smallest Turing machine program which produces the bit string as output.
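Kolmogorov complexity itself is uncomputable, but any compressor gives an upper bound on it, which makes the idea easy to demonstrate (a sketch using Python's zlib; the exact byte counts will vary):

    import random
    import zlib

    def kc_upper_bound(s):
        # Compressed length is an upper bound on Kolmogorov complexity.
        return len(zlib.compress(s.encode()))

    print(kc_upper_bound("ab" * 500))    # regular 1000-char string: small
    print(kc_upper_bound("".join(random.choice("ab")
                                 for _ in range(1000))))  # random: much larger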

A Turing Machine is an idealized computer consisting of an infinite tape and a read-write "head" which moves back and forth on the tape, reading and writing, according to a rule set that refers to i) what it sees on the tape and ii) an internal "memory" state.

A Universal Turing Machine is a Turing machine with a rule set which allows it to imitate any other Turing machine (if the rule set and the input of the machine to be emulated are presented on the tape).
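A Turing machine's rule set is small enough to write out directly. A minimal simulator sketch in Python (the rule table shown just flips bits until it hits a blank):

    def run_tm(rules, tape, state="start", head=0, blank="_", max_steps=10_000):
        # rules maps (state, symbol) -> (next_state, write_symbol, move)
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, cells[head], move = rules[(state, symbol)]
            head += move
        return "".join(cells[i] for i in sorted(cells))

    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt",  "_", 0),
    }
    print(run_tm(flip, "1011"))  # -> 0100_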

Entropy, for a closed system, is the quantitative measure of the amount of thermal energy not available to do work. It is the opposite of available energy and is often used to state the second law of thermodynamics: entropy in a closed system can never decrease.

Entropy is also used to mean disorganization or disorder, i.e. a measure of disorder or randomness in a closed system. (Boltzmann) This is the meaning of the term in information theory.

Both Feynman and Shannon (information theory) recognize that there is a difference - in information theory, arriving at the number of possible arrangements involves an arbitrary parceling, whereas in thermodynamics it is objective. Shannon: "If we change coordinates, the entropy will in general change."

Yockey agrees the different kinds of entropy do not correlate, but notes that Shannon entropy does not distinguish between viable DNA sequences and happenstance DNA sequences of the same length. Thus he uses Shannon entropy in his book, Information Theory and Molecular Biology which debunks the notion of abiogenesis.
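Yockey's point is easy to check numerically: Shannon entropy depends only on the symbol frequencies, so a sequence and any shuffle of it score identically (a sketch; the gene string below is made up for illustration):

    import random
    from collections import Counter
    from math import log2

    def shannon_entropy(seq):
        # Bits per symbol of the empirical symbol distribution.
        n = len(seq)
        return -sum(c / n * log2(c / n) for c in Counter(seq).values())

    gene = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"     # made-up sequence
    shuffled = "".join(random.sample(gene, len(gene)))  # same letters, random order
    print(shannon_entropy(gene), shannon_entropy(shuffled))  # equal (up to float rounding)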

In this panspermia discussion the conclusion drawn is that "things never organize themselves." That of course runs counter to the entire point of autonomous self-organizing complexity, which appears to be supported by the Hox and Pax genes, which are virtually identical across phyla (e.g., "eyeness"). The issue is how this autonomous self-organizing complexity could arise from non-life (abiogenesis).

In the same article, the author sees quantum entropy as a possible solution in the distant future.

But, IMHO, that bridge is already being crossed:

Entropic Nonextensivity: a possible measure of complexity

Physics of Computation and the Quantum

503 posted on 06/22/2003 8:28:35 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 502 | View Replies]

To: Doctor Stochastic
I have often wondered about babies... without language, what are they "thinking"?
504 posted on 06/22/2003 9:26:46 AM PDT by revolted
[ Post Reply | Private Reply | To 360 | View Replies]

To: gore3000
Are you going to tell me that that information is of equal value? Of course not.

You presume too much. Your determination of "value" is purely subjective, and has no objective merit. The same goes for my personal determination of "value". Information simply is.

505 posted on 06/22/2003 9:57:19 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 483 | View Replies]

To: gore3000
While essentially all computer programs work on a yes/no basis, you cannot build a program that, say, does anything useful with just two machine code instructions.

I would say you are wrong. Heck, I do research on an extremely advanced form of universal computer that has fewer than 5 instructions in total. But you don't have to take my word for it. Here is a link to a website that describes the entire instruction set of a couple of universal computer languages that prove my point. Google is your friend; you should use it more.

http://ling.ucsd.edu/~barker/Iota/
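Barker's Iota builds everything from a single combinator, iota x = x S K. The flavor carries over directly into Python, with curried functions standing in for combinators (a sketch, not Barker's code):

    # The S and K combinators as curried functions.
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x

    # Barker's lone combinator: iota x = x S K.
    iota = lambda c: c(S)(K)

    I = iota(iota)                  # (iota iota) behaves as the identity
    assert I(42) == 42
    K2 = iota(iota(iota(iota)))     # nesting iota recovers K itself
    assert K2("a")("b") == "a"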

Tell me how you pre-scribe all that from the first bacteria to those two animals with all the intervening species in between with two lines of code.

A computer that has only two instructions can still process an arbitrarily large amount of information. Just because the control function of the computer is extremely tiny does not mean that you can't build incredibly large and expressive systems. I think you misunderstood what having a small instruction set means. You can have a machine that knows only two instructions and STILL have millions of lines of code. Remember, there is no real difference between a program and data anyway.

Turing machines have a halting problem, living things do not

From this statement, I'm not sure that you actually grok the Halting Problem. I would also note, as a relevant point, that there exist novel Turing Machine (i.e. universal computer) models that effectively "cheat" the halting problem by tweaking some of the underlying assumptions of Turing machines. We've had such machines running on silicon for a few years now.

506 posted on 06/22/2003 10:19:54 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 486 | View Replies]

To: Doctor Stochastic
There is a theorem that says that all universal Turing machines compute any function with a difference of only an additive constant. What's important is that this constant doesn't depend on the length of the input.
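(For reference, the theorem in question - the invariance theorem - is usually written

    K_U(x) \le K_V(x) + c_{U,V}

where U and V are universal machines and the constant c_{U,V} depends on the pair of machines but not on the input x.)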

One of the things I've had to prove in the last year to the satisfaction of some others (I am lazy, lazy, lazy about actually doing proper rigorous publication of mathematics) is that this theorem allows a greater degree of freedom than actually exists.

For all finite TMs, the additive constant for a given function implementation is the same, making the intrinsic Kolmogorov complexities of the system identical. It is undefined for the UTM case, but then the intrinsic Kolmogorov complexity of a proper Universal Turing Machine is infinite, making the analysis a meaningless exercise. The assumption of infinite memory confused the usefulness of the theorem.

Therefore, all equivalent finite computational systems have the same Kolmogorov complexity even if the control functions vary. One of my Big Things has been rewriting computational information theory from the assumption of purely finite systems and treating the infinite case as an edge case rather than the assumed case. I've come across a number of small but important differences in the basic theorems of computational information theory by removing the assumptions of infinities, enough so that I've managed to drag some previously skeptical mathematicians into this exercise.

507 posted on 06/22/2003 10:48:43 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 497 | View Replies]

To: Alamo-Girl
I can't even see this as a single instruction in interpretive language code.

Universal computers have a "finite control function", that defines the nature and granularity of manipulations of the state. Depending on the type of machine, the execution of a single "instruction" (which is an abstract rather than literal construct) can have either very simple or very complex consequences to the state. The folding of a protein is an extremely complex behavior, but it can be triggered by the execution of a single "instruction" within that computational system. You are having problems with this because you are thinking of things like machine code, which is a very narrow instance of all possible control functions.

One of the mental hazards of computational theory is that most people view computers as being solely like the kinds of computers we build with silicon. The way we build computers in practice is a consequence of history and practical engineering concerns, and doesn't even scratch the surface of the entire space of things that constitute "universal computers". This is a case where limited experience leads to conceptual prejudices that aren't justified.

508 posted on 06/22/2003 10:58:20 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 490 | View Replies]

To: Alamo-Girl
In other words, before introducing Kolmogorov Complexity, the computer/machine we are speaking of must be normalized or else it is apples and oranges.

Not if we are talking about finite systems (and we are); for these the intrinsic KC of the entire system is always going to be the same. See #507 (I think) about this. As I state in my other post about this specific point, this is new and also unpublished -- it is actually one of my personal (and relatively minor) contributions to the field. One of the things high on my TODO list is to publish a comprehensive tome on finite computational theory, particularly with respect to how it differs from traditional computational theory.

509 posted on 06/22/2003 11:09:45 AM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 499 | View Replies]

To: tortoise
Are you going to tell me that that information is of equal value? Of course not. -me- You presume too much. Your determination of "value" is purely subjective, and has no objective merit.

I guess you would say it is objective to state that the output of a million monkeys at typewriters is of equal value to Shakespeare's?

But more to the scientific point, which is what we are discussing here: it is ludicrous to say that a scientific formula on which many advances have been built has the same value as somebody randomly typing on a keyboard.

You are clearly doing what evolutionists have done with regard to 'proving' materialism - excluding anything which is not materialistic from possible discussion. You are doing the same here: because your formula cannot account for the value in information, you say that value does not exist, that it is out of the question. It certainly is not, if we are talking about the real world.

510 posted on 06/22/2003 11:14:07 AM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 505 | View Replies]

To: tortoise; gore3000; Doctor Stochastic
Thank you for your post!

You are having problems with this because you are thinking of things like machine code, which is a very narrow instance of all possible control functions.

Not so fast! My protest was against such a hard-wired macro emulating a Universal Turing Machine and thus affecting the Kolmogorov Complexity of the result!

In a sense, you have addressed my complaint with the discussion of infinite vs. finite in the above post. Nevertheless, I still have an issue, based on your post to gore3000 at 506:

In the URL you provided, the Iota language, which reduces to two instructions, is expressed by this statement in R5RS Scheme:

    (let iota ()
      (if (eq? #\* (read-char)) ((iota)(iota))
          (lambda (c) ((c (lambda (x) (lambda (y) (lambda (z) ((x z)(y z))))))
                       (lambda (x) (lambda (y) x))))))

Scheme is an interpreted language, a dialect of Lisp. To me that indicates that if Iota is actualized, it is hard-wired to perform a macro of even greater order than this, much like the example Doctor Stochastic gave.

This is obviously relevant to information theory, but looking at biological autonomous self-organizing complexity - the instruction set for determining Kolmogorov complexity in abiogenesis surely isn't at a macro or super-macro level.

IOW, for Rocha's abiogenesis theory to work, RNA must toggle between states of autonomy for editing and non-autonomy for gathering, much like a computer. At each autonomous toggle-step, the opportunity arises to increase or decrease complexity. Presumably where complexity increases - including syntax, conditionals, memory and recursion - entropy increases as well, or stays the same, but never decreases.

It seems to me that entropy, and not Kolmogorov Complexity, is the best tool to evaluate what might have happened in abiogenesis theory.

511 posted on 06/22/2003 11:19:03 AM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 508 | View Replies]

To: tortoise
Heck, I do research on an extremely advanced form of universal computer that has fewer than 5 instructions in total.

I think you have been missing the point of my postings. I am quite aware that the number of 'instructions' in a computer is of little relevance to what it can do. A greater number of instructions just makes programs run faster and makes programming easier.

A computer that has only two instructions can still process an arbitrarily large amount of information. Just because the control function of the computer is extremely tiny does not mean that you can't build incredibly large and expressive systems. I think you misunderstood what having a small instruction set means.

No, I did not; in fact, you are practically repeating what I said in the post you are responding to (#486) while claiming I do not understand the question. I understand the problem quite well. The problem is not how many instructions there are in a processor but how much code is needed to accomplish a task. By the term 'code' I mean either code within the computer or within the program, since, as we both agree, they are interchangeable to a great extent.

My point all along has been that you need a lot more than just a few lines of code to accomplish the task of specifying the vast variety of life we see. You need a lot more than the 5-6 rules which Wolfram claims are all that is necessary to pre-scribe the complexity of living things (and which started our discussion).

Turing machines have a halting problem, living things do not. -me-

From this statement, I'm not sure that you actually grok the Halting Problem. I would also state as a relevant point that there exists novel Turing Machine (i.e. universal computer) models that effectively "cheat" the halting problem

I do understand it. I understand that a Turing machine does not understand what a human immediately perceives. Yes, you can 'cheat' to get around the halting problem, but then that is not what I was talking about, is it? It is no longer a classical Turing machine. You have been forced to intelligently design a way around the problem.

512 posted on 06/22/2003 11:46:01 AM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 506 | View Replies]

To: Alamo-Girl
This is obviously relevant to information theory, but looking at biological autonomous self-organizing complexity - the instruction set for determining Kolmogorov complexity in abiogenesis surely isn't at a macro or super-macro level.

Yes, it seems that the discussion is running in circles due to terms being used too loosely. The simplest instruction in a computer, whether the smallest or the most advanced, is yes/no - or rather 0/1. That is all that present computers understand. The 'instruction sets' of advanced computers are just a hardware implementation of what was previously done in software. Nowadays the difficult math of multiplication and division, and even higher math, is often put on a chip (in fact, some computers used to have entire programming languages on a chip).

However, that does not mean that it does not take a vast number of instructions - the 0/1 kind - to accomplish even a simple division. It is just not as visible as it used to be. In biological systems, instead of base 2 (binary), the simplest instructions are in base 4 (a base pair of DNA has 4 possible values). To implement rules in such a system one will need a lot of instructions in order to accomplish anything, certainly many more than the 5-6 rules of which Wolfram speaks.
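For Lurkers: since log2(4) = 2, each base carries two bits, so any DNA sequence maps onto a bit string twice its length. A sketch (the particular 2-bit assignment is arbitrary):

    BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

    def dna_to_bits(seq):
        # Each base becomes 2 bits; n bases -> 2n bits.
        return "".join(BITS[base] for base in seq)

    print(dna_to_bits("GATTACA"))  # -> 10001111000100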

513 posted on 06/22/2003 12:03:46 PM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 511 | View Replies]

To: gore3000
It is no longer a classical turing machine.

It is a Turing Machine; you can trivially implement any Turing Machine on it, and it can be implemented on any Turing Machine. Perhaps it isn't "classical" in the sense of being utterly conventional, but that doesn't have any merit anyway. Welcome to the bleeding edge of computer science.

The reason that there are so few fundamental behaviors is that while the instructions it needs are extremely simple, the effects are powerful and pervasive. It doesn't operate on a "register" or "datum", but on finite "patterns" in memory, sort of like a Turing Machine with a continuously variable number of tapes. (ObNote: all multi-tape TMs are mathematically equivalent to single-tape TMs.) Furthermore, all the operations are mathematically stochastic in the abstract but functionally deterministic (in fact, there isn't a single floating-point value or operation in the entire thing; it is purely integer); this confuses people, but it is a consequence of there being no functional concept of absolute values, only relative ones.

My point being that there are a lot of things that qualify as Turing Machines yet fall way outside most people's conception of what a Turing Machine is. Some of these, like the model I just mentioned, are much more potent and powerful TMs than the "standard" one everyone is familiar with, hence they can't reasonably be ignored.

514 posted on 06/22/2003 12:39:23 PM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 512 | View Replies]

To: Alamo-Girl
I'm going to adjust your definitions -- many of them are wrong in key aspects. They are just correct enough to give a layman the wrong impression of the actual consequences. :-)

I don't have time right at this moment, but maybe later today if I have the time.

515 posted on 06/22/2003 12:43:31 PM PDT by tortoise (Would you like to buy some rubber nipples?)
[ Post Reply | Private Reply | To 503 | View Replies]

To: gore3000
Thank you so much for your post!

Indeed, as you say "it seems that the discussion is running in circles due to terms being used way too loosely."

I'll be glad to see tortoise's improvement on the definitions I posted at 503, which were primarily derived from the linked articles. When we get the terms clarified, then hopefully we can make additional forward progress on this thread!

To implement rules in such a system one will need a lot of instructions in order to accomplish anything, certainly many more than the 5-6 rules of which Wolfram speaks.

I believe the high information content is why Yockey said this:

This self-catalytic molecule must have a very small information content. By that token, there must be very few of them [Section 2.4.1]. As they self-reproduce and evolve, the descendants get lost in the enormous number of possible sequences in which the specific messages of biology are buried. From the Shannon-McMillan theorem I have shown that a small protein, cytochrome c, is only 2 x 10^-44 of the possible sequences. It takes religious faith to believe that would happen. Of course the minimum information content of the simplest organism is much larger than the information content of cytochrome c.


516 posted on 06/22/2003 4:01:53 PM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 513 | View Replies]

To: tortoise
Thank you for your post! I look forward to the adjusted definitions. The ones I posted at 503 are derived primarily from linked sources, so the adjusted definitions will also be informative as to the valuation of the sources themselves.
517 posted on 06/22/2003 4:04:14 PM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 515 | View Replies]

To: Alamo-Girl
Of course the minimum information content of the simplest organism is much larger than the information content of cytochrome c.

There is an interesting item I ran into regarding Hox genes:

Another striking thing about the homeotic genes is that some of them have enormous introns. One of the several Antennapedia introns is about 57,000 base pairs long --over 10 times the combined length of all its exons! This extra length may be important in temporal regulation of the gene's expression (Kornfeld, et al. 1989). In Drosophila embryogenesis, events happen quickly. The gap, pair-rule, and homeotic selector genes in the embryo are each active for only about three hours. Since it is estimated that Drosophila genes are transcribed at a rate of 1000 nucleotides per minute at 25 degrees C (Ashburner, 1990), this huge Antennapedia intron adds nearly 1 hour to the time lag before the protein is expressed.
From: Multiple regulatory modes for homeodomain proteins
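Checking the arithmetic in the quote: 57,000 bases / 1,000 bases per minute = 57 minutes, i.e. just under an hour of transcription time contributed by that single intron.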

Some here may be too young to remember, but years ago there were no reliable timers on computers, so in order to, say, sound a buzz for an error, one would just write a loop that did nothing except waste time, to keep the sound on for the appropriate duration. It seems the organism is doing the same thing - running through a time-wasting process just to keep the timing correct. This shows a couple of things: that DNA is a program, and that it is a very sophisticated one. While many thought introns were just waste, we find that they have quite a few uses - one is to make it easier to make several proteins out of one gene, and in this case (and perhaps others) they act as a form of protein-production regulation. This timing is especially important in a gene involved in the development of the organism, since many functions have to occur in just the proper sequence for the organism to develop properly.
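For those who don't remember such loops, they looked roughly like this (a sketch; the iteration count had to be tuned to each machine's clock speed):

    def busy_wait(iterations):
        # Burn cycles doing nothing; the loop itself is the timer.
        for _ in range(iterations):
            pass

    busy_wait(10_000_000)  # actual delay depends entirely on CPU speed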

So from the above, and the many other idiosyncrasies one finds when examining the biology of living things, it is proper to say that the amount of code needed to pre-scribe a human being would be some 3 billion base pairs of DNA code - but only if it was written by a very intelligent designer. If the folks at Microsoft had been let loose on it, it would probably require a few gigabits of code.

518 posted on 06/22/2003 8:23:12 PM PDT by gore3000 (Intelligent people do not believe in evolution.)
[ Post Reply | Private Reply | To 516 | View Replies]

To: gore3000
Great catch! Fascinating information. Thank you!!!

It is stunning that timing itself is part of the information content. Stunning...

519 posted on 06/22/2003 8:45:10 PM PDT by Alamo-Girl
[ Post Reply | Private Reply | To 518 | View Replies]

To: betty boop; Alamo-Girl; Heartlander
"This fact assumes that the creator system is in a certain way transformed into the to-be-created subsystem, the ‘whole’ is transformed to the ‘part.’ This global-local transformation is a necessary condition of the generation of the new system. Therefore the Universe acted continuously as an agent with organisational ability, and is progressively transformed from the largest of its subsystems into the smallest ones."

Haven't devoted much mindshare to this yet, but intend to. Thanks for your diligence (or feel free to substitute a more appropriate word)! ;-)

520 posted on 06/23/2003 7:48:47 AM PDT by unspun ("Do everything in love.")
[ Post Reply | Private Reply | To 480 | View Replies]



