To: js1138; betty boop; Doctor Stochastic; tortoise; Physicist; PatrickHenry; cornelis; marron; ...
Thank you for your reply, js1138. Again, I would have to argue that your interpretation of complexity is transactional. There is nothing inherently more complex about a human blueprint than that of a newt. It is the interaction between the blueprint and the supportive infrastructure that appears as complexity. But the notion that tiny changes to the underlying blueprint can be read as profound differences in structure and complexity is an underlying assumption of Darwinian evolution.

I do not define complexity in terms of a "transaction". Nor am I valuing the complexities or interpreting them beyond their definitions as described. But, by all means, see for yourself:

Here are the two basic types of complexity:

NECSI: Complex Systems

Complexity is ...[the abstract notion of complexity has been captured in many different ways. Most, if not all of these, are related to each other and they fall into two classes of definitions]:

1) ...the (minimal) length of a description of the system.

2) ...the (minimal) amount of time it takes to create the system.

The length of a description is measured in units of information. The former definition is closely related to Shannon information theory and algorithmic complexity, and the latter is related to computational complexity.

And here are the types of complexity I mentioned, their definitions, and the categories in which they seem to fit, to me:

Least Description

NIST: Kolmogorov Complexity

Definition: The minimum number of bits into which a string can be compressed without losing information. This is defined with respect to a fixed, but universal decompression scheme, given by a universal Turing machine.
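Kolmogorov complexity itself is not computable in general, but any general-purpose compressor gives an upper bound on the description length. A minimal Python sketch of that idea (zlib stands in, very loosely, for the fixed universal decompression scheme, and the two strings are my own toy examples):

import random
import zlib

def description_length_bound(data: bytes) -> int:
    # Upper bound on the description length in bytes; the compressor
    # plays the role of the fixed universal decompression scheme.
    return len(zlib.compress(data, 9))

random.seed(0)
regular = b"ab" * 500               # 1000 bytes with an obvious pattern
irregular = random.randbytes(1000)  # 1000 bytes with no discernible pattern

print(description_length_bound(regular))    # a few dozen bytes: a short description exists
print(description_length_bound(irregular))  # roughly 1000 bytes: essentially incompressible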

Wikipedia: Cellular Automata (aka Self-Organizing Complexity)

A cellular automaton (plural: cellular automata) is a discrete model studied in computability theory and mathematics. It consists of an infinite, regular grid of cells, each in one of a finite number of states. The grid can be in any finite number of dimensions. Time is also discrete, and the state of a cell at time t is a function of the states of a finite number of cells, called its neighborhood, at time t-1. These neighbors are a selection of cells relative to the specified cell, and do not change (though the cell itself may be in its neighborhood, it is not usually considered a neighbor). Every cell has the same rule for updating, based on the values in this neighborhood. Each time the rules are applied to the whole grid, a new generation is produced.
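To make the definition concrete, here is a minimal Python sketch of a one-dimensional, two-state ("elementary") cellular automaton. The rule number, the finite grid, and the wraparound edges are my simplifications of the infinite grid in the definition above:

def step(cells, rule=110):
    # One update: each cell's next state depends on its left neighbor,
    # itself, and its right neighbor at the previous time step.
    # Wraparound edges stand in for the formally infinite grid.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)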

Adami: Physical Complexity

In this paper, we skirt the issue of structural and functional complexity by examining genomic complexity. It is tempting to believe that genomic complexity is mirrored in functional complexity and vice versa. Such an hypothesis, however, hinges upon both the aforementioned ambiguous definition of complexity and the obvious difficulty of matching genes with function. Several developments allow us to bring a new perspective to this old problem. On the one hand, genomic complexity can be defined in a consistent information-theoretic manner [the "physical" complexity (4)], which appears to encompass intuitive notions of complexity used in the analysis of genomic structure and organization (5). On the other hand, it has been shown that evolution can be observed in an artificial medium (6, 7), providing a unique glimpse at universal aspects of the evolutionary process in a computational world. In this system, the symbolic sequences subject to evolution are computer programs that have the ability to self-replicate via the execution of their own code. In this respect, they are computational analogs of catalytically active RNA sequences that serve as the templates of their own reproduction. In populations of such sequences that adapt to their world (inside of a computer's memory), noisy self-replication coupled with finite resources and an information-rich environment leads to a growth in sequence length as the digital organisms incorporate more and more information about their environment into their genome. Evolution in an information-poor landscape, on the contrary, leads to selection for replication only, and a shrinking genome size as in the experiments of Spiegelman and colleagues (8). These populations allow us to observe the growth of physical complexity explicitly, and also to distinguish distinct evolutionary pressures acting on the genome and analyze them in a mathematical framework.

If an organism's complexity is a reflection of the physical complexity of its genome (as we assume here), the latter is of prime importance in evolutionary theory. Physical complexity, roughly speaking, reflects the number of base pairs in a sequence that are functional. As is well known, equating genomic complexity with genome length in base pairs gives rise to a conundrum (known as the C-value paradox) because large variations in genomic complexity (in particular in eukaryotes) seem to bear little relation to the differences in organismic complexity (9). The C-value paradox is partly resolved by recognizing that not all of DNA is functional: that there is a neutral fraction that can vary from species to species. If we were able to monitor the non-neutral fraction, it is likely that a significant increase in this fraction could be observed throughout at least the early course of evolution. For the later period, in particular the later Phanerozoic Era, it is unlikely that the growth in complexity of genomes is due solely to innovations in which genes with novel functions arise de novo. Indeed, most of the enzyme activity classes in mammals, for example, are already present in prokaryotes (10). Rather, gene duplication events leading to repetitive DNA and subsequent diversification (11) as well as the evolution of gene regulation patterns appears to be a more likely scenario for this stage. Still, we believe that the Maxwell Demon mechanism described below is at work during all phases of evolution and provides the driving force toward ever increasing complexity in the natural world.
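As a rough illustration of the idea (my own toy sketch, not Adami's actual system), physical complexity can be estimated as genome length minus the per-site entropy measured across an adapted population, so that conserved, functional sites count toward the total and freely varying, neutral sites do not:

import math
from collections import Counter

def physical_complexity(population, alphabet_size=4):
    # Genome length minus the summed per-site entropy of an aligned population.
    # Conserved (functional) sites contribute about one unit each; freely
    # varying (neutral) sites contribute about zero.
    length = len(population[0])
    total_entropy = 0.0
    for i in range(length):
        counts = Counter(seq[i] for seq in population)
        n = sum(counts.values())
        total_entropy += -sum((c / n) * math.log(c / n, alphabet_size)
                              for c in counts.values())
    return length - total_entropy

# Toy alignment: the first five sites are conserved, the last five vary freely.
pop = ["ACGTA" + tail for tail in ("ACGTA", "TTTTT", "GCGCG", "AATCG")]
print(round(physical_complexity(pop), 2))  # 6.75: the five conserved sites count in full;
                                           # the tiny sample understates the free sites' entropy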

Least Time

NECSI: Functional Complexity

Given a system whose function we want to specify, for which the environmental (input) variables have a complexity of C(e), and the actions of the system have a complexity of C(a), then the complexity of specification of the function of the system is:

C(f) = C(a) · 2^C(e)

where complexity is defined as the logarithm (base 2) of the number of possibilities or, equivalently, the length of a description in bits. The proof follows from recognizing that a complete specification of the function is given by a table whose rows are the actions (C(a) bits) for each possible input, of which there are 2^C(e). Since no restriction has been assumed on the actions, all actions are possible and this is the minimal length description of the function. Note that this theorem applies to the complexity of description as defined by the observer, so that each of the quantities can be defined by the desires of the observer for descriptive accuracy. This theorem is known in the study of Boolean functions (binary functions of binary variables) but is not widely understood as a basic theorem in complex systems[15]. The implications of this theorem are widespread and significant to science and engineering.
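The counting argument can be checked directly for small Boolean cases: there are (2^C(a))^(2^C(e)) possible input-to-action tables, and the base-2 logarithm of that count is C(a) · 2^C(e). A small Python sketch (the parameter values are arbitrary):

from itertools import product

def count_functions(c_e: int, c_a: int) -> int:
    # Every function is a complete table: each of the 2**c_e possible inputs
    # is independently assigned one of the 2**c_a possible actions.
    inputs = list(product([0, 1], repeat=c_e))
    actions = list(product([0, 1], repeat=c_a))
    return len(actions) ** len(inputs)

for c_e, c_a in [(2, 1), (3, 2)]:
    n = count_functions(c_e, c_a)
    bits_to_specify = n.bit_length() - 1    # log2 of an exact power of two
    print(bits_to_specify, c_a * 2 ** c_e)  # the two numbers agree: C(f) = C(a) * 2^C(e)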

Wikipedia: Irreducible Complexity

The term "irreducible complexity" is defined by Behe as:

"a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning" (Michael Behe, Molecular Machines: Experimental Support for the Design Inference)

Believers in the intelligent design theory use this term to refer to biological systems and organs that could not have come about by a series of small changes. For such mechanisms or organs, anything less than their complete form would not work at all, or would in fact be a detriment to the organism, and would therefore never survive the process of natural selection. Proponents of intelligent design argue that while some complex systems and organs can be explained by evolution, organs and biological features which are irreducibly complex cannot be explained by current models, and that an intelligent designer must thus have created or guided life.

Specified Complexity

In his recent book The Fifth Miracle, Paul Davies suggests that any laws capable of explaining the origin of life must be radically different from scientific laws known to date. The problem, as he sees it, with currently known scientific laws, like the laws of chemistry and physics, is that they are not up to explaining the key feature of life that needs to be explained. That feature is specified complexity. Life is both complex and specified. The basic intuition here is straightforward. A single letter of the alphabet is specified without being complex (i.e., it conforms to an independently given pattern but is simple). A long sequence of random letters is complex without being specified (i.e., it requires a complicated instruction-set to characterize but conforms to no independently given pattern). A Shakespearean sonnet is both complex and specified...

How does the scientific community explain specified complexity? Usually via an evolutionary algorithm. By an evolutionary algorithm I mean any algorithm that generates contingency via some chance process and then sifts the so-generated contingency via some law-like process. The Darwinian mutation-selection mechanism, neural nets, and genetic algorithms all fall within this broad definition of evolutionary algorithms. Now the problem with invoking evolutionary algorithms to explain specified complexity at the origin of life is the absence of any identifiable evolutionary algorithm that might account for it. Once life has started and self-replication has begun, the Darwinian mechanism is usually invoked to explain the specified complexity of living things.

But what is the relevant evolutionary algorithm that drives chemical evolution? No convincing answer has been given to date. To be sure, one can hope that an evolutionary algorithm that generates specified complexity at the origin of life exists and remains to be discovered. Manfred Eigen, for instance, writes, "Our task is to find an algorithm, a natural law that leads to the origin of information," where by "information" I understand him to mean specified complexity. But if some evolutionary algorithm can be found to account for the origin of life, it would not be a radically new law in Davies's sense. Rather, it would be a special case of a known process.
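To make the quoted definition concrete, here is a minimal mutation-plus-selection loop in Python: a chance process (random bit flips) generates contingency, and a law-like process (ranking by a fitness function) sifts it. The bit-counting fitness function and all the parameter values are arbitrary toy choices, not a claim about chemistry:

import random

random.seed(1)

def mutate(genome, rate=0.02):
    # Chance process: flip each bit independently with a small probability.
    return [b ^ (random.random() < rate) for b in genome]

def fitness(genome):
    # Law-like sieve: here, simply the number of 1 bits.
    return sum(genome)

population = [[0] * 50 for _ in range(20)]
for generation in range(200):
    offspring = [mutate(random.choice(population)) for _ in range(60)]
    population = sorted(offspring, key=fitness, reverse=True)[:20]

print(fitness(population[0]))  # far above the all-zero start, approaching the maximum of 50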

Principia Cybernetica: Metasystem Transition (a kind of punctuated equilibrium)

Consider a system S of any kind. Suppose that there is a way to make some number of copies from it, possibly with variations. Suppose that these systems are united into a new system S' which has the systems of the S type as its subsystems, and includes also an additional mechanism which controls the behavior and production of the S-subsystems. Then we call S' a metasystem with respect to S, and the creation of S' a metasystem transition. As a result of consecutive metasystem transitions a multilevel structure of control arises, which allows complicated forms of behavior.
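A bare-bones Python sketch of the structure being described: copies of a system S are united under an added control mechanism to form S'. The class names and behavior are mine, purely for illustration:

import copy

class System:
    # A base-level system S: it simply performs some behavior when asked.
    def __init__(self, name):
        self.name = name
    def act(self):
        return f"{self.name} acting"

class MetaSystem:
    # S': several copies of S plus an added mechanism that controls
    # the behavior (and, in general, the production) of the S-subsystems.
    def __init__(self, prototype, n_copies):
        self.subsystems = [copy.deepcopy(prototype) for _ in range(n_copies)]
        for i, s in enumerate(self.subsystems):
            s.name = f"{prototype.name}-{i}"
    def control(self):
        # The control layer coordinates its S-subsystems.
        return [s.act() for s in self.subsystems]

s = System("S")
s_prime = MetaSystem(s, 3)  # the metasystem transition: S -> S'
print(s_prime.control())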


875 posted on 01/18/2005 9:31:44 PM PST by Alamo-Girl


To: Alamo-Girl
But what is the relevant evolutionary algorithm that drives chemical evolution? No convincing answer has been given to date.

I thought this was a given. I believe what you are saying is that selection doesn't operate unless there is replication. That is also an assumption shared by most biologists.

But there are other consequences of pre-biotic chemistry. No predators, no consumers.

I have no trouble agreeing that we don't know how or where first life arose, but it seems clear that once life exists, chemical evolution takes a back seat.

889 posted on 01/19/2005 7:25:23 AM PST by js1138 (D*mn, I Missed!)

To: Alamo-Girl

I do believe it is up to biochemists to demonstrate a plausible route to chemical evolution. The lack of such a route is why there is no theory of abiogenesis.

I don't believe we will ever know the exact path to first life, even if we can demonstrate a plausible natural path.

There are several reasons why most biologists believe in abiogenesis. One is historical and cultural. It was thought for years that organic compounds could not be synthesized, but eventually they were. It was thought for years that complex organics like amino acids could not arise undirected, but they did. It is reasonable to attempt further steps along this line.

Another reason is that science has no choice. Science isn't in the business of proving things can't be reduced to natural causes. Just the reverse. Lack of success proves nothing.


890 posted on 01/19/2005 8:24:04 AM PST by js1138
