To: Karl_Lembke
1. An electrically charged object can be seen as generating a signal. Electric charges generate a flood of virtual photons, which transmit forces between those objects and other charged objects. These forces tend to repel or attract the other objects.

In a way, these photons can be seen as carrying a message, saying "come closer" or "move farther away". Is this a valid signal? Is it a coded signal? Please explain why or why not. (I'm trying to determine how this notion of information is applied, and to see if you -- or anyone, for that matter -- really understands it.)

I've been thinking about your questions for a couple of days, believe it or not. Rather than re-inventing the wheel, allow me to refer you to what William Dembski has already written about some of the issues you have raised here in a very detailed way, including what constitutes information, how it is measured (21 paragraphs), and how it relates to the study of evolutionary biology (25 paragraphs). He says that information in a very general sense can be defined as the actualization of one possibility to the exclusion of others. Information can be measured in terms of its complexity: smaller probabilities signify more information, not less. A higher level of complexity would be the actualization of circumscribed possibilities corresponding to patterns. A still higher level of information would be when the patterns are specified, that is, independently given in advance (i.e., not simply read off after the fact).

I would say (leaving aside the existence and nature of the photons themselves for the moment) that they constitute an elemental level of information between a non-intelligent sender and receiver, because they actualize one possibility to the exclusion of others. I would not think that this level of information is coded, because "coded" to me implies the existence of a convention governing the exchange of information between sender and receiver.
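To make that measure concrete, here is a minimal Python sketch, assuming Dembski's complexity measure reduces to the standard negative base-2 logarithm of the probability (smaller probability, more bits):

    import math

    def info_bits(p):
        """Bits of information in actualizing a possibility of probability p."""
        return -math.log2(p)

    print(info_bits(0.5))      # 1.0 bit  -- a fair coin toss
    print(info_bits(1/1024))   # 10.0 bits -- one outcome out of 1024 equally likely ones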

2. Suppose a string of amino acids is assembled at random. We know from organic chemistry that such a random protein will very likely catalyze some chemical reaction. Is there information in this random chain of amino acids? How do we measure or interpret this information? Where did it come from?

3. Presumably, the difference between living and nonliving matter depends on the information content, otherwise you wouldn't have brought up information and abiogenesis in the same message.

When the first living thing came to be, where did the critical information come from? How was it delivered to the system? Can we see any instances of any similar processes today?

I think to ask where information comes from is to ask what the fundamental nature of the universe is. It may be like asking where numbers come from. That is a very, very big question. Again, Dembski has this to say (in part): "The abiotic infusion of exogenous information is the great mystery confronting modern evolutionary biology. It is Manfred Eigen's mystery with which we began this paper. Why is it a mystery? Not because the abiotic infusion of exogenous information is inherently spooky or unscientific, but rather because evolutionary biology has failed to grasp the centrality of information to its task. The task of evolutionary biology is to explain the origin and development of life. The key feature of life is the presence of complex specified information -- CSI. Caught up in the Darwinian mechanism of selection and inheritance with modification, evolutionary biology has failed to appreciate the informational hurdles organisms need to jump in the course of natural history. To jump those hurdles, organisms require information. What's more, a significant part of that information is exogenous and must originally have been infused abiotically.

"In this section I want briefly to consider what evolutionary biology would look like if information were taken as its central and unifying concept. First off, letÕs be clear that the Darwinian mechanism of selection and inheritance with modification will continue to occupy a significant place in evolutionary theory. Nevertheless, its complete and utter dominance in evolutionary theoryÑthat selection and inheritance with modification together account for the full diversity of lifeÑthis inflated view of the Darwinian mechanism will have to be relinquished. As a mechanism for conserving, adapting, and honing already existing biological structures, the Darwinian mechanism is ideally suited. But as a mechanism for innovating irreducibly complex biological structures, it utterly lacks the informational resources. As for symbiotic infusion, its role within an information-theoretic framework must always remain quite limited, for even though it can account for how organisms trade already existing biological information, it can never get at the root question of how that biological information came to exist in the first place.

"Not surprisingly, therefore, the key task an information-theoretic approach to evolutionary biology faces is to make sense of abiotically infused CSI. Abiotically infused CSI is information exogenous to an organism, but which nonetheless gets transmitted to and assimilated by the organism. Two obvious questions now arise: (1) What is the mode of transmission of abiotically infused CSI into the organism? and (2) Where is this information prior to being transmitted? If this information is clearly represented in some empirically accessible non-biological physical system, and if there is a clear informational pathway from this system to the organism, and if this informational pathway can be shown suitable for transmitting this information to the organism so that the organism properly assimilates it, only then will these two questions receive an empirically adequate naturalistic answer. But note that this naturalistic answer, far from eliminating the information question, simply pushes it one step further back, for how did the CSI that was abiotically infused into an organism first get into a non-organism? Because of the Law of Conservation of Information, whenever we inquire into the source of some information, we never resolve the information problem, but only intensify it. This is not to say that such inquiries are unilluminating (contra Dawkins, 1987, pp. 11­13; and Dennett, 1995, p. 153 who think that the only valid explanations in evolutionary biology are reductive, explaining the more complex in terms of the simpler). We learn an important fact about a pencil when we learn a certain pencil-making machine made it. Nonetheless, the information in the pencil-making machine exceeds the information in the pencil. The Law of Conservation of Information guarantees that as we trace informational pathways backwards, we have more information to explain than we started with..."

3-a. When a living thing dies, where does its information go? Does it simply vanish into nowhere? If information can vanish into nowhere, can it emerge from nowhere?

3-b. Is information conserved? When a living thing reproduces itself, does this create additional information? Where does this information come from?

Dembski argues that information is conserved: "...(1) Chance generates contingency, but not complex specified information. (2) Functions (e.g., algorithms and natural laws) generate neither contingency, nor information, much less complex specified information... This result, that neither chance nor functions nor some combination of the two can generate CSI (complex specified information), I call the Law of Conservation of Information, or LCI for short. Though formulated at a high level of mathematical abstraction, LCI has many profound implications for science. Among its immediate corollaries are the following: (1) The CSI within a system closed to outside information always remains constant or decreases. (2) If CSI increases within a system, then CSI was added exogenously. (3) CSI cannot be generated spontaneously, originate endogenously, or organize itself. (4) To explain the CSI within a system is to appeal to a system whose CSI is equal or greater in complexity still (in particular, reductive explanations of CSI are never adequate)..."

From where I sit, the notions of information you are relying on appear to be very ill-defined. Indeed, they seem to amount to hand-waving, serving no function other than to sound impressive while saying "it just happened that way".

Dembski's information-theoretic work is regarded as mathematically rigorous and fairly respectable, even by those who disagree with his conclusions.

I'll be interested in seeing how you apply information theory to answer these questions, and I'd appreciate it if you can show your work.

Well, thank you for the compliment. I'm guessing that my reply here will have disappointed you, but if I were able to answer some of these questions I should win the Nobel Prize. I don't think you really want to see my work, because I'm a paralegal, and it's very, very, very boring stuff. That's why I hang around here. You people make me think much more.

Cordially,

471 posted on 02/14/2002 8:05:05 AM PST by Diamond


To: Diamond
OK, here it is. It's a bit long, I'm afraid...

Indeed, it seems I'm going to have to break it into pieces. <sigh>


Rather than re-inventing the wheel, allow me to refer you to what William Dembski has already written...

Don't mind if I do.

The distinction between specified and unspecified information may now be defined as follows: the actualization of a possibility (i.e., information) is specified if independently of the possibility's actualization, the possibility is identifiable via a pattern. If not, then the information is unspecified. Note that ... specified information cannot become unspecified, though unspecified information may become specified information. ... For instance, a cryptographic transmission whose cryptosystem we have yet to break will constitute unspecified information. Yet as soon as we break the cryptosystem, the cryptographic transmission becomes specified information.

Um. Dembski has, in effect, just stated that information can be both specified and unspecified at the same time. The encrypted transmission referred to above is presumably "specified" as far as the transmitter and intended recipient (by convention in discussions of cryptographic protocols, "Alice" and "Bob") are concerned. The fact that an eavesdropper ("Eve") is unable to decipher the message does not make it any less "specified" for Alice and Bob. And presumably, the fact that Alice and Bob know what the message says does not make it "specified" for Eve.

We also have signals that are alleged to be specified, but may or may not be. You may have heard of the book, The Bible Code. The thesis of this book is that messages are hidden in the text of the Tanach (The Hebrew Bible), and these messages may be discovered by reading every Nth letter. Different sets of messages are found by using different values of N. A large number of messages have been "discovered" by this method, and these messages seem to refer to recent and current events of significance.

The problem is, although every individual "message" is highly improbable, the set of possible matches to any given string is quite large. The result is something we might call "The Rorschach Effect", or maybe "The Nostradamus Effect". Given any sufficiently ambiguous signal, a "match" can usually be found. Especially if you're not too picky about how close a match you get.

Indeed, in critiques of The Bible Code, one reviewer applied the same test to a classic novel. (I think it may have been Moby Dick.) He found similarly significant "messages". Conclusion: complex specified information can appear spontaneously without the intervention of any complex specified design.
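For anyone who wants to try it, here is a toy equidistant-letter-sequence search in Python of the sort The Bible Code relies on. The word list, skip range, and filename are invented for illustration; any long text will do:

    import re

    def els_hits(text, words, max_skip=50):
        """Search equidistant letter sequences: every skip-th letter, at each offset."""
        letters = re.sub(r"[^a-z]", "", text.lower())
        hits = []
        for skip in range(2, max_skip + 1):
            for start in range(skip):
                seq = letters[start::skip]
                for w in words:
                    if w in seq:
                        hits.append((w, skip, start))
        return hits

    # Hypothetical filename; substitute any novel-length plain-text file.
    sample = open("moby_dick.txt").read()
    print(els_hits(sample, ["war", "doom", "comet"])[:10])

Run against ordinary prose, a search like this reliably turns up "hits", which is exactly the reviewer's point.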

Continued...

472 posted on 02/20/2002 10:02:11 PM PST by Karl_Lembke

To: Diamond
Continued...
...The key conceptual difficulty here is to characterize the independence condition that obtains between patterns and information.

Indeed.

Is the origin of life specified?

That is a very good question. In a way, it is. The question is, how specified is it? And how complex?

Current research in abiogenesis is focusing on, among other things, RNA. RNA has been shown to form spontaneously, given the right conditions. It has even been shown to polymerize under the right conditions. Some of these polymers have been shown to catalyze various chemical reactions, including the polymerization of RNA.

Once we have a system in place that makes copies of RNA, a form of evolution can take place.

We may never know the exact pathway that life took. If life is as improbable as Dembski and others think it is, there may be essentially only one workable pathway, and we might then know with high confidence how it came to be. I suspect, though, there will turn out to be a multitude of different pathways that were available and could have been followed. To make up some numbers, a 100-stage process with two alternatives at only ten of those stages yields 2^10 = 1024 different pathways. Unless every pathway leads to a different final result, we won't be able to tell which one was followed.

The Law of Conservation of Information...

On reading through this section, I see that in fact information, or at least CSI, is not conserved. In the third paragraph of this section, we find that the information from one source, upon being filtered through a function, can never be greater (according to Dembski) than the information that was originally present in that source. It can be less. Indeed, if a message is destroyed, then it would seem to follow that the information is also destroyed. This is not how conservation laws work.

Secondly, information can be created, simply by copying it. A message may have N bits, and a copy of that message would have N bits as well. Two copies of the message have 2N bits, although the Shannon entropy would be only slightly larger than N. (A file compression program would code the new message as "two copies of ...".) Once a second (third, fourth, etc) copy of a message exists, it can then be changed slightly, depending on environmental conditions. For example, I have had occasion to write batch files which begin as lots of copies of "move file A to file 2001A in directory D". I change the "A" in the second copy to "B", to "C" in the third copy, and so on. Although intelligently designed in this case, it shows how copies of information, after being slightly altered, can become additional information.
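As a rough illustration of that point in Python -- using off-the-shelf compression as a crude stand-in for Shannon entropy, not any formal measure of Dembski's -- a duplicated message compresses to barely more than a single copy, while each slightly modified copy adds a little genuinely new information:

    import zlib

    msg = b"move file A to file 2001A in directory D\n"
    copies = msg * 10
    # "Duplicate then modify", like editing the batch file by hand:
    variants = b"".join(msg.replace(b"A", bytes([c])) for c in b"ABCDEFGHIJ")

    for label, data in [("one copy", msg),
                        ("ten copies", copies),
                        ("ten modified copies", variants)]:
        # zlib's fixed overhead dominates tiny inputs; the relative sizes are the point.
        print(label, "-", len(data), "raw bytes,", len(zlib.compress(data, 9)), "compressed")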

The question of whether the new information in copies subjected to random changes is useful is another question entirely. The answer seems to be that sometimes it is. Dawkins discusses a case where mammalian hemoglobin appears to have arisen from just such a duplication-and-modification event. The modified hemoglobin is at least as efficient as the original ancestral form, and a synergistic interaction between the two types makes the entire system more efficient. If, in fact, this is the result of duplication-and-modification, then it is an example of information created with no apparent creator.

In applying his variant of information theory to evolution, Dembski makes a serious blunder in paragraphs 9 and 10. He counts bits of new information by counting the number of offspring of any given critter, and taking the log2 of that number. Thus, a critter with an average litter size of 4 increases the information content of the species by two bits. A critter with one offspring increases the information content of the species by zero bits.

This is wrong. The information content of (to use Dembski's term) an actualized possibility is not based on the number of instances of that actualization, but on the number of a priori possibilities there were to begin with. For example, in human reproduction, 46 chromosomes are combined from the male and female gametes in the child zygote. Each chromosome had a 50% chance of being selected from the parent's chromosome pair. Thus, the chance of any particular set of chromosomes being pieced together in the final zygote is one in 2^46, or one in just over seventy trillion. By this argument, each human child increases the information in the species by 46 bits.
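Spelled out as a quick Python check of the numbers above:

    import math

    chromosomes = 46                 # one 50/50 choice per chromosome
    combinations = 2 ** chromosomes  # about 7.04e13 -- "just over seventy trillion"
    bits = math.log2(combinations)   # 46 bits per child, by this argument
    print(f"{combinations:.2e} combinations, {bits:.0f} bits")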

But wait, there's more!

Continued...

473 posted on 02/20/2002 10:04:04 PM PST by Karl_Lembke

To: Diamond
Continued...

Each gene has one chance in 10,000 of mutating. There are between 30,000 and 70,000 genes in the human genome. (They're arguing over counting methods right now.) A mutation consists of, at a minimum, a base change from the current nucleotide base to one of three other bases. There are three billion base pairs in the human genome. 2% of that is in expressed genes. That means that 60 million base pairs are subject to being replaced with one of three choices. That's 180 million possible substitutions, or about 27.4 bits of additional information. I'll leave insertions, deletions, duplications, and crossing-over events for someone with a stronger math-ochistic streak than I have. (By the way, if I assume that all 3 billion bases have some meaning in the genetic code, we get to 33 bits per mutation.)

OOPS! I'm sorry -- at 30,000 genes, that's an average of three mutations per birth. Triple the 27.4, or the 33, whichever you prefer.
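Putting the mutation arithmetic in one place, as a sketch using the rough figures above (30,000 genes, a 1-in-10,000 mutation rate per gene, and 2% of the genome expressed):

    import math

    genome_bases   = 3_000_000_000   # base pairs in the human genome
    expressed_frac = 0.02            # the 2% in expressed genes
    alternatives   = 3               # each base can change to one of three others

    substitutions = genome_bases * expressed_frac * alternatives  # 180 million
    bits_per_mutation = math.log2(substitutions)                  # about 27.4 bits

    genes, per_gene_rate = 30_000, 1 / 10_000
    mutations_per_birth = genes * per_gene_rate                   # about 3
    print(bits_per_mutation, mutations_per_birth * bits_per_mutation)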

Anyway, we're now looking at over a hundred bits of new information with the birth of each child.

Another blunder Dembski makes is that he neglects parallelism. He takes the case of a bacterium which divides every 20 minutes. This, he states, creates one bit of additional information. (Again, so much for "conservation of information".) He then divides 20 minutes into a billion years to get 26 trillion bits of information. Leaving aside the other sources of information he has overlooked, let's consider:

One bacterium divides, producing two daughter bacteria, and one bit of information. These daughters divide 20 minutes later, and each produces one bit. Total bits now = 3. At the end of a day, 4.7 * 10^21 bits of information have, in theory, been added. (This would produce about 10,000 cubic meters of bacteria. Not an unreasonable number -- yet.)
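A quick check of that arithmetic in Python; the cell volume of roughly two cubic microns is my own assumption:

    divisions = 24 * 3               # 72 twenty-minute intervals in a day
    cells = 2 ** divisions           # about 4.7e21 cells after one day
    bits = cells - 1                 # one bit per division event
    volume_m3 = cells * 2e-18        # assuming roughly 2 cubic microns per bacterium
    print(f"{bits:.1e} bits, about {volume_m3:,.0f} cubic meters of bacteria")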

Using the numbers I calculate for the amount of information that can result from a birth, I note that the human race consists of about 6 * 10^9 individuals. Worldwide, each human gives rise to about 1.1 descendants. Thus, in this generation, we can expect the production of 6.6 * 10^9 * 100 bits, give or take. This is about 6.6 * 10^11 bits per generation. A lot of that information can be filtered out, and still produce a measurable increase in useful information.

And in fact, all the information introduced into the genome is filtered, by a filter Dembski seems to prefer not to credit -- natural selection.

Natural selection is a filter which passes information which fits a certain specification more closely than other available information. The specification used is whatever criteria the environment cares to provide. If the environment includes a cold temperature, information which produces an ability to generate heat, or conserve heat, is going to be passed on as "useful". If the environment includes visible light, information which codes for the construction of some sort of receiver and translator will be "useful", and will be passed on.
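As a toy model of that filtering (my own illustration in Python, not anything from Dembski), one can score random bit-string "genomes" against an environmental specification and keep only the closest matches:

    import random

    SPEC = [1, 0, 1, 1, 0, 0, 1, 0]   # a made-up environmental specification

    def fitness(genome):
        """How closely a genome matches what the environment 'cares' about."""
        return sum(g == s for g, s in zip(genome, SPEC))

    population = [[random.randint(0, 1) for _ in SPEC] for _ in range(100)]
    survivors = sorted(population, key=fitness, reverse=True)[:20]
    print("best match among survivors:", max(fitness(g) for g in survivors), "of", len(SPEC))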

Dembski has developed a nice vocabulary, and has thrashed through quite a bit of number theory, but he has not proven that any amount of intelligence is required for evolution to occur.


Well, I seem to have run out of stuff to say about information for the moment, and it's getting late. I'm going to upload this and go to bed.

G'night!

.............Karl

474 posted on 02/20/2002 10:05:06 PM PST by Karl_Lembke
