
To: Alamo-Girl
This is an extraordinary and mostly overlooked import of information theory (successful communication, paraphrased from Shannon) relative to evolution!

It doesn't matter whether it is one dimension or twenty-seven dimensions; information is information. The "geometric form" in countable dimensions is not separate from the information, and in fact all multidimensional information theoretic structures have an equivalent one dimensional form. Transforming information theoretic structures between different numbers of dimensions is trivial and relatively basic theory. As a conceptual example, this is in evidence in computers, which have a single linear address space (one dimension) but in which we can represent geometries with an arbitrary number of dimensions to arbitrary precision. Anything that can be represented on a computer can, ipso facto, be transformed into a one-dimensional description.
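
As a concrete sketch (Python here purely for illustration; the helper names are mine, not anything standard), the usual row-major trick maps a point in an N-dimensional grid to a single linear address and back, with nothing lost in either direction:

# Minimal sketch: collapsing an N-dimensional coordinate into one linear
# address and recovering it again, the same trick a computer's flat
# memory uses to store arrays of any dimensionality (row-major order).

def to_linear(coords, shape):
    """Collapse an N-dimensional coordinate into a single integer address."""
    address = 0
    for c, size in zip(coords, shape):
        address = address * size + c
    return address

def from_linear(address, shape):
    """Recover the N-dimensional coordinate from the linear address."""
    coords = []
    for size in reversed(shape):
        coords.append(address % size)
        address //= size
    return tuple(reversed(coords))

shape = (4, 5, 6)          # a 3-dimensional grid, 4 x 5 x 6
point = (2, 3, 1)          # a point in that grid
addr = to_linear(point, shape)
assert from_linear(addr, shape) == point   # the round trip loses nothing
print(addr)                # 2*30 + 3*6 + 1 = 79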

Most people have a hard time visualizing it when you put it literally, but it is done routinely in computer science and is a basic theorem in the mathematics. There is nothing special, in an information theoretic sense, about forms in 3-space or 4-space or n-space, and you can move information between dimensional forms at will. This is related to the reason that there is nothing special about massive parallelism; there is an equivalent serial form for all parallel computations.
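
And a sketch of the serial/parallel point, assuming independent per-element work (the work function below is a stand-in of my own, not anything from the thread): the parallel schedule and the one-at-a-time loop compute exactly the same result.

# Sketch: the same independent computation run in parallel and serially
# produces identical results; parallelism changes the schedule, not the
# information computed.
from concurrent.futures import ProcessPoolExecutor

def work(x):
    return x * x + 1       # any pure, independent per-element computation

if __name__ == "__main__":
    data = list(range(10))

    serial = [work(x) for x in data]        # the equivalent serial form

    with ProcessPoolExecutor() as pool:     # many workers at once
        parallel = list(pool.map(work, data))

    assert serial == parallel               # same information either way
    print(serial)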

115 posted on 08/26/2004 7:58:38 PM PDT by tortoise (All these moments lost in time, like tears in the rain.)


To: tortoise; betty boop
Thank you so much for your reply! It is always delightful to get your "take" on such things.

The "geometric form" in countable dimensions is not separate from the information, and in fact all multidimensional information theoretic structures have an equivalent one dimensional form...

there is an equivalent serial form for all parallel computations.

I do not dispute the point that you raise, and in connection with molecular biology, information theory follows the Shannon sphere construct:

A sphere in a high-dimensional space which represents either a single message of a communications system (after sphere) or the volume that contains all possible messages (before sphere) could be called a Shannon sphere, in honor of Claude Shannon, who recognized its importance in information theory. The radius of the smaller after spheres is determined by the ambient thermal noise, while that of the larger before sphere is determined by both the thermal noise and the signal power (signal-to-noise ratio), measured at the receiver. The logarithm of the number of small spheres that can fit into the larger sphere determines the channel capacity (see Shannon 1949). The high-dimensional packing of the spheres is the coding of the system.

There are two ways to understand how the spheres come to be. Consider a digital message consisting of independent voltage pulses. The independent voltage values specify a point in a high-dimensional space, since independence is represented by coordinate axes set at right angles to each other. Thus three voltage pulses correspond to a point in a 3-dimensional space and 100 pulses correspond to a point in a 100-dimensional space. The first 'non-Cartesian' way to understand the spheres is to note that thermal noise interferes with the initial message during transmission of the information such that the received point is dislocated from the initial point.

Since noisy distortion can be in any direction, the set of all possible dislocations is a sphere. The second 'Cartesian' method is to note that the sum of many small dislocations to each pulse, caused by thermal noise, gives a Gaussian distribution at the receiver. The probability that a received pulse is disturbed a distance x from the initial voltage is of the form p(x) ∝ e^(-x²). Disturbance of a second pulse will have the same form, p(y) ∝ e^(-y²).

Since these are independent, the probability of both distortions is multiplied: p(x,y) = p(x)p(y). Combining equations, p(x,y) ∝ e^(-(x² + y²)) = e^(-r²), where r is the radial distance. If p(x,y) is a constant, the locus of all points inscribed by r is a circle. With more pulses the same argument holds, giving spheres in high-dimensional space. Shannon used this construction in his channel capacity theorem.

For a molecular machine containing n atoms there can be as many as 3n-6 independent components (degrees of freedom) so there can be 3n-6 dimensions. The velocity of these components corresponds to the voltage in a communication system and they are disturbed by thermal noise. Thus the state of a molecular machine can also be described by a sphere in a high dimensional velocity space.

Channel capacity of molecular machines
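
A minimal numeric sketch of two points in that excerpt (the n, P and N values below are made up for illustration, not taken from the article): first, that independent Gaussian disturbances in many dimensions concentrate near a sphere; second, that the logarithm of how many small "after" spheres fit inside the large "before" sphere gives the capacity.

import math
import random

n, P, N = 100, 4.0, 1.0    # 100 pulses, signal power P, noise power N (illustrative)

# 1. Noise sphere: the lengths of n-dimensional Gaussian noise vectors cluster
#    around sqrt(n * N), which is why the set of likely dislocations is a sphere.
lengths = []
for _ in range(1000):
    v = [random.gauss(0.0, math.sqrt(N)) for _ in range(n)]
    lengths.append(math.sqrt(sum(x * x for x in v)))
print(sum(lengths) / len(lengths), math.sqrt(n * N))   # both close to 10

# 2. Sphere packing: volumes scale as r**n, so the number of distinguishable
#    messages is roughly (r_before / r_after)**n, and the capacity in bits is
#    n * log2(r_before / r_after) = (n/2) * log2(1 + P/N).
r_before = math.sqrt(n * (P + N))   # radius set by signal plus noise
r_after = math.sqrt(n * N)          # radius set by noise alone
bits = n * math.log2(r_before / r_after)
print(bits, bits / n)               # about 116.1 bits total, about 1.16 bits per pulse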

However, in this article the author also raises the question of the causation of (functionally unified) biological geometric form in 4 dimensions (3 spatial, 1 time). That goes beyond the investigation of channel considerations in higher dimensions (Shannon) and raises the issue of causation, which (instinctively) would suggest a reduction of uncertainty in communication (information) inter-dimensionally, at origin.

119 posted on 08/26/2004 8:34:07 PM PDT by Alamo-Girl

To: tortoise; Alamo-Girl; marron
The "geometric form" in countable dimensions is not separate from the information, and in fact all multidimensional information theoretic structures have an equivalent one dimensional form.

Yeah-but tortoise: your concept of "information" seems to have more to do with the requirements of communication channels than with the meaning of what is being communicated. Did you ever plan to put that part of "real life" back into your picture?

We can't put "information" into the category of uniform laws of nature, for information refers to the order of contingent events. It is no mere abstraction, but indicative of the concrete order of the world, seeking to communicate, not "0s" and "1s," not sequences of binary bits, but actionable intelligence or meaning. It is not an "abstraction," or set of algorithmic rules for programming a computer (say), but the actual, concrete order of the world we live in and experience. The "geometry" lies behind the world of contingent phenomena we see, as a kind of blueprint of the possible ("formal cause" in the Aristotelian sense). But acting on the blueprint requires information. And information requires an interpretation of meaning in order to be useful to conscious, choosing, purposeful beings, like we humans.

123 posted on 08/26/2004 9:00:35 PM PDT by betty boop
