Posted on 11/30/2004 6:21:11 PM PST by betty boop
It is not contempt, it is indifference. Do you realize that he basically stopped publishing before the "reformation" of information theory into its modern form? I don't care what he's done in other fields, but his understanding of information theory is antiquated. Incidentally, while Chaitin is famous in relation to "algorithmic information theory" (and a smart guy), his area of specialty is really a tangential thing -- he has more or less google-bombed the namespace. This has been a chronic problem with the Intelligent Design "information theory experts"; from everything I've read, I was in kindergarten the last time they updated their understanding of the mathematics. They NEVER cite any of the core theory papers that make up modern information theory, nor do they exhibit familiarity with the important new concepts in those papers. If you compare the reference appendix of a paper by a credible leading mathematician in the field (e.g. Schmidhuber) with the reference appendix of folks like Yockey, there is almost no intersection. Yockey is not an outright fraud (unlike Dembski), but he is way past his academic prime and it shows. We have newer and better models for dealing with many of these things, and simply ignoring these advancements is not helpful. It would be kind of like a physicist refusing to acknowledge post-Newtonian physics. Any derivative work would only be "correct" in a qualified sense.
As for strong AI, modern mathematical theory is almost completely derivative of and deeply intertwined with the broader field of algorithmic information theory (the unified grand-daddy of all the little subfields in that general area). Yes, I do a hell of a lot of work in that area, and it is basically the same theorems and math we are talking about here. Intelligence is purely a mathematical problem, and the fact that it traditionally has not been treated that way goes a long way toward explaining why it has taken so long to develop an implementation theory in computer science.
Information theory, computational theory, transaction theory, decision theory, a lot of probability theory, and bits and pieces of a lot of other fields are all the same mathematical thing. You treat these fields like the blind men treated the elephant in the old fable. I routinely work on the unified theoretical constructs (nominally described as "algorithmic" or "computational" information theory) and make no distinction between them, because it would be nonsensical for me to do so and would make my work impossible in any case.
I'm waiting for any view of this that actually applies modern information theory, i.e. a perspective that understands and fully integrates the computation- and transaction-theoretic aspects into the simple Shannon model. Time and again, I get the impression that no one really wants to deal with the inconvenient consequences of this, metaphorically preferring to stay in comfy Newtonian physics rather than redefining their perspective as acknowledging Relativity, and its applicability to real problems, would require. If all these "theorists" make no effort to stay relevant, I see no reason to treat them as though they are relevant.
IOW, a person who is not ideologically committed to atheism can take a scientific look at the subject. Elsewhere, he says that he is anti-Creationism. So the converse holds for him as well.
IOW, if a person is ideologically committed to the notion that the cosmos, the earth and everything in it was created 6,006 years ago from our space/time coordinates - then they cannot take a scientific look at the subject either.
IIRC, there was a general mathematical description and proof of this published circa 1992 by Merhav and his gang. It has since been reformulated in a half-dozen different ways by a number of folks. This is widely considered to be a very elegant mathematical basis for the concept of "free will".
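For anyone who wants the flavor of that result (the 1992 reference is presumably Feder, Merhav, and Gutman's work on universal prediction of individual sequences), here is a minimal Python sketch of the kind of object those theorems are about. This is not their construction, just an illustration I am supplying: a Laplace-smoothed Markov predictor that learns an individual binary sequence. The universal-prediction results characterize how well any finite-state predictor of this general kind can do against an arbitrary sequence.

from collections import defaultdict

def predict_next(bits, context_len=1):
    # Estimate P(next bit = 1) from counts observed after the trailing context.
    counts = defaultdict(lambda: [1, 1])  # Laplace smoothing: start each context at one 0 and one 1
    for i in range(context_len, len(bits)):
        ctx = tuple(bits[i - context_len:i])
        counts[ctx][bits[i]] += 1
    ctx = tuple(bits[-context_len:])
    zeros, ones = counts[ctx]
    return ones / (zeros + ones)

seq = [0, 1, 0, 1, 0, 1, 0, 1, 0]
print(predict_next(seq))  # about 0.83: the alternating pattern is learnable

The names and the toy sequence are mine, for illustration only.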
But to paraphrase Mark Twain, I believe the rumors of Yockey's irrelevance may have been greatly exaggerated. The second edition of his greatest work, Information Theory and Molecular Biology, is not yet available for shipment, so who can say what he has or has not incorporated?
I've been following Jurgen Schmidhuber as well as Yockey, Schneider, Chaitin, Tegmark, Penrose, Rocha and many others. But again I see a huge fork in the direction being taken by those working in artificial intelligence and those working on information theory in molecular biology.
In the biological research, at the level of the molecular machine, the issue is one of communications, semiotics. Within that research it forks again, between those like Schneider, whose research with NIH is oriented to the medical implications and to evolution, and the likes of Rocha, Wolfram and Yockey, who are examining how it may have emerged. However, from what I have read concerning complex systems, those investigators are interested in both algorithm and communications.
I am not aware of any advancement in unified theoretical constructs which would change the Shannon model for communications like Einstein's special and general relativity changed Newton's theory of gravity. Considering the high profile of Schneider's work for NIH, and the fact that he has an exhaustive website - I would have expected him to make note of such things.
Of all the people listed, the only ones who can reasonably be considered theoretical experts in information theory are Schmidhuber and Chaitin. These two guys are among the half dozen who actually author the theory and who I know for a fact understand it as well as anyone else does. A couple of the other guys are physicists, but have no special understanding of information theory (e.g. Tegmark and Penrose). Most of these folks have an undergrad CS level of understanding by my reckoning.
You are correct to a certain extent about a "fork" in the theory, though it is more pragmatic than substantive. Information theory was born out of electrical engineering, which has a very limited view of and use for those maths. Dealing with stuff like algorithmic predictive limits really does not have much utility for them. But as information theory matured and was theoretically generalized, its scope increased immensely over time. The EEs have kept their own Shannon-esque perspective, but it isn't a separate field. It is actually a narrow expression of a much broader and more general mathematical field that grew out of the original Shannon perspective. You get the Shannon version by taking the modern general theory and reducing the degrees of freedom, making certain arbitrary assumptions axiomatic.
But when you do this, you have to understand the assumptions that are taken as axiomatic when you apply it. If you develop a model that tacitly rejects an assumption of the Shannon model, you have to use the general theory. The general theory that has developed is incredibly broad in scope, completely consuming fields like computation theory and transaction theory. Many mathematicians in the field are specialists in narrow sub-regions -- I would describe Chaitin this way, for example. The number of mathematicians who truly grok the nature and scope of the general theory is a relatively small subset of these folks, and I would put people like Schmidhuber in that category. Most of the really interesting work in information theory is in regard to its unification with computation and decision theory IMO, and that is where a lot of the brain power is being expended. Most of the questions you are talking about are really well-framed narrow sub-problems of the general theory.
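To put a concrete face on "making certain arbitrary assumptions axiomatic," here is a minimal Python sketch (my own illustration, not drawn from any of the papers mentioned). The familiar per-symbol Shannon entropy treats symbols as i.i.d. draws; under that axiom a perfectly regular string scores as maximally "random," even though the general algorithmic theory assigns it a tiny description length.

import math
from collections import Counter

def shannon_entropy(s):
    # Per-symbol entropy in bits under the i.i.d. symbol model.
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

periodic = "01" * 500  # trivially describable: repeat "01" 500 times
print(shannon_entropy(periodic))  # 1.0 bit/symbol, the maximum for a binary alphabet

The Shannon reduction cannot see the regularity, because the i.i.d. assumption has been taken as axiomatic.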
The ID theory needs to be worked at the level of the people at IDSIA in Switzerland (Schmidhuber, Hutter, and Legg), or at least by people with the level of understanding of Merhav and his crew. It is a very rich field, but poor in real experts. At the very least, I would appreciate it if discussions cited Li and Vitanyi as a baseline, which is the de facto bible of the field. I will occasionally make assertions that are not citable, but I can always make an argument from citable sources when I do. I don't expect most people to be current on any of this stuff -- it is the very definition of esoterica -- but I do expect any cited "experts" to be reasonably current.
I do think the Brownian motion model is good for most "random" systems in biology. In a large system, say the size of a few cells in an organism, Brownian motion provides most of the "randomness" with a small kick from QM (radioactive decay leading to mutation). Selection is more of a "relative independence" type of thing. Selection criteria for a group of individuals need not have any relation to the chemistry of the cell (lions and tigers and bears eating antelope, for example.)
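If a toy model helps, the standard discrete approximation of Brownian motion is a simple random walk. A minimal Python sketch (the parameters are purely illustrative, not tuned to any real cellular system):

import random

def random_walk_2d(steps, step_size=1.0):
    # A particle buffeted by collisions: one unit step in a random direction per tick.
    x = y = 0.0
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx * step_size
        y += dy * step_size
    return x, y

# The signature of diffusion: mean squared displacement grows linearly with time.
walks = [random_walk_2d(1000) for _ in range(200)]
print(sum(x * x + y * y for x, y in walks) / len(walks))  # roughly 1000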
I'm not a mathematician, but the above seems right to me too. I've said before that until we get to creatures like us, with some degree of free will, I suspect that -- after the moment of creation -- the whole ball-o-wax is determined.
Thanks for the ping. Well done! :-)
It seems to me the common thread between Jurgen Schmidhuber, Ming Li and Paul Vitanyi is Kolmogorov Complexity. Please let me know if I have the right guys. Here's the homepage for Paul Vitanyi and for Ming Li.
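For Lurkers: Kolmogorov complexity K(x) is the length of the shortest program that outputs x. It is uncomputable in general, but any real compressor gives a computable upper bound, which is why compression keeps coming up in this literature. A minimal Python sketch of the idea (zlib is chosen only because it ships with Python; nothing here is from Li, Vitanyi, or Schmidhuber):

import random
import zlib

def compressed_size(s):
    # Length of the zlib-compressed bytes: a crude upper bound on K(s).
    return len(zlib.compress(s.encode()))

regular = "01" * 500
noisy = "".join(random.choice("01") for _ in range(1000))

print(compressed_size(regular))  # a few dozen bytes: the pattern is exploitable
print(compressed_size(noisy))    # several times larger: only the two-symbol alphabet compresses, not any pattern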
Here's a link for Lurkers wanting to know more about Brownian motion
I am very glad to see that we have put panspermia on the back burner for now, because it makes sense to deal with the pre-biotic earth first before considering any new variables which might be involved in cosmic ancestry.
However, I must insist that we not dismiss information theory, especially communications and semiosis, in discussing the origins of biological life. It is not enough to say the chemicals were present and catalytic; for abiogenesis to be viable, there must also be a mechanism for the rise of autonomy, symbolization (Rocha, Pattee) and, very importantly, successful communication (Shannon). Even that much is weak without agreeing first that biological systems are capable of self-organizing complexity once all of these are functioning.
Also, I gathered from your discussion of Shannon and modern theoretical constructs that using Shannon in the narrow sense that Schneider et al have used concerning molecular machines is not necessarily short-sighted with respect to the contributions of Kolmogorov, Solomonoff, Li, Vitanyi and Schmidhuber. If that is incorrect, please let me know.
Looks pretty straightforward, StJacques. But as Denton indicates, there's a problem regarding the lower-order life, which, on the one hand, needs a reducing environment in order to arise, but on the other, needs an oxygen-rich environment in order to survive (i.e., preeminently oxygen's role as constituent of the ozone layer, which protects organisms against the fatal effects of ultraviolet radiation) -- the "Catch-22" of abiogenesis. In other words, absent oxygen -- the presence of which (it is argued) would preclude abiogenesis -- the life arising by means of a methane-rich reducing environment would be wiped out almost immediately after coming into existence, presumably before it had the chance to replicate. You cautioned me that your line of progression "does not prove abiogenesis ... but it does propose a plausible theory into which abiogenesis fits as a necessary step in the evolution of life on earth." But it seems to me that in your line of progression, the term abiogenesis is acting more as a placeholder for some yet-unknown process than as a plausible explanation for the origin of life.
You wrote, "from the perspective of the physical sciences there is nothing 'spontaneous' or 'random' about abiogenesis...." Before we can speak of a thing as being spontaneous or random or otherwise, first we have to validate that the thing is an actual process arising in nature. WRT abiogenesis, this is still very much an open question.
I really liked this:
"...the concept of 'randomness' must always be formulated as 'random relative to what?,' because it implies a perspective. To take a pertinent example relative to Stanley Miller's experiment which proved that amino acids could be generated from inorganic matter, consider the predictability of when and where lightning will strike. You can take a meteorologist and a physicist schooled in electromagnetism, position them both on the ground while a thunderstorm rages overhead, and ask them to tell you when and where the next lightning bolt will strike. In terms of simple ordinary language they cannot do it, so you might conclude that the timing and positioning of lightning strikes are 'random' relative to an explanation you would accept in ordinary language. But those same two scientists could express in the language of Physics an equation that will take into account the variables that go into cloud formation, the density of moisture within clouds, temperature, wind patterns, differentials between the electromagnetic charges of the clouds and the ground beneath, etc. and express to you an equation that will predict where and when lightning will strike and that equation will be accurate, though it will not be understood in ordinary language. Therefore; from the perspective of ordinary language, the lightning strike appears as a 'random' event, but from the perspective of the meteorologist and the physicist, it is nothing of the kind, because the location and timing of the strike is governed by the laws of physics as they pertain to meteorological phenomena and electromagnetism."
But it seems that this situation or analogy cannot shed much light on the question of abiogenesis. For the actual variables that go into cloud formation, the density of moisture within clouds, temperature, wind patterns, differentials between the electromagnetic charges of the clouds and the ground beneath, etc., are all well-known. But the actual variables applicable to abiogenesis are still a matter of speculation. For even after Miller's showing that amino acids can be generated from inorganic matter, it still seems a bit of a reach (to this skeptic, at least) to say that what can be done under controlled experiments in laboratories necessarily tells us what the actual process of the rise of life on the pre-biotic Earth looked like. Or in other words, the demonstration that amino acids can be generated from inorganic matter does not constitute a proof that abiogenesis occurred; it only demonstrates that the chemical properties of matter are not inconsistent with the rise of an indispensable building block of living organisms. Which is what we would expect to find, really; for everything living and non-living is made out of the "same stuff," matter, plus whatever else might be required (e.g., information, successful communication) for existents to form as such.
In other words, to say that something might have occurred in a certain way is not the same thing as saying that it actually did occur in that way.
You wrote: "I must point out that Overman is a lawyer and for that reason I question his credentials to describe the state of scientific opinion on the early earth's atmosphere...." Overman is an international lawyer, partner in a high-power Washington law firm. But he is obviously also a deeply insightful historian of science who is avidly paying attention to breaking developments in a variety of scientific fields. If only the "experts" were allowed to discuss scientific topics, then probably very little would be publicly said about science at all; and what non-specialists are doing here, right on this thread, would be an unforgivable presumption.
Whatever. There are two things you can say about a highly successful attorney like Overman: (1) they excel at spotting logical fallacies in the other guy's argument (he devotes his first chapter to logic and the most common fallacies that crop up in scientific theorizing; the book is worth the price just for this one chapter!); and (2) they know how to qualify and analyze evidence presented in the given case, and tend to notice when evidence is "missing." FWIW.
Thank you for your very fine post, StJacques. I have much catching-up to do on this thread, and so I look forward to speaking with you again soon.
So true, PH! Yet it seems that we are glossing over important distinctions here: If all things are "determined" by natural law, what is it that "determines" natural law such that it "governs" all things? In other words, if natural law operates as a cause, what caused it? Or are we even sure that natural law is a cause? I thought it was very perceptive of Wolfhart Pannenberg (in the essay at the top of this thread) to suggest that natural laws are not themselves causes; rather, they are descriptions of regularities that arise in nature as the result of contingent events that are constantly taking place (e.g., Brownian motion).
I think you're on to something when you say that "given the laws of nature, one might even say that the appearance of life is inevitable." Indeed, one might say that. But in saying it, one is suggesting a certain teleology is at work in nature, that nature seeks as its goal the rise of life. If we are speaking of nature having "goals," then what do we really mean by this? Only self-conscious beings seem to have goals. Is nature a self-conscious being? Or could it be the manifold or medium in which a supernatural consciousness works to achieve its goal or purpose?
Please note this last question is not a "religious question," strictly speaking, though it has implications for theology, and also for considering whether this universe has a metaphysical extension.
So far, I'm with you. Very good.
But I would like to separate something here from within the process of natural selection, namely, "random mutations." I would think the "Brownian Motion" model is applicable here too, but I'll await comments.
Agreed. Also, I suggest that the "relatively independent" type of randomness is applicable here, because a mutation may come from some source independent of the creature. A stray cosmic ray or something. So I'm thinking of:
1. Origin of life: "Brownian Motion" type of randomness
2. Natural selection: "relatively independent" type of randomness
3. Mutations: both types of randomness
Just outstanding, Doc -- and so helpful to a non-specialist!
This fourth type of "random" system seems to go straight to the idea of contingency, which is more a philosophical idea than a strictly scientific one. The germ cell here on earth is affected by a discrete event on Sirius -- which probably no observer of the germ cell here on earth could be aware of. And yet the mutation we observe is contingent on the action of that unseen event. From the standpoint of the observer of the germ cell, the mutation may appear to be a "random" happenstance. Yet for a hypothetical observer whose view includes the cosmos in toto, there would be nothing at all "random" about the mutation: That observer would see that it had a cause on Sirius. And because it was "caused," from the point of view of the local observer it becomes fair game for scientific observation, and the natural laws are then found to apply from that point on. But the fact that the cause of the mutation was "hidden from view" means that an important predictive factor remains undisclosed to the scientific method. At least for a while. :^) If I might express it that way.
As you say, nothing in the physics or chemistry of DNA caused the volcano to erupt, or the germ cell to be bombarded by cosmic rays. But we can certainly perceive the effects.... Whereupon we may start speculating on the nature of the cause, and never even come close to actually, correctly identifying it. Which is to say, we usually don't know what we don't know. But we have to carry on anyway.
Thanks so much for your excellent post.
PH, I meant to ping you to #318 but somehow neglected to do it....
I knew when I posted that piece that someone might look it over and say: "Aha! For things to be determined requires a Determiner!" This is right in line with "Natural laws require a law-giver." But neither is true. At least not necessarily true.
If, as I've said before, natural laws are inherent in the very fact of existence (a photon is a photon and thus behaves as a photon behaves), then things are merely doing what they do, and not as some external Determiner directs that they should do. In other words, a photon doesn't need to be reminded that it's a photon, and not an electron.
Indeed, one might say that [life is inevitable]. But in saying it, one is suggesting a certain teleology is at work in nature, that nature seeks as its goal the rise of life. If we are speaking of nature having "goals," then what do we really mean by this? Only self-conscious beings seem to have goals. Is nature a self-conscious being?
Not necessarily. It's not only the emergence of life which is (or which may be) inevitable. Stars are inevitable too. And planets. And rocks. Everything flows naturally from the moment of creation. Nothing is necessarily the result of goal-seeking (which gratuitously introduces what may be an unnecessary Goal Seeker). Is it really required that the Great Determiner (or Goal-Seeker) must ordain that rocks shall exist?
Stars, planets, rocks, and life are all, in my (limited?) Aristotelian view, the natural (and perhaps inevitable) consequences of pre-existing conditions. I'm suggesting that only causality is at work, not teleology. Sometimes, when we look back on a long sequence of natural cause-and-effect events, teleology may seem to have been involved, but this may be an illusion of the retrospective viewpoint.
There is room in my billiard-ball, totally determined universe (and maybe also a necessity) for some kind of First Cause at the moment of creation. But after that, the apparent necessity gets fuzzy. At least in my current thinking. I'm talking about science here. Religion is a separate matter.