Free Republic

The Delusion of Darwinian Natural Law
Acton Institute ^ | 12/27/03 | Marc D. Guerra

Posted on 12/27/2003 12:44:51 AM PST by bdeaner

The Delusion of Darwinian Natural Law

Marc D. Guerra

In a short, inconspicuous paragraph in the conclusion to the first edition of On the Origin of Species, Darwin speculates that "in the distant future … psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation." One hundred and forty years later, Darwin's eerie prediction about the revolutionary effect of his work on human beings' self-understanding seems all too prophetic. After a century of dissemination, the once-novel theory of evolution is widely accepted as established scientific fact. Given the quasi-religious hold of evolutionary theory over the modern mind, it is not surprising that it should serve as the spiritual inspiration for developments within the field of psychology. First popularized in the 1970s by Harvard's Edward O. Wilson, evolutionary psychology, originally called sociobiology, interprets all human behavior in light of the evolutionary process. Evolutionary psychology aims to be a comprehensive science, explaining the origins and ends of every human behavior and institution.

Not wanting to be left behind, a number of conservative thinkers have let themselves be caught up in this movement. Conservatism initially identified evolution exclusively with Darwinian materialism and, therefore, viewed it as a fundamental threat to human dignity. But, recently, conservatives such as James Q. Wilson, Francis Fukuyama, and Charles Murray have used evolutionary psychology to show that morality is rooted in human biology. Fukuyama's The Great Disruption goes so far as to claim that "a great deal of social behavior is not learned but part of the genetic inheritance of man and his great ape forbears." Drawing on categories borrowed from evolutionary psychology, Fukuyama argues that human beings are drawn to the kind of moral order provided by traditional rules of trust and honesty.

Evolution's most ambitious and vocal conservative advocate, however, is political scientist Larry Arnhart. But where Wilson and Fukuyama speak of evolution generally, Arnhart appeals directly to Darwin himself. In Darwinian Natural Right: The Biological Ethics of Human Nature, Arnhart argues that conservative thought has fundamentally misunderstood Darwin. For Arnhart, Darwin is not a biological materialist but a modern disciple of Aristotle. Properly understood, Darwinism proves that morality is rooted in human biology. Indeed, Arnhart claims that Darwinism can identify twenty biological desires that are common to all human societies. The fulfillment or frustration of these desires provides universal standards for judging the morality of human social behavior. Darwinian natural right consists of the "right" to have these biological desires satisfied. Arnhart recently argued in the conservative religious journal First Things that both secular and religious conservatives currently "need Charles Darwin." By "adopting a Darwinian view of human nature," both groups would be able to give a rational, non-sectarian response to the prevailing dogma of moral relativism. For Arnhart, the attraction of Darwinism is essentially practical: It provides a "scientific"–not "metaphysical" or "sectarian"–basis for "conservative moral and political thought."

One has to question, however, the wisdom of evaluating any account of human nature primarily in terms of its political utility. But this does explain why, on every critical point, Arnhart lets his political concerns shape his theoretical defense of Darwinism. Consequently, Arnhart never really confronts conservatism's original charge that Darwinism reduces human beings to clever, biologically determined animals. But he does present natural lawyers with an intriguing and, by no means, inconsequential choice: Should they embrace Darwinism and give natural law conclusions the air of "scientific legitimacy," or should they continue to defend an unfashionable but richer account of human nature that transcends human biology?

The Biology of Morality

Essential to the Darwinian defense of morality is the belief that social behaviors are "biologically rooted" in human nature. Darwinians such as Arnhart start from the premise that human beings are "hard-wired" for specific species-preserving behaviors. Darwinism explains all human societies, ranging from families to political communities, as unintended byproducts of the evolutionary process. Social behaviors and institutions came into existence as evolutionary responses to "species-threatening" changes in man's environment. Friendships, marriages, families, and even political communities, all of which are commonly seen as vital features of a meaningful human life, have their origins outside of the moral universe. Every society came into existence in a world where "species-survival" and "species-extinction," not good and evil, were the fundamental human categories. Darwinism views sociality and morality as part of man's genetic inheritance–the adaptive means through which the species perpetuates itself. Contrary to popular belief, morality is really instrumental to the larger goal of individual and collective preservation.

Darwin's thesis that all species, including the human species, possess a biological drive for self-preservation is not novel. Arnhart, for example, frequently observes that Saint Thomas Aquinas, the natural law's classical exponent par excellence, makes a similar claim. And as Arnhart likes to note, Aquinas even once described natural right as "that which nature has taught to all animals." Aquinas's strongest statement on this matter, however, occurs in the context of a wider discussion of natural law. Aquinas there states that the natural law's second inclination, which man shares with all animals, directs him to preserve the species. But as Arnhart shows, Darwin extends this insight substantially further than Aquinas does. In contrast to Aquinas, Darwin believes that those behaviors that are necessary for the survival of the species gradually become woven into human biology itself. Over time, human beings eventually come to view behaviors that are necessary for survival as both meaningful and moral.

The Darwinian defense of morality characteristically points to the end of the family as illustrative of how morality is rooted in human biology. Arnhart himself traces the family back to the strong sexual drive of young men. Rooted in their "biological nature," this drive plays an important role in the preservation of the species, yet it also fulfills "the natural desire for conjugal bonding." Once properly channeled (Arnhart conspicuously never explains how or why this occurs), the sexual drive allows for the kind of bonding that naturally occurs within the family. The preservation of the family and, ultimately, of the species itself are the result of the "biological drive for sexual mating." Scrutinized from the Darwinian perspective, the biological desire for conjugal bonding is revealed to perform the necessary task of stabilizing society.

While Darwinism can defend the family as a natural institution, it is not a genuinely moral or spiritual defense. Wedded to biological materialism, Darwinism necessarily reduces the good to the useful–finally viewing the family as instrumental to evolution's larger goal of the preservation of society. While family life undoubtedly helps stabilize society, this clearly is not the only thing that is good about it. Arnhart's recognition of natural desires for "conjugal and familial bonding" shows that he is aware of this fact. But the logic of his position ultimately requires him to view the family in terms of its preservation of society.

The Morality of Biology

But is this really compatible with conservatism? Is it really possible to understand family life solely in terms of its role in the preservation of society? Setting aside for the moment any sacramental notion of marriage (not mere conjugal bonding) and family life, Darwinism would have one believe that a husband's self-conscious love for his wife or the personal sacrifices that parents willingly make for their children are byproducts of a primordial desire to perpetuate the species. Viewed from the perspective of human beings' lived experience, Darwinism's appreciation of the family is even more dehumanizing than modernity's view of marriage as simply a contractual arrangement.

Part of the reason for this flattening of the human horizon is Darwinism's systematic identification of the good with the flourishing of the species rather than with the self-conscious individual. There is then something fundamentally incoherent about the effort to defend the intrinsic goodness of morality on the basis of Darwinism. This incoherence, however, explains a number of oddities about the Darwinian defense of morality. The most obvious of these is its creative effort to present Darwin as a teacher of "evolution." As surprising as it sounds, Darwin never uses this term in The Origin of Species. Rather, he speaks of "descent with modification." The difference between these terms is not merely semantic. Darwin realized that evolution is a teleological term. To say that something evolved is to say that it has evolved toward something. Evolution implies the kind of purposeful change by which something unfolds according to a prearranged plan–precisely the understanding of evolution that the Roman Catholic Church claims is not necessarily inimical to Christianity. While often popularly misunderstood, what the Catholic Church consistently has opposed, from Pius XII's nuanced 1950 encyclical Humani Generis to John Paul II's recent statements, is not the idea of evolution per se but, rather, those materialist theories that reduce psychic humanity to biological animality.

Darwin, however, eschews such teleological thinking–going so far as to note in his manuscript not to use "hierarchical" terms such as higher and lower. For him, nature is intrinsically mechanistic. Change results from "natural selection," the process by which species adapt to environmental changes by weeding out variations that jeopardize their survival. Far from acting towards an end, nature responds to external forces of chance and necessity. It is not difficult to see why Darwinians such as Arnhart try to gloss over the harshness of this teaching. Drawing attention to the fact that nature is a blind and continuous process would effectively undermine their political defense of the intrinsic goodness of morality.

Darwinism's teaching on perpetual modification points to another problem with the idea of Darwinian natural law. For Darwin, the process of modification is, in principle, continuous. Contrary to what they may wish to believe, human beings are not the end of the evolutionary process. The Darwinian defense of natural morality, therefore, is not to be taken too literally. Lacking the fixity of any genuine end, the goods supported by natural law are useful only over long periods of time. Like nature itself, they are transitionally good. This explains why Arnhart places so much emphasis on biology, since it offers the only real source of "temporary fixity" in the world.

Natural Law and the Humanization of Biology

What is most striking about the Darwinian defense of morality is that it argues for one of the positions that natural law traditionally has argued against. Natural law historically has opposed any simplistic identification of the natural with the biological. Contrary to Darwinism's identification of the natural with the instinctual, natural law associates the natural with the reasonable. It seeks to humanize and transcend the realm of biology by incorporating it into the realm of reason–to view the low in light of the high, not vice versa. Whereas materialist Darwinians see human nature culminating in the biological instinct to perpetuate the species, Aquinas thinks that man's natural inclination directs him to seek the truth about God and to live in society. Rather than insisting that man be completely at home in the biological world, natural law realizes that his natural desire for transcendence ensures that he can only be ambiguously at home in the world. Psychically different from other creatures, the rational creature (not merely the calculating, species-preserving animal) somehow embodies all of the aspirations of the evolved biological world.

This natural desire to know does not negate the desire to perpetuate the species but, in fact, can explain why such perpetuation is desirable. Part of the attraction of natural law thinking, therefore, lies in its ability to show that human beings are not slaves to their instincts but, rather, that they possess the psychic freedom to make sense of these instincts. Over and against Darwinism's biological determinism, natural law theory is grounded in the all-too-human experience of wrestling with matters of conscience–of trying to do what one ought to do and not merely what one instinctively wants to do. Rejecting the reality of such an inner life, Darwinian-based defenses of morality are necessarily self-defeating. They replace relativism's belief that nothing can legitimately make a claim on the human soul with materialism's belief that human beings are biologically incapable of caring about their souls.

Near the end of his essay in First Things, Arnhart celebrates the remarkable recent advances of science in the areas of neurobiology and genetics. In light of these advances, Arnhart warns that "if conservatism is to remain intellectually vital, [it] will need to show that [its] position is compatible with this new science of human nature." But what does Arnhart think Darwinism has to say to these new sciences? If there really are no natural limits on human beings, if nature really is in a constant slow state of flux, how can a Darwinian, even a morally serious Darwinian, oppose something such as the "new science" of human cloning? A self-conscious Darwinian such as E. O. Wilson realizes that cloning is simply the next stage of human "modification." Faithful to the spirit of his Darwinism, Wilson looks forward to the day when cloning or "volitional evolution" will allow scientists to alter "not just the anatomy and intelligence of the species but also the emotions and creative drive that compose the very core of human nature." Less consistent Darwinians such as Arnhart choose to remain blissfully unaware of this fact. Consequently, they fail to recognize that what they offer is not so much up-to-date moral guidance as the ultimate moral justification for the "brave new world."


Marc D. Guerra teaches theology at Assumption College in Worcester, Massachusetts, and is a contributing editor to Religion & Liberty.

TOPICS: Constitution/Conservatism; Culture/Society; Editorial; Philosophy
KEYWORDS: aristotle; biologicalethics; biology; charlesdarwin; charlesmurray; conservatism; crevolist; darwin; edwardowilson; evolution; francisfukuyama; humannature; jamesqwilson; larryarnhart; marcdguerra; morality; naturallaw; naturalright; psychology; sociobiology; thomasaquinas
To: Iris7
No. Only finite variance is required. For example one could take a sum of Gaussian (mean zero, variance 1) steps or discrete steps of size: (-Sqrt(3), 0, Sqrt(3)) with probabilities (1/6, 2/3, 1/6) respectively and the results would be indistinguishable after a few steps. It isn't a continuous vs discrete thing.
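A quick simulation illustrates the point; this is a hypothetical sketch (the two step distributions are the ones described above, the code and parameters are mine):

```python
import math
import random
import statistics

def gaussian_walk(n, rng):
    # Sum of n standard normal steps (mean 0, variance 1).
    return sum(rng.gauss(0.0, 1.0) for _ in range(n))

def discrete_walk(n, rng):
    # Steps of -sqrt(3), 0, +sqrt(3) with probabilities 1/6, 2/3, 1/6:
    # mean 0 and variance (1/6)*3 + (2/3)*0 + (1/6)*3 = 1, same as above.
    s3 = math.sqrt(3.0)
    return sum(rng.choices([-s3, 0.0, s3], weights=[1, 4, 1])[0]
               for _ in range(n))

rng = random.Random(42)
n_steps, trials = 50, 2000
g = [gaussian_walk(n_steps, rng) for _ in range(trials)]
d = [discrete_walk(n_steps, rng) for _ in range(trials)]
# By the central limit theorem, both walks end up with mean near 0 and
# standard deviation near sqrt(50) ~= 7.07 after 50 steps.
```

After a few dozen steps the two empirical distributions are statistically indistinguishable, as the finite-variance argument predicts.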
161 posted on 12/29/2003 8:44:56 PM PST by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)

To: tortoise
In short, there are many processes that we must treat as random even if we know they are deterministic in nature.

This is surely true; often it's more useful to treat things as random even if we know the deterministic version. On the other hand, QM doesn't seem to be this way at all. I don't think the analogy of QM and strong PRNGs is all that good. (I have published about both.)

The problem with a cryptographic PRNG (such as DES or AES or Skipjack) is that the "True State" can be known by knowing the key. In QM, the "key" would also be subject to the QM laws. This has always been the problem for "hidden variable" (or cryptographic-like) theories.

162 posted on 12/29/2003 8:59:12 PM PST by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)

To: balrog666
This "One Path" fallacy seems to be a variant of Part 6 in Patrick Henry's fallacy discussion. It is identical to claiming that a random mechanism (shuffling, for example) could not produce an observed bridge hand; after all, the odds against that hand are 635,013,559,599 to 1. The problem is that regardless of the probabilities involved, all paths lead somewhere (vide Alice in Wonderland). The mechanism of this fallacy is the injection of teleology into the system; the fallaxter, either through ignorance or deceit, introduces the idea that there is a "target" that the path-generation mechanism is attempting to achieve. Of course, evolutionary theory states that there is no "target state" but merely survival. Ayn Rand's "Stolen Concept" is a similar way of looking at the matter.
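The bridge-hand figure checks out; a two-line sketch:

```python
import math

# Number of distinct 13-card hands from a 52-card deck: C(52, 13).
hands = math.comb(52, 13)     # 635,013,559,600 possible hands
# Odds against being dealt one specific hand: (hands - 1) to 1.
odds_against = hands - 1      # 635,013,559,599 to 1
```

Yet every shuffle deals *some* hand with certainty, which is the whole point of the fallacy.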
163 posted on 12/29/2003 9:18:01 PM PST by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)

To: Iris7; balrog666; tortoise; Doctor Stochastic; PatrickHenry; Just mythoughts; whattajoke; donh; ...
Forgive me; I am not academically trained beyond engineering mathematics. My concepts probably won't fit into standard methods.

Shouldn't be a problem. Math is math.

"Remember that "random" means utterly discontinuous functions of time."

That's *one* kind of randomness, sure, but hardly an all-encompassing definition. But since the distinction is not key to your argument, I won't quibble.

A random process produces a time-series output that is a function of time. The output is a vector (x1, x2, x3, ..., x(n-1), xn), where the numbers are subscripts. A new vector is generated every clock pulse, or over some other interval. Each vector can take any possible value in the n-space involved, with equal probability. F(t) at every interval can take any value in the n-dimensional domain and range, and is therefore discontinuous at every point, except for the chance of a seeming correlation caused by any point in the n-space being a possible output value.

Accepting that as a given, your mathematical analysis is good, but the problem is that this is *not* how evolution actually proceeds. You're modeling the wrong problem, as it were.

This is why correlation coefficients have a probabilistic output. Now let us talk about evolution as a random process.


Say the Earth has had life for 3,000,000,000 years, or use your own number. Figure the Earth's biomass, say surface area is 200,000,000 square miles, and say biomass is five feet deep over entire sphere. Say mass is 64 pounds per cubic foot. Say 2.8x10 to the 16th pounds.

Only if you forget to multiply by 64 lb/ft^3... 2.8x10^16 is the number of cubic feet, not the mass. Not that a factor of 64 makes a big difference to the combinatorial argument, though, so no need to rework the rest of the numbers.

Going to grams, say 1.3 x 10 to the 19th grams. Say a cell masses 1x10 to the -5th grams.

*Way* too massive. E. coli cells, for example, mass around 1x10^-12 g, and an average human body cell (like a liver cell) masses around 8x10^-9 g. This causes you to undercalculate the number of cells by a factor of about 10,000. Again, though, not a major problem for your estimate, since cells would not be packed shoulder-to-shoulder anyway; there would be a lot of "filler" in the biosphere.

Therefore about 1x10 to the 24th cells.

Close enough for the kind of "feasibility" estimate you're doing.

As a working hypothesis say that the cell is the evolutionary unit, and that an evolutionary random process occurs once a second in each cell, and has done so for 3 billion years, then over that 1x10 to the 17th seconds roughly 1x10 to the 41st evolutionary events have occurred.

Here it starts to get iffy. One thing you're not taking into account is that biochemical processes can occur at mind-boggling rates. Chemical reactions in general often occur in times measured in femtoseconds (1x10^-15 s). Even something as complex as DNA replication, which is an extremely intricate process involving the precision coordination of dozens of cooperating molecular "machines" (enzymes, etc.) and must wait until a matching nucleotide comes within "reach", etc., can typically take place at speeds of over a thousand nucleotide additions per second.

And needless to say "random assembly" could take place much faster than the "exact reproduction" requirements of DNA replication.

The other side of the same coin is the vast number of molecular components in even a single cell. Using DNA replication again as an example, there are roughly 25,000 "cloning machines" active at any given moment, each one zipping along at the 1,000 additions/second rate mentioned in the prior paragraph. And when it comes to simpler cellular processes, the number of simultaneously active chemical "machines" or components reaches millions or billions, depending on the type (keep in mind that in order to divide, a human cell must have on hand over a billion available nucleotides with which to construct a new billion-plus-basepair set of chromosomes).

So the amount and speed with which even a single cell can "shuffle" new configurations of molecules is boggling. A cell can do a *lot* of "molecular work" in one second.

Nonetheless, not every molecular operation is going to be such that it can produce novel content (much of the activity will be involved in just keeping the cell running), and selection events can't come into play every time there's a molecular rearrangement, so for now let's go with your "one per second" net rate of selection events. But keep in mind that chemical processes can produce staggering amounts of varying products under the right conditions (something to remember for "biotic soup" analyses).
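The back-of-envelope arithmetic above can be replayed in a few lines, with the dropped lb/ft^3 factor restored; every input is one of the round numbers assumed in the thread, not a measured value:

```python
# Replay of the biomass estimate, with the 64 lb/ft^3 factor restored.
area_ft2 = 200_000_000 * 5280**2      # 2e8 square miles, in square feet
volume_ft3 = area_ft2 * 5             # biomass assumed 5 ft deep
mass_lb = volume_ft3 * 64             # assumed 64 lb per cubic foot
mass_g = mass_lb * 453.6              # grams per pound
cells = mass_g / 1e-5                 # assumed 1e-5 g per cell
seconds = 3e9 * 365.25 * 24 * 3600    # 3 billion years in seconds
events = cells * seconds              # one "event" per cell per second
# volume_ft3 ~= 2.8e16 cubic feet, matching the post; with the factor
# of 64 restored, events ~= 8e42 rather than the quoted 1e41.
```

Either way the exponent is so large that the correction, as noted, does not change the shape of the argument.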

The next step is to see how many evolutionary steps are necessary to explain the complexity we see.

The main problem is that the case you describe is not actually made up of "evolutionary steps". It's simply random trials from scratch each time until one "hits" a fully formed outcome. That's not evolution, that's a "brute force" random search.

I realize you're trying to calculate how "hard" it would be to develop a given structure if evolution had no choice *except* to "brute force" it by sheer luck all in one fell swoop, but that doesn't turn a "lucky shot" of the type you are analyzing here into a count of "evolutionary steps", because evolutionary steps are *not* the process which you are analyzing.

The next idea you probably will not like, and that is irreducible complexity.

As an "idea" I like it just fine, and so do evolutionary scientists. The problem is that Behe (and the creationists who follow him) have created a "straw man" version of "IC" which is quite simply incorrect -- but appears to give the conclusion they want.

The original notion of "IC" goes back to Darwin himself. He wrote:

"If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down."
-- Charles Darwin, "On the Origin of Species", 1859
That's "Irreducible Complexity" in a nutshell. It's not as if Behe has pointed out anything that biologists (or Darwin) didn't already realize.

But let's examine Darwin's description of "IC" in a bit more detail (emphasis mine):

No doubt many organs exist of which we do not know the transitional grades, more especially if we look to much-isolated species, round which, according to my theory, there has been much extinction. Or again, if we look to an organ common to all the members of a large class, for in this latter case the organ must have been first formed at an extremely remote period, since which all the many members of the class have been developed; and in order to discover the early transitional grades through which the organ has passed, we should have to look to very ancient ancestral forms, long since become extinct.

We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. Numerous cases could be given amongst the lower animals of the same organ performing at the same time wholly distinct functions; thus the alimentary canal respires, digests, and excretes in the larva of the dragon-fly and in the fish Cobites. In the Hydra, the animal may be turned inside out, and the exterior surface will then digest and the stomach respire. In such cases natural selection might easily specialise, if any advantage were thus gained, a part or organ, which had performed two functions, for one function alone, and thus wholly change its nature by insensible steps. Two distinct organs sometimes perform simultaneously the same function in the same individual; to give one instance, there are fish with gills or branchiae that breathe the air dissolved in the water, at the same time that they breathe free air in their swimbladders, this latter organ having a ductus pneumaticus for its supply, and being divided by highly vascular partitions. In these cases, one of the two organs might with ease be modified and perfected so as to perform all the work by itself, being aided during the process of modification by the other organ; and then this other organ might be modified for some other and quite distinct purpose, or be quite obliterated.

The illustration of the swimbladder in fishes is a good one, because it shows us clearly the highly important fact that an organ originally constructed for one purpose, namely flotation, may be converted into one for a wholly different purpose, namely respiration. The swimbladder has, also, been worked in as an accessory to the auditory organs of certain fish, or, for I do not know which view is now generally held, a part of the auditory apparatus has been worked in as a complement to the swimbladder. All physiologists admit that the swimbladder is homologous, or 'ideally similar,' in position and structure with the lungs of the higher vertebrate animals: hence there seems to me to be no great difficulty in believing that natural selection has actually converted a swimbladder into a lung, or organ used exclusively for respiration.

[Example snipped]

In considering transitions of organs, it is so important to bear in mind the probability of conversion from one function to another, that I will give one more instance. [Long detail of example snipped] If all pedunculated cirripedes had become extinct, and they have already suffered far more extinction than have sessile cirripedes, who would ever have imagined that the branchiae in this latter family had originally existed as organs for preventing the ova from being washed out of the sack?

-- Charles Darwin, "On the Origin of Species", 1859

Darwin makes two critical points here:

1. A modern organ need not have evolved into its present form and function from a precursor which had always performed the same function. Evolution is quite capable of evolving a structure to perform one function, and then turning it to some other "purpose".

2. Organs/structures can reach their present form through a *loss* of function or parts, not just through *addition* of function or parts.

Despite the fact that these observations were laid out in 1859, Behe's version of "Irreducible Complexity" pretends they are not factors, and defines "IC" as something which could not have arisen through stepwise *ADDITIONS* (only) while performing the same function *THROUGHOUT ITS EXISTENCE*.

It's hard to tell whether Behe does this through ignorance or willful dishonesty, but the fact remains that *his* definition and analysis of "IC" is too restrictive. He places too many "rules" on how he will "allow" evolution to reach his examples of "Behe-style IC" structures, while evolution itself *IS NOT RESTRICTED TO THOSE RULES* when it operates. Thus Behe's conclusion that "Behe-style evolution" can not reach "Behe-style IC" hardly tells us anything about whether *real-world* evolution could or could not have produced them.

For specific examples, Behe's example of the "Behe-style IC" flagellum is flawed because flagella are composed of components that bacteria use FOR OTHER PURPOSES and were evolved for those purposes then co-opted (1, 2), and Behe's example of the "Behe-style IC" blood-clotting process is flawed because the biochemistry of blood-clotting is easily reached by adding several steps on top of a more primitive biochemical sequence, *and then REMOVING earlier portions which had become redundant* (1, 2).

Even Behe's trivial mousetrap example turns out to not actually be "IC".

The usual qualitative formulation is: "An irreducibly complex system cannot be produced directly by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional..."

Note the key error: By saying that it "breaks" if any part is "missing" (i.e. taken away), it is only saying that evolution could not have reached that endpoint by successively only ADDING parts. True enough, but Behe misses the fact that you can also reach the same state by, say, adding 5 parts one at a time, and then taking away 2 which have become redundant. Let's say that part "A" does the job, but not well. But starting with just "A" serves the need. Then add "B", which improves the function of "A". Add "C" which helps A+B do their job, and so on until you have ABCDE, which does the job very well. Now, however, it may turn out that CDE alone does just fine (conceivably, even better than ABCDE does with A+B getting in the way of CDE's operation). So A and B fade away, leaving CDE. Note that CDE was built in "one change at a time" fashion, with each new change improving the operation. HOWEVER, by Behe's definition CDE is "Irreducibly Complex" and "could not have evolved (been built by single steps)" because removing C or D or E from CDE will "break" it. Note that Behe's conclusion is wrong. His logic is faulty.

The other error in Behe's definition lies in this part: "...any precursor to an irreducibly complex system, that is missing a part is by definition nonfunctional". The problem here is that it may be "nonfunctional" for its *current* function, but perfectly functional for some *other* function helpful for survival (and therefore selected by evolution). Behe implicitly claims that if it's not useful for its *current* function, it's useless for *any* function. The flaw in this should be obvious.
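The add-then-discard route described above can be turned into a toy model; the parts A through E and the fitness rule are invented purely for illustration:

```python
# Toy "scaffolding" model: every intermediate state is functional, yet
# the final state {C, D, E} "breaks" if any single part is removed,
# i.e. it is "IC" by Behe's definition despite having been built one
# viable step at a time. The rule below is hypothetical, chosen to
# mimic the ABCDE -> CDE story in the text.
def functional(parts):
    if {"C", "D", "E"} <= parts:   # C+D+E together do the job well
        return True
    return "A" in parts            # otherwise A (however poorly) suffices

path = [{"A"}, {"A", "B"}, {"A", "B", "C"}, {"A", "B", "C", "D"},
        {"A", "B", "C", "D", "E"}, {"B", "C", "D", "E"}, {"C", "D", "E"}]
assert all(functional(step) for step in path)   # every step is viable

final = {"C", "D", "E"}
# Removing any one part from the end state breaks it: "irreducible".
assert all(not functional(final - {p}) for p in final)
```

The end state looks unreachable by single additions only because the scaffold (A and B) has since fallen away.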

"Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have anything to act on."

True as far as it goes, but this is hardly the same as Behe's sleight-of-hand in the first part of his statement, which relies on the false premise that a precursor to a structure is 100% useless for *any* purpose if *taking away* (but not adding) one part from the current purpose makes it unsuitable for the current purpose. Two gaping holes in that one...

Behe (an anathematized name)

For reasons I've outlined above.

talks of the bacterial flagellum, which contains an acid-powered rotary engine, a stator, O-rings, bushings, and a drive shaft. The machinery of this motor requires approximately fifty proteins.

Except that it doesn't. As many biochemists have pointed out, other organisms have functional flagella (even *as* flagella) with fewer proteins (and/or different proteins). That flagellum isn't even "IC" by Behe's own definition, since you *can* remove proteins and have it still work as a flagellum.

Say each protein has a weight of 50,000 Daltons, made up of, say, 400 amino acids.

Actually, the median protein length for bacteria is considerably shorter than that:

And for all you know, the flagellar proteins might be unusually short, vastly changing the probability results. But let's stick with a more realistic size of about 200 instead of your 400. At first this may sound as if it would merely make the odds of randomly producing one of the proteins twice as large, but in fact it makes the event about 1.6x10^260 times (that's a 16 followed by 259 zeros) as likely to occur by chance... Your earlier assumptions were not critical to the outcome, but the size of the proteins *is*.

The protein is not a chain but a complex three-dimensional structure, but we will simplify by treating it as a linear chain.

This doesn't "simplify" it at all, since its "complex three-dimensional structure" is uniquely determined *by* its linear ordering. It's the same analysis.

The odds of the first protein being produced from amino acids (say 20 amino acids are needed for all of these simplified proteins) is (1/20)^400, about 1x10 to the -540.

Get your calculator checked, it's actually 3.87x10^-521. You've understated the odds by a factor of nearly 40,000,000,000,000,000,000 (about 3.9x10^19).
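The corrected figures are quick to verify in log space (a sanity check of the arithmetic above; base-10 logs are used because the numbers themselves won't fit in a float):

```python
from math import log10

# (1/20)**400 in log10 terms: -400*log10(20) = -520.41..., i.e. about
# 3.87e-521 -- not 1e-540 as the post being rebutted claims
exp400 = -400 * log10(20)
assert round(exp400, 2) == -520.41
mantissa = 10 ** (exp400 + 521)        # shift back into a representable range
assert round(mantissa, 2) == 3.87

# halving the chain length to 200 residues changes the odds by 20**200,
# a factor of about 1.6e260 -- the assumed length dominates everything else
ratio = 400 * log10(20) - 200 * log10(20)
assert round(10 ** (ratio - 260), 1) == 1.6
```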

This immense inverse is larger than the odds of one protein being assembled through random chance since we have picked numbers almost surely so as to make our arithmetic conservative.

No you haven't, since you *overstated* the likely average length of the proteins.

Besides, we are talking about fifty separate proteins before the flagellum works.

Not true, see above.

The odds have this event arising through random chance

Remember this assumption upon which all your calculations are based, we'll come back to it later.

of 1x10 to the -540th power

-521st power, actually, if you get your calculator fixed, or more likely 6x10^-261 using a more reasonable protein length.

raised to the 50th power, or about 1x10 to the -(540 raised to the 50th power) or about 1x10 raised to the power of 4x10 to the 136th power. (1x10)^(4^136) What is that, about the inverse of a decillion raised to the decillionth power?

No, that's in fact dead wrong, since (10^A)^B is sure as hell *NOT* equal to 10^(A^B) as you assert. It's actually 10^(AxB), which for large numbers gives a vastly smaller result.

1x10^-521 to the 50th power is actually 1x10^-26050. Still enormously slim odds to be sure, but practically a sure thing when compared against yours -- not to mention vastly more correct.
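The exponent rule in question is easy to confirm with exact integer arithmetic (Python bignums make even the 26,051-digit value tractable):

```python
# the rule is (10**a)**b == 10**(a*b), not 10**(a**b) -- check it exactly
a, b = 521, 50
assert (10**a)**b == 10**(a * b)       # exact bignum comparison
assert a * b == 26050                  # hence (1e-521)**50 = 1e-26050
```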

This single protein chain, this single tiny organelle, has to be produced in one completely in one timing cycle, and in one cell,

No it doesn't, and here lies the crux of the irrelevancy of your calculation.

so that it can make a bacterium move, and so be selected through natural selection. If one allows more timing cycles

I'd like to point out that you never did actually make use of your biomass/cellcount/cycle-total calculation...

then the odds of the event occurring in the 3x10 to the 31st timing cycles become even lower.

First, "huh"? If you allow more timing cycles, then the odds of the event improve. How do you figure they "become even lower"?

Second, "10 to the 31st"? What happened to your original "10 to the 41st" that you started out with (but never used in a calculation)?

Third, why have all of your errors been in the same direction (erroneously improving your point)? Coincidence?

So the odds against this organelle forming are are very much less than the inverse of 1x10 to the 100th power.

Correct even after your errors have been corrected. But again, one must realize that these are only "the odds against this organelle forming" *totally at random all at once* by sheer chance of jostling amino acids in a bucket. It is *not* the odds against the organelle forming by evolutionary processes.

Physicists call anything less likely than 1x10 to the 50th power "impossible".

Funny you should mention that... First, "physicists" don't say that, since events less likely than that easily occur. For example, shuffle a deck of cards, then spread the deck face-up on a tabletop. Congratulations, the odds of that particular arrangement of cards occurring as a result of a shuffle is less than 1.24x10^-68, which is far less than 1x10^-50 -- it's a miracle!
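The card-deck figure checks out (a quick verification of the 1.24x10^-68 claim, again in log space):

```python
from math import factorial, log10

deck_orderings = factorial(52)            # number of distinct 52-card orderings
log_p = -log10(deck_orderings)            # log10 of the chance of one given ordering
assert round(log_p, 2) == -67.91          # i.e. about 1.24e-68, well below 1e-50
mantissa = 10 ** (log_p + 68)
assert round(mantissa, 2) == 1.24
```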

Second, even rare events chosen a priori occur easily enough when the number of trials is large enough. For example, an atomic state which occurs in fewer than 1 in 1x10^50 atoms is a near certainty to occur on the Earth alone, which contains far more than 1x10^50 atoms.

So the "law" you mention is incorrect as stated.

But what's really funny about you mentioning it is that it's an informal rule of thumb (for *human* watchable events, not meant to be applied universally), originated by Émile Borel in a couple of books he wrote in 1943 and 1950 to popularize science. It's sometimes affectionately known as "Borel's Law". And ironically, Borel himself wrote on the topic of biological probability calculations:

In conclusion, I feel it is necessary to say a few words regarding a question that does not really come within the scope of this book, but that certain readers might nevertheless reproach me for having entirely neglected. I mean the problem of the appearance of life on our planet (and eventually on other planets in the universe) and the probability that this appearance may have been due to chance. If this problem seems to me to lie outside our subject, this is because the probability in question is too complex for us to be able to calculate its order of magnitude. It is on this point that I wish to make several explanatory comments.

When we calculated the probability of reproducing by mere chance a work of literature, in one or more volumes, we certainly observed that, if this work was printed, it must have emanated from a human brain. Now the complexity of that brain must therefore have been even richer than the particular work to which it gave birth. Is it not possible to infer that the probability that this brain may have been produced by the blind forces of chance is even slighter than the probability of the typewriting miracle?

It is obviously the same as if we asked ourselves whether we could know if it was possible actually to create a human being by combining at random a certain number of simple bodies. But this is not the way that the problem of the origin of life presents itself: it is generally held that living beings are the result of a slow process of evolution, beginning with elementary organisms, and that this process of evolution involves certain properties of living matter that prevent us from asserting that the process was accomplished in accordance with the laws of chance.

Moreover, certain of these properties of living matter also belong to inanimate matter, when it takes certain forms, such as that of crystals. It does not seem possible to apply the laws of probability calculus to the phenomenon of the formation of a crystal in a more or less supersaturated solution. At least, it would not be possible to treat this as a problem of probability without taking account of certain properties of matter, properties that facilitate the formation of crystals and that we are certainly obliged to verify. We ought, it seems to me, to consider it likely that the formation of elementary living organisms, and the evolution of those organisms, are also governed by elementary properties of matter that we do not understand perfectly but whose existence we ought nevertheless admit.

Similar observations could be made regarding possible attempts to apply the probability calculus to cosmogonical problems. In this field, too, it does not seem that the conclusions we have could really be of great assistance.

-- Émile Borel, "Probability and Certainty", pp. 124-126

So there.

Now you can say that there are other ways to get such things done, that this particular protein sequence, or this particular organelle, could be done differently. Well, if this event could have happened in 1 followed by fifty zeros different ways, then the odds against it happening are still grater than 1 followed by fifty zeros to one.

Ah, but what if it could demonstrably have happened in more than "1 followed by fifty zeros different ways"?

There's a thing called "protein functional redundancy". This means that because many amino acids in a protein can often be replaced with certain other (or in some cases, *any* other) amino acid without changing the function of the protein *at all* (because the change is in a portion of the protein which is merely a "placeholder" and does no "work"), there are actually many "alternate" forms for each protein used in organisms which would literally have worked just as well. How many? Far more than you'd possibly imagine.

For example, there are fully 3.8x10^93 functionally equivalent variations of the cytochrome C protein (an essential component of the mitochondrial respiratory chain). For perspective, that's more than ten trillion times the number of atoms in the observable universe. See: [Yockey, H. P. (1992) Information Theory and Molecular Biology. New York, Cambridge University Press].

Presuming for the sake of estimation that your 50 "essential" flagellar proteins have an equal number of equivalents, then each protein is actually 1x10^93 times more likely to arise than your original estimates would indicate, and the entire 50-protein set would be 1x10^4650 times as likely...
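Those magnitudes are quick to check (the 50-protein extrapolation is, as stated above, only a presumption for the sake of estimation):

```python
from math import log10

cyt_c_equivalents = 3.8e93      # Yockey's count of workable cytochrome c variants

# order of magnitude of the per-protein redundancy factor
per_protein = log10(cyt_c_equivalents)
assert 93 < per_protein < 94

# if each of the 50 flagellar proteins had ~1e93 interchangeable forms (an
# assumption, as noted above), the whole set gains a factor of 10**(93*50)
assert 93 * 50 == 4650
```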

But there's more... Surely those "50 proteins" aren't the *only* possible set of proteins which would have given rise to a working flagellum, right? How many other workable flagellum "designs" are there? Put another way, how many conceivable ways could God have given a bacterium any sort of functional "paddle"? Zillions, right? (And don't forget, there are *already* organisms with endless varieties of different flagella other than just Behe's one example case.) So the question becomes not just, "how likely is *this* one *precise* flagellar setup", but instead it's "how likely is *any* kind of remotely functional motor structure to arise out of the *countless* variations on a theme possible under every conceivable possibility which still would have worked in some fashion?"

Hmm, haven't calculated *those* odds yet, have you?

Finally, your entire exercise misses the boat entirely. Yes, as you've shown, the odds of a particular flagellar structure arising "from the ground up" in one "step" entirely by random chance of jostling amino acids is practically nil and probably never happened.

But so what? No one's suggesting that it *did* happen that way.

Instead, biologists believe that the flagellum arose over a *long* period of accumulated improvements and increased complexity, through evolutionary processes (which weed out the failures while multiplying and accumulating successes), eventually producing that particular modern flagellum by a final combination of *pre-existing* proteins and components available in the cell which had been mostly developed for other purposes.

For example, in the case of Behe's favorite flagella, the common bacterial type III export apparatus (used for transporting specific proteins out of the cytoplasm) is recognizably the core of the "motor" of the flagellum. So the proteins which were developed (through evolution) to provide *that* functionality did *NOT* have to be randomly assembled "from scratch" in order to "luckily" produce the flagellum out of pure raw amino acids, as your calculation presumed. Instead, those proteins were *already present*. How much does that cut down the set of 50, eh? Likewise for other proteins and structures which were already present in the cell but later co-opted for flagellar use.

Your entire "all from scratch all at once" calculation is quite simply irrelevant, since it totally fails to take into account things which we *know* were already present in the cell (for *other* purposes -- something Behe keeps forgetting or trying to hide), and therefore did not have to be "invented" randomly on the fly as your calculation requires.

For a far more realistic look at the evolutionary "invention" of the flagellum, see Evolution in (Brownian) space: a model for the origin of the bacterial flagellum, which I linked earlier in this post. From the abstract:

The model consists of six major stages: export apparatus, secretion system, adhesion system, pilus, undirected motility, and taxis-enabled motility. The selectability of each stage is documented using analogies with present-day systems. Conclusions include: (1) There is a strong possibility, previously unrecognized, of further homologies between the type III export apparatus and F1F0-ATP synthetase. (2) Much of the flagellum’s complexity evolved after crude motility was in place, via internal gene duplications and subfunctionalization. (3) Only one major system-level change of function, and four minor shifts of function, need be invoked to explain the origin of the flagellum; this involves five subsystem-level cooption events. (4) The transition between each stage is bridgeable by the evolution of a single new binding site, coupling two pre-existing subsystems, followed by coevolutionary optimization of components. Therefore, like the eye contemplated by Darwin, careful analysis shows that there are no major obstacles to gradual evolution of the flagellum.
Now *that's* science. Behe's stuff is just hand-waving and ivory-tower blowhardedness.

Your refutation?

See above.

164 posted on 12/29/2003 9:41:40 PM PST by Ichneumon
[ Post Reply | Private Reply | To 109 | View Replies]

To: Doctor Stochastic
The problem is that with a cryptographic PRNG (such as DES or AES or Skipjack) is that the "True State" can be known by knowing the key.

Yup, that is the problem with QM. We have no means currently with which to look inside the black box. For a strong PRNG, what can you do if the key is lost? As such, we can only build the kinds of models that we are all familiar with. I am under no illusions that we will ever be able to create a deterministic model of this process for the foreseeable future. My personal suspicion is that QM falls under the auspices of Fisher information and the general predictive limits of our universe i.e. it will never be decipherable as a deterministic process.

I don't think the analogy of QM and strong PRNGs is all that good. (I have published about both.)

Heh, well I've published on neither. That said, I knew the analogy was loose when I posted it. I am familiar with both and the computational aspects of both generally, but I don't claim any particular expertise. Computational chemistry and molecular modeling was my thing many years ago, but I've long since lost interest in it and generally become rusty.

The analogy was really about the information theoretic perspective of deciphering the internal state of a PRNG from the output with no prior knowledge of the internal state (e.g. the "key"). With QM, we approach it with the perspective of someone without the key, much like a third party trying to cryptanalyze something encoded with a PRNG. Theoretically, there are grave limits to the kinds of processes that can be deciphered without prior knowledge of the internal state, even very simple processes. Thus it is with QM, IMHO.
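A toy sketch of that information-theoretic point (my own construction, not from any real cipher: a hash-based generator whose outputs betray nothing obvious about the seed, with a deliberately tiny 2^16 seed space so the exhaustive search actually terminates):

```python
import hashlib

def toy_prng(seed: int, n: int):
    """Hash-based toy generator: one output byte per counter value."""
    return [hashlib.sha256(f"{seed}:{i}".encode()).digest()[0] for i in range(n)]

observed = toy_prng(12345, 8)          # an observer sees only these bytes

# with no prior knowledge of the internal state, recovery is brute force --
# feasible here only because the seed space is deliberately tiny
recovered = next(s for s in range(2**16) if toy_prng(s, 8) == observed)
assert recovered == 12345
```

With a realistic key space (2^128 and up) the same search is hopeless, which is the analogy being drawn to deciphering QM "from the outside".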

165 posted on 12/29/2003 9:49:27 PM PST by tortoise (All these moments lost in time, like tears in the rain.)
[ Post Reply | Private Reply | To 162 | View Replies]

To: Iris7
Darwinism as it is usually expressed postulates change from random variation instead of intelligent design, that is, random inputs followed by natural selection. The point I am making is that random inputs do not seem to provide adequate variation to model real world outputs.

Here's a quick "sanity check" if it helps. The human genome contains on the order of a billion base pairs of genetic information.

Life on Earth has been around for about 3 billion years. So the rate of "genetic accumulation" in the human ancestral line has been about one successful basepair addition every three years in order to bring about humans as we know them.

In other words, out of the entire breeding population(s) which through the eons gave rise to humans, only a single "keeper" point mutation had to happen every three years and be spread back through the population (the spreading itself could take as long as needed).

And keep in mind that throughout a given population, *countless* mutations are occurring. It has been experimentally determined that every individual human carries about 4 unique mutations which originated with him/her. So one "keeper" mutation out of 4*population_size/reproductiverate_per_year*3 doesn't seem an unreasonable amount of "random" innovation.

In fact, the required rate is a lot less than that (i.e., more years per "keeper"), since much of our genetic "length" is due to gene duplication and retrovirus insertion and such, none of which need to be added as individual basepair additions; they occur as large sequence additions "at a whack".

166 posted on 12/29/2003 9:57:01 PM PST by Ichneumon
[ Post Reply | Private Reply | To 141 | View Replies]

To: Ichneumon; Iris7
So one "keeper" mutation out of 4*population_size/reproductiverate_per_year*3 doesn't seem an unreasonable amount of "random" innovation.

I just realized that the above expression might be a bit cryptic, so I'll give a concrete example.

About 74,000,000 new humans are born each year. If each one contains 4 new unique mutations (most harmless, by the way), then there will be 296,000,000 new mutations added to the gene pool each year. Most will wash out, but in order for today's human population to keep up the "evolutionary pace" which brought us to where we are today (starting from "nothing" 3 billion years ago), one new "keeper" mutation needs to be retained in the population every three years (i.e., out of the 3*296M = 888,000,000 new mutations added to the gene pool over three years).

One "keeper" out of nearly a billion mutations doesn't seem like too much to ask for. And a billion mutations in three years is certainly a huge source for plenty of "random inputs" and "adequate variation", especially since only one out of those billion needs to be a "keeper" in order for evolution to keep up its proven pace.
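The arithmetic in that example can be checked directly (using the figures quoted above; the birth and mutation counts are the post's own round numbers):

```python
births_per_year = 74_000_000
new_mutations_each = 4                 # unique mutations per newborn, as above

per_year = births_per_year * new_mutations_each
assert per_year == 296_000_000         # new mutations entering the pool yearly
assert 3 * per_year == 888_000_000     # ...over a three-year window

# required pace: ~1e9 base pairs accumulated over ~3e9 years of life on Earth
years_per_keeper = 3e9 / 1e9
assert years_per_keeper == 3.0         # one "keeper" every three years
```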

167 posted on 12/29/2003 10:11:20 PM PST by Ichneumon
[ Post Reply | Private Reply | To 166 | View Replies]

To: tortoise; Physicist
Things get even trickier. (I've pinged Physicist to correct whatever errors I make here.) I'm probably leaving out details of the proposed example.

In QM there are (in general) no marginal distributions. As an example, consider the experiment of passing a beam of atoms through a Stern-Gerlach apparatus oriented in the X-direction. Half the atoms will have spin +1 in the X-axis and half have -1 in the X-axis. The spin +1 atoms are then passed through a Y-oriented Stern-Gerlach apparatus and again half get +1 spin in the Y-axis and half get -1. Because having +1 in the X-axis and +1 in the Y-axis simultaneously yields +1 in the Z-axis, one would expect the one-quarter of the atoms remaining to be so aligned. However, passing these through a Z-axis aligned apparatus again gives half +1 and half -1 in that axis. Thus one must conclude that the atoms cannot be described as a population with some given spin states.
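The 50/50 splits described can be reproduced with a two-line Born-rule calculation (a minimal sketch using the standard spin-1/2 eigenstates; no claim to model the full apparatus):

```python
from math import sqrt, isclose

# spin-1/2 eigenstates as 2-component complex vectors (standard textbook states)
up_x = (1/sqrt(2) + 0j, 1/sqrt(2) + 0j)   # spin +1 along X
up_y = (1/sqrt(2) + 0j, 1j/sqrt(2))       # spin +1 along Y
up_z = (1 + 0j, 0j)                       # spin +1 along Z

def prob(a, b):
    """Probability of finding state b in state a: |<a|b>|^2 (Born rule)."""
    amp = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return abs(amp) ** 2

# the beam kept after the X apparatus is in |+x>; the Y apparatus splits it 50/50
assert isclose(prob(up_y, up_x), 0.5)
# the survivors are now in |+y>; the Z apparatus splits 50/50 again --
# no trace of the earlier +x result remains, so no joint spin assignment exists
assert isclose(prob(up_z, up_y), 0.5)
```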

More amusing, if after each pass, one takes the -1 spin atoms and puts them through a spin inverter (yielding +1 spins), then combines the beams; one does get all +1 in the Z-axis at the end. (At least I think that's what happens.)

QM doesn't even allow one to say that there is a "key" in the cryptographic sense.

I'm trying to construct another example but that may have to come later.
168 posted on 12/29/2003 10:14:24 PM PST by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)
[ Post Reply | Private Reply | To 165 | View Replies]

To: tortoise
Another example.

Take two universal Turing machines Ux and Uy which, on being given as inputs the angles X and Y and "hidden variables" or "keys" Z, W, V... (whatever), produce the following results: as a function of X, Ux produces Cos(X)**2 and Uy produces Cos(Y)**2. This isn't too hard to do. (Of course, the names are exchangeable and finite-length descriptions of X and Y are fine.)

Now, with QM, it is possible to arrange that the outputs of Ux and Uy yield Cos(X-Y)**2 in a case where Ux has no knowledge of Y and Uy has no knowledge of X; both have access to all the hidden variables. This can be done if X and Y are the angles of polarizers which are outside of each other's light cones (and thus not causally connected.)

If I have described things correctly, this is a variant of Bell's inequality.
169 posted on 12/29/2003 10:24:20 PM PST by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)
[ Post Reply | Private Reply | To 165 | View Replies]

170 posted on 12/29/2003 11:16:41 PM PST by secretagent
[ Post Reply | Private Reply | To 169 | View Replies]

To: Doctor Stochastic
I will graciously defer to you on this topic. I would have to think too hard to come up with a response either way, and it is a bit outside my normal areas of expertise. If I had more time on my hands, I would love to put some additional effort into it. Maybe some other day.

Have a fine evening!

171 posted on 12/29/2003 11:43:48 PM PST by tortoise (All these moments lost in time, like tears in the rain.)
[ Post Reply | Private Reply | To 169 | View Replies]

To: Ichneumon
[Thunderous applause!]
172 posted on 12/30/2003 3:31:32 AM PST by PatrickHenry (Hic amor, haec patria est.)
[ Post Reply | Private Reply | To 164 | View Replies]

To: Doctor Stochastic
... the fallaxter, either through ignorance or deceit, introduces the idea that ...

Thinking I was about to learn a new word, I cut and pasted that into an online dictionary. It didn't recognize the word, but provided ten alternatives, beginning thusly:

Suggestions for fallaxter:
1. fellator

173 posted on 12/30/2003 3:32:06 AM PST by PatrickHenry (Hic amor, haec patria est.)
[ Post Reply | Private Reply | To 163 | View Replies]

To: VadeRetro; jennyp; Junior; longshadow; RadioAstronomer; Physicist; LogicWings; Doctor Stochastic; ..
You gotta see post 164 by Ichneumon!
174 posted on 12/30/2003 3:56:08 AM PST by PatrickHenry (Hic amor, haec patria est.)
[ Post Reply | Private Reply | To 172 | View Replies]

To: Ichneumon
Congratulations on your best essay yet.
175 posted on 12/30/2003 4:52:28 AM PST by Physicist
[ Post Reply | Private Reply | To 164 | View Replies]

To: PatrickHenry; Ichneumon
You gotta see post 164 by Ichneumon!

Stunning indeed. Possibly the best and clearest short exposition of the central fallacies of Behe's "IC" argument that I've seen, and this was only a part of the post.

Also, Ich's extended quote of Darwin serves to remind one of what a careful and systematic thinker the old fellow was. You read this stuff that he mostly worked out entirely on his own -- having only let a handful of friends in on his ideas, and that fairly late in their development -- and you find that there isn't, subsequently, a single major criticism of his ideas that Darwin had not previously identified and thought through. (He did add a chapter on objections to the theory to later editions of The Origin, but these were all variations on themes he'd already anticipated.)

Usually even very sharp scientists require at least some help and stimulation from colleagues in finding the most crucial potential refutations and tests of their theories. Darwin's ability at systematic self-criticism is truly astounding. (Of course even Darwin wasn't perfect. He missed or glossed over problems with his own work in a couple other instances, e.g. the parallel "roads" of Glen Roy and his theory of pangenesis.)

176 posted on 12/30/2003 5:29:03 AM PST by Stultis
[ Post Reply | Private Reply | To 174 | View Replies]

To: Ichneumon; Iris7
"Say the Earth has had life for 3,000,000,000 years"

The oldest known life on Earth (afaik) is 3.8 billion years old. HOWEVER.

We are (almost) not descended from that life.

An event occurred ~600 million years ago, termed "Snowball Earth", in which the planet froze over--pole to pole. Its albedo became so high that there was 'no way' to unfreeze it. This event killed (probably) 98% of all life.

How did it get unfrozen? Build-up of volcanic gases caused a genuine "greenhouse effect" (unlike the hoax of today) which melted the ice. The greenhouse overshoot was extreme: the average surface temperature was something like 49 C. This had the effect of killing 98% of the remaining cold-adapted species. So we are the descendants of 2% of 2% of the "ancient" life on Earth. In effect, a brand-new genesis occurred as the result of Snowball Earth, and everything you see around you has had "only" 600 million years to evolve.


177 posted on 12/30/2003 8:56:43 AM PST by boris (The deadliest Weapon of Mass Destruction in History is a Leftist With a Word Processor)
[ Post Reply | Private Reply | To 164 | View Replies]

To: PatrickHenry
"1^720 Odds" Memorial Placemarker
178 posted on 12/30/2003 11:20:27 AM PST by longshadow
[ Post Reply | Private Reply | To 174 | View Replies]

To: boris
This had the effect of killing 98% of the remaining cold-adapted species. So we are the descendants of 2% of 2% of the "ancient" life on Earth. In effect, a brand-new genesis occurred as the result of Snowball Earth, and everything you see around you has had "only" 600 million years to evolve.

Does not follow. Regardless of how often or how severely the tree of life has been "pruned" throughout its history, it has still accumulated 3+ billion years of evolutionary advances on any branch still living.

In your example, life accumulated 3.2 billion years of evolutionary advancement before the die-off due to the freezing Earth, and regardless of how few species survived that crisis, the ones that squeaked through *still* had 3.2 billion years of change in their genes. Fast-forward to the "hothouse" time, and they'd have accumulated another X million years of change, which the survivors of *that* crisis would not have lost either. Their distant descendants (us and all other modern life) kept piling more evolutionary change on top of the "foundation" the "hothouse" survivors had built, and thus we still have 3.8 billion years of evolutionary accumulation in our genes, despite the fact that many other *side* lineages had gone extinct during various mass extinctions.

179 posted on 12/30/2003 12:02:49 PM PST by Ichneumon
[ Post Reply | Private Reply | To 177 | View Replies]

To: tortoise
Psst... Confucius say, "Uphill battle to argue randomness with someone named 'Doctor Stochastic'". ;-)

sto·chas·tic (stō-kăs′tĭk)
  1. Of, relating to, or characterized by conjecture; conjectural.
  2. Statistics.
    1. Involving or containing a random variable or variables: stochastic calculus.
    2. Involving chance or probability: a stochastic stimulation.

180 posted on 12/30/2003 12:10:10 PM PST by Ichneumon
[ Post Reply | Private Reply | To 171 | View Replies]


Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794 is powered by software copyright 2000-2008 John Robinson