Posted on 02/19/2002 2:59:38 PM PST by Cameron
The God Hypothesis:
Discovering Design in our "Just Right" Goldilocks Universe
by Michael A. Corey
(Rowman & Littlefield, 256 pp., $27)
GOD'S EXISTENCE is not required by the premises of quantum mechanics or general relativity, the great theories of twentieth-century physics--but then again, it is not contravened by their conclusions either. What else can we do but watch and wait?
The agnostic straddle. It is hardly a posture calculated to set the blood racing. In the early 1970s Jacques Monod and Steven Weinberg thus declared themselves in favor of atheism, each man eager to communicate his discovery that the universe is without plan or purpose. Any number of philosophers have embraced their platform, often clambering onto it by brute force. Were God to exist, Thomas Nagel remarked, he would not only be surprised, but disappointed.
A great many ordinary men and women have found both atheism and agnosticism dispiriting--evidence, perhaps, of their remarkable capacity for intellectual ingratitude. The fact remains that the intellectual's pendulum has swung along rather a tight little arc for much of the twentieth century: atheism, the agnostic straddle, atheism, the agnostic straddle.
The revival of natural theology in the past twenty-five years has enabled that pendulum to achieve an unexpected amplitude, its tip moving beyond atheism and the agnostic straddle to something like religious awe, if not religious faith.
It has been largely the consolidation of theoretical cosmology that has powered the upward swing. Edwin Hubble's discovery that the universe seemed to be expanding in every direction electrified the community of cosmologists in the late 1920s, and cosmologists were again electrified when it became clear that these facts followed from Einstein's general theory of relativity. Thereafter, their excitement diminished, if only because the idea that the universe was expanding suggested inexorably that it was expanding from an origin of some sort, a "big bang," as the astronomer Fred Hoyle sniffed contemptuously.
In 1964 Arno Penzias and Robert Wilson inadvertently noticed the background microwave radiation predicted by Big Bang cosmology; when Robert Dicke confirmed the significance of their observation, competing steady-state theories of creation descended at once into desuetude. And thereafter a speculative story became a credible secular myth.
But if credible, the myth was also incomplete. The universe, cosmologists affirmed, erupted into existence fifteen billion years ago. Details were available, some going back to the first three minutes of creation. Well and good. But the metaphoric assimilation of the Big Bang to the general run of eruptions conveyed an entirely misleading sense of similarity. The eruption of Mount Vesuvius took place in space and time; the Big Bang marks the spot at which time and space taper to a singularity and then vanish altogether.
It follows that the universe came into existence from nothing whatsoever, and for no good reason that anyone could discern, least of all cosmologists. Even the most ardent village atheist became uneasily aware that Big Bang cosmology and the opening chapters of the Book of Genesis shared a family resemblance too obvious profitably to be denied.
Thereafter, natural theology, long thought dead of inanition, began appearing at any number of colloquia in mathematical physics, often welcomed by the same physicists who had recently been heard reading its funeral obsequies aloud. In "The God Hypothesis: Discovering Design in our 'Just Right' Goldilocks Universe," Michael A. Corey is concerned to convey their news without worrying overmuch about the details. His message is simple. There is a God, a figure at once omnipotent, omniscient, eternal, and necessary. Science has established his existence.
How very embarrassing that this should have been overlooked.
AT THE very heart of revived natural theology are what the physicist Brandon Carter called "anthropic coincidences." Certain structural features of the universe, Carter argued, seemed finely tuned to permit the emergence of life. This is a declaration, to be sure, that suggests far more than it asserts. Structural features? Finely tuned? Permit? When the metaphors are squeezed dry, what more is at issue beyond the observation that life is a contingent affair? This is not a thesis in dispute.
Still, it often happens that commonplace observations, when sharpened, prompt questions that they had long concealed. The laws of physics draw a connection between the nature of certain material objects and their behavior. Falling from a great height, an astrophysicist no less than an airplane accelerates toward the center of the earth. Newton's law of gravitational attraction provides an account of this tendency in terms of mass and distance (or heft and separation). In order to gain traction on the real world, the law requires a fixed constant, a number that remains unchanged as mass and distance vary. Such is Newton's universal gravitational constant.
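In symbols, with G the fixed constant in question:

$$ F = G\,\frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}. $$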
There are many comparable constants throughout mathematical physics, and they appear to have no very obvious mathematical properties. They are what they are. But if arbitrary, they are also crucial. Were they to vary from the values that they have, this happy universe--such is the claim--would be too small or too large or too gaseous or otherwise too flaccid to sustain life. And these are circumstances that, if true, plainly require an explanation.
Carter was a capable physicist; instead of being chuckled over and dismissed by a handful of specialists, the paper that he wrote in 1974 was widely read, Fred Hoyle, Freeman Dyson, Martin Rees, Stephen Hawking, Paul Davies, Steven Weinberg, Robert Jastrow, and John Gribbin all contributing to the general chatter. Very few physicists took the inferential trail to its conclusion in faith; what is notable is that any of them took the trail at all.
THE ASTRONOMER Fred Hoyle is a case in point, his atheism in the end corrected by his pleased astonishment at his own existence. Living systems are based on carbon, he observed, and carbon is formed within stars by a process of nucleosynthesis. (The theory of nucleosynthesis is, indeed, partly his creation.) Two helium atoms fuse to form a beryllium intermediate, which then fuses again with another helium atom to form carbon. The process is unstable because beryllium intermediates are short-lived.
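Schematically, this triple-alpha process runs:

$$ {}^{4}\mathrm{He} + {}^{4}\mathrm{He} \;\rightarrow\; {}^{8}\mathrm{Be}, \qquad {}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \;\rightarrow\; {}^{12}\mathrm{C}^{*} \;\rightarrow\; {}^{12}\mathrm{C} + \gamma, $$

the beryllium-8 intermediate surviving only on the order of 10^-16 seconds.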
In 1953 Edwin Salpeter discovered that the resonance between helium and intermediate beryllium atoms, like the relation between an opera singer and the glass she shatters, is precisely tuned to facilitate beryllium production. Hoyle then discovered a second nuclear resonance, this one acting between beryllium and helium, and finely tuned as well.
Without carbon, no life. And without specific nuclear resonance levels, no carbon. And yet there he was, Hoyle affirmed, carbon-based to the core. Nature, he said in a remark widely quoted, seems to be "a put-up job."
INFERENCES now have a tendency to go off like a string of firecrackers, some of them wet. Hoyle had himself discovered the scenario that made carbon synthesis possible. He thus assigned to what he called a "Supercalculating Intellect" powers that resembled his own. Mindful, perhaps, of the ancient wisdom that God alone knows who God is, he did not go further. Corey is, on the other hand, quite certain that Hoyle's Supercalculating Intellect is, in fact, a transcendental deity--the Deity, to afford Him a promotion in punctuation.
And Corey is certain, moreover, that he quite knows His motives. The Deity, in setting nuclear resonance levels, undertook his affairs "in order to create carbon based life forms."
Did He indeed? It is by no means obvious. For all we know, the Deity's concern may have lain with the pleasurable intricacies of nucleosynthesis, the emergence of life proving, like so many other things, an inadvertent consequence of his tinkering. For that matter, what sense does it make to invoke the Deity's long-term goals, when it is His existence that is at issue? If nothing else, natural theology would seem to be a trickier business than physicists may have imagined.
AS IT HAPPENS, the gravamen of Corey's argument lies less with what the Deity may have had in mind and more with the obstacles He presumably needed to overcome. "The cumulative effect of this fine tuning," Corey argues, "is that, against all the odds, carbon was able to be manufactured in sufficient quantities inside stellar interiors to make our lives possible." That is the heart of the matter: against all the odds. And the obvious question that follows: Just how do we know this?
Corey does not address the question specifically, but he offers an answer nonetheless. It is, in fact, the answer Hoyle provides as well. They both suppose that something like an imaginary lottery (or roulette wheel) governs the distribution of values to the nuclear resonance levels of beryllium or helium. The wheel is spun. And thereafter the right resonance levels appear. The odds now reflect the pattern familiar in any probabilistic process--one specified outcome weighed against all the rest. If nuclear resonance levels are, in fact, unique, their emergence on the scene would have the satisfying aspect of a miracle.
It is a miracle, of course, whose luster is apt to dim considerably if other nuclear resonance levels might have done the job and thus won the lottery. And this is precisely what we do not know. The nuclear resonance levels specified by Hoyle are sufficient for the production of carbon. The evidence is all around us. It is entirely less clear that they are necessary as well. Corey and Hoyle make the argument that they are necessary because, if changed slightly, nucleosynthesis would stop. "Overall, it is safe to say"--Corey is speaking, Hoyle nodding--"that given the utter precision displayed by these nuclear resonances with respect to the synthesis of carbon, not even one of them could have been slightly different without destroying their precious carbon yield." This is true, but inconclusive. Mountain peaks are isolated but not unique. Corey and Hoyle may well be right in their conclusions. It is their argument that does not inspire confidence.
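To see what kind of calculation the lottery metaphor gestures at, here is a minimal sketch in Python. The assumed range of possible resonance energies and the width of the life-permitting window are pure stipulations, invented for illustration -- which is precisely the difficulty:

```python
import random

# Toy version of the imaginary lottery: draw a hypothetical resonance
# energy uniformly from an assumed range and count how often it lands
# in a narrow "life-permitting" window. Both the range and the window
# are stipulations; no physical distribution of possible values is
# known, which is the point at issue.
TRIALS = 1_000_000
WINDOW = (7.64, 7.67)  # hypothetical window near the ~7.65 MeV carbon resonance

hits = 0
for _ in range(TRIALS):
    energy = random.uniform(0.0, 10.0)  # assumed range of "possible" values
    if WINDOW[0] <= energy <= WINDOW[1]:
        hits += 1

print(f"fraction landing in the window: {hits / TRIALS:.6f}")
```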
THE TROUBLE is not merely a matter of the logical niceties. Revived natural theology has staked its claims on probability. There is nothing amiss in this. Like the rest of us, physicists calculate the odds when they cannot calculate anything better. The model to which they appeal may be an imaginary lottery, roulette wheel, or even a flipped coin, but imaginary is the governing word. Whatever the model, it corresponds to no plausible physical mechanism. The situation is very different in molecular biology, which is one reason criticism of neo-Darwinism very often has biting power. When biologists speculate on the origins of life, they have in mind a scenario in which various chemicals slosh around randomly in some clearly defined physical medium. What plays the role of the sloshing with respect to nuclear resonance?
Or with respect to anything else? Current dogma suggests that many of the constants of mathematical physics were fixed from the first, and so constitute a part of the initial conditions of the Big Bang. Corey does not demur; it is a conclusion that he endorses. What then is left of the anthropic claim that the fundamental constants have the values that they do against "all the odds"? In the beginning there was no time, no place, no lottery at all.
MATHEMATICAL physics currently trades in four fundamental forces: gravity, electromagnetism, and the strong and weak forces governing the nucleus and radioactive decay. In general relativity and quantum mechanics, it contains two great but incompatible theories. This is clearly an embarrassment of riches. If possible, unification of these forces and theories is desirable. And not only unification, but unification in the form of a complete and consistent theoretical structure.
Such a theory, thoughtful physicists imagine, might serve to show that the anthropic coincidences are an illusion in that they are not coincidences at all. The point is familiar. Egyptian engineers working under the pharaohs knew that the angles of a triangle sum to more or less one hundred and eighty degrees. The number appears as a free parameter in their theories, something given by experience and experiment. The Greeks, on the other hand, could prove what the Egyptians could only calculate. No one would today think to ask why the interior angles of a Euclidean triangle sum to precisely one hundred and eighty degrees. The question is closed because the answer is necessary.
THE GRAND HOPE of modern mathematical physicists is that something similar will happen in modern mathematical physics. The Standard Model of particle physics contains a great many numerical slots that must be filled in by hand. This is never counted as a satisfaction, but a more powerful physical theory might show how those numerical slots are naturally filled, their particular values determined ultimately by the theory's fundamental principles. If this proves so, the anthropic coincidences will lose their power to vex and confound.
Nonetheless, the creation of a complete and consistent physical theory will not put an end to revived natural theology. Questions once asked about the fundamental constants of mathematical physics are bound to reappear as questions about the nature of its laws. The constants of mathematical physics may make possible the existence of life, but the laws of mathematical physics make possible the existence of matter. They have, those laws, an overwhelmingly specific character. Other laws, under which not much exists, are at least imaginable. What explanation can mathematical physics itself provide for the fact that the laws of nature are arranged as they are and that they have the form that they do? It is hardly an unreasonable question.
Steven Weinberg has suggested that a final theory must be logically isolated in the sense that any perturbation of its essential features would destroy the theory's coherence. Logical isolation is by no means a clear concept, and it is one of the ironies of modern mathematical physics that the logical properties of the great physical theories are no less mysterious than the physical properties of the universe they are meant to explain. Let us leave the details to those who cherish them.
The tactic is clear enough. The laws of a final theory determine its parameters; its logical structure determines its laws. No further transcendental inference is required, if only because that final theory explains itself.
This is very elegant. It is also entirely unpersuasive. A theory that is logically isolated is not necessarily a theory that is logically unique. Other theories may be possible, some governing imaginary worlds in which light alone exists, others worlds in which there is nothing whatsoever. The world in which we find ourselves is one in which galaxies wink and matter fills the cup of creation. What brings about the happy circumstance that the laws making this possible are precisely the laws making it real? The old familiar circle.
ALL THIS leaves us where we so often find ourselves. We are confronted with certain open questions. We do not know the answers, but what is worse, we have no clear idea--no idea whatsoever--of how they might be answered. But perhaps that is where we should be left: in the dark, tortured by confusing hints, intimations of immortality, and a sense that, dear God, we really do not yet understand.
----------------------------
David Berlinski is a senior fellow of Discovery Institute and the author of "A Tour of the Calculus" and "The Advent of the Algorithm." His most recent book is "Newton's Gift" (Free Press).
Maybe, but they can revert to pure blue at any moment.
And this proves that if things were different then they wouldn't be the same. For example, if earth hadn't been shattered by asteroids at just the right intervals, wiping out just the right species and allowing just the right successors to flourish -- well then we might have two heads and brains the size of peas, or something.
All this confirms the inerrancy of the King James translation, or something.
All flames and kidding aside, the absence of a complete description of reality by science neither promotes nor detracts from belief. But the argument from fine tuning says nothing. It merely affirms that one's legs are long enough to reach the ground.
Fortunately, we are not so limited unless we limit ourselves.
Shalom.
BTW, there are "genetic" computer programs whose algorithmic construction is determined by a set of "genes" that are forced to evolve through random mutation and selection. There are some problems that can't reasonably be solved any other way. The principle is real.
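A minimal sketch of the mutate-and-select loop, with the fitness target, alphabet, and mutation rate all invented for illustration (real genetic programs evolve program structure rather than a fixed string, but the principle is the same):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"  # toy goal, for illustration only
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE, MUTATION_RATE = 200, 0.02

def fitness(gene):
    # Score a candidate by how many characters match the target.
    return sum(a == b for a, b in zip(gene, TARGET))

def mutate(gene):
    # Each character has a small chance of flipping to a random letter.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in gene)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]
generation = 0
while TARGET not in population:
    # Selection: keep the fitter half, refill with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    generation += 1

print(f"target reached after {generation} generations")
```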
Isn't it true that the act of theorizing is the act of predicting? And that what we know as the most personal component of our personalities is that which we perceive as undetermined, or un-caused? So that a theory of nature, as a description of a matrix of causes and effects, cannot by definition perceive personality?
The personal slips out of the theory, epistemologically, like water in the fist.
But, then, the hand is, afterwards, wet.
You are claiming that DNA does not have linear coding instructions (i.e., one after another along a path)?!
Of course. Human programs are precisely analogous to DNA. That was my initial point.
Ask yourself why the programmer ever does anything to his program. He is responding to the suggestions of the users, his employer, benchmarking of competitors' programs, market studies, computer science journals, etc., etc. These are the selection pressures. He is just the guy who does for the computer code what it cannot yet do for itself (i.e., produce descendants with modification). Those codes that work and are profitable are copied and used until they are replaced by better ones. This is an evolutionary process that depends on lots of shared information from lots of sources. There is no one all-knowing programmer (Bill Gates notwithstanding) directing the development of programs. Most are copycats working with simple algorithms and effecting rather unimportant changes (witness the proliferation of games).
In fact, every programmer's own brain is a kind of a computer with a program of its own. About two million years ago in human evolution the brain began to be capable of storing more information than the genome. From that point cultural evolution began its ascent leading eventually to this moment where our two brains are clearly possessed of widely divergent sets of program instructions. Both are functional but one clearly has mistakes which need correcting. Fortunately there are vast libraries of scientific literature available to effect a significant programming improvement should you be so interested.
You are confusing the storage format and the execution format. Apples and oranges.
You are confusing the definition of "random" here. Random in this case should mean "arbitrary", but you are using it as though it means "non-deterministic" (a correct definition, but not the correct one here). The programmer IS an arbitrary selector (there is an infinite number of ways to write any piece of code). A programmer doesn't add any value to the process if the selection process isn't deterministic.
This is true, but the comparison isn't relevant. We find it cheaper to manufacture systems that aren't fault tolerant in most cases, though we do know how. It is an economic decision. Therefore, in practice computer code is FAR more fragile than DNA, which has numerous redundancies and fault tolerance mechanisms built in (once you boot up an organism, you can't shut it down or -HUP the process). The bottom line is that it serves living organisms well to have a code that functions well when subject to a high error rate. Life does not need deterministic results to function. If something non-deterministic happens in a computer system or an accidental code mutation occurs, we WANT the system to crash. The onus is on computers to give the right answer every time or they are useless for our purposes.
No, that is false with respect to the purpose of the instruction set. Clearly if you give me a sequence of three nucleotide bases I can tell you exactly which amino acid they code for. I can also identify initiator sequences and stop sequences. On the other hand, if I give any programmer an isolated sequence of 1's and 0's it would mean nothing to him. Every change to the computer code at the binary level will have consequences if it is a part of the instruction set (not the data set which may possibly be changed at will without consequence to the continued operation of the program). Generally such instruction set changes are fatal in that the intended purpose ceases to be accomplished. Whether or not the program continues to operate depends on the effectiveness of the error trapping routines. These facts are not germane to the argument.
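A sketch of that unambiguous mapping, using a small slice of the standard codon table (DNA sense-strand convention; only a handful of the 64 codons are shown):

```python
# Any three-base codon maps unambiguously to an amino acid or a signal.
CODON_TABLE = {
    "ATG": "Met (start)",  # initiator sequence
    "TTT": "Phe", "TTC": "Phe",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",  # stop sequences
}

def translate(dna):
    """Read a DNA sequence three bases at a time and name each codon."""
    return [CODON_TABLE.get(dna[i:i + 3], "?") for i in range(0, len(dna) - 2, 3)]

print(translate("ATGTTTGGCTAA"))  # ['Met (start)', 'Phe', 'Gly', 'STOP']
```

An isolated run of 1's and 0's, by contrast, can be an opcode, an address, or data, depending entirely on context.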
The programmer works in an abstract coding language which is translated into binary code by a compiler which is itself a program. Having run several hundred batch jobs in Fortran with thousand-card decks, I know all too well what a single typing mistake can do to stop compilation or abort the run. This detail is also irrelevant to the argument that DNA codes are facile, redundant, and robust while computer codes in general are not.
The code is linear but the transfer functions are not. DNA codes for proteins. Proteins have primary through quaternary structure. Multiple combinations of multiple variants of tertiary structure peptides with enzymatic properties and extraordinary numbers of cellular, tissue, and whole organism feedback controls means that the instruction set will be expressed differently from place to place and over time.
A single gene may code for parts of a dozen or more different functional proteins. Which proteins occur in which cells is a function of the history of the cell line and environmental influences. In other words, the code produces a facultative response, tolerates a broad range of conditions and insults, and does so with very high tolerance of variation in the code itself.
No, I'm consistently referring to timing sequence in both DNA and human programs, not storage geometry.
No, there was a program before there was ever a first user. Clearly programmers can design something without an external selection process, but the same can't be said of evolutionary theory.
" ...This is an evolutionary process that depends on lots of shared information from lots of sources."
No, it is an intelligently controlled process that functions the way evolution would work if evolutionary theory could be applied to either human programming or DNA coding.
That's a total non sequitur. It doesn't even matter whether or not DNA code or human programming is more fragile, as neither answer is germane to this discussion. What matters is whether useful programming can happen without any form of Intelligent Intervention (i.e., a simulated lifeless, primal, unintelligent environment).
For human programming, we know for a fact that useful programs will never form on their own in a computer, no matter how much static or noise is in the computer environment. For DNA coding, we know for a fact that we've failed to get useful DNA mutations in natural, unaided lab environments (or in the wild, for that matter).
Thus, the only tangible evidence which we have in hand is human programs formed via intelligent intervention rather than through natural, unaided events. One can speak of "possibilities" all day long, but science requires both evidence as well as repeatability. Clearly we can repeat ad nauseam the intelligent creation of useful human programs, but the scientific jury is still "out" on whether useful programs can self-form without intelligent intervention.
It's incorrect to apply that logic here. It doesn't matter that you can give a name to any sequence of codons but cannot do the same for any sequence of binary data/code. Just because you know the amino acid coded by a three-base codon doesn't mean that you can know, from a random string of bases, whether three in that series serve to code part of a gene for a finger or a toe (but hey, you can give 'em a name - chuckle).
What does matter is that DNA codons and binary code are both used to create larger subroutines (e.g., genes, APIs). It also matters that rearranging codons (i.e., gene-splicing) and rearranging binary code will create new end products. Clearly both human programs as well as human life can be usefully modified by an intelligent intervention.
Yet no one can cite a single unaided, non-intelligent example of either...
That's my point entirely, thanks. DNA programming is superior to Man's current level of computer programming.