Posted on 11/30/2004 6:21:11 PM PST by betty boop
On Plato, the Early Church, and Modern Science: An Eclectic Meditation
By Jean F. Drew
God, purposing to make the universe most nearly like the every way perfect and fairest of intelligible beings, created one visible living being, containing within itself all living beings of the same natural order.
Thus does Plato (d. 347 B.C.) succinctly describe how all that exists is ultimately a single, living organism. At Timaeus 20, he goes on to say:
There exists: first, the unchanging form, uncreated and indestructible, admitting no modification and entering into no combination; second, that which bears the same name as the form and resembles it; and third, space, which is eternal and indestructible, which provides a position for everything that comes to be.
And thus we find a description of the universe in which Being and Existence (Becoming), the one God and the multiplicity of things, are bound together as a single living reality whose extension is mediated by Space (which for us moderns implies Time).
Our aim in this essay is to define these ideas and their relationships, and trace their historical development from the ancient world to the present. Taking a page from the late Eric Voegelin (1901-1985, philosopher of history specializing in the evolution of symbolization), we will follow a history-of-ideas approach to these issues. Along the way we will find that not only philosophy and cosmology, but also theology and even modern science can illuminate these seminal conceptions of Platonic thought. We must begin at the beginning, that is, with God, who is absolute Being in Plato's speculation, of whom the cosmos itself is but the image (eikon) or reflection.
When Plato speaks of God (or when Aristotle does for that matter, as in e.g., Nicomachean Ethics), he is not referring to the Olympian gods, to Zeus, Hera, Athena, Poseidon, and the rest of the gang of immortals. For the Olympians are like man in that they are creatures of a creating God. Not only that, but they are a second generation of gods, the first having reigned in the antediluvian Age of Chronos; which is to say that the Olympians' rule or law is not everlasting, but contingent. Thus they are not self-subsistent, but dependent (contingent) on a principle outside of themselves. We might say that the central difference between Plato's God and the Olympians consists in the fact that the latter are intracosmic gods, and the former is extracosmic, that is, transcending all categories and conditions of space-time reality. In contrast, the intracosmic gods are subject to change, to contingency; and so, though they may truly be said to exist in some fashion, cannot be said to possess true Being. (More on these distinctions in a minute.)
It is clear that for Plato, God is the Beyond of the universe, or in other words, utterly transcendent, perfectly self-subsistent Being, the uncaused cause of all the multiplicity of existents in the universe. In yet other words we can say that, for Plato, the cosmos is a theophany, a manifestation or presence of the divine Idea (in Christian parlance, the Logos, if I might draw that association) in the natural world.
As Wolfgang Smith notes, "Christian teaching is based upon the doctrine of the Logos, the Word of God, a term which in itself clearly suggests the idea of theophany. Moreover, what is implicit in the famous Prologue of St. John ['In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by him; and without him was not any thing made that was made. In him was life; and the life was the light of men. And the light shineth in darkness; and the darkness comprehended it not.' (John 1:1-5)] is openly affirmed by St. Paul when he declares that 'the invisible things of Him from the creation of the world have been clearly seen, being understood by the things that are made, even His power and Godhead' (Rom. 1:20). The indisputable fact is that at its deepest level Christianity perceives the cosmos as a self-revelation of God." [Wolfgang Smith, Cosmos and Transcendence, 1984]
Being and Existence (Becoming)
Being is a concept so difficult that it comes close to eluding our grasp altogether. It is utterly beyond space and time; imperishable; entirely self-subsistent, needing nothing from outside itself in order to be complete; essential; immutable; and eternally perduring. Contrast this with the concept of existence, regarding which Plato asks: "how can that which is never in the same state be anything?" And this is the clue to the profound difference between being and existence: the existing things of this world are mutable and transient.
We must in my opinion begin by distinguishing between that which always is and never becomes and that which is always becoming but never is. The one is apprehensible by intelligence with the aid of reasoning, being eternally the same; the other is the object of opinion and irrational sensation, coming to be and ceasing to be, but never fully real. In addition, everything that becomes or changes must do so owing to some cause; for nothing can come to be without a cause. [Timaeus, 3:28]
Smith writes of the existing or becoming things that
they come upon the scene, we know not from whence; they grow, change, and decay; and at last they disappear, to be seen no more. The physical cosmos itself, we are told, is a case in point: it, too, has made its appearance, perhaps some twenty billion years ago, and will eventually cease to exist [i.e., finally succumbing, we are told, to thermodynamic entropy or heat death]. What is more, even now, at this very moment, all things are passing away. "Dead is the man of yesterday," wrote Plutarch, "for he dies into the man of today: and the man of today is dying into the man of tomorrow." Indeed, to be in time is a sure symptom of mortality. It is indicative, not of being, but of becoming, of ceaseless flux.
All the multiplicity of existents in the universe are in a state of becoming and passing away. But Plato's great insight is that all things in the state of becoming, that is, all existing things, are whatever they are because they are participations in Being. That is to say, "we perceive the trace of being in all that exists," writes Smith, "and that is why we say, with reference to any particular thing, that it is." Existence, in other words, is contingent on Being.
But we wonder: In what way is this possible? And if existents participate in being, what is that Being in which they participate?
In Exodus 3:14 Moses experiences a theophany: while tending his flock on Mount Horeb, suddenly he hears the voice of God issuing from a burning bush: God is speaking to him! Reverentially, Moses inquires of God what His name is (meaning: what is His nature or character).
And God said unto Moses, I AM WHO AM: and He said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you.
God has told Moses that He is Being (I AM). And the strong implication is that there is no other being: I alone AM. For "I" is plainly singular in form.
Smith draws the crucial point: God alone IS. But how are we to understand this? "It seems to me," writes St. Gregory of Nyssa, "that at the time the great Moses was instructed in the theophany he came to know that none of those things which are apprehended by sense perception and contemplated by the understanding really subsist, but that the transcendent essence and cause of the universe, on which everything depends, alone subsists." But why? Does not the world exist? Are there not myriads of stars and galaxies and particles of dust, each existing in its own right? And yet we are told that the transcendent essence alone subsists. "For even if the understanding looks upon any other existing things," the great theologian goes on to say, "reason observes in absolutely none of them the self-sufficiency by which they could exist without participating in true Being. On the other hand, that which is always the same, neither increasing nor diminishing, immutable to all change whether to better or to worse (for it is far removed from the inferior and has no superior), standing in need of nothing else, alone desirable, participated in by all but not lessened by their participation: this is truly real Being."
Smith continues: "In the words of St. Gregory, that which is 'always the same, neither increasing nor diminishing, immutable to all change' is truly real being. As concerns existing things, on the other hand, the teaching implies that these entities are always changing, always in a state of flux, so that their very existence is in a way a process of becoming, in which however nothing is actually produced. This has been said time and again, beginning with Heraclitus and the Buddhist philosophers. And there can be little doubt that it is true: even modern physics, as we can see, points to the same conclusion. Only there is another side to the coin which is not always recognized. Existent things, the very flux itself, presuppose what Gregory and the Platonists have termed a participation in Being. The point is that relative or contingent existences cannot stand alone. They have not an independent existence, a being of their own." "In Him we live, and move, and have our being," says St. Paul.
St. Augustine confirms the Platonic insight this way:
I beheld these others beneath Thee, and saw that they neither altogether are, nor altogether are not. An existence they have, because they are from Thee; and yet no existence, because they are not what Thou art. For only that really is, that remains unchangeably.
Space
Space is the third essential term of the Platonic cosmology: it is the matrix in which living things and all other existents participate in Being. Plato's creation myth, the Myth of the Demiurge in Timaeus, elucidates the Platonic conception of Space.
For Plato, the God of the Beyond is so beyond that, when it came time for creating the Cosmos, he didn't even do it himself. He sent an agent: the Demiurge, a mythical being endued by God with the divine likeness of God's own perfect love, truth, beauty, justice, and goodness. The embodiment of divine perfections, the Demiurge wishes to create creatures just as good and beautiful as himself, according to the standard of the divine Idea, a direct analog, it seems to me, of the Logos theory of the ancient Church. Indeed, Eric Voegelin sees in the Demiurge the symbol of Incarnation [Order and History Vol. 3: Plato and Aristotle, 1957]:
The Demiurge is the symbol of Incarnation, understood not as the result of the process but as the process itself, as the permanent tension in reality between the taxis of form or idea and the ataxia of formlessness.
Similarly to the Christian account, the Demiurge in a certain way creates ex nihilo, that is, out of Nothing. At first glance, Plato is seen specifying not a pre-existing material but a universal field of pure possibility called Chora, Space. Perhaps we may find in this concept a strong analogy to Isaac Newton's concept of Absolute Space (see below).
Chora seems to indicate the idea of an eternal, universal field of pure stochastic potentiality that needs to be activated in order to bring actual beings into existence. In itself, it is No-thing, i.e., nothing. This activation the Demiurge may not effect by fiat: he does not, for instance, command "Let there be Light!" The main tool at his disposal is Peitho, persuasion.
And if Chora is not so persuaded, it will remain in a state of nothingness. It will remain unformed, in the condition of ataxia. Of itself it is Nothing; by itself, it can do nothing. It cannot generate anything out of itself, not even matter in primaeval form.
And thus Plato introduces the figure of the Demiurge into his creation myth, symbolizing form or idea, the principle of (formative) taxis that draws (formless) ataxia into existence. We moderns might be tempted to describe the Demiurge as constituting an information set together with an energy source, who persuades the pure stochastic potentiality of formless, absolute, empty space into actualized form, and thus existence. From the cosmic standpoint, he makes unity out of multiplicity, in harmony and geometrical proportion:
The best bond is the one that effects the closest unity between itself and the terms it is combining; and this is best done by a continued geometrical proportion. [Timaeus, 4]
Thus the Demiurge is a kind of divine geometer, producing the forms (or mathematical ideas) that Chora can be persuaded to conform to, and thus come into existence.
But the Demiurge does more than just get things started: as bearer of the divine Idea, as pure love and beauty and goodness and truth, he continues always persuading Chora to generate creatures as like himself as possible (i.e., reflecting his own divine qualities at whatever generic stage), throughout all eternity. Thus creation is a continuous process in space-time. Moreover, it is the source and driver of evolution as a universal natural process.
Through the ongoing activity of the Demiurge, men and the world are constantly being informed and renewed by the divine Idea; and thus a unified cosmic whole, a One Cosmos, a universal order comes into being at the intersection of time and timelessness, of immanent and transcendent reality, in the medium of Space (and Time).
Compare the Platonic creation myth with the philosophy of Dionysius the [Pseudo-]Areopagite, said to be the Greek converted by St. Paul in Acts 17:34. For Dionysius, the names of God, the divine qualities, are goodness, being, life, wisdom, power, and justice. Joseph Stiglmayr writes [Cath. Encycl. at the entry for Dionysius the Pseudo-Areopagite] that for Dionysius, God is
the One Being (to hen), transcending all quality and predication, all affirmation and negation, and all intellectual conception, [Who] by the very force of His love and goodness gives to beings outside Himself their countless gradations, unites them in the closest bonds (proodos), keeps each by His care and direction in its appointed sphere, and draws them again in an ascending order to Himself (epistrophe) ... all created things [proceed] from God by the exuberance of being in the Godhead (to hyperpleres), its outpouring and overflowing ... and as a flashing forth from the sun of the Deity. Exactly according to their physical nature created things absorb more or less the radiated light, which, however, grows weaker the farther it descends. As the mighty root sends forth a multitude of plants which it sustains and controls, so created things owe their origin and conservation to the All-Ruling Deity. ... Patterned upon the original of Divine love, righteousness, and peace, is the harmony that pervades the universe. ... All things tend to God, and in Him are merged and completed, just as the circle returns into itself, as the radii are joined at the centre, or as the numbers are contained in unity.
The Platonic resonances seem unmistakable in these lines. It appears that both Platonic speculation and the Logos doctrine of the ancient Church as articulated by Dionysius agree that the Creator must be beyond Creation in order to resonate with it; and this resonance is what makes the universe alive, i.e., a living universe.
C. A. Dubray points out [Cath. Encycl. at the entry Teleology] that the theology of St. Thomas Aquinas makes it clear that "Intrinsic finality [we are to think of this as a blend or merger of efficient and final causes in the Aristotelian sense] consists in the fact that every being has within itself a natural tendency whereby its activity is directed towards the perfection of its own nature. ... St. Thomas does not hesitate to speak of 'natural appetite,' 'natural inclination,' and even 'intention of nature'" [we moderns might be tempted to add "instinct" to this list] "to mean that every being has within itself a directive principle of activity. ... Accordingly, God does not direct creatures to their ends from outside, but through their own nature. ... The Divine plan of creation is carried out by the various beings themselves acting in conformity with their nature."
When, however, this finality is called immanent, this expression must not be understood in a pantheistic sense, as if the intelligence which the world manifests were to be identified with the world itself, but in the sense that the immediate principle of finality is immanent in every being. ... Thus the unconscious finality in the world leads to the conclusion that there must be an intelligent cause of the world. [Emphasis added.]
Aquinas' insight, and also Plato's, evokes a reconsideration of Isaac Newton's concept of Absolute Space. Possibly this may be understood in the following terms. First, Absolute Space is empty space. Second, it is not a property of God, but an effect of His Presence; i.e., we advert to theophany again. The question then arises: in what where or when does this theophany take place? Perhaps Newton's answer would be: in the beginning, and continuously thereafter. Third, it has been suggested that Newton intends us to understand Absolute Space as the sensorium Dei: God constitutes space and time through his eternity and omnipresence ["existendo semper et ubique, durationem et spatium constituit": Philosophiae Naturalis Principia Mathematica, 3d ed., 1726]. Wolfhart Pannenberg writes,
Now there are a number of good reasons suggested by both philosophical and scientific thought to consider time and space as inseparable. Einstein's field concept comprises space, time, and energy. It takes the form of a geometrical description, and this seems to amount to a spatialization of time. The totality of space, time, and energy or force are all properties of a cosmic field.
Long before our own age a theological interpretation of this subject matter had been proposed, and it was Isaac Newton who offered this proposal. It too referred everything to space or, more precisely, to the correlation of force, as in the case of a force like gravitation acting at a distance. Newton's well-known conception of space as the sensorium of God (sensorium Dei) did not intend to ascribe to God an organ of sense perception, the like of which God does not need, according to Newton, because of divine omnipresence. Rather, Newton took space as the medium of God's creative presence at the finite place of his creatures in creating them. [Wolfhart Pannenberg, Toward a Theology of Nature, 1993]
Thus the infinite takes priority over every finite experience, including intellectual experience, a position decisively argued by Descartes, as Pannenberg avers, in his thesis that the idea of God is a prior condition in the human mind for the possibility of any other idea, even that of the ego itself.
* * * * * *
The Influence of Platonic Speculation on the Early History of the Church
D. Edmund Joaquin, an insightful and gracious Christian friend, writes, "We understand that the universe is created and sustained by the Word [the Logos], and not only that, but by the Word sounding. God sustains the universe consciously and actively. He has not gone away and left us. In fact, He reveals Himself to us, and His final revelation is in the person of Christ [the Logos]. Christ is not an abstract aspect of God, like wisdom. He is God. He is God incarnating in the world that He himself has made."
Joaquin further observes that "[the Gospel of] John is written to the Greeks and put into words that they could understand." It seems there's a mystery buried in here somewhere. Consider: Socrates was the teacher of Plato, who was the teacher of Aristotle, who was the teacher of Alexander; and Alexander spread Greek culture throughout Eurasia, the Middle East, and the Indian subcontinent. Add to this the fact that the great evangelist, St. Paul, had some difficulty converting the Jews to the Christian faith; but he converted the Greeks in droves. Not only St. John but also St. Paul speaks in terms the Greek mind could readily grasp, as when he says God is He "in Whom we live and move and have our being." These historical connections do not appear to be accidental, coincidental, or incidental to the spread of the early Christian Church.
According to The Catholic Encyclopedia, the Greeks strongly responded to Christianity for its moral beauty as well as its truth. A case in point is St. Justin Martyr. He was a man of Greek culture, born in Palestinian Syria about the year 100 A.D., who converted to the faith around 130 A.D. Justin became one of Christianity's earliest and most powerful apologists, and ended up condemned by the Roman authority for refusing to sacrifice to the pagan gods, for which offense he was summarily executed by the Imperium, along with several other of his refusenik co-religionists. The official record of their martyrdom is extant:
The Prefect Rusticus says: "Approach and sacrifice, all of you, to the gods." Justin says: "No one in his right mind gives up piety for impiety." The Prefect Rusticus says: "If you do not obey, you will be tortured without mercy." Justin replies: "That is our desire, to be tortured for Our Lord Jesus, and so to be saved, for that will give us salvation and firm confidence at the more terrible universal tribunal of Our Lord and Saviour." And all the martyrs said: "Do as you wish; for we are Christians, and we do not sacrifice to idols." The Prefect Rusticus read the sentence: "Those who do not wish to sacrifice to the gods and to obey the emperor will be scourged and beheaded according to the laws." The holy martyrs glorifying God betook themselves to the customary place, where they were beheaded and consummated their martyrdom confessing their Saviour.
Jules Lebreton writes (at the entry for St. Justin Martyr in Cath. Encycl.): "Justin tries to trace a real bond between philosophy and Christianity: according to him, both one and the other have a part in the Logos, partially disseminated among men and wholly manifest in Jesus Christ."
Yet for all their apparent similarities in many respects, there is a profound difference between the Platonic insight and the Christian one: and this pertains to the relations between God and man.
Both Plato and Justin proclaim the transcendent God. Yet for Plato, God is so beyond as to be almost impossible for the human mind to grasp. Still, Plato felt the divine pulls in his own nature. These, Plato thought, could be accounted for and articulated by an act of pure unaided intellect, that is, by nous, in a state of intense contemplation.
Contrast this position with Justin Martyr's, who insisted that human wisdom was impossible without the testimony of the Prophets (whom God himself had informed and instructed) and the action of the Holy Spirit. For Plato, man's relations with God consist of operations of the mind. For Justin, they are operations of the heart, of the Spirit. For Justin, God is not a mental abstraction: He is real Personality with whom one can have direct personal relations, in the Spirit.
A later writer, John Scotus Eriugena (ninth century), elaborates Justin's position, in the process noting that there is a downward tendency of the soul towards the conditions of animal existence, and that this has only one remedy: Divine grace, the free gift of the Holy Spirit. By means of this heavenly gift, writes William Turner [at the entry for Scotus in the Catholic Encyclopedia], "man is enabled to rise superior to the needs of the sensuous body, to place the demands of reason above those of bodily appetite, and from reason to ascend through contemplation to ideas, and thence by intuition to God Himself."
The pull of animal nature is an idea we also find in Plato, and also the countervailing pull from the divine Beyond. Man lives in the metaxy, the in-between reality constituted by the two. Man's task is to resolve this tension and establish the proper balance that expresses the highest and best development of his human nature. But man must do this entirely by himself, by means of nous, or reason. There is no spiritual help external to the human psyche available to facilitate this process.
In contrast, as Lebreton points out, Justin Martyr
admits "that the soul can naturally comprehend what God is, just as it understands that virtue is beautiful," but he denies "that the soul without the assistance of the Holy Ghost [Spirit] can see God or contemplate him directly through ecstasy, as the Platonic philosophers contended." And yet this knowledge of God is necessary for us: "We cannot know God as we know music, arithmetic, or astronomy; it is necessary for us to know God not with an abstract knowledge but as we know any person with whom we have relations. ... The problem which it seems impossible to solve is settled by revelation; God has spoken directly to the Prophets, who in their turn have made Him known to us. ... It is the first time in Christian theology that we find so concise an explanation of the difference that separates Christian revelation from human speculation." [Emphasis added]
* * * * * *
Natural Law, Contingency, and the Scientific Method
The Platonic model encourages us to recognize that the universe is zoon empsychon ennoun, a living creature endowed with soul and intelligence. The myth of the Demiurge describes the world process as a type of incarnation, a dynamic relation of absolute being and contingent becoming evolving in space and time in a manner expressing a perduring taxis-ataxia relation. The Cosmos itself, the totality of all existing things, like its constituents (for example, man, and even the stars), is an eikon of being-in-becoming, a reflection or image of the divine Idea. Time itself is but "a moving image of eternity." The life of the cosmos is wholly dependent, contingent on the Idea from which it manifests.
It is a lawful, orderly universe, yet one in which new occurrences are always arising. These new events come, as it were, from a sea of contingency analogous to Plato's conception of Space, that is, Chora, the infinite field of unformed, pure potentiality.
The immediately foregoing ideas, of course, are not scientific ones strictly speaking. Still, there are elements here that perhaps science would do well to consider, in order to maintain the integrity of its own method. For one thing, science itself, in its disclosure of the regularities of nature, seems to have an in-built tendency to overlook contingency. We may define an event as contingent "if a description of it is neither self-evident nor necessary, if it could have happened differently," as Ted Peters puts it in his Preface to Pannenberg's Toward a Theology of Nature.
C. A. Dubray writes [Teleology, Cath. Encycl.], "The fact that the world is governed by laws, far from giving any support to the mechanistic conception, is rather opposed to it. A law is not a cause, but the expression of the constant manner in which causes produce their effects." In other words, natural laws are expressions of observable regularities that occur in the world of existent phenomena in ordinary space-time reality. Thus the laws themselves have no force as causes: they are descriptions.
Yet the focus on regularity inevitably masks the particularity and contingency of unique events. As Ted Peters notes, it is here that we run into a problem of focus in the scientific community, because virtually all the theoretical attention is given to the regularity of nature's laws, while the contingency of natural events slips into the nearly invisible background. Peters continues:
What researchers concentrate on are the uniformities that can be expressed in timeless equations. A dictionary of equations describing these uniformities allegedly constitutes scientific knowledge. ... A closer examination, however, reveals that the applicability of these equations to concrete cases of natural processes requires certain initial and marginal conditions, conditions that in every case are contingent. Only when contingent conditions permit can we expect a natural law to operate as expected.
To the extent that the scientific method of inquiry is premised on an If/Then logical construction, which seems ever to be the case, the method itself is an exercise in contingency, yet nonetheless one in which determinacy gets thematized, whereas contingency gets ignored. Arguably this is a serious bias having epistemological implications; for example, if the laws of classical dynamics are in principle temporally reversible, the actual course of natural events from which those laws have been abstracted is not. The reality of nature is first and foremost a historical reality.
Pannenberg suggests a corrective for this bias, acknowledging: "That modern science so easily lends itself to abuse cannot be prevented in principle. It is one of the risks involved in the abstract study of regularities that either are inherent in nature itself or can be imposed on natural processes [e.g., as in ideological, technical, or engineering solutions]. This risk cannot be met on the level of scientific description itself but must be met first on the level of philosophical reflection on the work of science. It is on this level that the abstract form of scientific description must be considered with special attention to what it is abstracted from and what is methodically disregarded in the abstract formulas of science."
And so contingent conditions, i.e., initial and boundary conditions, must be restored to their proper place in our deliberations, for they are required for any formula of natural law to be applied. They are contingent at least in that they cannot be derived from the particular formula of law under consideration. The mathematical formula of a natural law may be valid without regard to time. The physical regularity that is described by such a formula is not independent of time and temporal sequence. But it is only that physical regularity which makes the mathematical formula a law of nature. This suggests that the laws of nature are not eternal or atemporal, because the fields of their application, the regularities of natural processes, originate in the course of time. Thus it also becomes understandable that new patterns of regularity emerging in the sequence of time constitute a field of application for a new set of natural laws.
We may recognize that the total process of natural events presents itself to observation as a mesh of contingency and regularities. It is the task of science to pursue thematically the aspect of regularity. But, asks Pannenberg, can science ever succeed "in bringing into view the entirety of nature as determined in all details by a number of laws that are in any case not infinitely complex? This would mean at the same time that a stage of research is conceivable from which nothing more could be discovered. Many natural scientists have had this nightmare because of the successes of their own research. Fortunately it probably is not a truthful dream."
For, says Pannenberg, laws always uncover "what is necessary superimposed on what is contingent." Given the undeniable contingency of occurrences in natural events, can we recognize in their special character as occurrences "[that] regularity as their own element in such a way that the presence of regularity can be thought together with the contingency of occurrences, not only under abstraction from the contingency of occurrences"? [Emphasis added]
Which is why Pannenberg advocates an opening up of new viewpoints in scientific research, "not because physical hypotheses or insights can be derived from them but because they open up and enlarge the intellectual space on which the formation of physical hypotheses depends. ... In physics also, horizons of questioning have to be opened up first of all in order that hypotheses that arise in them can be examined by experiment and classified theoretically."
Perhaps we need a greater appreciation of the fitness of the scientific method to engage the truly great questions of life, which ever seem to involve the relations of law and contingency. Leibniz propounds two great questions of perennial interest to the human mind: (1) Why are things the way they are and not some other way? (2) Why does anything exist at all?
Such questions, scientists will readily tell you, are beyond the purview of the scientific method. But does that mean such questions have no force or meaning such that they should not be asked at all?
Perhaps the incapability of the scientific method to answer such questions owes to the fact that all the great physical laws are acknowledged to be time-reversible; but we know that existence in space and time is not a time-reversible process. As Pannenberg states, it is a historical process. We might even say it is an evolutionary process.
Which suggests an analogy that might enlighten these questions, sharpen their meanings, and suggest additional questions: an analogy to direct human experience. Pannenberg writes of human beings, who do seem to live in a time-irreversible, that is historical process:
Human beings never live only in the now. Rather, they experience their present as heirs of the past and as its active change. They anticipate the future in fear, hope, and planning; and in the light of such anticipation of the future they return to their present and the heritage of their past. The fact that we know of historical continuity is at least also conditioned by this peculiarity of human experience with time. If there is a new event, then it modifies the context of our consciousness of time which is already found present. It throws light back on earlier occurrences which have become a part of our experience already. In the same way, ideas that occur to us throw light on our previous expectations and plans in justifying, fulfilling, modifying, or disappointing and thwarting them. Thus the contingent event always enters already into a context of experience or tradition . The future, beginning in the present happenings, is thus the origin of the perspective in which the past occurrences are put by every new experience.
Worldviews and Paradigm Shifts
It is perhaps a truism that we tend to find what we're looking for by screening out any and all potential elements that do not fit the pattern of our expectation. Arguably, the scientific method is inherently exposed to danger from this side, as suggested in the above remarks. Indeed, Schrödinger's wavefunction theory seems to predict this. Consider these remarks from Stephen M. Barr [Modern Physics and Ancient Faith, 2003]:
In quantum theory, as traditionally formulated, there are systems and observers. Or rather, in any particular case, there is the system and the observer. The observer makes measurements of the system. As long as the system is undisturbed by external influences (that is, as long as it is isolated), its wavefunction (which is to say its probability amplitudes) will evolve in time by the Schrödinger equation.... However, when a measurement is made of the system the observer must obtain a definite outcome. Suddenly, the probability for the outcome that is actually obtained is no longer what the mathematics said it was just before the measurement, but jumps to 100 percent. And the probabilities for all the alternative outcomes, the ones that did not occur, fall to 0 percent.
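Barr's point about probabilities jumping on measurement can be illustrated with a toy sketch in Python. This is only a numerical cartoon of the "collapse" bookkeeping, not real quantum mechanics, and the amplitudes are made-up numbers chosen for illustration:

```python
import random

def born_probabilities(amplitudes):
    """The probability of each outcome is the squared magnitude of its amplitude."""
    return [abs(a) ** 2 for a in amplitudes]

def measure(amplitudes):
    """Simulate one measurement: pick an outcome with Born-rule probability,
    then 'collapse' - the chosen outcome jumps to probability 1, all others to 0."""
    probs = born_probabilities(amplitudes)
    r, cumulative, chosen = random.random(), 0.0, len(probs) - 1
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            chosen = i
            break
    collapsed = [0.0] * len(probs)
    collapsed[chosen] = 1.0
    return chosen, collapsed

# A toy two-outcome system: amplitudes 0.6 and 0.8 give probabilities 0.36 and 0.64.
outcome, after = measure([0.6, 0.8])
```

Before the measurement the mathematics assigns 36 and 64 percent; after it, whichever outcome actually occurred sits at 100 percent and the other at 0 percent, exactly as Barr describes.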
Thus we might say that the reality we humans experience ever involves a moving goal-post. And as the mover of this goal-post, the human agent is most indispensably involved in this process.
Faced with such indeterminacy regarding the foundations of experience, it is not surprising that people usually have recourse to mediating worldviews, or organized frames of ideational reality that constitute the conceptual space in which active experience is engaged and accordingly analyzed and interpreted. Certainly Plato has offered such a model. And so has Nobel laureate Jacques Monod [in Chance and Necessity, 1971]:
Chance alone is the source of every innovation, of all creation in the biosphere. Pure chance, absolutely free but blind, is at the very root of the stupendous edifice of evolution. The central concept of biology is today the sole conceivable hypothesis, the only one compatible with observed and tested fact. All forms of life are the product of chance....
Needless to say, these two models are polar opposite conceptualizations. Yet having received each on good authority, which do we choose?
Such are not idle considerations; for as James Hannam points out [The Development of Scientific and Religious Ideas, 2003], grand theories often suffer death by detail where it is found that up close the situation is too complicated for the theory to handle.... [Yet] in the end, after it has changed the course of the river of enquiry, the theory can end up as a mortlake cut off from the general flow....
Hannam cites historian Thomas Kuhn, who documents an historical process he terms paradigm shift, describing a situation in which the findings of authoritative science move out of science and into practically every other field of human endeavor. Once a given, albeit partial or even defective, theory becomes dominant, writes Hannam, far from being thrown out, a falsified theory is enhanced to deal with new information until such time as it finally collapses under the weight of anomalous results. Then, after a chaotic period, a new theory emerges that can deal with the anomalies and normal service resumes.... A paradigm refers to but one field, say classical mechanics or health policy, whereas the ideology/worldview is the general background that underpins all the paradigms.
The worldview (or ideology, if you prefer), for better or worse, implicitly shapes the background knowledge of thinking agents to which new experiences constantly are being conformed. Hannam says that worldview is often so deeply embedded in the psyche that it is very rarely considered explicitly except by specialists, but that nonetheless, the worldview is seen as [a] self-confirming fact of life and hence it is not strictly rational . The existence of a dominant worldview does not mean that a particular individual is unable to think outside the box but rather that his ideas are unlikely to fall on fertile ground. Unless new ideas can be stated in a language that makes them comprehensible to his peers, his intention in writing will not be met.
Which is the not-too-subtle way to put the fact that every man has a worldview, without exception, whether articulate or inarticulate; and that somehow, for the intention of writing to be met (that is, for accurate and meaningful, i.e., successful, communication of ideas to take place), some deeper, common ground of shared truth must first be accessed, for the purpose of providing a more capacious intellectual space in which the human pursuit of knowledge and wisdom might unfold or evolve from its present point of attainment.
But where today in our modern world is such a common ground or field to be found? Hannam proposes the examination of the history of ideas as a possibly useful method in the search for common ground. He writes,
To examine the history of ideas the only fair way to proceed would seem to be to place before ourselves the evidence and authority that the historical agents had before them and assume they acted rationally on that basis. Otherwise, there is no hope of ever tracing intellectual development because cause and effect assumes some sort of logical causality that is impossible with non-rational agents. The best that could be hoped for would be a catalog of mental positions, with no way to say how one led to another except by being pushed by blind exterior forces. This might be precisely what determinists are advocating but they would have to give up any hope of finding causes and restrict themselves to explanations.
Perhaps we moderns would do well to reconsider the common assumption that people living before our own time were somehow inferior in knowledge, experience, and observational powers as compared with our own status as enlightened individuals. Arguably, the ancient world produced some of the most powerful thinkers in the history of mankind, formulating ideas that were, in the words of Hannam, the fruits of unfettered metaphysical speculation that inevitably hits on the right answer occasionally.
Democritus, for example, proposed a theory predicting the atom as the ultimate constituent of matter, more than two thousand years before the technical means existed to isolate atoms experimentally or, as Hannam notes, any useful applications for them could be found. Then it was discovered that the atom itself is an ordered constellation of even finer parts. There seems to be an historical progression of ideas here, the new building on a framework originally laid down in the past, modifying it, improving on it in light of new insights and technical capabilities.
Hannam gives another example of more recent vintage: Copernicus needed Nicole Oresme's solution as to why we do not feel the movement of the Earth even though in Oresme's time it was just a curiosity as no one thought the Earth actually was moving... each new idea, once accepted, shifts the boundaries of the worldview and makes it possible for further new ideas to be accepted into the pale.
We can extend the examples even further. Riemann constructed a geometry, apparently because his mind could grasp the logic and beauty it revealed for its own sake. But at the time, it had no apparent external referent in the field of nature. It was a beautiful and glorious abstraction until Einstein came along and picked it up off the shelf, as it were, to become the very language of relativity theory.
Thus it might be said that the evolution or progress of science depends on successive enlargements of the conceptual space it requires to do its work. In other words, science inherently is a participation in the historicity of the world.
Whatever our personal worldview, perhaps it would be well to recall that science is an historical process. Perhaps this understanding could open up additional, needed conceptual space that science itself requires in order to advance.
Actually, we're on a techno-math sidebar at the moment trying to nail down some definitions, but I'm sure we'll be leaving the formulae behind soon when we get back to the issues.
At any rate, I do understand the "overload" feeling - it happens to me every time tortoise and Doctor Stochastic start posting - I have to rethink everything with their added insight, go back and pull up the source documents, ponder, etc.
Jeepers, but it is a wonderful learning experience! And we're getting for free better than what lots of students are paying top dollar to get!
I rarely say this about a post as long as yours, in fact I can't recall ever saying it before, but ... I agree with everything you said.
You said: I disagree and I'm waiting to read tortoise's later post at greater length to comment further, but I think "Shannon Information is a state within a continuous process" would be a better way of putting this. It is vital to distinguish between the discrete measures of randomness calculated as entropy values and the continuous case of "Shannon Information" that is a "stateful" result of the subtraction of the ensuing randomness from the preceding. It is only within the discrete measurements that we can discuss the origins of DNA and RNA nucleotide sequences, without which we can never arrive at any conclusion about abiogenesis and the way information relates to it. This is why the distinction between the discrete and continuous cases are of the utmost importance in my opinion.
IOW, Shannon's theory pertains to the change of states, the action; not either the before or after state, the condition.
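For Lurkers, here is a minimal Python sketch of that before/after distinction (the four-symbol alphabet and the probabilities are hypothetical, chosen only for illustration): information is the difference between the receiver's uncertainty before and after the message, not either state alone.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)): the uncertainty of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the message arrives: the receiver considers 4 symbols equally likely.
H_before = entropy_bits([0.25, 0.25, 0.25, 0.25])   # 2 bits of uncertainty
# After the message: one symbol is now certain.
H_after = entropy_bits([1.0, 0.0, 0.0, 0.0])        # 0 bits
# The information is the change between the two states - the action, not the condition.
information = H_before - H_after                    # 2 bits gained
```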
His entropy calculation describes the state (before or after) and is either discrete or continuous depending on the probability distribution. From the Chowder Society (emphasis mine):
The most important concept of Shannon's theory is the "entropy function". It is expressed in the discrete form by the equation

H = - SUM_i p_i log2(p_i)
This function represents the lower limit on the expected number of symbols required to code for the outcome of an event regardless of the method of coding, and is thus the unique measure of the quantity of information. It is the amount of information that would be required to reduce the uncertainty about an event with a set of probable outcomes to a certainty. As derived by Shannon it is the only measure of information that simultaneously meets the three conditions of being continuous over the probability, of monotonically increasing with the number of equiprobable outcomes, and of being the weighted sum of the same function defined on different partitions of the probable outcomes. In the discrete and continuous forms, the uncertainty corresponds to the entropy of statistical mechanics and to the entropy of the second law of thermodynamics, and it is the foundation of information theory. Shannon's work has found application in computer science, in communication engineering, in biological information systems including nucleic acid and protein coding, and hormonal and metabolic signaling, in linguistics, phonetics, cognitive psychology, and cryptography.
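The three conditions named in the excerpt (continuity over the probability, monotonic increase with the number of equiprobable outcomes, and the weighted-sum property over partitions) can be checked numerically. A small Python sketch, using Shannon's own grouping example (the specific probabilities are just the standard textbook illustration):

```python
import math

def H(probs):
    """Shannon's entropy function in the discrete form: H = -sum(p_i * log2(p_i))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Monotonically increasing with the number of equiprobable outcomes: H = log2(n).
h2, h4, h8 = H([1/2] * 2), H([1/4] * 4), H([1/8] * 8)   # 1, 2, 3 bits

# Weighted-sum (grouping) property, Shannon's example: a three-way choice with
# probabilities (1/2, 1/3, 1/6) equals a coin flip followed, half the time,
# by a second (2/3, 1/3) choice.
lhs = H([1/2, 1/3, 1/6])
rhs = H([1/2, 1/2]) + 0.5 * H([2/3, 1/3])   # equal to lhs
```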
But to move to the discussion of the origins of DNA and RNA nucleotide sequences, without which we can never arrive at any conclusion about abiogenesis and the way information relates to it, I would like to offer a review for Lurkers of how Shannon information [reduction of uncertainty in the receiver] applies by looking at how the encoded DNA sequences are communicated and decoded. IMHO, that might be helpful to those following our discussion of how the encoding might have originated.
Following is an introductory review written by Schneider with helpful graphics and examples. I've excerpted a few paragraphs here and there to give Lurkers an idea of its content, though I didn't try to format the graphics or formulae:
Level 1 theory [Schneider, 1991a] explains the amazingly precise actions taken by these molecules. For example, the restriction enzyme EcoRI scans across double helical DNA (the genetic material) and cuts almost exclusively at the pattern 5' GAATTC 3', while avoiding the 4^6 - 1 = 4095 other 6 base pair long sequences [Polisky et al., 1975, Woodhead et al., 1981, Pingoud, 1985]. How EcoRI is able to do this has been somewhat of a mystery because conventional chemical explanations have failed [Rosenberg et al., 1987]. According to level 1 theory, molecular machines such as EcoRI are constrained in their operations by their "machine capacity", which is closely related to Claude Shannon's famous "channel capacity" [Shannon, 1949]. So long as one does not exceed the channel capacity, Shannon's theorem guarantees that one may have as few errors in communication as desired. It is this theorem which has led to spectacularly clear telephone communications and compact disk (CD) sound recordings. The equivalent statement for molecular machines is that, so long as a molecular machine does not exceed its machine capacity, it may take actions as precise as may be required for evolutionary survival. We will sketch the proof of this amazing result.
Level 2 theory [Schneider, 1991b] deals with the ancient problem of Maxwell's demon [Leff & Rex, 1990] and shows that there is an energetic cost to molecular operations: at least k_B T ln(2) joules must be dissipated to the surroundings for every bit of information gained by the machine (where k_B is Boltzmann's constant and T is the temperature in kelvin). In the recent literature some authors often claim that k_B T ln(2) is not a limit at all or that information loss (rather than gain) is associated with dissipation [Landauer, 1991]. However, this relationship is merely (!) a restatement of the Second Law of Thermodynamics [Schneider, 1991b], so those who contest it are unlikely to be correct. We will investigate the source of this confusion.
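The k_B T ln(2) bound quoted above is easy to evaluate; a quick Python check (the 300 K temperature is chosen arbitrarily as a room-temperature example):

```python
import math

K_B = 1.380649e-23   # Boltzmann's constant, joules per kelvin

def min_joules_per_bit(temperature_kelvin):
    """Minimum dissipation per bit of information gained: k_B * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At roughly room temperature (300 K) this is about 2.87e-21 joules per bit -
# tiny, but strictly greater than zero, which is the whole point of the bound.
e_bit = min_joules_per_bit(300.0)
```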
2. Level 0 Molecular Machine Theory: Nucleic Acid Binding Molecules as Practical Examples
A ribosome is a collection of proteins and RNAs which reads messenger RNA and uses that information to construct proteins. This translation process starts in a region called the ribosome binding site [Gold et al., 1981, Stormo et al., 1982b]. One problem facing ribosomes is to locate the binding sites. The cell's survival depends on how well this is done. Some genes are crucial because the translated protein is required for an important cellular function. Other proteins are needed for efficiency, and so the loss of them would put the organism at a competitive disadvantage. If a protein were unnecessary, mutations in its gene would eventually destroy it, and the ribosome binding site at the start of the gene would atrophy. Likewise, if the ribosome were to start translation in places that it shouldn't, the cell would waste energy making useless proteins. Thus it would make biological sense if the only places ribosome binding sites exist are in front of functional genes. However, the conventional wisdom of biologists and chemists says that this is not possible because chemical reactions can have many side products, and only a portion of the substrate becomes product. Thus in the chemist's way of thinking, the ribosome can begin translation just about anywhere, with a smooth gradation between real sites and other places. In contrast, information theory says that precise choices (distinguishing sites from non-sites) can be made by the appropriate combination of many "sloppy" components. Whether such precise recognition is actually done is a question which can only be answered experimentally. The bacterium Escherichia coli has approximately 2600 genes [Schneider et al., 1986], each of which starts with a ribosome binding site. These have to be located from about 4.7 million bases of RNA which the cell can produce [Kohara et al., 1987]. So the problem is to locate 2600 things from a set of about 4.7 million possibilities, and not make any mistakes. How many choices must be made? ...
There is just enough pattern at ribosome binding sites (Rsequence) for them to be found in the genetic material of the cell (Rfrequency). These data imply that there is no excess pattern, and no shortage of pattern. Because Rfrequency is fixed by physiology and genetics, this result also implies that Rsequence must evolve toward Rfrequency, a rather subtle result! For further discussion of the evolution of binding sites, see the references [Schneider et al., 1986, Schneider, 1988].
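The Rfrequency arithmetic behind the E. coli example can be reproduced directly (the numbers are taken from the excerpt itself):

```python
import math

# Schneider's Rfrequency: bits needed to single out the binding sites.
G = 4.7e6     # possible positions (~4.7 million bases of RNA, per the excerpt)
gamma = 2600  # ribosome binding sites to be located

R_frequency = math.log2(G / gamma)   # roughly 10.8 bits per site
```

In other words, the cell needs on the order of 11 bits of pattern per site to pick the 2600 real sites out of 4.7 million candidates, which is the quantity Rsequence is observed to evolve toward.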
3. Level 1 Molecular Machine Theory: Channel Capacity and Precision
Basically, the before sphere reflects the velocities the ribosome could obtain from the potential energy it carries before binding. In the same way, we speak of the potential energy of a ball sitting calmly on a table as being equal to the kinetic energy it would have at the moment it hits the floor after having rolled off the edge.
When the machine is in an after state, it has energy from the thermal noise, N_y joules. (The subscript y refers to the velocity space, which is called Y-space.) The energy of a mass is proportional to its velocity squared, so N_y = (r_after)^2. (We normalize to remove the proportionality constant.) Turning this around, it means that the radius in velocity space is related to the square root of the energy: r_after = sqrt(N_y). In the before state the machine has additional energy P_y joules (which it will dissipate during the machine operation), so the total energy before is P_y + N_y. That means that the radius of the before sphere is r_before = sqrt(P_y + N_y)...
The molecular operation that the ribosome must perform is to select the after sphere that represents its binding sites and to avoid all the other after spheres that represent different patterns. As we showed above, the before sphere encloses all of the after spheres [Schneider, 1991a]. How many choices can the ribosome make for a given energy dissipation? In other words, how many after spheres can fit into the before sphere? A gumball machine is a good model for the situation (Fig. 8)...
Because we use the same sphere geometry that Shannon did (see Appendix 2 of [Schneider, 1991a]), we can immediately apply his theorem, which he proved geometrically [Shannon, 1949]. When translated into molecular biology and applied to ribosomes, the theorem says:
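The sphere-packing count behind Shannon's geometric proof can be sketched numerically. Under the simplifying assumption that roughly ((P + N)/N)^(d/2) after spheres of radius sqrt(N) fit inside a before sphere of radius sqrt(P + N) in d dimensions, the capacity in bits is (d/2) * log2(1 + P/N). The numbers below are hypothetical, purely for illustration:

```python
import math

def capacity_bits(P, N, d):
    """Sphere-packing capacity: log2 of how many after spheres (radius sqrt(N))
    fit in the before sphere (radius sqrt(P + N)) in d dimensions."""
    return (d / 2.0) * math.log2(1.0 + P / N)

# Example: dissipated energy P three times the thermal-noise energy N,
# with d = 8 independent degrees of freedom.
c = capacity_bits(3.0, 1.0, 8)   # (8/2) * log2(4) = 8 bits -> 2^8 = 256 choices
distinct_choices = round(2 ** c)
```

So long as the machine attempts no more distinct choices than this count allows, the theorem says its error rate can be made as small as required.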
The discrete versus continuous cases make no real difference for information theory. The discrete case uses sums of terms and the continuous case uses integrals; these can be unified by treating sums as integrals using "distributions" (the Dirac delta function being the relevant example).
A Gaussian supported on the entire line is no different conceptually from a die roll supported on (1,2,3,4,5,6).
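tortoise's point can be seen by computing both cases side by side; a small sketch (the unit-variance Gaussian is just a convenient example):

```python
import math

# Discrete case: entropy of a fair die is a finite sum, H = log2(6).
h_die = math.log2(6)   # about 2.585 bits

# Continuous case: the differential entropy of a Gaussian has the closed
# form (1/2) * log2(2 * pi * e * sigma^2) - the integral analogue of the sum.
def gaussian_entropy_bits(sigma):
    return 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

h_gauss = gaussian_entropy_bits(1.0)   # about 2.047 bits
```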
information - Information is measured as the decrease in uncertainty of a receiver or molecular machine in going from the before state to the after state. Information is usually measured in bits per second or bits per molecular machine operation.
b. molecular machine: The definition given in Channel Capacity of Molecular Machines is:
d. after state (after sphere, after): the low energy state of a molecular machine after it has made a choice while dissipating energy. This corresponds to the state of a receiver in a communications system after it has selected a symbol from the incoming message while dissipating the energy of the message symbol. The state can be represented as a sphere in a high dimensional space.
1. Rfrequency: The amount of information needed to find a set of binding sites out of all the possible sites in the genome. If the genome has G possible binding sites and gamma actual binding sites, then Rfrequency = log2(G/gamma) bits per site. Rfrequency predicts the expected information in a binding site, Rsequence.
b. recognizer: A general term for a macromolecule that recognizes a specific pattern on a nucleic acid. This includes proteins such as transcription factors and protein/RNA complexes such as ribosomes and spliceosomes.
3. sequence pattern: A sequence pattern is defined by the nucleotide sequences of a set of aligned binding sites or by a common protein structure.
Explanation expressed in the language of biochemistry
The above addresses much of the explanation in the language of biochemistry, but heres more:
molecular machine operation: The thermodynamic process in which a molecular machine changes from the high energy before state to a low energy after state. There are four standard examples:
The restriction enzyme EcoRI selects 5' GAATTC 3' from all possible DNA duplex hexamers. The operation is the transition from being anywhere on the DNA to being at a GAATTC site.
The molecular machine operation for rhodopsin, the light sensitive pigment in the eye, is the transition from having absorbed a photon to having either changed configuration (in which case one sees a flash of light) or failed to change configuration.
The molecular machine operation for actomyosin, the actin and myosin components of muscle, is the transition from having hydrolyzed an ATP to having either changed configuration (in which the molecules have moved one step relative to each other) or failed to change configuration.
thermal noise: Thermal noise is caused by the random motion of molecules at any temperature above absolute zero. Since the third law of thermodynamics prevents one from extracting all heat from a physical system, one cannot reach absolute zero and so cannot entirely avoid thermal noise. In 1928 Nyquist worked out the thermodynamics of noise in electrical systems, and in a back-to-back paper Johnson demonstrated that the theory was correct.
Shannon sphere: A sphere in a high dimensional space which represents either a single message of a communications system (after sphere) or the volume that contains all possible messages (before sphere) could be called a Shannon sphere, in honor of Claude Shannon, who recognized its importance in information theory. The radius of the smaller after spheres is determined by the ambient thermal noise, while that of the larger before sphere is determined by both the thermal noise and the signal power (signal-to-noise ratio), measured at the receiver. The logarithm of the number of small spheres that can fit into the larger sphere determines the channel capacity (see Shannon 1949). The high-dimensional packing of the spheres is the coding of the system. For a molecular machine containing n atoms there can be as many as 3n - 6 independent components (degrees of freedom), so there can be 3n - 6 dimensions. The velocity of these components corresponds to the voltage in a communication system, and they are disturbed by thermal noise. Thus the state of a molecular machine can also be described by a sphere in a high dimensional velocity space.
To contributing Freepers: please let us know how much further any of you would like to go with the above before we move on to the next topic - and also, whether the following is an accurate summary of where we stand.
To review where we are for Lurkers:
2. Information (the reduction of uncertainty in the receiver, or successful communication) is that which distinguishes the living from the non-living. An example is a live skin cell vs. a dead skin cell: the DNA and chemicals are the same whether the cell is dead or alive.
3. We have agreed that the Shannon-Weaver model does not address the value or the meaning of the message being communicated, e.g. DNA, RNA but rather gives a measure for information which translates to biological systems.
4. There is so far no known origin for information (the successful communication) in space/time. We have not, however, considered string theory or geometric physics. But we have excluded Maxwell's demon, among others.
5. The words that we are using although defined with much precision on the thread may appear loaded to the reader, e.g. biased toward one conclusion or another. We have been unable to negotiate a glossary for replacement terms and definitions.
6. We realize that to explore the abiogenesis subject thoroughly we will also need to discuss:
b. how the symbols arose (semiosis, encoding/decoding)
c. how complexity arose (but first we must agree on which types of complexity apply)
I'm still here, and still hanging in. But totally lost. Please carry on. Your energy is admirable.
Hopefully we'll be crossing the techno-mountain shortly and be back into a good place for broad debate.
I just want to address briefly your objection to "irreducible complexity". The term is a hot button, I suspect, more because of its origin (Intelligent Design supporters) than its meaning.
The way it is used, it looks backwards at the evidence to point out a place where a single mutation would not produce the resulting organism that is observed. That would mean either intelligent design or evolution by chunks rather than bit by bit (which now takes on a greater meaning with Shannon).
But the concept is not really that different from "punctuated equilibrium" and thus I have tried to parallel the terms (one looking forward, the other backward) - so that we can truly discuss whether or not such "chunks of gain in information content" are viable.
If one does not like paralleling the two terms, it would alternatively be possible to parallel irreducible complexity with functional complexity, whose "chunks" may be smaller, i.e. function rather than organism oriented.
At any rate, I would like to leave "irreducible complexity" in the mix while we discuss complexity because so many on the forum are drawn to the concept and if we don't address it at all, it would be dismissive of their concerns. On the other hand, we owe it to betty boop and everyone reading not to permit a flame war to erupt.
How say all of you?
Fine with me, but establishing "irreducibility" to everyone's satisfaction is pretty much a hopeless task. Remember, before atomic fusion was discovered, no one could figure out how the sun could have existed for as long as geologists were saying was the age of the earth. Eventually, the mechanism was discovered. So the fact that something may appear "irreducibly complex" today doesn't mean that it is, in fact, irreducibly complex.
At any rate, I strongly suspect that we will end up with just a few broad categories of complexity and our own jargon for them. For instance, we may find:
Functional complexity as a broad formulation which includes irreducible complexity or punctuated equilibrium and specified complexity as sub-types.
2) ...the (minimal) amount of time it takes to create the system.
Entropy is a continuous measure of discrete state, continuous insofar as there is no theoretical finite bound on the number of discrete states being used to compute the measure. If the probability distribution of states stays isotropic, entropy will increase with the number of states. For a sequence of discrete states with anisotropic probability distributions, entropy as a function of the number of discrete states will generally be log-normal in character, converging on the minimum description length of the underlying algorithm (inductive systems in a nutshell). It is very common for people to ascribe far more information content to an anisotropic system than it actually contains, based on the growth in entropy on the front side of the log-normal curve.
For molecular biology, you will always have anisotropic probability distributions due to the nature of the system. To make things slightly more complicated, all distributions will look fairly isotropic until you make it over the log-normal hump; you can neither predict where the peak will be nor whether one even exists until the entropy starts to converge after some number of discrete states. Because you have to keep track of the probabilities of every sub-pattern in the sequence, this has very high complexity in both space and time.
I do not know if this helps or not.
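tortoise's description of entropy converging as more sub-patterns are tracked can be illustrated with a toy sketch in Python. The repeating sequence below is a stand-in for a highly anisotropic source; it is not biological data:

```python
import math
from collections import Counter

def block_entropy_bits(seq, n):
    """Plug-in entropy of the overlapping length-n blocks of seq."""
    blocks = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    total = len(blocks)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(blocks).values())

# A structured (anisotropic) sequence: 'ABCD' repeated.
seq = list("ABCD" * 256)

# Single symbols look perfectly 'random' (uniform, 2 bits), but because only
# 4 distinct blocks exist at ANY block length, H_n plateaus near log2(4) = 2
# bits and the per-symbol rate H_n / n collapses - the entropy converges.
h1 = block_entropy_bits(seq, 1)            # 2.0 bits per single symbol
rate8 = block_entropy_bits(seq, 8) / 8     # about 0.25 bits per symbol
```

Until block lengths cross the structure's period, the sequence looks isotropic, which is exactly the trap tortoise warns about when reading information content off the front side of the curve.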
You object to Alamo-Girl's representation of Shannon information as implying an action not a condition..., and have offered an alternative suggestion: that it pertains to a state within a continuous process. Your suggestion strikes me as most compatible with Boltzmann's attempted reduction of the second law of thermodynamics to a law of disorder describing near-equilibrium systems that can be addressed only by means of statistical analysis. Yet the biological world -- at the level of organism and even of the biosphere itself -- is seemingly characterized by intentional dynamics driving to increasingly higher states of order, versus the physical view that holds, a la Boltzmann, that everything is winding down to disorder, that the entropy of any system inexorably increases and spreads out into the uniformity of the equilibrium state in which nothing further can happen. This hypothesis appears to be inimical to an understanding of what life is, or how it got started. Which is why I believe A-G is correct to say that Shannon information implies a dynamic action (e.g., a choice, a decision), not the passivity of a state function (which leaves a role for a passive observer, but not a deciding actor). It has been observed (e.g., by Erwin Bauer et al.) that life in all its forms runs counter to the expected direction predicted by the second law in its Boltzmannian form.
What follows is more by way of deep background than an on-point elaboration of the instant case we're grappling with; but maybe it can provide a broader conceptual framework that might be helpful in our investigation.
As Rod Swenson points out (in Thermodynamics, Evolution, and Behavior, The Encyclopedia of Comparative Psychology, G. Greenberg and M. Haraway (Eds.), New York: Garland Publishers, Inc., 1997), Ronald Fisher, one of the founders of neo-Darwinism, wrote about the apparent incommensurability between living things and their environments, between biology and physics, or, more particularly, between evolution and thermodynamics: entropy changes lead to a progressive disorganization of the physical world ... while evolutionary changes [produce] progressively higher organization.... Swenson continues:
Fisher wondered out loud about the unification of the two opposite directions apparently taken by evolution and thermodynamics under a deeper more general principle. Although this did not happen in Fisher's lifetime, now, at the end of this century we can perform such a unification. It can now be shown that the active, end-directed, or intentional dynamics of living things, their reciprocal relation to their environments, and evolution as a general process of dynamically ordered things that actively work to bring more order into the world is the production of an active order-producing world following directly from natural law.
Contemporary Darwinists of whatever sect all agree that natural selection is the fundamental explanation, or true cause, of the progressively higher organization that we recognize as biological evolution. While little serious doubt has been cast on the fact of natural selection per se, it has been increasingly recognized over the past 25 or so years that natural selection has no explanation for (1) the active, end-directed striving of living creatures that has been termed the fecundity principle, or (2) the fact of planetary evolution -- a special case involving the problem of the evolution of a population of one.
It has been argued by Swenson and others that natural selection is not the cause of the evolution of species, but is itself the result or consequence of the fecundity principle -- the active, end-directed, dynamical striving for existence and persistence that can be observed among all living creatures. Swenson writes:
Darwin, who did not intend to address these issues with his theory, took the active properties of the living to have been breathed into dead matter by the Creator. The contemporary view has been that the active properties of the living came into the dead world of physics by an astronomically improbable accident that would only have to happen once (e.g., Dawkins, 1989). Given enough time, the argument goes, even an astronomically or infinitely improbable event can occur. Such an explanation, which is really no better than Darwin's, is unsatisfying for a number of reasons. For one thing, such infinitely improbable accidents would have had to have happened not once but repeatedly to produce the evolutionary record we see.
Even one infinitely improbable accident had only some 14 billion or so years in which to happen (since the world had a beginning; that is, it is not infinite). Penrose opines that this hypothetical happenstance is a virtual statistical impossibility; if so, then piling up a multiplicity of such infinitely improbable accidents to explain the evolutionary record would appear to be a non-starter.
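To give the finite-time objection a rough quantitative shape, here is a back-of-envelope sketch in Python. All the specific numbers (per-trial probability, trial rate) are illustrative assumptions of mine, not figures from Penrose or Swenson; only the ~14-billion-year window comes from the text above. Under a Poisson model with mean λ, the chance of seeing at least k rare, independent events falls off roughly as λ^k/k!, which is why stacking several such accidents is so much worse than positing one:

```python
from math import factorial

def expected_occurrences(p_per_trial, trials_per_year, years):
    """Poisson mean: expected number of events in the time window."""
    return p_per_trial * trials_per_year * years

# Illustrative assumptions (not measured values): a 1-in-10^50 event,
# with 10^30 independent chemical "trials" per year planet-wide, over
# the ~14-billion-year window the text mentions.
lam = expected_occurrences(1e-50, 1e30, 1.4e10)

# For small lam, P(at least k events) is approximately lam**k / k!,
# so each additional required "accident" multiplies the improbability.
for k in (1, 2, 3):
    print(k, lam**k / factorial(k))
```

On these assumed inputs λ is about 1.4 × 10^-10, and the probability of three independent occurrences is smaller still by roughly twenty further orders of magnitude; the qualitative point survives any comparably pessimistic choice of inputs.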
Swenson has an interesting hypothesis regarding the emergence of life. He says that order (e.g., as in living species) typically arises as soon as it gets the chance: life arose on Earth and persisted, "not after some long period of lifeless time, but as soon as the Earth was cool enough to keep the oceans from evaporating" -- as soon as it had the chance, as soon as some constraint was removed or some minimal threshold reached. The urgency toward existence expressed in the fecundity principle is seen in the evolutionary record writ large, "opposite on both counts" with respect to the second law of thermodynamics as a law of disorder. On my reading, this would seem to imply that order is built in paradigmatically from the inception of the cosmic system; thus the role of evolution is the successful, successive manifestation of the original paradigmatic order in space and time.
Swenson points out (after, e.g., Cloud, 1988; Margulis & Lovelock, 1974; Schwartzman et al., 1994; Swenson & Turvey, 1991; Vernadsky, 1986/1929) that one of the most important empirical facts recognized in recent decades is that the Earth at the planetary level evolves as a single global entity. This is our problem of the evolution of a population of one. He continues:
The present oxygen-rich atmosphere, put in place and maintained by life over geological time, is perhaps the most obvious prima facie evidence for the existence and persistence of the planetary entity. With the shift of the Earth's redox state from reducing to oxidative some two billion years ago, evolution undeniably became a coherent planetary process ... the progressive emergence of more highly ordered forms [is] a function of increasing levels of atmospheric oxygen. Studies with shapes of things and their metabolic and respiration capacities (e.g., Runnegar, 1982) suggest that order seems to come into being as soon as minimal thresholds, in this case oxygen, are reached. Both the progressive increase in atmospheric oxygen and the production of increasingly more highly ordered states constitute an increasing departure of the global system from equilibrium, again, as Fisher noted, running opposite to the direction generally assumed to be predicted for physical evolution according to the second law.
Thus life is an opportunistic phenomenon: as soon as evolving conditions permit, or constraints are removed, it will spontaneously take hold and occupy all possible, then-available niches. The hypothesis of abiogenesis isn't compatible with this scheme. For although spontaneous, the abiogenetic view seeks to avoid the problem of paradigmatic order altogether, leaving matter to bootstrap itself into life as a consequence of purely random processes -- i.e., the inception of life is regarded more or less as an accident which, once taking hold, thereafter spontaneously produces and reproduces biological order, notwithstanding that matter must obey the physical laws, and is especially subject to the second law as generally understood (i.e., Boltzmann's view of it). In short, abiogenesis really doesn't explain anything; for there is nothing in it that can account for the origin and action of the fecundity principle, which runs contrary to the direction of thermodynamic equilibrium -- heat death.
But I digress. Let us return to the problem of the evolution of our population of one, which poses an enormous obstacle to Darwinian evolutionary theory. Consider: if the Earth evolves as a single living global entity (as seems to be the case on well-documented empirical grounds), then the evolution and persistence of all the higher-ordered living states that have been the typical objects of evolutionary study (e.g., sexually reproducing animals) depend on "a rich and steady supply of atmospheric oxygen [that] makes them dependent upon the prior evolution and persistence of life at the planetary level for their existence. More precisely, they are internal productions of the larger planetary process or, in Vernadsky's words, they are regular functions of the biosphere." This suggests that the study of evolution at the planetary level is the study of the most fundamental entity of terrestrial evolution, without an understanding of which all the other living things that are effectively its component productions will never be understood. Yet this poses a major problem for Darwinian theory, because the planetary system as a whole cannot, by definition, be considered a unit of Darwinian evolution: Darwinian theory, which defines evolution as the product of natural selection, cannot address or even recognize planetary evolution, because there is no replicating or reproducing population of competing Earth systems on which natural selection can act....
The Earth evolves as a population of one. Living systems within it are subsystems of it, just as living cells, organs, etc., are subsystems of discrete living beings such as you and me.
Swenson further observes that, although the problem of the population of one is most strikingly noticeable at the level of planetary evolution, it has far more general application. Many things that evolve are also populations of one: for instance, a forest ecosystem undergoing succession; a human being (together with all component subsystems) whose lifetime evolves from birth through death; or the evolution of culture seen as the successive agglomeration of autonomous units (tribes, states, empires, the global economy). Such processes are internal component processes within the planetary system as a whole, in a manner that is beyond the ontology and explanatory framework of evolution following from natural selection. Natural selection is seen to be a process internal to the evolution of a population of one, and it cannot explain the systems to which it is internal. [Italics added]
Swenson has some fascinating things to say about ancient anticipations (in Thales, Heraclitus, et al.) of Newton's two great laws that are strikingly germane to our present topic. Even though I'd love to discuss them here, I see I have run out of space-and-time! Must leave off for now.
Just some food for thought.... Thanks so much for writing, StJacques!