Free Republic
Browse · Search
News/Activism
Topics · Post Article


A Freeper Research Project: Combinatorics, Probability Theory, and the Observer Problem
Self plus all interested Freepers | 30 June 06 | betty boop and Alamo-Girl

Posted on 06/30/2006 7:12:21 AM PDT by betty boop

Combinatorics, Probability Theory, and the Observer Problem

 

Combinatorics and probability theory are distinct yet related branches of mathematics.

Combinatorics is the enumeration of a (usually) finite set of objects. It asks questions like, “How many cards are in a standard deck?” Or, “What are the possible combinations of numbers for a winning lottery ticket?” Such counts underwrite probability questions like, “How likely is it the next card I draw will be a seven of hearts?” Or, “What is the likelihood of my winning the five-ball Lotto tonight?” Extending the method from questions of this type, Combinatorics might ask, “How many particles are there in the universe?” or “How many possible combinations of amino acids form a typical protein?”
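The counting side of these questions can be sketched in a few lines; here is a minimal Python example, assuming a hypothetical five-ball lotto drawn from a pool of 50 numbers (the pool size is an invented illustration, not something given above):

```python
from math import comb

# Probability the next card drawn from a full 52-card deck is the
# seven of hearts: exactly one favorable card out of 52.
p_seven_of_hearts = 1 / 52

# Counting question: how many distinct tickets exist for a five-ball
# lotto drawn from a pool of 50 numbers? (The pool size of 50 is an
# assumption for illustration.)
possible_tickets = comb(50, 5)

print(possible_tickets)  # 2118760 possible tickets
```

The likelihood of any single ticket winning is then simply 1 / possible_tickets, which is how the counting underwrites the probability question.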

Combinatorics was the first step toward probability theory. Its first formalization is thought to date back to third-century India, with its fuller, modern development beginning in the 17th century. Probability theory, and statistics after it, takes off from Combinatorics with the additional insight that the distribution under examination carries meaning whose assessment is important to determining the likely outcome of selected events.

The Frequentist and Bayesian interpretations are the main branches of probability theory.

Frequentists use actual data without “weighting.” Though it is an oversimplification to say this approach involves the quantification of the normal or Gaussian (“bell”) curve applicable to the particular distribution set under examination, for present purposes the statement will have to do as a rough rule of thumb: The subject area is abstruse and its details beyond the scope of this essay.

Bayesians — named for Thomas Bayes (1702–1761), an English theologian and mathematician who was first to apply inductive methods to probability assessments — employ a “learning curve” that purports to enable increasingly better predictions regarding the likelihood of a future event. As such, the Bayesian approach is usually the method chosen in studies of artificial intelligence. More recently it has been employed in the design of e-mail spam filters.[1] The underlying assumption is that with gains in knowledge about the systems we study, our predictions about their future development become increasingly reliable.
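The spam-filter use of Bayes’ rule can be illustrated with a toy single-word filter; every rate below is invented for illustration, not drawn from any real filter:

```python
# Minimal Bayes-rule sketch of the spam-filter idea: how likely is a
# message to be spam, given that it contains the word "lottery"?
# All rates below are hypothetical.

p_spam = 0.4               # prior: assumed fraction of mail that is spam
p_word_given_spam = 0.15   # "lottery" assumed to appear in 15% of spam
p_word_given_ham = 0.01    # ...and in 1% of legitimate mail

# Total probability of seeing the word at all (law of total probability):
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))  # 0.909
```

Seeing the word raises the estimated spam probability from the prior of 0.4 to about 0.91; each new observed word updates the belief again, which is the “learning curve” in action.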

Bayesians are often characterized as subjective in their approach to statistical theory: Their theory, after all, involves degrees of belief, which are based on prior knowledge about the system under study.

Frequentists often claim that, unlike Bayesians, they are rigorously objective, because their results all derive “from the numbers.” Yet Bayesians might retort that there is an unavoidable subjectivity involved in the Frequentist approach, in that it involves a prior choice of the appropriate sampling plan — which to Bayesians implies a subjective assessment of the system under study.[2]

This issue became topical in the Dialogue when Froggie[3], a Frequentist it seems, said:

Let’s hypothetically say that, through sound methods, the odds of a universe having been ‘created’ are determined to be something impossibly small, say 1 in 10^80, for example. This doesn’t prove the existence of an intelligent cause from any scientific standpoint. The reason? You would also have to consider all the other possible universes with only a 1 in 10^80 chance that didn’t happen, many of which could have been equally (or more) complex and wonderful.

Froggie’s “hypothetically sound methods” are what effectively constitute his “appropriate sampling plan.” We are given no further details. However, a choice of sampling plan has been made, and a choice can only arise in a subject; or as we may say, in an observer.

Froggie here was contradicting the speculation of his friend that, at bottom, the universe is “informed at its root” — that is to say, its origin and organization are information-driven. Which Froggie took to mean “the creation of an intelligent designer” — though the remark in context need not have been interpreted that way at all. And Froggie concluded:

Once you’ve said there’s only a 1 in 10^80 chance for an event to have occurred, that’s it, the science is done — that’s your conclusion; there’s nowhere to go from there with the scientific method: That is where science ends, and where philosophy and faith begin.

A commendable observation on the one hand perhaps. But on the other, Froggie’s Frequentist approach seems to identify the scientific method with Combinatorics theory — another subjective choice.

Combinatorics is best suited to a finite set consisting of equiprobable members or elements — which in effect is to say that all set elements are in “random” distribution. If indeed the universe has such a nature, then Combinatorics should be able to describe it. If not, then not. But that’s the very thing we’re trying to find out.

In short, by such methods we seem to get no further along with the question of whether the universe is “informed at its root” or not.

Consider another example: As Gerald Schroeder points out[4], a single typical protein is a chain of 300 amino acids, and there are 20 common amino acids in life; which means that the number of possible combinations that could lead to the actualization of a typical protein would be 20^300, or roughly 10^390. In this way Combinatorics theory specifies the global problem.

But as Schroeder further describes the problem:

“It would be as if nature reached into a grab bag containing a billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion billion proteins and pulled out the one that worked and then repeated this trick a million million times.”[5]
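Schroeder’s figure can be checked directly; a short sketch confirming that 20^300 is on the order of 10^390:

```python
from math import log10

# Possible 300-link chains drawn from 20 common amino acids:
combinations = 20 ** 300

# Count decimal digits to confirm the order of magnitude quoted above.
digits = len(str(combinations))

print(digits)                      # 391, i.e. combinations ≈ 10^390
print(round(300 * log10(20), 1))   # 390.3
```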

Combinatorics theory does not seem to shed much light on what we can say about the actual formation of a typical protein — though it tacitly acknowledges that the protein process must begin and unfold in finite time. Pragmatically, it seems any reliable statement about the origin of proteins requires us to narrow the field from all possibilities to those that are more likely to occur — not least because the people who observe and describe such things are finite themselves.

Schroeder’s problem is one that suits Bayesian methods. To put this into perspective, let’s consider a less theoretical example:

A utility company seeks to establish a fair rate to charge its customers, one economically consistent with its current and its expected ongoing operating and capital-expenditure needs; the expected needs are themselves part of what the analysis must determine.

As a usual first approximation to the rate-setting problem, the actual consumption of customers in the full geographic area served over a specific time period is studied; e.g., the kilowatt hours of electricity consumed per customer tallied at levels, rounded off to hundreds, thousands, etc. This is the global distribution of what’s going on.

To the Frequentist, the distribution of the actual number of customers in terms of actual levels of consumption is what ought to inform the rate analyst as to where rate breaks ought to occur. The global description of total power consumption is what concerns the Frequentist; he does not “weight” the consumption histories of actual customers. In contrast, for the Bayesian, a core “belief” might be that the standard of living of customers is a primary driver of their actual consumption rates. And this recognition has both short-term and long-term practical effects.

A Bayesian might recognize, for instance, that more affluent customers are more likely to be using devices such as microwaves, freezers, heating/cooling systems, and so forth; and thus they routinely consume more power than other customers. This actual experience —which captures the amount of electricity actually consumed in particular socioeconomic/geographic areas in an actual given time period — is then quantified and “built into” the power company’s data base.

Thus the Bayesian approach to the rate-setting problem is to use actual history, weighted in subsets of the total population along with a method of forecasting to project future consumption appropriate to each subset of the total population.[6]
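A toy sketch of the weighting idea follows; every figure below (subgroup sizes, consumption levels, growth factors) is invented purely for illustration:

```python
# Project next-period consumption by weighting each customer subgroup
# with its own assumed growth factor. All numbers are hypothetical.
# Each entry: (number of customers, avg kWh per month, growth factor)
subgroups = {
    "affluent":      (10_000, 1_200, 1.00),  # assumed to have peaked
    "middle":        (25_000,   800, 1.03),
    "less_affluent": (15_000,   500, 1.08),  # demand assumed rising
}

projected_kwh = sum(
    customers * kwh * growth
    for customers, kwh, growth in subgroups.values()
)

print(f"{projected_kwh:,.0f}")  # 40,700,000 kWh projected
```

An unweighted Frequentist tally would treat all 50,000 customers as one pool; the subgroup weights are where the Bayesian’s “beliefs” about standards of living enter the forecast.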

Both the Frequentist and the Bayesian recognize that the utility company’s rates cannot be determined by Combinatorics alone because the electrical power consumption per month per customer is not equiprobable. But the Bayesian is not satisfied looking only at actual consumption history. He seems to recognize that “the devil’s in the details,” and plans accordingly.

Consider an instance where the Frequentist approach may not give the best results. There is a certain “penalty” to customers built into the rate structure that the utility company must pass along to compensate for the cost of providing enough capacity to handle “peak load” conditions. Even though the overall system is not required to function at peak load all the time, customers would be very disgruntled (or even angry) if “brown-outs” or loss of service were to occur. So the utility company must build additional capabilities to answer this need.

If the rate analyst were of Bayesian persuasion, he would try to “get inside the head” of the customer — to look at each subdivision in the company’s service area, for instance, in order to determine the prevailing standard of living of each.

But even that assessment might not tell him everything he wants to know: The experience of customers may be subject to change. For instance, a poorer area might be on the verge of buying microwaves or air conditioners en masse — perhaps because such devices have become more affordable. A Bayesian rate-setter might use information like this to weight his data to reflect the expectation that the wealthier subdivisions have peaked in their consumption patterns, while the demand from less-affluent areas is on the rise.[7]

He might also consider the devices themselves: high-demand, low-consumption devices (such as microwaves) put a particular strain on power plants — a big pull when the device is turned on, followed by very little actual consumption. This could put a serious drain on the system during peak-load conditions especially. So the rate analyst might consider whether customers tend to use their microwaves within a block of time more or less concurrent with when people get home from work. He might further consider the weather (temperature, humidity) vs. consumption history to project heating and cooling consumption.

From the weighted information, he would project operating, maintenance, and future construction costs and then look at the frequency distribution of the diverse actual customers to determine a fair rate structure.

At the end of the day, both Frequentists and Bayesians rely on Combinatorics, on the supposition that combinations are the simplest way to look for the likelihood of a thing happening. Yet Frequentists view probability as a quantitative measurement only, while Bayesians look for underlying meaning — a qualitative measurement — in what they see in the distribution under study. Thus Bayesians and Frequentists will often reach different conclusions when looking at such important issues as the probability of the existence of this particular universe or of biological life.

It appears that both approaches are equally “subjective” in the final analysis: The “observer problem” — the problem of a subject intending an object — seemingly cannot be obviated under any experimental conditions.

Yet it seems Bayesians are not so much “subjectivist” as empiricist in their approach to statistical theory. In comparison, we might say the Frequentist approach is formalist: It thinks the observer can be “left out.” As if the origin of the formalism did not have a human mind — an “observer” — to “think it into existence” in the first place.

Or consider some other examples where Frequentist methods do not seem to rise to the occasion:

“The Boston Red Sox will win the World Series in 2007.”

How do we evaluate the probability that such a prediction actually accords with a real outcome? Since it is clearly a statement about a future event, no one today can state with certainty whether it is true or not. And yet there are people who have a stake in the actual outcome for one reason or another (e.g., sports bettors and their bookies among others); and so they may try to evaluate the probabilities. To do so, they must rely on knowledge that they have about the Red Sox team, relative to what team they think the Sox’s likely competitor would be in 2007: That’s really all they have to go on, subject to new information coming on-stream between now and then. Thus “the odds will change” over time.

Now let’s say John, an interested party in a Red Sox victory in 2007, bases his prediction on his knowledge of the current team and its past track record. His pal Jim, another interested party, may have “insider knowledge” that certain valuable Red Sox players will be traded away in 2006. Or that a key pitcher has a bad elbow, etc. And so he predicts the Red Sox will not win.

In both cases, each is establishing his prediction on the basis of the current state of his knowledge, however imperfect.

Each man has a subjective belief about the future that is based on some body of knowledge. And the interesting thing is both men could have the same knowledge base and still disagree about the outcome of the 2007 World Series. Even with people having an identical data set, opinions may differ as to what the data mean. It seems the Frequentist approach cannot get at this dimension of the problem.

It seems the Red Sox problem is not, nor can it be, a problem for the Frequentist approach to statistical probability, in which “the numbers” apparently must speak for themselves. For how would one derive “numbers” applicable to this situation?

Then again, how about this prediction:

“Hillary Clinton has a 60% chance of being elected president of the United States in 2008.”

Any person making this prediction could do so only on the basis of his current state of knowledge — which may not be completely adequate and in any case is subject to change, as events on the ground change between now and November 2008. Such a statement, however, cannot be engaged at all on the Frequentist approach to probability assessment. The Frequentist interpretation requires a very large number of trials to establish the frequency of occurrence on which the probability distribution ultimately depends. But the 2008 presidential election is a singular event: It happens only once.

At bottom, Bayes’ theorem seeks to provide a method by which the unknowns crucial to any investigation — the current state of our knowledge, and our “degree of belief” in its adequacy, which together constitute the “now” of the question under consideration — can be made to answer productively for the “then” of the future event or state we are trying to quantify.

The coin-toss problem might shed some light on the relevant issues:  It goes straight up the Frequentist alley, so to speak, at least on first consideration.

A Frequentist expects that any long run of tosses of a fair coin will produce a result in which the coin lands “heads” roughly half the time and “tails” the other half: a 50-50 probability.

But a Bayesian might say: Hold on a minute! In the first place, no one can toss a coin a (virtually) infinite number of times to prove the theory. So in effect this expectation is based, not on experimental outcomes, but on “a degree of belief.” And in the second place, the expectation is based on the assumption that the coin is “fair”— that is to say, the coin is perfectly symmetrical on both sides; and it has not been tampered with.

A Bayesian would be inclined to allow for the possibility of there being a hidden variable — e.g., the two sides of the coin aren’t perfectly symmetrical, or maybe the coin has been tampered with — that may actually be the driver of the results obtained. To try to control for such factors, a Bayesian would take an incremental, iterative approach based on the results of actual experimental trials, and thereby hope to gain increased knowledge of the system so that predictions would become better approximations of actual future outcomes over time.
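One standard way to make this incremental, iterative approach concrete is a Beta-Binomial update, the usual conjugate model for a coin of unknown bias; the trial data below are invented:

```python
# Represent belief about the coin's heads-probability as Beta(a, b).
# Beta(1, 1) is a flat prior: no reason yet to suspect the coin.
a, b = 1.0, 1.0

def update(a, b, heads, tails):
    """Fold a batch of observed tosses into the current belief."""
    return a + heads, b + tails

# Hypothetical first batch of trials: 100 tosses, 62 heads.
a, b = update(a, b, heads=62, tails=38)

posterior_mean = a / (a + b)  # current best estimate of the bias
print(round(posterior_mean, 3))  # 0.618: leaning toward "not fair"
```

Each further batch of tosses is folded in the same way, so the estimate of the hidden variable (the coin’s true bias) sharpens as evidence accumulates — exactly the iterative gain in knowledge described above.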

On Bayes’ definition, the probability one assigns to event X taking place in the future effectively boils down to a person’s degree of belief in that event.[8] On first hearing, this may strike us as a very strange saying. No wonder Frequentists regard Bayesians as subjectivists!

But if they are, then it seems Bayesians are working overtime to elucidate and quantify — to make explicit and objective — the “belief elements” of their “subjectivity,” seemingly far more so than Frequentists do these days: Bayesian probability “is a formalism that allows us to reason about beliefs under conditions of uncertainty.”[9]

For anyone interested in this formalism:

Bayes’ theorem is a general rule to correlate variables. Let us assume two variables: A (an unknown variable) and B (the data you have collected so far in experiments).[10]

In Bayesian notation, for these two variables A and B the general theorem reads:

P(A | B) = P(B | A) * P(A) / P(B)

Here P stands for probability, A for the hidden variable (the suspected “hidden driver,” whose probability before the experiment is the prior, P(A)), and B for the results you’ve accumulated so far from prior experiments. The Bayesian wants to assess the posterior probability of A after the result of the present experiment, and then compare the two.

In the coin-tossing experiment, where it is uncertain whether the coin is fair, you can represent the system as two variables: X, the state of the coin — heads or tails? — upon landing from a toss (the result); and O, the balance or fairness of the coin. A specific, actual “assignment of state” for the variable O can be written using the lower-case letter o.

The theorem then reads:

P(o | D) = P(D | o) * P(o) / P(D),

which states the posterior probability of the coin’s fairness. Relative to the first formula, A has become O (the coin’s fairness) and B has become D (the data): the collected outcomes of X, the recorded states of the coin, heads or tails, from past trials. The lower-case o stands for a particular assignment of the hidden variable O, tying it to actual observation. In this context, P(D) is “a normalizing constant that we can ignore” for present purposes.[11]

It has been observed that, depending on your knowledge of the coin before you experiment with it (do you have any reason to believe the coin has certain properties?), you can figure P(o).[12]

“The [Bayesian] theorem becomes interesting when you know that A is the variable that drives the behavior of your system. In that case, you can express
P(B | A) as the probability of the results you got from the experiments, assuming you knew A.

“So when you know the causal relationship between A (the driving variable or cause) and B (the driven variable or effect), you can revert that knowledge to infer A from B. This means you can find the posterior probability for the hidden variable given its prior probability and some experimental results, if you know how the hidden variable influences the experiments.”[13]
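A numeric instance of the inversion the quoted passage describes, with two assumed states for the hidden variable (a fair coin versus a hypothetically biased one) and invented data of 8 heads in 10 tosses:

```python
# Inverting P(D | o) to get P(o | D), as the quoted passage describes.
# Hidden variable O has two assumed states: "fair" (P(heads) = 0.5) and
# "biased" (P(heads) = 0.8, an invented alternative).
from math import comb

prior = {"fair": 0.5, "biased": 0.5}      # prior P(o): no idea which
p_heads = {"fair": 0.5, "biased": 0.8}

def likelihood(o, heads=8, tosses=10):
    """P(D | o): binomial probability of the observed data under o."""
    p = p_heads[o]
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

# P(D): the normalizing constant mentioned in the text.
p_data = sum(likelihood(o) * prior[o] for o in prior)

# Bayes' theorem: P(o | D) = P(D | o) * P(o) / P(D)
posterior = {o: likelihood(o) * prior[o] / p_data for o in prior}
print(round(posterior["biased"], 3))  # 0.873
```

Knowing how each candidate state of O drives the data (the likelihood) is what lets the calculation run “backwards” from effect to probable cause.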

It has been pointed out that the Bayesian interpretation of probability has direct relevance to quantum theory. As John Baez writes:

It turns out that a lot of arguments about the interpretation of quantum theory are at least partially arguments about the meaning of probability! For example, suppose you have an electron in a state where the probability to measure its spin being “up” along the z axis is 50%. Then you measure its spin and find it is indeed up. The probability now jumps to 100%. What has happened? Did we “collapse the wave function” of the electron by means of some mysterious physical process? Or did we just change our prior [probability] based on new information? Bayesianism suggests the latter. This seems to imply that the “wave function” of the electron is just a summary of our assumptions about it, not some[thing] we can ever measure. Some people find this infuriating; I find it explains a lot of things that otherwise seem mysterious….

I believe the frequentist interpretation just isn’t good enough for understanding the role of probability in quantum theory. This is especially clear in quantum cosmology, where we apply quantum theory to the entire universe. We can’t prepare a large number of identical copies of the whole universe to run experiments on![14]

But that seems to be the very approach that Froggie had in mind (see above).

It may very well be that the Frequentist approach to probability theory is an elegant generalization of all conceivable cases that arise in four-dimensional space/time — that is, in the universe as defined by Newtonian mechanics, given its foundations in mathematics. On the other hand, such a generalization precludes the possibility of “subjective” insights about the system.

But if subjective elements have effect in our world (as clearly they seem to do), then a method is needed whereby such might be quantified. Once again, the “observer problem” appears to be front of stage — in quantum theory, relativity theory, and even in the macroworld so ably described by Newtonian mechanics.

As it turns out, observers themselves are naturally irreducible to mere “numbers.” Thus it seems their subjective experiences need to be accounted for, quantified in a way that science can deal with.

Since there is truthful observation on both sides of statistical theory, Frequentist and Bayesian, one is struck by the thought that here we are not dealing with mutually exclusive entities, but with complementaries. The analogy here goes to quantum superposition: Is it a particle, or is it a wave?

Here it seems we’re encroaching yet again on the famous “observer problem.” Which in the final analysis seems not to depend on “true/false” answers. It seems for many questions, it’s not a matter of true/false, either/or, yes/no, zero/one answers, but of both answers being true in some way in a dynamic world; and eerily, each in a way directly responsive to its opposite. Which is directly analogous to the Frequentist and Bayesian approaches to probability theory, and also to problems of quantum superposition. 

Perhaps Aristotle’s Law of the Excluded Middle needs to be rethought with respect to natural complementaries as well as to the problem of quantum superposition.[15]

 



[1] See unsigned article at http://abelard.org/briefings/bayes.htm#bayes

[2] Ibid.

[3] Froggie is a character in a yet-unpublished manuscript that Alamo-Girl and betty boop have been collaborating on for the past year, and hope to see through press soon.

[4] Gerald Schroeder, “Evolution: Rationality vs. Randomness,” 2000. http://www.geraldschroeder.com/evolution.html

[5] Ibid.

[6] Weighting data is a statistical technique. For details see http://npts.ornl.gov/npts/1995/courseware/Useable_Nav3_7_27.html

[7] Weighting data is a statistical technique. For details see http://npts.ornl.gov/npts/1995/courseware/Useable_Nav3_7_27.html

 

[8] http://www.dcs.qmul.ac.uk/~norman/BBNs/Baysean_approach_to_probability.htm

[9] Ibid.

[10] http://blog.monstuff.com/archives/000108.htm

[11] Ibid.

[12] Ibid.

[13] Ibid.

[14] John Baez, “Bayesian Probability Theory and Quantum Mechanics,” September 2003, at http://math.ucr.edu/home/baez/bayes.html

 

[15] Aristotle posited three laws as basic to all valid thought: the law of identity — A is A; the law of contradiction — A cannot be both A and not A; and the law of the excluded middle — A must be either A or not A. See http://www.bartleby.com/65/lo/logic.html for further details.


TOPICS: Philosophy
KEYWORDS: bayesianism; combinatorics; frequentism; observerproblem; quantumtheory; statistics
Often on science threads we encounter issues of statistics, probability theory -- and the observer problem. We thought it might be interesting and helpful to delve into these subject areas here, and to invite our fellow Freepers to share their knowledge and expertise.

Thanks in advance to all participants!

1 posted on 06/30/2006 7:12:26 AM PDT by betty boop

To: All
Oooooopppsss! Footnote 7 got lost somehow. So let me give it to you here:

7 Weighting data is a statistical technique. For details, please see http://npts.ornl.gov/npts/1995/courseware/Useable_Nav3_7_27.html]

2 posted on 06/30/2006 7:15:56 AM PDT by betty boop (The universe is not only queerer than we suppose, but queerer than we can suppose. -J.B.S. Haldane)

To: betty boop

If a tree falls in the forest what is the probability that a eco-whacko will be in it?...........


3 posted on 06/30/2006 7:16:08 AM PDT by Red Badger (Follow an IROC long enough and sooner or later you will wind up in a trailer park..........)

To: Alamo-Girl; marron; hosepipe; PatrickHenry; Doctor Stochastic; tortoise; js1138; Dimensio; ...

"Observer problem" ping here, just in case you have an interest!


4 posted on 06/30/2006 7:18:23 AM PDT by betty boop (The universe is not only queerer than we suppose, but queerer than we can suppose. -J.B.S. Haldane)

To: betty boop

I work with this stuff every day.


So what's the point of the article?


5 posted on 06/30/2006 7:19:21 AM PDT by Al Gator (Refusing to "stoop to your enemy's level", gets you cut off at the knees.)

To: Al Gator
I work with this stuff every day.

So what's the point of the article?

Not everybody does work with this stuff every day, Al Gator. We write for those people. If there's nothing here for you, just pass it by.

6 posted on 06/30/2006 7:29:58 AM PDT by betty boop (The universe is not only queerer than we suppose, but queerer than we can suppose. -J.B.S. Haldane)

To: betty boop

Oh I will, but I still don't see how this stuff is NEWS AND ACTIVISM!

Should be in general chat or something.


7 posted on 06/30/2006 7:33:52 AM PDT by Al Gator (Refusing to "stoop to your enemy's level", gets you cut off at the knees.)

To: betty boop
For those occurrences when an infinite probability space is encountered (i.e., not a finite space) ... say, such as considering when a "fair" coin is tossed and lands on its edge ... P(head)+P(tail)+P(edge)=1 ... the number of tosses being countably infinite (Cantor's definition) ...

Multiply by zero and then add the answer.

8 posted on 06/30/2006 7:35:58 AM PDT by jamaksin

To: betty boop
It appears that both approaches are equally “subjective” in the final analysis: The “observer problem” — the problem of a subject intending an object — seemingly cannot be obviated under any experimental conditions. Yet it seems Bayesians are not so much “subjectivist” as empiricist in their approach to statistical theory. In comparison, we might say the Frequentist approach is formalist: It thinks the observer can be “left out.” As if the origin of the formalism did not have a human mind — an “observer” — to “think it into existence” in the first place.

The entire essay is compellingly and clearly written, and I intend to return to it for additional study. But the above-comment is especially trenchant and insightful, I think. The observer effect is not trivial. It is real. It has profound effect. But it lies obscured within a lacuna inherent in the scientific method, so it is trivialized or simply denied.

9 posted on 06/30/2006 7:45:26 AM PDT by JCEccles

To: betty boop

Thank you oh so very much for posting this! I'll be checking in and commenting now and again.


10 posted on 06/30/2006 8:01:14 AM PDT by Alamo-Girl

To: JCEccles; Alamo-Girl
The observer effect is not trivial. It is real. It has profound effect. But it lies obscured within a lacuna inherent in the scientific method, so it is trivialized or simply denied.

Thanks for your kind words JCEccles. I certainly agree with your remarks, above. I'm looking forward to hearing from you again!

Thanks so much for writing!

11 posted on 06/30/2006 8:01:18 AM PDT by betty boop (The universe is not only queerer than we suppose, but queerer than we can suppose. -J.B.S. Haldane)

To: Red Badger

LOLOL!


12 posted on 06/30/2006 8:01:43 AM PDT by Alamo-Girl

To: Al Gator
Seems to me that the observer problem - which is particularly noticeable in probability theory - leads to significant misunderstanding in the science debates, especially around here. This essay is a sincere effort to improve our dialogue.

Please help us unravel the issue by sharing your insights, Al Gator!

13 posted on 06/30/2006 8:04:32 AM PDT by Alamo-Girl

To: betty boop

Bump for later...


14 posted on 06/30/2006 8:05:10 AM PDT by Sopater (Creatio Ex Nihilo)

To: Al Gator
Should be in general chat or something.

A lot of science threads end up in chat (or worse, many end up in the smoky backroom). But I believe this one is appropriate for the main forum because it is a "Freeper Research Project" and a legitimate Freeper essay.
15 posted on 06/30/2006 8:08:18 AM PDT by Alamo-Girl

To: jamaksin
Multiple by zero and then add the answer.

Could you explain this a bit?
16 posted on 06/30/2006 8:10:25 AM PDT by Alamo-Girl

To: betty boop
[ Combinatorics, Probability Theory, and the Observer Problem ]

I'm in...
How do we interface this thought into the mix?...

"The difference between fiction and reality is that fiction has to make sense" - Tom Clancy

The Observer is prone to observe things that make sense or to make sense of things that don't seem to make sense by formulae.... Mathematics/physics could be a parlor game for observers.. until those observers get added to them more "dimension" to observe with.. You know like the Bible says..

Could be a thoughtful discourse here..

17 posted on 06/30/2006 8:11:24 AM PDT by hosepipe (CAUTION: This propaganda is laced with hyperbole..)

To: JCEccles
The observer effect is not trivial. It is real. It has profound effect. But it lies obscured within a lacuna inherent in the scientific method, so it is trivialized or simply denied.

Very well said. Very true. Thank you, JCEccles!
18 posted on 06/30/2006 8:11:25 AM PDT by Alamo-Girl

To: hosepipe
"The difference between fiction and reality is that fiction has to make sense" - Tom Clancy

LOLOL! But also very true.

The Observer is prone to observe things that make sense or to make sense of things that don't seem to make sense by formulae...

Indeed. That is part of what we need to explore.
19 posted on 06/30/2006 8:16:26 AM PDT by Alamo-Girl

To: Red Badger

Corollary: If a French soldier walks on to a battlefield, and there is no one to surrender to, does he still suffer a horrible defeat?


20 posted on 06/30/2006 8:23:16 AM PDT by Hegemony Cricket (Rugged individualists of the world, unite!)




