Posted on 12/27/2003 12:44:51 AM PST by bdeaner
Only if the science was inadequate to properly characterize the system, or the guys doing the math were idiots and did the math incorrectly.
I would also note that for many kinds of engineering, complex mathematical models without experimental verification are how MOST of the engineering work is done, i.e., a design will go from design to production without ever existing in the real world. And this includes systems for which a convenient mathematical solution doesn't exist, e.g., an unsolvable system of differential equations. It is worth pointing out that we use "unsolvable" approximate expressions primarily because the correct, solvable expressions are intractable in application.
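To make that concrete, here is a minimal sketch (my own illustration, not from any particular engineering code) of the kind of thing that gets done every day: the large-angle pendulum has no elementary closed-form solution, yet a fixed-step Runge-Kutta integrator characterizes it to engineering accuracy without any lab work. Constants and step sizes below are purely illustrative.

```python
import math

# Nonlinear (large-angle) pendulum: theta'' = -(g/L) * sin(theta).
# No elementary closed-form solution exists, but a simple RK4 integrator
# characterizes the motion to engineering accuracy.

G, L = 9.81, 1.0  # gravity (m/s^2) and pendulum length (m) -- illustrative values

def deriv(state):
    theta, omega = state
    return (omega, -(G / L) * math.sin(theta))

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]))
    k3 = deriv((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]))
    k4 = deriv((state[0] + dt * k3[0], state[1] + dt * k3[1]))
    return (state[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            state[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

state = (math.radians(120.0), 0.0)  # released from 120 degrees, at rest
dt = 0.001
for _ in range(5000):               # simulate 5 seconds
    state = rk4_step(state, dt)
print("angle after 5 s: %.4f rad" % state[0])
```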
If you've kept up in science and engineering at all, you know that more and more of the science and engineering that is done is mathematically derived rather than experimentally determined. Not only is it more accurate in practice, but it is often cheaper as well, since computing power doesn't cost much these days. It is why nearly every science now has a "computational" subfield, which is slowly taking over many laboratory functions. We still do experiments occasionally to see if the science was right on the fringe where it is uncertain, but for most well-studied areas of science one does not need to experimentally verify a mathematical derivation.
I'm still waiting for an example where science is not subject to mathematical derivation. But since that would mean that mathematics was fundamentally flawed, I'm not holding my breath. Mathematics only produces garbage if the science that uses it is garbage.
Yes. Cards with velcro that works for some combinations but not others. After a few shuffles, you get the ordered sorting you're looking for. That example was originally from jennyp.
No. What I'm showing [for those who are slow-or-unwilling to grasp concepts] is that when you have component parts that already exist, you can assemble larger structures without the need to simultaneously re-invent each sub-component. So if you factor in the use of previously existing sub-assemblies, as nature does, your model collapses.
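A toy sketch of that difference (my own illustration, not jennyp's velcro-card example, and the 8-part "assembly" and alphabet are made up): drawing every component fresh on each attempt versus keeping each sub-component once it fits, the way pre-existing sub-assemblies get reused.

```python
import random

ALPHABET = "ABCD"
TARGET = "ABCDDCBA"  # a hypothetical 8-part assembly

def trials_all_at_once(rng, limit=10**6):
    # Every attempt re-draws all eight components from scratch.
    for n in range(1, limit + 1):
        attempt = "".join(rng.choice(ALPHABET) for _ in TARGET)
        if attempt == TARGET:
            return n
    return limit  # give up (vanishingly unlikely here)

def trials_with_retention(rng):
    # Keep each sub-component once it matches, then work on the next.
    total = 0
    for t in TARGET:
        while True:
            total += 1
            if rng.choice(ALPHABET) == t:
                break
    return total

rng = random.Random(1)
print("all at once   :", trials_all_at_once(rng))   # expected ~4**8 = 65536 attempts
print("with retention:", trials_with_retention(rng))  # expected ~8 * 4 = 32 draws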
No. Most probabilistic models are continuous. You should study Itô's work to see what is really going on.
It's generally accepted I think that quantum phenomena are random.
It is only treated this way statistically for many practical purposes. Nothing in our universe is inconsistent with a purely deterministic model, and certain properties of the universe are only expressed in deterministic systems, which lends some credence to the concept. Quantum phenomena in particular have been formulated as expressions of deterministic processes (whether those specific formulations map to reality is unknown -- they only prove the possibility). Papers have been published on this.
There are many classes of simple finite state systems that cannot be perceived as anything but random even if you had an intelligent machine with the full state space of a finite universe at your disposal. For example, strong cryptography is premised on this fact and uses algorithms with exactly this property.
Solomonoff induction is one of the most brutally limiting concepts in mathematics, and somewhat analogous to the incompleteness theorem but in systems theory. There are a great many things about any finite state system that can never be known from within that same system. Quantum phenomena may very well fall under this umbrella, such that even if we can know that it is deterministic in fact, we can never treat it as such as a practical matter because we cannot measure the state of any particular instance.
Demonstrating that a process is finite state to extremely high certainty is cheap and trivial. Determining the actual state of the same process is typically intractable.
No, that's not correct. The state evolves deterministically according to the theory, but the state is not the observable. The observable phenomena are "generated" from the state in a random manner, again according to the theory. It is not a matter of practicality - there is currently no better description.
Ermmm, you almost said what I said (I probably wasn't clear). I'll rephrase.
We can mathematically test that the system is extremely likely to be deterministic i.e. not mathematically random. However, we are (perhaps just currently) incapable of measuring or reverse engineering the state for most systems. Without knowledge of the state the system will appear random, not because it necessarily IS random but because induction is intractable, as it often is.
Strong PRNGs are good classical examples of this. Cryptographically strong PRNGs are generally very simple deterministic processes, yet there exists no possible machine in our universe that can discern the deterministic nature of these processes without knowledge of the internal state (for the good ones anyway). As a result, we have to accept these processes as "random" for all practical purposes when they are not random by definition.
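A minimal sketch of the idea (an illustrative construction, not a production CSPRNG): a tiny, fully deterministic process whose output stream looks like coin flips to anyone who doesn't hold the internal state, and is trivially reproducible to anyone who does.

```python
import hashlib

class TinyDeterministicStream:
    """Purely deterministic: same seed, same output forever.
    Without the seed, the output is (for practical purposes) indistinguishable
    from random; with the seed, every block is trivially predictable."""

    def __init__(self, seed: bytes):
        self._seed = seed      # the hidden internal state
        self._counter = 0

    def next_block(self) -> bytes:
        self._counter += 1
        msg = self._seed + self._counter.to_bytes(8, "big")
        return hashlib.sha256(msg).digest()

stream = TinyDeterministicStream(b"hidden internal state")
for _ in range(3):
    print(stream.next_block().hex())
# Re-running with the same seed reproduces the identical "random" stream.
```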
Our inability to see inside the state of quantum processes forces us to model them as "random", yet there is substantial evidence that these are in fact deterministic processes that are merely intractable from the standpoint of Solomonoff induction. Therefore, we treat them as "random" even if we know they probably are not from a strictly technical standpoint.
The importance of the distinction is that "random" and "deterministic" have VERY different consequences from a theoretical standpoint. It does not matter that we cannot discern the state of quantum processes, merely knowing whether or not they are deterministic is immensely important and powerful. More so than most people imagine. It is what puts hard limits on what is possible in our universe.
Having quantum processes that are truly random describes a universe that is wildly different from quantum processes that are deterministic but merely beyond the predictive limits of our machinery to discern.
"Without knowledge of the state the system will appear random" -- but according to the theory we can have complete knowledge of the state of a system and (certain) measurements will still yield random results.
As to your claim that "having quantum processes that are truly random describes a universe that is wildly different from quantum processes that are deterministic" -- that seems very unlikely to me.
Therefore, computers are impossible. Or miracles.
Buzzzzzz, wrong. Do you need some instruction in how to use Google or are you just blind to the facts?
Accompanied by a large number of attempts.
All 50 proteins:...
You are misinformed; there can be wide variation in flagella, as evidenced by the fact that there is, in fact, a wide variety of flagella extant in various creatures. Where there can be variation, there can be selection. See "Finding Darwin's God", by Miller, for a blow-by-blow account of Behe's failed predictions on this subject.
Such arguments suffer badly when forced to come to grips with the real world. Much of the functionality of our genetic heritage is rather flexibly manifest in the architecture of our folded protein structures, which will often still be quite functional after a few random hits to the exact composition of the generative DNA chain. This means we, as a population with dominants and recessives, can end up with a toolkit of nearly-alike genes, any one of which might suddenly be heavily favored by natural selection within a couple of generations after a traumatic major change in the environment.
And give some thought to what the immune system does--producing, overnight, a brand-spanking-new protein in response to an invading virus.
The model you are working with to produce these bogus odds-calculations is an insult to the richness of the field of discourse it pretends to describe.
Remember that "random" means utterly discontinuous functions of time.
That is not what "random" means. You may be referring to the particular case of uniform continuous distributions, but if so, it still sounds kind of garbled. Distribution over time is merely one possible attribute of a random function, and it's unclear to me what it means to be "discontinuously" distributed over time. I think you mean to say: NOT a function of time at all.
...which might, or might not, mean a uniform continuous distribution. If it does, then this is, I think, your strongest argument. Assuming it is, I will point out that, while mutational change appears to be random with a roughly uniform distribution (though it probably isn't quite), the resultant mutated population that gets to breed decidedly shows a distribution with a strong central tendency, because the outliers have been eliminated by natural selection.
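To illustrate that trimming effect, a toy sketch (distribution, threshold, and sample sizes are purely illustrative): mutational effects drawn from a broad uniform range, with a crude stand-in for selection that removes the outliers, leaving a breeding population clustered around the viable center.

```python
import random
import statistics

rng = random.Random(42)

# Mutational effect on some trait, drawn from a broad, roughly uniform range.
mutants = [rng.uniform(-10.0, 10.0) for _ in range(100_000)]

# Crude stand-in for selection: individuals too far from the viable optimum
# (here, 0) don't survive to breed.  The threshold is purely illustrative.
breeders = [x for x in mutants if abs(x) < 2.0]

print("mutants : mean %+.3f  stdev %.3f"
      % (statistics.mean(mutants), statistics.stdev(mutants)))
print("breeders: mean %+.3f  stdev %.3f  (%.0f%% survive)"
      % (statistics.mean(breeders), statistics.stdev(breeders),
         100.0 * len(breeders) / len(mutants)))
```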
At some point, if the improbabilities become too large, then the theory becomes insufficient.
You weren't there, Behe wasn't there, and Dembski wasn't there. You cannot construct a meaningful calculation of the odds against an event, if you can't rigorously specify the state-space and the selection criteria--and you can't.
Given that, there is a scientific rule of thumb that says: "don't bet on miracles, it ain't paid off yet one single time". Which suggests lots of small steps with small odds against, and lots of time to throw the dice, and, we suspect, extrapolating from the behavior of the immune system, decidedly crooked dice, on top of all that.
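The arithmetic behind "lots of small steps and lots of throws" (the numbers below are purely illustrative, not a model of any real lineage): the chance of at least one success in N independent tries is 1 - (1 - p)^N, which climbs quickly even when the per-try odds are long.

```python
# Probability of at least one success in N independent tries: 1 - (1 - p)**N.
# Illustrative numbers only.
p_per_try = 1e-6                     # long odds against any single small step
for n_tries in (10**3, 10**6, 10**7):
    p_at_least_once = 1.0 - (1.0 - p_per_try) ** n_tries
    print("N = %9d  P(>=1 success) = %.4f" % (n_tries, p_at_least_once))
```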