To: general_re
So I'm reading this thread and following the comments fairly well. And then I come across yours. You said, "There's no fitness check. If we posit an environment with selective pressures, where a sentence that is more like the final sentence is favored over a sentence that is less like the final product, then we will very, very quickly arrive at the final sentence..." OK, so the process isn't so random then, because some intelligence is aware of the desired outcome and is preserving and building on data from previous attempts. Is that it?

You continue, "The error in the original post is the assumption that every attempt is a fresh attempt, where we throw everything out from the last run and make a random stab in the dark...". If order is derived by chance from nothing then mustn’t we assume that each try is completely unique and in no way connected with any other attempt? Isn’t this very meaning of randomness?

I'm no statistician or mathematician, but how can a process be both random yet retain and build upon previous data?

Thanks in advance and FReepOn
46 posted on 03/05/2002 1:48:04 PM PST by Texas_Jarhead


To: Texas_Jarhead
how can a process be both random yet retain and build upon previous data?

A question for your question:
Is there a random process?

53 posted on 03/05/2002 1:56:17 PM PST by RightWhale

To: Texas_Jarhead
You continue, "The error in the original post is the assumption that every attempt is a fresh attempt, where we throw everything out from the last run and make a random stab in the dark...". If order is derived by chance from nothing then mustn’t we assume that each try is completely unique and in no way connected with any other attempt? Isn’t this the very meaning of randomness? I'm no statistician or mathematician, but how can a process be both random yet retain and build upon previous data?

Because the theory of evolution is based on two processes: mutation (random) and natural selection (non-random).

You start with a group of organisms. Some number of them have random mutations. Most of those mutations are harmful, so those individuals die-- they don't reproduce. Only a tiny percentage of mutations are beneficial, but the individuals with those mutations survive longer and reproduce more. So the next generation has more individuals with the favorable mutation than the prior generation.

Some percentage of the next generation has mutations. Again, most of the mutations are harmful, but those individuals don't reproduce; the few with favorable mutations do reproduce, and in disproportionate numbers.

Each generation thus keeps the beneficial results, and only the beneficial results, from the previous generation's random mutations; and each set of favorable mutations builds on the prior successes.
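
In code, that loop looks something like this. It's just a toy sketch, not a real population model; the population size, mutation rate, and survival odds are numbers I made up purely for illustration:

import random

# Toy model of the two-step loop: mutation (random) plus
# selection (non-random). All the numbers here are made up
# for illustration only.
POP_SIZE = 1000
MUTATION_RATE = 0.01    # chance an offspring gains the beneficial trait
FIT_SURVIVAL = 0.9      # survival odds with the trait
UNFIT_SURVIVAL = 0.5    # survival odds without it

# Each individual is just a flag: True = carries the beneficial mutation.
population = [False] * POP_SIZE

for generation in range(20):
    # Selection: survival odds depend on the trait, nothing else.
    survivors = [ind for ind in population
                 if random.random() < (FIT_SURVIVAL if ind else UNFIT_SURVIVAL)]
    # Reproduction: survivors breed back up to POP_SIZE. Offspring
    # inherit the parent's trait, and occasionally mutate to gain it.
    population = []
    for _ in range(POP_SIZE):
        parent = random.choice(survivors)
        population.append(parent or (random.random() < MUTATION_RATE))
    pct = 100 * sum(population) / POP_SIZE
    print(f"generation {generation + 1}: {pct:.1f}% carry the mutation")

Run it and the trait climbs from zero toward the whole population within those 20 generations: the randomness supplies the variation, and the selection supplies the direction.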

57 posted on 03/05/2002 2:04:50 PM PST by Lurking Libertarian

To: Texas_Jarhead
OK, so the process isn't so random then, because some intelligence is aware of the desired outcome and is preserving and building on data from previous attempts. Is that it?

That's how it works in this example. It's not a perfect evolutionary analogy, because our example here is working towards a specific goal - a particular sentence - whereas evolution via natural selection doesn't really have a goal in mind.

If order is derived by chance from nothing then mustn’t we assume that each try is completely unique and in no way connected with any other attempt? Isn’t this the very meaning of randomness?

Let's walk through it. First, we need an environment. And to experience some sort of evolutionary process, our environment has to have selective pressures - that is, some traits will be more helpful for survival, and some will be less helpful, and some will be downright dangerous for creatures that have them. Imagine a dysfunctional creature that drowns every time it rains, and you'll see what I mean.

So, for this little thought experiment, we want an environment consisting of chains of letters, 41 letters long. And we further want an environment where chains that are more like the final product have an advantage over chains that are less like it. And the chains that aren't much like the final product will have a disadvantage, and will die and go away.

So, we start with a random string of letters created by spinning the big genetics wheel. Now, as this is a random process, the odds that we'll get the final product right at the start are pretty damn long, as this article rushes to assure us. But the odds are that we'll get a string of letters that has at least one or two letters in the right place.

Now we have a chain that has a slight resemblance to the final product. These few letters in the right place are an adaptive trait - they are preferentially replicated in the next generation. What that means is that those letters are (almost) automatically carried over to the offspring - after all, if they weren't, the offspring would die, right?

So, come the next generation, we have a chain where a few letters are already in place, and since that's an adaptive trait, those letters get passed on to the offspring - the next chain. And then we spin the big genetics wheel yet again, but not for all letters - some letters are passed on from the parents. So we spin and generate random letters in place of the non-adaptive letters. And we find that one or two of the new letters are in the right place, in addition to the one or two that we had from the last generation.

Keep this up, and after a few generations, you'll have the final sentence. And it won't take trillions and trillions of years, either. If you programmed a computer to do it for you, you'd have the final product in probably less than 60 generations, and almost certainly less than 100.

It is a random process, but some random products are more successful than others. That's what I'm talking about, and that's why this article is dead wrong. Period.
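
For what it's worth, here's roughly what that computer program might look like - a sketch of the procedure described above, nothing more. Since we don't have the article's actual 41-letter chain, the target string below is an arbitrary stand-in (Dawkins' famous 28-character example), and the 27-character "gene pool" of A-Z plus a space is likewise my assumption:

import random
import string

# Stand-in target; the article's real 41-letter chain isn't given.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # the "big genetics wheel"

# Generation zero: spin the wheel for every position.
chain = [random.choice(ALPHABET) for _ in TARGET]

generation = 0
while "".join(chain) != TARGET:
    generation += 1
    # Letters already in the right place are the adaptive trait:
    # they get passed on unchanged. Everything else is re-spun.
    chain = [c if c == t else random.choice(ALPHABET)
             for c, t in zip(chain, TARGET)]

print(f"reached the target in {generation} generations")

It finishes in a matter of generations, not trillions of years - and swap in a longer target and you've got the monkeys-and-"Hamlet" scenario below.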

How can a random process accrue 'data' to achieve some eventual state when said state is supposed to be an unknown?

Well, that's where the "million monkeys" analogy breaks down ;)

There's no selective pressure on monkeys typing randomly, so there's no reason for them to eventually produce "Hamlet." But imagine a selective pressure - say, we reward monkeys that can produce things a little bit like "Hamlet," and shoot the monkeys that type gibberish. Then we up the bar a little bit by rewarding the few monkeys that can produce something somewhat like "Hamlet," and shooting the monkeys that only produce stuff a little bit like "Hamlet." And then we up the bar again by rewarding monkeys that produce stuff that's a lot like "Hamlet" and shooting all the lesser monkeys.

Keep that up for a while, and you'll get "Hamlet" out of a monkey soon enough ;)

63 posted on 03/05/2002 2:10:30 PM PST by general_re

To: Texas_Jarhead
I'm no statistician or mathematician, but how can a process be both random yet retain and build upon previous data?

The "random walk" is one example. Every step in the walk (i.e., your location) depends on preceeding steps, yet is random. There are many such statistical processes in nature.

136 posted on 03/05/2002 3:18:20 PM PST by LibWhacker
