Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: Messianic Jews Net
OK, in the cold light of this morning it doesn't look like my after-dinner two-glasses-of-wine sleepies hurt my thought processes much. The opacity objection stands.

But it also hits me I still have much of my original heartburn, even if there's a lot of new material to sift through. I'm still trying to see where I got my answers.

What were original reaction rates on Sun and Earth? How many photons and alpha particles did they respectively generate?

Never mind Setterfield's almost impenetrable paper for the moment. I spent a lot of time looking at earlier versions of it years ago but it seems mostly new and wondrous. Let's look at what I have from you.

1) Sunlight: First VR correctly observes that over time, the energy of a photon is conserved (except at quantum leaps, which can be ignored here)

I was worrying only about changes across quantum leaps. Because of the hc = constant balancing act, the per-photon changes are a wash. I assume nothing happens to the wavelength of in-flight photons, since space is not expanding in a Setterfield universe. That's all you have to play with; E = hc/lambda doesn't give you anywhere much to hide anything.
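To make that bookkeeping concrete, here is a minimal sketch (my own illustration, not anything from Setterfield's papers) of the hc = constant balancing act: if c is boosted by some factor k while h drops by the same factor, E = hc/lambda for a given in-flight photon is unchanged. The boost factor and the green wavelength are arbitrary choices for the example.

```python
# Illustrative only: under the assumed "hc = constant" rule, a drop in c is
# exactly offset by a rise in h, so the energy E = h*c/lambda of a photon of
# fixed wavelength is unchanged. Modern SI values; k is an arbitrary boost.
h_now = 6.626e-34      # Planck constant, J*s
c_now = 2.998e8        # speed of light, m/s
lam_green = 532e-9     # a green photon's wavelength, m (space not expanding)

def photon_energy(h, c, lam):
    return h * c / lam

k = 11e6                                                 # hypothetical past c-boost
E_now  = photon_energy(h_now, c_now, lam_green)
E_then = photon_energy(h_now / k, c_now * k, lam_green)  # h scaled down, c up

print(E_now, E_then)   # identical: the per-photon energy is a wash
```

A green photon stays green, with the same energy, no matter how c and h trade off.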

The per-photon energy changes are a wash.

Again, yes. A green photon stays green forever and has the same energy forever.

Then VR correctly observes that in the past, the total energy from all photons should have been much lower: The energy of each photon has to go a lot lower or Adam is in trouble.

Only if the Sun is 6,000 years old but looks 4.5 billion because of incredibly boosted reaction rates in the past. Setterfield puts a reaction rate into several of his formulas in recognition of this change. I don't see where he puts numbers in here, but the original rates have to be incredibly cranked. If you just take the two numbers 6,000 and 4.5 billion, one is 750,000 times the other. But that's very misleading, because Setterfield's decay curve flattens dramatically early on, inverse cosecant squared or whatever. By far the lion's share of the work has to get done in the first five hundred years or so, then, because only negligible amounts of the total are getting done once the c-curve decays to the neighborhood of modern values. For that early period the ratio looks more like nine million to one, which is why I suspect the actual multiplier will turn out to be Setterfield's c-boost of 11 million. And if the nuclear fuel burn rate is that high, there will be that many more photons and that many more alpha particles.
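The ratios above can be checked on the back of an envelope (my toy numbers and toy window, not Setterfield's actual decay function):

```python
# If 4.5 billion years of apparent burning must fit into 6,000 calendar
# years, the naive average speed-up is the simple ratio of the two ages.
apparent_age = 4.5e9          # years of apparent stellar/radiometric history
calendar_age = 6000.0         # years in the young-earth chronology

naive_ratio = apparent_age / calendar_age
print(naive_ratio)            # 750,000

# But a sharply decaying c-curve front-loads the work: if essentially all
# of the apparent history is packed into the first ~500 years, the early
# rate multiplier is far larger.
early_window = 500.0          # illustrative guess at the "fast" era
early_ratio = apparent_age / early_window
print(early_ratio)            # 9,000,000 -- the order of the 11-million c-boost
```

The 500-year window is a guess for illustration; the point is only that concentrating the decay early pushes the required multiplier from the naive 750,000 toward the millions.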

Here I must admit being mistaken about the cancellation coming from lower individual photon energy in the past, because I confused the present low value of old photon energy with its past high value, which does appear identical to the present high value of new photon energy.

I do not understand what you are saying you confused. What is the present value of an old photon? We agreed, I think, that a photon is a photon and its energy will be inversely proportional to its wavelength, other differences cancelling.

An old photon would have much less energy now, but be emitted much more often, than a new photon, but in both cases the energy is conserved.

I think you're trying to say there were more photons then, but redder. That's what Setterfield said in at least one paper. The problem is that energy is not conserved if the redshifts are small. Vastly too many photons, only a tiny redshift. We need a factor of 11 million (only you can't do that and not go blind). We have less than two.
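Rough bookkeeping of that objection (my arithmetic, illustrative only): a redshift z cuts each photon's energy by a factor of (1+z), so "more photons, but redder" only conserves energy if the redshift factor matches the photon-count factor. With the numbers being discussed it doesn't come close:

```python
# If the early sun emits photon_boost times more photons, conservation would
# need the total energy hidden somewhere. Redshift z stretches wavelength by
# (1+z), cutting each photon's energy by the same factor -- nowhere near
# enough when the count goes up ~11 million-fold and z is only ~1.5.
photon_boost = 11e6           # claimed early photon-count multiplier
z = 1.5                       # the modest redshift actually available

energy_ratio = photon_boost / (1 + z)   # total energy then vs. now
print(energy_ratio)           # ~4.4 million-fold excess left unaccounted for
```

So the redshift soaks up a factor of 2.5 out of a needed 11 million, leaving roughly a factor of 4.4 million unexplained.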

Where is the cancellation? The opacity computation doesn't do it for me. That hides nothing. OK, it puts in some extra redshift. But you can't do enough redshift without going blind. And the Earth becomes a red giant before you do enough.

That means the photons observed now from the remote edge of the universe have much less energy than the photons from our sun now. However, the cancellation must come from the totality of photons if it exists, rather than from individual photons.

... Setterfield's solution to this objection in 6.2 is to calculate the overall luminosity of the star to see if it is truly greater with c or if factors cancel...

And the answer is in Setterfield's luminosity calculation. I feel this to be deeply flawed, a shell game with cherry-picked formulae rather than a consistently applied model stepped through stages of time and change. In particular, he abuses stellar opacity, taking only what he wants. I assume he does the same everywhere with everything else he's doing.

You truly have 11 million times more photons back then, so far as I can still tell. They aren't very red-shifted, although yet another problem is that it's far from clear why not. The hydrogen nuclei fusing in the Sun back then have masses only a tiny fraction of the masses of modern hydrogen nuclei. A tiny fraction. That the photons emerge with a z of only 1.5 sort of beggars understanding, E = mc^2, and conservation of energy in general. They have almost the same energy as modern solar photons, and where did that come from?

But take it as a gimme that they emerge with almost the same energy as now but there are 11 million of them for every single modern photon. The cancellation is the Sun's opacity, 11 million times greater then than now? Did you think about this?

459 posted on 02/20/2005 5:38:42 AM PST by VadeRetro (Liberalism is a cancer on society. Creationism is a cancer on conservatism.)
[ Post Reply | Private Reply | To 455 | View Replies ]


To: VadeRetro; Messianic Jews Net

Has anybody thought about what CDK does to observed geometric distances like that of SN1987A?

SN1987A was placed at about 187 kly by knowing that the light from it illuminated a ring of gas surrounding it about a year later. It seems to me that the exponential CDK model would move SN1987A MUCH further away, off the top of my head hundreds or even thousands of times further. This is because, given that SN1987A is a lot more than 6000 ly away, light must have been moving much faster then, so the triangle baseline gets a LOT bigger....
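The geometry can be sketched in a few lines (a simplification of my own, assuming a face-on circular ring and small angles, and using the post's round numbers rather than the published measurement): the ring lights up a delay t after the supernova, so its physical radius is r = c*t, and dividing by the ring's measured angular radius gives the distance. Boosting c in the past inflates r, and therefore the inferred distance, by the same factor.

```python
import math

# Light-echo distance, simplified: r = c * t_delay; D = r / theta.
# Work in light-years and years so c = 1.
t_delay = 1.0                     # years until the ring lit up (post's figure)
r_ring = 1.0 * t_delay            # ring radius in light-years (c = 1 ly/yr)
D_claimed = 187e3                 # post's distance, light-years
theta = r_ring / D_claimed        # implied angular radius, radians
print(math.degrees(theta) * 3600) # ~1.1 arcseconds on the sky

# The CDK worry: if light travelled k times faster back then, the same delay
# sweeps out a k-times-larger ring, and the same measured angle puts the
# supernova k times farther away.
k = 100.0                         # a hypothetical early c-boost
D_cdk = (k * r_ring) / theta
print(D_cdk / D_claimed)          # exactly k times farther, as suspected above
```

The 100x boost is arbitrary; whatever multiplier CDK assigns to c at the emission epoch scales the geometric distance one-for-one.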

Now that makes me ponder two problems.

1. If SN1987A is hundreds of times further away than mainstream theory places it, then why is it (and the Magellanic Clouds in general) the correct brightness for its mainstream distance? (Does the additional photon output that VR and MJN are discussing cancel this?)

2. Just how big does the universe seem to be in the Setterfield model? If a comparatively close object like SN1987A is hundreds or thousands of times further away than mainstream physics believes, then what about stuff like the quasars that mainstream physics already places billions of lightyears away? Is there really time for the Setterfield universe to grow that big in 6000 years, even with the high initial c value? At a guess, off the top of my head, these distant objects would have to be thousands of trillions of lightyears away at least to remain in proportion.

Apologies if these points are nonsense or are addressed by Setterfield. I am not a real physicist.


461 posted on 02/20/2005 6:18:41 AM PST by Thatcherite (Conservative and Biblical Literalist are not synonymous)
[ Post Reply | Private Reply | To 459 | View Replies ]

To: VadeRetro
And the Earth becomes a red giant before you do enough.

Good news and bad news. The Earth will never become a red giant. (Whew!)

The Sun will. (Ah, crap!)

471 posted on 02/20/2005 11:59:05 AM PST by VadeRetro (Liberalism is a cancer on society. Creationism is a cancer on conservatism.)
[ Post Reply | Private Reply | To 459 | View Replies ]

To: ApesForEvolution; El Oviedo; PatrickHenry; shubi; tang-soo; Thatcherite; VadeRetro
Side issue: I do not understand what you are saying you confused. What is the present value of an old photon? We agreed I think that a photon is a photon and its energy will be inversely proportional to its wavelength, other differences cancelling. As you pointed out, the photon energy is constant except for quantum leaps, which decrease it, so a 6000-year-old photon now has much less energy than at emission. Originally it was emitted with almost the same energy as today. The old photon was redshifted slightly because decreased h in the past quantumly affected atomic structure, which sent out a shorter wavelength than photons sent today. Therefore we agree the old photon had "almost the same energy as now". I had that wrong at first.

Other side issues: To 447 I should add that I may still not be correctly understanding Setterfield's statement at the beginning of Implications, because my simplification of it does not now seem to be precisely stated. For example, I was about to misstep and say theoretically h should not invert c, but once again I was struck that by observation h does invert c. Can hardly argue with that. As my understanding improves, so will my lucidity .... Thank you and Thatcherite for working out the issue (464 and 466) about cancelling faster reaction rates with the longer time taken to observe them .... Also, please reread Montgomery-Dolphin 1993 linked in 455 and let me know if you still have objections to the refined data-point statistics and conclusions. I'm sure there is more work since.

Main issue for now: I think you're trying to say there were more photons then, but redder. That's what Setterfield said in at least one paper. (Yes.) The problem is that energy is not conserved if the redshifts are small. Vastly too many photons, only a tiny redshift. We need a factor of 11 million (only you can't do that and not go blind). We have less than two. You are right, the energy of the sum of photons is greatly increased (but I was right, individual photon energy is conserved).

I agree "it's far from clear why" there is low redshift. It appears to me, as one who does not understand much quantum theory, that as the granularity h increases with time, the atomic structure undergoes quantum resettlings, which I regard as compactings. The quantum leaps are suggested by the observed quantized redshifts.

When Setterfield analyzes the nature of atomic resettling, he specifies that the redshift, times the lightspeed quantum (delta-c) where leaps occur, times the Rydberg quantum number, gives the lightspeed for that redshift. The lightspeed quantum is 63.74c (current value), and I would visualize that the number of quanta contained in any given c corresponds to something like a number of placement positions for the atomic particles (similar to electron orbital shells). The Rydberg quantum number is 72*16*pi^4, which I visualize as corresponding to a number of potential configurations for quantum particle release directions.

If all that is correctly stated (unlikely), the narrative explanation for why the wavelength redshift only registers about 1.5, when c and frequency and photon count go up 10^7 times, is this: the increased granularity of the universe (its high resolution) permits much more quantum positional ability for both the internal particles of the atom and the wavelengths they emit, so that the wavelengths don't get shorter proportionally. They don't need to get shorter because of increased speed, which is all directed to increased frequency; but they get a bit shorter because the atom that can put out such increased frequency has resettled only slightly more compactly or efficiently. (If all this works out someday we will be ironically calling the quantum of 63.74c-now = 1.91x10^10 m/s the "Setterfield constant"!)

Where is the cancellation? The opacity computation doesn't do it for me. I still think "the decrease in density lowers the luminous energy by two factors, which are cancelled by the increase in lightspeed and the increase in total photon output." Applying your stellar structure source to the changes in stars, it stated that decreasing opacity resolves to decreasing radius and volume, but also that increasing density resolves to increasing volume, so the volume and radius changes cancel and are not a factor. I don't think that the photons are carrying "less light" so that many of them constitute one "optical photon" (neat concept, though). I read that electron scattering dissipates (all or part of) the photon increase, and that may contribute, though I would need to know more about it.

Let me ask it this way. I see that we are both approaching this like a programmer who keeps getting a compiler error on a horrendous function involving tons of parentheses: we both keep adding, removing, and rearranging the parentheses, trying to find out whether any are missing or extra. If your mental compiler should suddenly spit out the result that the syntax was valid (all the parentheses rightly joined and cancelled), what would the result be? Would the program run, or would some other unforeseen error manifest? I know you're not running Setterfield's work through the gauntlet just to refresh me in physics. Personally, would you prefer the theory to be right or wrong, and what would you do and believe (particularly about the earth's age) if the evidence were conclusive either way? Have we established sufficient benefit of the doubt that there might be another reason for high radiometric dates? Thank you!

479 posted on 02/20/2005 9:58:27 PM PST by Messianic Jews Net ("The true light that gives light to every man was coming into the world." —John 1:9.)
[ Post Reply | Private Reply | To 459 | View Replies ]



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson