You're apparently not all that familiar with the history. Setterfield originally drew a curve through selected data points only. Later, forced to include all the data points, including those below the line, he drew a squiggly damped-oscillation curve through them. That is, c actually dipped below the modern value but has since rebounded back and forth across it, damping. SPROI-oi-oi-oing!
That was another clue. Figure 3 is either his original curve or a reversion to it, and the date on the document told me it's the old curve.
A look at Montgomery's Raw Data Table shows Figure 3 to be missing several data points between 1740 and 1780. Some of the missing points are below the drawn curve and one is below the modern value of c.
The "enlargement" in Figure 4 only helps show what the critics were talking about, as it shows an area where the real observations, most of which are not shown on the wide-scale Figure 3, consistently dip below the imposed curve originally drawn by Setterfield.
I believe Figure 3, with its misdrawn curve, is dead meat, not even being defended anymore as of the 1999 release of cDK. You make these announcements with such grandeur, and they don't bear up.
I'm ashamed and embarrassed at my lack of inspiration. The correct rendition is below.
SPROI-oi-oi-oing!
The books would balance better if we didn't have [unexplained free energy]. If the math is right, the books do balance better with excess energy. The excess energy should be necessary to retain the same biological effects, not to torch them. Here's some fresh meat to explain that.
The short answer appears to be that the star's excess energy increase of 10^7 (maximally, 10^8) is largely absorbed into internal kinetic energy of the star.
I said that opacity varies with (is proportional to) c^2; this is from Setterfield 2001 and is repeated in his Brief Stellar History. Opacity is also the mass extinction coefficient, the rate at which photons are absorbed or scattered. Opacity depends on mass density, which varies inversely with c^2.
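In symbols, the claimed scalings (Setterfield's, as I read them) are:

$$\kappa \propto c^2, \qquad \rho \propto \frac{1}{c^2}$$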
Without considering VSL, all things being equal, a star 100 times less dense than another (having 100 times the volume) is also 100 times more opaque, because the photons have much more volume to traverse without being lost to scattering or absorption. The path out may be more sparsely populated, but there is much more non-escape volume to be deflected into, which increases photon extinction. It appears this opacity must mean 100 times fewer photons will reach the observer, which forces me to conclude the remainder are dissipated into kinetic energy within the larger volume. This is ordinary non-VSL astrophysics as derived from this site.
With VSL in play and c increased by a factor of 10, we have a star 100 times less dense with no change in volume (because space is less granular, higher resolution, and mass and density decrease inversely with c^2). But it is also putting out 10 times as many photons, each moving 10 times as fast. These factors overcome the factor of 100 times fewer escaping photons, so the photon output is the same as the original comparison star's. Apparently the opacity (increased photon extinction) means roughly 10 times as many photons are being converted into internal kinetic energy. This energy then appears free to manifest as temperature, volume, pressure, and density compensation.
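To keep the factors straight, here is a minimal bookkeeping sketch in Python, using only the scalings asserted in this post (not established astrophysics); the variable names are mine:

```python
# Factor bookkeeping under the post's claimed cDK scalings:
# with c raised by a factor k, density falls as 1/k^2, opacity
# rises as k^2, and photon production and speed each rise as k.
k = 10.0  # assumed increase in c

density_factor    = 1.0 / k**2  # 1/100: star is 100x less dense
opacity_factor    = k**2        # 100x more opaque
production_factor = k           # 10x as many photons generated
speed_factor      = k           # each moving 10x as fast

escape_factor = 1.0 / opacity_factor  # only 1/100 of photons escape
output_factor = production_factor * speed_factor * escape_factor
print(output_factor)  # 1.0: same observed output as the comparison star

# Photons that fail to escape are, on this account, absorbed internally:
absorbed_factor = production_factor * (1.0 - escape_factor)
print(round(absorbed_factor, 1))  # ~9.9: roughly "10 times as many"
```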
So for tonight, I think the excess energy is staying within the star. Perhaps my reading of the Rydberg and c-delta constants was off and these really do relate to dissipating that kinetic energy. Perhaps it is mostly dissipated as internal pressure and heat. Some of it may increase the luminosity by a small factor instead of 10^7. The sun runs at 3.85x10^26 watts; it seems a factor of 10^7 would spread out comfortably. Here's hoping there's no foot-in-mouth; perhaps you can explain it if this does not seem viable yet.
As another aside, why does less rest mass generate photons with the same energy? Because the photon generators themselves are moving faster and thus have the same energy as before too.
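The arithmetic I have in mind, granting the cDK scalings (rest mass m going as 1/c^2, with hc held constant):

$$E = mc^2 \propto \frac{1}{c^2}\cdot c^2 = \text{const}, \qquad E_\gamma = \frac{hc}{\lambda} = \text{const for fixed } \lambda.$$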
Yes, hc is a constant; my point there is that when I was tempted to abandon that thought, I was reminded that hc being constant was observed, not theorized.
My incomprehensible paragraph was intended not to explain the excess energy but to explain why the excess energy does not create major redshift, a point you seem to grant.
You make a couple of other statements that don't read my meaning quite right, but since what you're saying in them is both essentially accurate and not responsive to my position, there's nothing of consequence.
My statement about the data point chart was about the data points, not the curve. The data points show some kind of lightspeed variability at statistically significant levels; statistical attempts to rebut them have been flawed, while statistical attempts to repeat them have retained significance. I was similarly going to comment on missing points from Ichneumon's chart, but whichever chart you use, whatever data you reasonably select, it ends up rejecting the constancy hypothesis. The oscillation and the cosecant-squared curve are not essential to the theory; statistical rejection of the constancy hypothesis is, and that is what the charts always show. Ordinary plots of successive refinements of a measurement should funnel toward the value from both sides; for c and h they simply do not. (Also, Montgomery points out the low 1930s measurements were mostly made with stellar aberration and Kerr cells, which gave systematically low numbers, so data points from those methods fit even better once that is accounted for.)
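To be concrete about what "statistically significant" means here, below is a minimal sketch of the simplest such test, a one-sample t-test against the modern value. The numbers are hypothetical placeholders, not Montgomery's data, and his actual analysis is more involved; substitute the real table to reproduce it:

```python
# Sketch of the kind of constancy test at issue: do historical c
# measurements differ significantly from the modern defined value?
from scipy import stats

C_MODERN = 299792.458  # km/s, modern defined value of c

# HYPOTHETICAL historical measurements (km/s), placeholders only
measurements = [299810.0, 299805.0, 299798.0, 299796.0, 299793.5]

t_stat, p_value = stats.ttest_1samp(measurements, C_MODERN)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value rejects the constancy hypothesis at the chosen
# significance level; a large one fails to reject it.
```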
But I'm really less troubled by your evidence against VSL than by the fact that your own methodological dismissals are losing cogency. I accept your explanation that you're just looking for the "divide by zero" and it keeps arising, but your other statements suggest a greater bias than that.

- You say that changing delta-c illustrates we're "going to keep coming back forever" refining, but refining hypotheses is the essence of science, not its disproof.
- You see a "downhill trend in clarity" as indicative, but when one moves from statistics about one datum to a whole worldview change, that is expected.
- You say "propagandists never lose," but you seem to hold your conclusion that the theory is propaganda as equally inevitable.
- You say "I don't find my confusion to be an argument for cDK," but confusion is not an argument for a theory; it is an argument against ready acceptance of either the old or the new theory.
- You say "to be recognized as right, you have to be intelligible," but Einstein was recognized as right when supposedly only he and Eddington really understood the theory.
- You say "Messianically delusional," but as a psychologist you should recognize the cult mentality of modern evolutionists.
- You say "Setterfield is a crackpot," but you seem to wrap a number of methodological and epistemological (and psychological) assumptions into that conclusion.
- You say "it isn't playing for 'right' anymore but for 'reasonable doubt'," but that's only my reframing as reasonable doubt that you're referring to, and I'm only doing so to find out how open-minded you are.

I'm disappointed that you're unwilling to accept even the ordinary proofs that work for juries (citizens, or peer reviewers) as reasonable doubt, such as statistical significance and explanatory power.
Was Montgomery-Dolphin 1993 statistically significant? If not, why not, and who says? If so, what does that mean? Does VSL answer the rough dozen physics puzzles I've listed above, or not? If not, why not, and who says, for each case? If so, what does that mean? Rather than review the list, I'll let you pick and choose. Formulating groundbreaking theories is hard work, but if you let that methodologically prevent you from considering them, you get stuck with epicycles and caloric and ether. Recall that Copernicus used epicycles too, but his theory was preferred because it had fewer than Ptolemy's, and by Galileo's time there were none. Setterfield has not had his Galileo yet, nor his Huxley. If you want to discuss methodology, please reply to my suggestion that we agree on what constitutes proof, evidence, doubt, simplicity, and so on.
Here's a bonus I just thought up in the shower yesterday. I believe Chandrasekhar's limit would decrease with c^2, which means black hole formation would be much rarer than an evolutionary model would predict, whether from individual stars, galactic cores, or crisis pressures on smaller masses. So I tested this by looking for evidence of an unexpected lack of old black holes.

Sure enough, Stephen Hawking, the unquestioned authority on black holes, says on p. 127 of A Brief History of Time (1998): "One would also have expected the density fluctuations in such a [chaotic-boundary] model to have led to the formation of many more primordial black holes than the upper limit that has been set by observations of the gamma ray background"; and on p. 115: "Even if the search for primordial black holes proves negative, as it seems it may, it will still give us important information about the very early stages of the universe." Hawking prefers the no-boundary model, which predicts fewer black holes, to the chaotic-boundary model, but his description of it, and his whole drift on black holes, suggests to me he is still uncomfortable with their paucity: "Further predictions of the no boundary condition are still being worked out. A particularly interesting problem is the size of the small departures from uniform density in the early universe," which would cause these old black holes.

Don't think I'm painting him as supporting VSL rather than the no-boundary interpretation of black hole formation; he's merely saying (1) old black holes are less likely than expected and haven't been found, and (2) there are still problems to work out with the no-boundary model regarding black hole formation. In sum, another experimentally verified prediction of VSL that I just made up now.
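For reference, the standard constant-c Chandrasekhar mass scales as below; which way it moves under cDK depends on how one takes hc, G, and the hydrogen mass m_H to vary, so my "decreasing with c^2" leans on Setterfield's particular scalings:

$$M_{\mathrm{Ch}} \sim \left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{m_H^{2}}$$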