
New: Shroud of Turin carbon dating proved erroneous (performed on non-original cloth sample)
Ohio Shroud Conference ^

Posted on 09/28/2008 8:19:34 AM PDT by dascallie

PRESS RELEASE: Los Alamos National Laboratory team of scientists proves carbon-14 dating of the Shroud of Turin wrong

COLUMBUS, Ohio, August 15 — In his presentation today at The Ohio State University’s Blackwell Center, Los Alamos National Laboratory (LANL) chemist Robert Villarreal disclosed startling new findings proving that the sample of material used in 1988 to carbon-14 (C-14) date the Shroud of Turin, which categorized the cloth as a medieval fake, could not have been from the original linen cloth because it was cotton.

According to Villarreal, who led the LANL team working on the project, thread samples they examined from directly adjacent to the C-14 sampling area were “definitely not linen” and, instead, matched cotton. Villarreal pointed out that “the [1988] age-dating process failed to recognize one of the first rules of analytical chemistry that any sample taken for characterization of an area or population must necessarily be representative of the whole. The part must be representative of the whole. Our analyses of the three thread samples taken from the Raes and C-14 sampling corner showed that this was not the case.”

Villarreal also revealed that, during testing, one of the threads came apart in the middle, forming two separate pieces. A surface resin that may have been holding the two pieces together fell off and was analyzed. Surprisingly, the two ends of the thread had different chemical compositions, lending credence to the theory that the threads were spliced together during a repair.

LANL’s work confirms the research published in Thermochimica Acta (Jan. 2005) by the late Raymond Rogers, a chemist who had studied actual C-14 samples and concluded the sample was not part of the original cloth, possibly because the area had been repaired. This hypothesis was presented by M. Sue Benford and Joseph G. Marino in Orvieto, Italy in 2000. Benford and Marino proposed that a 16th-century patch of cotton/linen material was skillfully spliced into the 1st-century original Shroud cloth in the region ultimately used for dating. The intermixed threads combined to give the dates found by the labs, ranging between 1260 and 1390 AD. Benford and Marino contend that this expert repair was necessary to disguise an unauthorized relic taken from the corner of the cloth. A paper presented today at the conference by Benford and Marino, and to be published in the July/August issue of the international journal Chemistry Today, provided additional corroborating evidence for the repair theory.


TOPICS:
KEYWORDS: carbon14; carbon14dating; carbondating; shroud; shroudofturin
To: grey_whiskers; Diamond
I don't mind going first... I have nothing to hide.

My computer is an Apple Mac, model M9020LL/A, with a 1.6 GHz 64-bit PowerPC 970 (G5) processor and 1.25 GB of 333 MHz PC2700 DDR SDRAM. The OS it is running is OS X 10.5.5. The display I captured from is an Apple 20" Cinema Display (1680 x 1050), powered by an NVIDIA GeForce FX 5200 Ultra with 64 MB of DDR SDRAM.

My Photoshop is Version 8.0 for Macintosh. It's a 32 Bit version. Copyright 1990-2003. Here is the splash page for it:

Note, I have obliterated my name and registration information... I guess I do have something to hide... ;^)>

281 posted on 10/05/2008 10:14:40 PM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE!)

To: grey_whiskers; js1138; Swordmaker
Did you notice I asked for that EXPLICITLY in my post #255 in this thread, before the pissing match started?

Tell you what. Give us the image in #252, a link to the original image you fed into Photoshop to get that image, and the version of Photoshop you used. Then tell us the exact angle you fed in, and let other people use all that information and the original photo to see if they can produce the image you did in #252.

Science is about reproducibility under controlled conditions, right?

In #268 js1138 wrote the following:

When I was asked for the settings I used I posted them. You persist in using settings that I did not use.

(SNIP)

Yes, I couldn't help but notice these exchanges. They stand out like a blinking neon sign.

Cordially,

282 posted on 10/06/2008 7:49:14 AM PDT by Diamond ( </O>)

To: Swordmaker
Thanks for posting your Photoshop version number.

Cordially,

283 posted on 10/06/2008 7:51:10 AM PDT by Diamond ( </O>)

To: dascallie

This is news? Sky’s blue, water’s wet, etc. Anyone who knows anything about the Shroud knew this already.


284 posted on 10/06/2008 7:52:47 AM PDT by Future Snake Eater (My freq'n head hertz...)

To: Diamond
Yes, I couldn't help but notice these exchanges. They stand out like a blinking neon sign.

I used his settings as well. They do not produce the distorted image he posted. I posted the settings that do produce that distortion and they are not even reasonable. I physically measured the offsets and angles he used in his posted picture and replicated it. Ergo, his conclusion is wrong.

285 posted on 10/06/2008 8:18:36 AM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE!)

To: Religion Moderator; js1138; Swordmaker; grey_whiskers
Everyone, keep this thread on the subject and do not let it become “about” another Freeper. That is a form of “making it personal.”

This is a thread about various scientific analyses of a material object; an artifact of antiquity.

js1138 wrote in #249:

I will point out that anyone can verify my work.
I am attempting to give another FReeper the benefit of the doubt about the veracity of his claims about his methodology. I am attempting to, as he himself put it, verify his work. I am also attempting to verify his word, because his word is part of his work, and it has been very seriously called into question.

js1138 has repeatedly asserted that the only parameter he changed is the angle. From various posts:

The only parameter changed is the angle. I looked long and hard at the effect you describe, but the Photoshop filter is significantly more complex and sophisticated than simply offsetting two images...

You persist in ignoring the fact that since my original posting I have posted three series where only the angle changes.

Now to prevent you from making a complete idiot of yourself in public, try this: take an image such as the x-ray, find settings that give a reasonable 3D effect, and then vary only the angle. That is my methodology in all of the examples I have posted. Anything else is just stupidity...

May I point out that you have failed to show a series of renderings starting with one good one and demonstrating what happens wh[e]n you change the angle and nothing else...

the only difference is the angle chosen in the Photoshop filter...

It isn't difficult to reproduce my result. I took an image from the internet, applied the Photoshop auto contrast correction, and then applied the same emboss filter that, with a different angle, produces a plausible 3D rendering...
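
For anyone who wants to test the "angle only" claim at home, here is a minimal sketch, in Python/NumPy, of what a directional emboss amounts to. It is an assumption about the general technique (mid-gray plus the difference against a copy shifted along the chosen angle), not Adobe's actual implementation, and the function and parameter names are mine:

# Hypothetical directional "emboss" -- a sketch, not Photoshop's code.
import numpy as np

def emboss(gray, angle_deg, offset=1, amount=1.0):
    """Difference a grayscale array against a copy shifted along angle_deg."""
    theta = np.deg2rad(angle_deg)
    dx = int(round(np.cos(theta) * offset))
    dy = int(round(-np.sin(theta) * offset))   # image rows grow downward
    shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
    # Mid-gray plus the directional difference gives the familiar relief look.
    return np.clip(128 + amount * (gray - shifted), 0, 255)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0, 255, size=(64, 64))   # stand-in for a real photograph
    for angle in (45, 135):                    # same offset and amount, new angle
        out = emboss(img, angle, offset=3)
        print(f"angle={angle:3d}  mean={out.mean():6.1f}  std={out.std():5.1f}")

In this sketch, angle_deg only picks the direction of the shift; offset controls how far the copy is shifted, and therefore how coarse the relief looks.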

I have suggested a means by which the discrepancy might be resolved. As it stands, though, and until I see otherwise, in my opinion the oft-repeated claims of methodology not only remain unverified but have been decisively falsified and refuted by the extensively detailed evidence adduced and posted by Swordmaker.

Cordially,

286 posted on 10/06/2008 9:26:09 AM PDT by Diamond ( </O>)

To: Diamond

I’ve been out of town on business, and I see not much has happened here. I’m going to close out my participation in this debate with a couple of non-inflammatory observations.

First of all the Photoshop filter is not an exact equivalent of the VP-8 analyzer. Whether this is important, I don’t know.

The best source of information I found on the subject asserts that digital manipulation can be exactly equivalent to the VP-8 analog device.

One source implied that Bryce-4 is similar, if not equivalent. I believe somewhere in my collection of obsolete software I have a copy of Bryce-4. Someday I will dig it out and play with it.

The images available on the internet are rather low resolution jpeg versions. The combination of low resolution and possible jpeg artifacts makes any definitive claims bogus.

I stand firm on several claims. First, the process of forming a 3D image from a single flat image is inherently interpretive. You cannot get the 3D effect simply by manipulating contrast. The “extrusion” effect is created by introducing implied light and shadow. That is what leads the human eye and mind to interpret an image as having depth. This processing is not objective. When you do this to an image you are in some sense falsifying the data.

Second, images formed by a process similar to x-rays do not have any “angle of incident light” information. For this reason you can choose any arbitrary angle in the Photoshop filter and get equally plausible results.

Third, images formed by incident light, such as the Obama image, are sensitive to angle, because light and shadow are objectively embedded in the image. The emboss filter will not produce equally plausible effects with arbitrary angles.

Fourth, my honest playing with the shroud image leads me to the conclusion that it has embedded information implying an angle of incident light. This is true even if you use smaller offset parameters. I will listen to anyone who has a complete technical description of the VP-8 algorithm or the Bryce algorithm. I’m betting that they have a parameter equivalent to the Photoshop angle.

Fifth, I have not seen anyone address the rather obvious fact that claiming the shroud image is a “graph” of the distance from the cloth to the body is nonsense. Such a graph would not produce a pronounced image of the pupil of the eye.

Sixth, I have learned a lot from arguing this controversy. I’m always willing to be wrong or partially wrong. I simply haven’t seen much to convince me that I am wrong. I have had to narrow my claims. Take that as a victory if you must.
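
To make the second and third claims concrete, here is a small numerical sketch (using a hypothetical directional emboss of my own, only an approximation of the Photoshop filter): an image with no baked-in light direction produces about the same relief strength at every angle, while a directionally lit image does not.

# Sketch only: an "x-ray-like" image with no light direction vs. a
# directionally "lit" ramp, run through a hypothetical directional emboss.
import numpy as np

def emboss(gray, angle_deg, offset=1, amount=1.0):
    theta = np.deg2rad(angle_deg)
    dx = int(round(np.cos(theta) * offset))
    dy = int(round(-np.sin(theta) * offset))
    shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
    return np.clip(128 + amount * (gray - shifted), 0, 255)

def relief_strength(img, angle_deg, offset=3):
    out = emboss(img, angle_deg, offset=offset)
    core = out[offset:-offset, offset:-offset]   # ignore wrap-around edges
    return np.abs(core - 128).mean()

y, x = np.mgrid[-32:32, -32:32].astype(float)
radial = 255 * np.exp(-(x**2 + y**2) / 400)   # symmetric blob: no light direction
lit = np.clip(128 + 4 * x, 0, 255)            # brightness ramps left to right

for name, img in (("radial", radial), ("lit", lit)):
    strengths = [relief_strength(img, a) for a in range(0, 360, 18)]
    print(f"{name:6s}  min {min(strengths):5.2f}   max {max(strengths):5.2f}")
# The radial blob gives roughly the same relief strength at every angle (the
# small variation comes from rounding the shift to whole pixels); the lit ramp
# swings from a strong relief to almost none as the shift direction turns
# parallel to the lines of constant brightness.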


287 posted on 10/07/2008 11:16:32 AM PDT by js1138

To: Diamond
The only way I can see at present that js1138 was being truthful when he said the only parameter he changed is the angle...

You asked me to post my settings and I posted the exact settings. They are truthful.

Now, in the interest of the whole truth, I will discuss Swordmaker's comments. He has some valid things to say, but he is completely off base in saying I am dishonest.

First, his valid comments: He says my offset is too high on the shroud picture. He is correct that the offset is higher than anything you would use on a photograph. So how did I choose it?

The simple answer is I played with the parameters to produce an effect similar to the internet images. I didn't consider the technical aspects, only the visual similarity in results.

The parameters I used for the Obama and x-ray images are much more conservative. It is possible to get some 3D effect on the shroud image with conservative settings; it's just not as pronounced. And even with conservative settings, the shroud image is sensitive to angle, as is the Obama image. But not the x-ray.

288 posted on 10/07/2008 12:12:26 PM PDT by js1138

To: js1138; grey_whiskers; Diamond; NYer; MHGinTN; shroudie
Such a graph would not produce a pronounced image of the pupil of the eye.

You obviously do not bother to read postings that rebut your claims. There are no images of pupils on any of the Shroud images. No one but you has claimed that the objects on the eyelids are "pupils." Since you have now been told, with proof, that what you claim are "pupils" are actually objects that are not body parts, your repetition of the claim is itself fraudulent and is therefore a strawman argument, repeated because it is easy to show that pupils could not be imaged unless light were involved.

When I finally realized what you were referring to, I was willing to give you the benefit of the doubt, thinking you just did not know and had made a misinterpretation of the image that Diamond posted. However, after I posted the facts, with photographs and quotations of the current state of the science and scholarship, falsifying your mistaken belief, you now repeat the claim that Diamond's image shows pupils. Either you did not read the posting, or you prefer to ignore it or misrepresent the evidence.

Nor do you acknowledge posts showing proofs invalidating your basic premise that merely changing the angle would produce your image, proofs I posted in response to your criticism: "May I point out that you have failed to show a series of renderings starting with one good one and demonstrating what happens wh[e]n you change the angle and nothing else."

You also claimed that "If you are so stupid to start with an interpretation that doesn’t look 3D, there is no point in playing with the angle."

I, however, DID start with a fairly good pseudo 3D image and merely changed the angle. I changed the angle twenty times, every 18°, which is a good representative sample, and failed to generate anything that looked even remotely like your fraud, or anything that did not show some "plausible" pseudo 3D effect. I posted every one of those images. I also showed how, starting with a fairly good pseudo 3D image, merely changing the offset, which you claim you didn't touch, would exactly duplicate your misrepresented fraud.

You challenged us. You said that ANYONE could duplicate what you did using Photoshop and, merely by changing the angles, get the same results. I accepted your challenge and proved you were not telling the truth. One major difference. I know how the emboss filter works. Obviously, you don't.

Why don't YOU start with your blurry, fraudulent image with the massive offset and work backwards using only angle and show us a valid 3D image. Post every step. Show the settings on the screen capture as I did, not your unsupported statement of what they are.

Fourth, my honest playing with the shroud image leads me to the conclusion that it has embedded information implying an angle of incident light. This is true even if you use smaller offset parameters. . . . So how did I choose it?

The simple answer is I played with the parameters to produce an effect similar to the internet images.

We KNOW you chose it. Deliberately. Yet, through 30 or 40 posts and replies you claimed you ONLY CHANGED THE ANGLE. Now, you admit you "played with the parameters."

Now, post the similar "internet images" on which YOU claim your fraud is based, the ones that look like it. Provide a link to the legitimate website where you saw this similar image.

Here, I'll post your image, again, because the exemplars you claim are on the Internet should look just like this:


289 posted on 10/07/2008 11:11:49 PM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE!)

To: Swordmaker
What you think are "pupils" are not. They are either coins or potsherds placed on the closed eyes to keep the eyelids closed.

Would you care to cite an example of a coin that was the size of the cornea of a human eye?

290 posted on 10/08/2008 10:51:16 AM PDT by js1138

To: js1138; grey_whiskers; Diamond; NYer; MHGinTN

The lepton, the smallest of the Roman and Greek coins struck, was less than 11 mm (7/16") in diameter. Some were as small as a pencil eraser.


291 posted on 10/08/2008 2:43:54 PM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE!)

To: js1138; Swordmaker; grey_whiskers
Thank you for taking the time to reply. My blood pressure has now recovered enough from the recent Presidential "debate" to be able to reply to your explanations.

You asked me to post my settings and I posted the exact settings. They are truthful...

...First, his valid comments: He says my offset is too high on the shroud picture. He is correct that the offset is higher than anything you would use on a photograph. So how did I choose it?

I realize that there are inherent difficulties in internet dialog and communication, but it is still very puzzling that it took around fifty excruciating posts to extract this oblique statement of your methodology in the face of factual, persistent, and pointed questions about it. Science is in large part about METHOD, and the technical aspects of the method are critical to arriving at sound scientific conclusions. Clarity and transparency about the methodology we are using are necessary and vital to avoid misunderstanding and confusion, and to draw valid conclusions.

From 287: First of all the Photoshop filter is not an exact equivalent of the VP-8 analyzer. Whether this is important, I don’t know.

1. True, and 2. It is important to distinguish between the two.

The best source of information I found on the subject asserts that digital manipulation can be exactly equivalent to the VP-8 analog device.

Yes, but not Photoshop. Photoshop's Emboss is not in that category. Emboss does not work by merely plotting image intensity on the Z axis as the VP-8 does. More on that in a minute.

The images available on the internet are rather low resolution jpeg versions. The combination of low resolution and possible jpeg artifacts makes any definitive claims bogus.

The images studied by the STURP scientists are not low resolution jpeg versions.

I stand firm on several claims. First, the process of forming a 3D image from a single flat image is inherently interpretive. You cannot get the 3D effect simply by manipulating contrast. The “extrusion” effect is created by introducing implied light and shadow. That is what leads the human eye and mind to interpret an image as having depth. This processing is not objective. When you do this to an image you are in some sense falsifying the data.

If you are talking about an image made with light, that is correct. But if you are talking about the Shroud, that is to assume your conclusion, because the Shroud image itself was not made with light.

Second, images formed by a process similar to x-rays do not have any “angle of incident light” information. For this reason you can choose any arbitrary angle in the Photoshop filter and get equally plausible results.

Third, images formed by incident light, such as the Obama image, are sensitive to angle, because light and shadow are objectively embedded in the image. The emboss filter will not produce equally plausible effects with arbitrary angles.

That's what we've been trying to tell you. Albedo images filtered with Emboss will not be isomorphically accurate in depth and height.

Fourth, my honest playing with the shroud image leads me to the conclusion that is has embedded information implying an angle of incident light.

Again, we ask, what is the angle? Can you give a degree, so that we can test your hypothesis?

It does not have or require an angle of incident light because the Shroud image itself was not made with light, as proved in a source that both you and I have cited: http://www.shroud.com/pdfs/orvieto.pdf

I will listen to anyone who has a complete technical description of the VP-8 algorithm or the Bryce algorithm. I’m betting that they have a parameter equivalent to the Photoshop angle.

Listen to the VP-8 developer's description of it:

The VP-8 Image Analyzer is an analog video processing device. The “isometric display” is generated on a cathode ray tube, like that of an oscilloscope. It is like a home television set, except the scanning and positioning of the video image is controlled by electrostatics (voltages), rather than by electromagnetism (currents). The picture is monochrome, or black and white, television. However, the isometric image is “shades of green” rather than “shades of gray”, due to the type of the cathode ray tube used.

The isometric display uses the changes of brightness, as they occur in an image, to change the “elevation” on the display. If something is bright, it goes up. If something is dark, it goes down. If it is some gray shade in between, it produces an “elevation” in between something very bright and something very dark.

The isometric display was never intended to produce a “real-three-dimensional” display. A snow-covered peak would look like a high, flat surface, while a rock sitting on top of the snow would look like a deep hole in the high surface. Light reflecting from a stream at the bottom of a valley would appear to be a high elevation, perhaps even higher than the snow on the peak of the mountains. Dull rocks and dark vegetation would appear to be lower than the water of the stream. In other words, objects are not as tall or short, high or low, as their reflectance of light might indicate. There is no correlation between reflectance and altitude.

The purpose of the isometric display was to make it easier to follow patterns of changes in shades of gray within an image. Particularly, the light pattern changes in reflection of light from soils and vegetation near a fault line were of interest. Following patterns of soil types and vegetation types was also of interest. But in no case was there ever any indication on the isometric display of how high or low, how tall or short something was. In looking at the facial area of the ventral image of the Shroud of Turin, one observes a generally proper “ramping” of the nose, a “rounding” of the face, and “shaping” of the lips, eyes, and cheeks. The isometric display is mapping responses to light energy, but the result induced by the image is altitude-relevant. This is a unique response.

The VP-8 Image Analyzer can vary the elevation scale (Z axis) relative to the X and Y axis scale. The VP-8 cannot change the linearity of the Z axis response, unless the unit is un-calibrated or the camera is improperly operated. A change of 10 percent in the incoming light level will produce an elevation change of 10 percent on the Z axis. It is a direct, linear function. The VP-8 can change the image polarity from bright-is-up to bright-is-down, but this is simply changing photographic response from negative to positive polarity. Therefore, a photographic positive or negative can be used, if the isometric polarity control is properly selected.

The Shroud image induces a response in the isometric display of a VP-8 Image Analyzer that is unique. Each point of the Shroud body image appears at a proper “elevation”. Is this due to the distance the cloth was from a body inside it? Is this due to the density of the human body at various points in the anatomy? Is it a result of radiant energy? These questions cannot be answered by the VP-8 Image Analyzer. However, the related theories can be rightfully posed. The isometric results are, somehow, three-dimensional in nature. The displayed result is only possible by the information (“data”) contained in the image of the Shroud of Turin. No other known image produces these same results.

The VP-8 Image Analyzer’s isometric display is a “dumb” process. That means it does one process on whatever “data” is sent to it. In that regard, it is quite like Secondo Pia’s photography. The photons come from the image through a lens, onto the sensitive material in a television camera. The photons are converted to electrons, causing more voltage to be present where the picture is bright and less voltage where it is dark. The isometric display plots out bright and dark as elevation. Like a photographic negative, the process is not “involved” in the result. It is simply photons in and voltage out. The Shroud image induces the three-dimensional result. It is the only image known to induce this result.
http://www.shroud.com/pdfs/schumchr.pdf
[excerpts][emphasis mine]
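
For anyone without access to a VP-8, here is a minimal digital sketch of the brightness-as-elevation display described in the excerpt above. It uses NumPy and Matplotlib on a synthetic grayscale array (substitute any image you like) and is only an illustration of the linear mapping, not the analog instrument itself:

# Sketch of a VP-8-style "isometric display": brightness plotted as elevation.
# A digital stand-in for illustration only, not the analog device.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3-D projection)

# Synthetic grayscale "image" -- replace with any 2-D array of intensities.
y, x = np.mgrid[-40:40, -40:40].astype(float)
image = 255 * np.exp(-(x**2 + 2 * y**2) / 900)

gain = 0.3                   # Z-axis scale relative to X and Y
elevation = gain * image     # direct, linear mapping: brighter means higher

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(x, y, elevation, cmap="Greens", linewidth=0)
ax.set_title("Image brightness mapped linearly to height")
plt.show()

A bright spot always plots high and a dark spot always plots low, exactly as the developer describes; whether the resulting relief looks like a sensible body shape depends entirely on the image fed in.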

If I had to sum up the difference between Photoshop Emboss and the VP-8, I would say that Photoshop's pseudo-3D effect on albedo images via the Emboss filter is to the VP-8 what a parrot's mimicry of human speech is to human language. There are superficial similarities, to be sure, but that's where the resemblance ends.

Sixth, I have learned a lot from arguing this controversy. I’m always willing to be wrong or partially wrong. I simply haven’t seen much to convince me that I am wrong. I have had to narrow my claims. Take that as a victory if you must.

I have learned things about the Shroud, too, that I did not know before. There is nothing necessarily wrong with being wrong about something in science, as long as no one is hurt by negligent application of the error and the error leads to knowledge. As far as I can tell there are no injuries or fatalities that have resulted from this discussion. The only ultimate victory is Truth.

I am agnostic as to what the Shroud represents historically, although I acknowledge that there may exist information of which I am ignorant that renders my agnosticism unjustified. I do tend to disagree with the hypothesis that the Shroud is a medieval forgery. If it is a forgery, it is certainly one of such phenomenal accuracy and detail as to provoke shock and awe at the artistic mastery of any putative medieval forger. It also seems unreasonable, in light of the uncertainty about the accuracy of the radiometric dating previously performed on the Shroud, to positively date the artifact as of medieval origin when one cannot even say how the image was formed.

Cordially,

292 posted on 10/09/2008 10:10:22 AM PDT by Diamond ( </Obama>)

To: Diamond
The isometric display plots out bright and dark as elevation.

Sorry to keep repeating this, but this is, at face value, impossible unless the analyzer skews the data in some way.

Think about it. The image already has light and dark areas. You don't see a 3D effect unless this data is somehow interpreted.

And what about the source -- a creationist source -- that says the exact result can be obtained with digital processing?

You have asked for my methodology. Let's see yours.

293 posted on 10/09/2008 10:30:31 AM PDT by js1138

To: js1138
Sorry to keep repeating this, but this is, at face value, impossible unless the analyzer skews the data in some way.

Think about it. The image already has light and dark areas. You don't see a 3D effect unless this data is somehow interpreted.

I will post it again:

The VP-8 Image Analyzer can vary the elevation scale (Z axis) relative to the X and Y axis scale. The VP-8 cannot change the linearity of the Z axis response, unless the unit is un-calibrated or the camera is improperly operated. A change of 10 percent in the incoming light level will produce an elevation change of 10 percent on the Z axis. It is a direct, linear function.
There is a gain control that can vary the elevation scale. That's it.
The photons come from the image through a lens, onto the sensitive material in a television camera. The photons are converted to electrons, causing more voltage to be present where the picture is bright and less voltage where it is dark. The isometric display plots out bright and dark as elevation. Like a photographic negative, the process is not “involved” in the result. It is simply photons in and voltage out.

And what about the source -- a creationist source -- that says the exact result can be obtained with digital processing?

Yes indeed. The following is from 1998 and so the media described is a little quaint, but I have no problem at all in principle with reproducing digitally the analog process of photons in and voltage out as a direct, linear function.

Barrie M. Schwortz
Verbatim transcript (from audio tape) of an extemporaneous presentation made by Barrie M. Schwortz on September 7, 1998 at the Shroud meeting in Dallas, Texas:

And I'm so thrilled Mr. (Pete) Schumacher is here (in reference to the man who developed the VP-8 Image Analyzer), because what I'm going to show you is what I call "The Virtual VP-8."© It is a complete simulation of the VP-8 Image Analyzer.

One of the most frustrating areas of Shroud image study is the so-called 3-D image that was detected by the VP-8 Image Analyzer. It is without question the least understood property of the Shroud image and the VP-8 is the wonderful device that allows us to examine that property. Yet few outside the inner circle of Shroud research have ever had the opportunity to operate the device and examine the Shroud image with it. In realizing this, I wanted to give everyone the same opportunity to examine the Shroud for themselves, using the VP-8. So we created the "Virtual VP-8"© Image Analyzer simulation.

Now this is on a CD-ROM, much like the Emanuela Marinelli CD-ROM that Kevin Moran showed you. From within the simulation you can select from one of four available Shroud photographs and a normal photograph of two children to examine. Once selected, the photo moves under the camera, the lights come on and it is displayed on a black and white video monitor within the display of the simulator. Then you go to the control panel and select a function, such as "gain" (which turns up and down the 3-D effect), and now you actually use your mouse to manipulate the gain at your own pace using the provided controls, just like on a real VP-8.

As a matter of fact, this is real VP-8 imagery, thanks to Kevin and Anne Moran, who were kind enough to let me come into their home and disrupt it for about 20 hours! Poor Anne was probably a little concerned that Kevin wasn't going to get to sleep at all that night. I actually videotaped, with a broadcast BetaCam camera right off the screen, my manipulations of four of my Shroud photographs on the VP-8.

Then, we took the videotape and spent eleven months designing and building it into the multi-layered interactive computer simulation you see here, so that eventually, everyone will be able to do it. And just like a real VP-8, you can turn up and down the gain, rotate the image, or tilt it up and down from flat to vertical. And any time you want, you can click on the black and white monitor screen and it opens a detailed, scrollable written description of the image currently on the VP-8 screen. The descriptions include clues of what to look for in each of these various images. You just click on it again and it goes back to the view of the source image.

Now the other thing you can do of course, is select different images. You'll see the previous photograph go out from under the camera and be replaced by the new one. And once again you select what control you want to use. This is the tilt function and you can then tilt that image of the Shroud, in this case the face.

Now of course, everybody says, "so what?" What's the big deal? Why is that significant? Well, you have to take a normal photograph (referring to the photo of two children that is part of the Virtual VP-8 simulation), and this happens to be the Moran grandchildren, which was the nearest one available so I grabbed it off the wall, and compare it to a VP-8 Shroud image. That's when you begin to understand the differences.

For example, if you use the gain control here on the kids, you will immediately notice that, instead of getting a natural relief, the kids hair is going into his head, in the facial relief his mouth is going deep in, the eyes are going deep in. Not close to a natural relief. Why, because this is an image made by light using photography, unlike the image on the Shroud. Note that I refer to the VP-8 image of the Shroud as a "relief" and not as "three dimensional" or "3-D." It's not three dimensional, which implies 360 degrees. It is a relief image.

This virtually simulates the precise way a VP-8 works. There is a camera, there are lights, there is a monitor to see the camera image on, and that is what the top monitor is, and of course, there is the green-screen oscilloscope type monitor that shows the VP-8 image.
http://www.shroud.com/pdfs/bsdallas.pdf

Btw, the image I posted at #193 was produced by Mark Bruzon using Bryce software from an original scan of a Barrie Schwortz photograph.

Cordially,

294 posted on 10/09/2008 12:46:01 PM PDT by Diamond ( </Obama>)

To: Diamond

Since a two-dimensional image does not have a Z-axis, it is being interpolated via some algorithm. In view of your demand for documentation of method, I’d like to see the algorithm.

I have written image filters and am somewhat familiar with how bit images are stored and manipulated. It may seem trivial now, but I wrote an image rotation program in 1984 in assembly language. More recently I wrote a simple program to change the background color in menu buttons for websites while preserving anti-aliasing.

What you have presented is not an algorithm, and it says nothing about how the z-axis “information” is extracted.


295 posted on 10/09/2008 12:57:32 PM PDT by js1138

To: Diamond
Here's an image published by a shroud site. It claims to be what you get when you use a VP-8 on a regular photograph.

The first thing I notice is that there is a 3d effect. It has distortions, but it is a real effect.

So the remaining claim for the shroud image is that it doesn't show the misinterpretations of depth that the above image shows.

I'll be more impressed when I can find some software and play with it. I already know you can make bad interpretations.

296 posted on 10/09/2008 1:20:55 PM PDT by js1138

To: js1138
And what about the source -- a creationist source -- that says the exact result can be obtained with digital processing?

To a certain extent he is wrong. It may appear similar, but digital processing is never analog. It is merely an approximation, in quantum steps, of the analog information.

Sorry to keep repeating this, but this is, at face value, impossible unless the analyzer skews the data in some way.

It appears that, since the VP-8 is an analog device that converts input voltage intensity from the video camera into output voltage on what is essentially an analog display device, there really is no "skewing" of the data. One could just as easily output the measured image intensity at every point in the camera onto a piece of graph paper in numbers. It would not be as easily seen, however. It's almost mechanical in principle. One can even do it by hand.

From my reading of the inventor's description, the VP-8 merely measures the intensity of the light being reflected from the image being scanned at each point, then converts that into a position on the Z axis, with the position being the scanned intensity expressed as a percentage of the most (or least) intense point scanned, depending on whether one is using a negative or positive image.
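
A toy version of that reading, done "by hand" on a small grid of made-up intensity values (Python; the numbers are invented purely for illustration):

# Each sampled intensity becomes a Z value expressed as a percentage of the
# brightest point scanned. A sketch of the idea only, not the VP-8's electronics.
intensities = [
    [ 12,  40,  12],
    [ 40, 200,  40],
    [ 12,  40,  12],
]
peak = max(max(row) for row in intensities)
for row in intensities:
    print("  ".join(f"{100 * v / peak:5.1f}%" for v in row))
# The bright center plots at 100%, the dim corners at 6%: photons in,
# elevation out, with no interpretive step in between.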

297 posted on 10/09/2008 6:35:53 PM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE!)

To: js1138
The first thing I notice is that there is a 3d effect. It has distortions, but it is a real effect.

JS, that is the whole point. If the image is created with light, it will produce light artifacts, i.e. shadows and reflections. The image you posted shows a cavity on the right side of the image's nose because the intensity of the reflected light drops off there. The VP-8 plots that lessened image intensity as LOWER on the Z axis than the left side of the nose. The lips, similarly, being darker than skin, are also plotted closer to zero than the skin. Similarly, the eyebrows of the photograph, being dark like the hair, are plotted far below the level of the skin of the forehead they actually sit on top of. Speaking of the hair, there is a light reflection in the hair on the image's right side that is plotted above the forehead because it reflects more light, when in a true three-dimensional rendering it would be below the forehead.

The image on the Shroud produces none of these shadow artifacts that would have to be present if light were involved in the image's formation.

298 posted on 10/09/2008 6:51:38 PM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE!)

To: Swordmaker

The difference between digital and analog vanishes if the resolution is equivalent. In the real and practical world, any image source can be converted to digital information with less loss than would occur in the distortion components of analog processing.

You keep repeating the claim that VP-8 doesn’t alter the data, but you haven’t given this claim any actual thought.

Displayed images are inherently two dimensional. There is no z-axis. When you produce a 3D effect on a two dimensional display you are altering the data.

The only way the human eye can interpret a two-dimensional display as a three-dimensional image is if there are light and dark areas consistent with light and shadow. The apparent light source has to come from some direction or another. There are lots of optical illusions based on this phenomenon, including images of moon craters that appear to be hills if the image is turned upside down.

The z-axis in the VP-8 image is an interpretation.


299 posted on 10/10/2008 10:55:10 AM PDT by js1138


