I'm no expert, but here's the way I understand it.
Information theory is based on the concept that there is a difference between information and random signals. This is axiomatic, since it cannot be proven. The closest thing we have to a proof is that we have not yet identified any non-sentient process which generates information, nor has anyone even posited a plausible candidate for one.
Based on this idea, a signal can be analyzed to determine whether it behaves like a random signal or like information. This is what the SETI project does to separate a possible signal from the background noise of the universe.
Now here is where the issue of proof comes in. I cannot prove to you that there is no natural process anywhere in the universe that can generate a signal that would be considered information according to information theory. I can only say that none has yet been discovered.
But I would insist that, until such a process is discovered, or at least theoretically described, the burden of proof is on the one who posits such a process.
One note here: I'm not an information scientist. I don't know by what mathematical magic they do what they do. I don't even know how WinZip compresses files. But information theory has been around as long as we have had signal intelligence and I think it is pretty valid. If not, don't start posting math flames at me. Just tell me how it has been proven invalid, and on what basis SETI is still going on.
Shalom.
By definition, this is not random: you can see a pattern. But some people cannot see the information.
"Based on this idea, any signal is processed to determine whether it exhibits the behavior of a random signal or information. This is what is used in the SETI project to mask the noise in the universe from a possible signal."
The way you've stated it is not exactly true. For example, information in the form of a string of 1's and 0's can be turned into a random-looking signal by "randomly" flipping each successive bit. The information can still be extracted if the recipient knows the sequence of random flips. To anybody else, the signal is just noise.
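Here's a toy sketch of what I mean, in Python (the message, seed, and function name are made up purely for illustration):

    import random

    def scramble(bits, seed):
        # XOR each bit with a pseudorandom keystream bit; without the
        # seed, the output is statistically indistinguishable from noise.
        rng = random.Random(seed)
        return [b ^ rng.randint(0, 1) for b in bits]

    message = [1, 0, 1, 1, 0, 0, 1, 0]       # the "information"
    noise_like = scramble(message, seed=42)

    # Scrambling twice with the same keystream is the identity
    # (x ^ k ^ k == x), so anyone who knows the seed recovers the
    # message exactly. To everyone else it is just noise.
    assert scramble(noise_like, seed=42) == message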
Part of information theory addresses how much information can be packed into a given signal. Turns out (almost intuitively) that if a signal is fully loaded, i.e. it has as much information as it can carry, it is indistinguishable from a random signal. That's because if you can detect any regularity or non-randomness in the signal, you can exploit that regularity to pack in more information. Our own electronic communications are becoming more and more noise-like -- just listen to your modem.
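You can check this yourself by compressing some redundant text and measuring the byte-level entropy before and after. A rough sketch (zlib is just standing in for any decent compressor):

    import math
    import random
    import zlib
    from collections import Counter

    def bits_per_byte(data):
        # Zeroth-order Shannon entropy of a byte stream, in bits per byte.
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # A few kilobytes of redundant English-like "signal".
    words = b"the quick brown fox jumps over the lazy dog".split()
    rng = random.Random(0)
    text = b" ".join(rng.choice(words) for _ in range(5000))
    packed = zlib.compress(text, 9)

    print(bits_per_byte(text))    # well under 8: the text is redundant
    print(bits_per_byte(packed))  # near the 8-bit ceiling: noise-like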
SETI depends in part on the aliens either being too primitive to exploit the full information-carrying capacity of their signals, or deliberately trying to attract our attention. I don't have a lot of hope for either -- among humans, at least, there will be a window of only a hundred years or so in which our "inadvertent" signals differ significantly from noise, and I don't see anybody scrambling to beam intentional beacons at hypothetical aliens.
Bottom line is that absence of evidence of a signal (or of Intelligent Design) can't be construed as evidence of its non-existence. Since there is no way to falsify the hypothesis of Intelligent Design, it's not science.
This isn't correct, as you make a distinction that doesn't exist. "Randomness" is just a description of the entropy of a given datastream, and it tells you absolutely nothing about the subjective value of the information that datastream contains. What "randomness" does tell you is the relationship between the size of the datastream and the amount of information it carries.
There is no mathematical distinction between subjectively interesting data and subjectively uninteresting data with the same information content. To do so would be analogous to asserting that Swahili is "random noise" because you don't understand Swahili. Information theory tells you how much information is there, not how to interpret it. It is a common fallacy to ascribe more value to a pattern that you recognize than one that you don't.
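A toy demonstration: shuffle the bytes of a sentence so it means nothing at all, and a first-order entropy measure cannot tell the difference, because it only looks at symbol frequencies (the helper below is my own sketch):

    import math
    import random
    from collections import Counter

    def bits_per_byte(data):
        counts = Counter(data)
        n = len(data)
        # sorted() so the floating-point sum is order-independent
        return -sum(c / n * math.log2(c / n) for c in sorted(counts.values()))

    sentence = bytearray(b"in the beginning was the word")
    shuffled = bytearray(sentence)
    random.shuffle(shuffled)      # destroys the meaning entirely

    # Same byte histogram, therefore the exact same measured entropy:
    print(bits_per_byte(sentence) == bits_per_byte(shuffled))   # True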
Entropy has no relation to apparent complexity, at least as "complexity" is used in normal jargon (it does have a relationship to Kolmogorov Complexity, but that is something else). Humans aren't particularly good at discerning if something is actually complex in an information theoretic sense or not, particularly since most people really aren't familiar with how mathematical complexity is actually measured. In an information theoretic sense, many things that appear "complex" to humans aren't if a rigorous evaluation of information content is actually made.
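One rough way to get a feel for this is to use compressed size as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable in general). Something that looks "busy" to the eye can still be algorithmically trivial:

    import os
    import zlib

    def compressed_size(data):
        # An upper-bound proxy for Kolmogorov complexity: the length
        # of one particular short description of the data.
        return len(zlib.compress(data, 9))

    busy_but_regular = b"#@!$%^&*" * 2048    # looks busy, is highly regular
    actually_random = os.urandom(len(busy_but_regular))

    print(compressed_size(busy_but_regular))  # tiny: a short rule describes it
    print(compressed_size(actually_random))   # roughly the full 16384 bytes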