This isn't correct; you're drawing a distinction that doesn't exist. "Randomness" is just a description of a given datastream's entropy, and it tells you absolutely nothing about the subjective value of the information the stream carries. What "randomness" does tell you is the relationship between the size of the datastream and the amount of information it contains.
There is no mathematical distinction between subjectively interesting data and subjectively uninteresting data with the same information content. Drawing one would be analogous to asserting that Swahili is "random noise" because you don't understand Swahili. Information theory tells you how much information is there, not how to interpret it. It is a common fallacy to ascribe more value to a pattern you recognize than to one you don't.
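To make that concrete, here's a minimal Python sketch (the helper name and the test strings are just my own illustration): shuffle the bytes of an English sentence and the first-order entropy doesn't change at all, even though every bit of "meaning" is gone.

    import math, random
    from collections import Counter

    def entropy_per_byte(data: bytes) -> float:
        # Empirical first-order Shannon entropy, H = -sum p(x) log2 p(x),
        # in bits per byte. This is what "how random is this stream"
        # actually measures.
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    text = b"the quick brown fox jumps over the lazy dog " * 200
    shuffled = bytearray(text)
    random.shuffle(shuffled)

    # Identical symbol statistics, so identical entropy, even though one
    # stream is meaningful English and the other is gibberish.
    print(entropy_per_byte(text))
    print(entropy_per_byte(bytes(shuffled)))

Both prints give the same number, because entropy sees only the statistics of the symbols, not what they mean to a reader.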
Entropy has no relation to apparent complexity, at least as "complexity" is used in everyday speech (it does have a relationship to Kolmogorov complexity, but that is something else). Humans aren't particularly good at judging whether something is actually complex in an information-theoretic sense, particularly since most people aren't familiar with how mathematical complexity is actually measured. In an information-theoretic sense, many things that appear "complex" to humans turn out not to be once a rigorous evaluation of their information content is made.
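One rough way to see this in Python (compressed size is only a crude upper bound on information content, and the rule-90 construction is my own example, nothing canonical): a Sierpinski triangle looks intricate, yet it carries almost no information, while statistically random bits over the same alphabet are nearly incompressible.

    import os, zlib

    # Rule 90 cellular automaton: each step emits one row of a
    # Sierpinski-triangle pattern, which looks quite intricate to the eye.
    width, steps = 512, 256
    row = [0] * width
    row[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append(bytes(row))
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
    pattern = b"".join(rows)

    # Random bits over the same alphabet {0, 1} as a baseline.
    noise = bytes(b & 1 for b in os.urandom(len(pattern)))

    # Compressed size per input byte: small for the "complex-looking"
    # fractal, roughly 1 bit per byte (the theoretical floor) for noise.
    print(len(zlib.compress(pattern, 9)) / len(pattern))
    print(len(zlib.compress(noise, 9)) / len(noise))

The fractal compresses to a tiny fraction of its size because a short description generates it; the noise doesn't, because there is no description shorter than the bits themselves.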
As I understand it, when pulsars were first discovered, the folks at SETI were excited because they thought they had found a meaningful signal. Then information theory was applied, and the signal was determined not to be meaningful. Is that the case?
Shalom.