There is always a difference between recordings and live sound; it cannot be avoided. There is an interplay of beat frequencies, always present in live music, that is lost in recordings and cannot be recreated with the limited means of reproduction. Our ears pick up these beat frequencies, which change with the positioning of each instrument. . . and each voice. When I sang lead bass with a large chorale, it was amazing what a different sound could be had merely by changing who sang next to whom. . . because the voice mixture changed. Same with instrument placement. This is lost in recordings. Even inaudible high-frequency overtones can create beat tones in the audible range that add to the ambiance of live music and can be missing from live music recorded in a studio. I had a long discussion about this with our conductor. . . and he demonstrated it with a super-high-quality multi-channel analog tape recording, which captured inaudible ranges and had been mastered to a CD. There was a huge, jaw-dropping difference. He said a lot gets lost in the digitizing.

One correction comment on the oversampling part. . . one uses oversampling to provide a means by which the low-pass filtering that follows digital-to-analog conversion can be done with digital filtering rather than with a steep analog brick-wall filter. When one uses the 44.1 kHz samples directly, the analog filtering must provide a very steep drop-off around 20 kHz in order to avoid spurious artifacts appearing in the sound. This so-called brick-wall filtering introduces phase and amplitude anomalies, which some folks feel contribute to digital's rather steely sound. - Da Coyote

Yes, I think that is a serious issue.
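The beat effect described above falls straight out of a trig identity: two tones at nearby frequencies sum to a carrier at their average frequency whose loudness swells and fades at their difference frequency. A minimal numpy sketch (the 440/444 Hz pair and the one-second duration are illustrative choices, not taken from the discussion):

```python
import numpy as np

# Two tones a few hertz apart. By the identity
#   sin(2*pi*f1*t) + sin(2*pi*f2*t)
#     = 2 * cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)
# the sum is a tone at the average frequency, amplitude-modulated
# at the difference ("beat") frequency.
f1, f2 = 440.0, 444.0            # hypothetical example frequencies (Hz)
fs = 44100                       # CD sample rate
t = np.arange(0, 1.0, 1.0 / fs)  # one second of time points

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The slowly varying envelope: it passes through |f1 - f2| = 4 minima
# per second, which the ear hears as a 4 Hz "wah-wah" pulsation.
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
beat_hz = abs(f1 - f2)
```

Moving a singer or instrument changes which partials overlap at the listener's ear, and thus which of these difference tones appear, which is one way to picture why repositioning voices changes the blend.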
At the same time, the Nyquist theorem says that if you have no frequency components above the Nyquist frequency, you can reproduce the whole waveform perfectly by passing the sampled data through a filter with a sin(x)/x impulse response (which of course you can't do perfectly, since you can't implement the infinite range of x that the theory requires). But if you oversample at, say, four times the final sample rate, you can implement a digital filter that pretty thoroughly wipes out the frequency content in the two octaves above your final Nyquist frequency. Subsequently throwing out three samples for every one sample you transmit to the customer becomes your data compression.

The subsequent issue is then how particular you are about reproducing the waveform you had originally, to the extent the compression allows it. IOW, how high a sample rate do you insist on in your output to the hearer, and how faithfully do you implement the ideal sin(x)/x filter between the reduced sample rate of the recording and the hearer's ears? By means of a cheap DSP chip it would be possible to implement a digital filter whose output would give a very good approximation of the waveform you would have obtained by digital-to-analog reproduction of all the samples you had before you threw out 3/4 of them. That DSP filter could jack up the sample rate by ten times, not just four times, if desired.
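The sin(x)/x reconstruction, and the unavoidable truncation just mentioned, can be sketched in a few lines of numpy. The numbers here (8 kHz sample rate, a 1 kHz tone, 10x upsampling) are assumed for illustration and don't come from the posts; the point is that a truncated Whittaker-Shannon sum tracks the true waveform very closely away from the edges, where the finite number of taps starts to matter:

```python
import numpy as np

# Sample a tone well below Nyquist, then reconstruct it on a 10x denser
# time grid with a (necessarily truncated) sin(x)/x interpolation sum.
fs = 8000                  # hypothetical sample rate (Hz)
f0 = 1000                  # tone frequency, well under fs/2 = 4000 Hz
n = np.arange(256)
samples = np.sin(2 * np.pi * f0 * n / fs)

up = 10                                     # "jack up the sample rate by ten times"
t_fine = np.arange(len(samples) * up) / (fs * up)

# Whittaker-Shannon: x(t) = sum_k x[k] * sinc((t - k/fs) * fs).
# np.sinc(x) is sin(pi*x)/(pi*x), i.e. exactly the ideal kernel.
recon = np.zeros_like(t_fine)
for k in range(len(samples)):
    recon += samples[k] * np.sinc((t_fine - k / fs) * fs)

# Compare against the true continuous waveform, away from the edges
# (near the edges the truncated sum is missing its neighbors).
truth = np.sin(2 * np.pi * f0 * t_fine)
mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)
err = np.max(np.abs(recon[mid] - truth[mid]))   # small but nonzero
```

A real oversampling DAC does the same job with a short FIR approximation of this kernel rather than the full sum, which is exactly the trade-off between filter cost and fidelity the post describes.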
My ear is probably worse than that of many, but when our chorus master did the matching of voices I was not blown away by the result. I make no doubt, tho, that the effect is real. I sang with a chorus as a lowly chorus member - a mere bass at that - for a couple dozen seasons. That chorus performs The Messiah about 3 times each December. You will know that tenors capable of the register required by The Messiah are at a premium in a volunteer chorus . . .
My theory on that involves the premium placed on high male voices in the Dark Ages and into the Renaissance. . . too many tenor candidates became castrati, and the genes for tenors were, shall we say, bred out of the gene pool. Or the tenors rapidly learned to sing baritone. . . or completely forgot how to sing at all.