The DESTINATION SURFACE is all one resolution and either a 24- or 32-bit color pixel depth. It can "represent" 4x-larger pixels by using 4 pixels of its surface to represent each pixel of the image. It can "represent" binary bitmap images by turning all 24 bits on (White) or off (Black). The destination surface resolution is a CONSTANT. It ought not "create" 4x-larger pixels for a background smudge it doesn't recognize. It ought to simply render to the surface the image that is loaded.
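To make that concrete, here is a minimal sketch of the idea: the surface resolution is fixed, and a 1-bit source pixel is represented by a scale x scale block of surface pixels that are either all on or all off. The function name and layout are invented for illustration, not taken from any scanner's code.

```python
WHITE = (255, 255, 255)  # all 24 bits on
BLACK = (0, 0, 0)        # all 24 bits off

def render_1bit(bitmap, scale=2):
    """Render a 1-bit bitmap onto a fixed-resolution 24-bit surface,
    using a scale x scale block of surface pixels per source pixel."""
    surface = []
    for row in bitmap:
        expanded = []
        for bit in row:
            # one source pixel becomes `scale` surface pixels across
            expanded.extend([WHITE if bit else BLACK] * scale)
        for _ in range(scale):
            # ...and `scale` surface rows down
            surface.append(list(expanded))
    return surface

bitmap = [[1, 0],
          [0, 1]]
surface = render_1bit(bitmap, scale=2)
# The 2x2 source becomes a 4x4 surface; each source pixel is a 2x2 block.
```

The surface never changes resolution; only the representation of the source on it does.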
Now you seem to be making the argument that on a supposedly black and white ORIGINAL document, the scanner (and software) cannot distinguish sufficient contrast between the BLACK of the letter, and the WHITE of the page to recognize it as anything but the background, yet our eyes can easily distinguish that it is not?
(As I'm sure you realize, if you scan a black-and-white photo at the same settings as you would use for a color photo, you get a file the same size as if it were a color photo.
It *IS* a color photo. Its colors are gray-scale renderings of the three primary colors as represented by the binary bits in the memory surface allocated for this purpose.
The computer doesn't "understand" that gray isn't really a color--unless you tell it so.) If the background is downsampled, the 'R' is downsampled along with it.
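The point about gray being "a color" can be sketched in a few lines: gray is just a pixel whose three channels are equal, and the file stores three channels either way, which is why the sizes match. The dimensions below are an assumed example (an 8.5x11" page at 300 dpi), not taken from the discussion.

```python
def is_gray(pixel):
    """A pixel is gray exactly when its R, G, and B channels are equal."""
    r, g, b = pixel
    return r == g == b

gray = (128, 128, 128)   # 50% gray, stored in 3 bytes
red = (255, 0, 0)        # a saturated color, also stored in 3 bytes
assert is_gray(gray) and not is_gray(red)

# Uncompressed size is identical whether every pixel is gray or not:
width, height, bytes_per_pixel = 2550, 3300, 3  # 8.5x11" at 300 dpi (assumed)
size = width * height * bytes_per_pixel          # about 25 MB either way
```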
Downsampled? A new term for "Deus ex machina"? Yeah: when I don't recognize something, I make the resolution four times worse rather than just leaving it alone.
The important thing is that the computer doesn't know it's an 'R'. We can recognize it, but the software just thinks it's a gray smudge.
If you are making a copy, the computer doesn't need to know what it is, n'est-ce pas?
I have thought about this a bit. A better argument for you would be that the Adobe program is using an MPEG-type compression algorithm on image tokens somehow deemed by the software to need less detail. You could further argue that this is a benefit in applications OTHER than creating exact copies, for which this software might be used most of the time. (Rapid video rendering comes to mind.) Then the question becomes why some moron thought it was a good idea to do this instead of making an exact copy. (It still doesn't explain the "Halos" around each letter either.)
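For what it's worth, the "image token" idea floated above can be sketched as follows. This is roughly how JBIG2-style pattern-matching compressors behave, not a claim about what Adobe's code actually does: patches that look "close enough" are all rendered from one stored token, so small differences between glyphs get silently erased. All names here are invented for illustration.

```python
def hamming(a, b):
    """Count the pixel positions where two equal-length patches differ."""
    return sum(p != q for p, q in zip(a, b))

def tokenize(patches, threshold=2):
    """Match each patch to a stored token if one is within `threshold`
    differing pixels; otherwise store the patch as a new token."""
    tokens, indices = [], []
    for patch in patches:
        for i, tok in enumerate(tokens):
            if hamming(patch, tok) <= threshold:
                indices.append(i)        # reuse an existing token
                break
        else:
            tokens.append(patch)         # store a new token
            indices.append(len(tokens) - 1)
    return tokens, indices

# Two slightly different scans of the "same" glyph collapse to one token:
a = (1, 1, 0, 1, 0, 1, 1, 1, 0)
b = (1, 1, 0, 1, 0, 1, 1, 1, 1)   # one pixel differs from a
tokens, idx = tokenize([a, b])
# Only one token survives; b is rendered with a's bitmap.
```

The output is smaller, but it is no longer an exact copy, which is the crux of the objection above.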
Our eyes are much better at discerning meaningful shapes than computers are. That's why CAPTCHAs work.
Downsampled? A new term for "Deus ex machina"? Yeah: when I don't recognize something, I make the resolution four times worse rather than just leaving it alone.
It's the term Adobe uses (and others too, I imagine). I could have sworn it appeared in a previous post of mine. Anyway, first, the point is that the computer didn't recognize it. And second, because of that, it treated it the same way it handled the rest of the background--which was to "downsample" it, i.e., lower its resolution.
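A minimal sketch of what "downsample" means here: average each 2x2 block of pixels into one, quartering the pixel count. A thin black stroke on a white background averages out to mid-gray, which is exactly the "gray smudge" described above. The function name is illustrative, not Adobe's.

```python
def downsample_2x2(img):
    """Average each 2x2 block of a grayscale image (0=black, 255=white)
    into a single pixel, halving each dimension."""
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            block = [img[y][x], img[y][x + 1],
                     img[y + 1][x], img[y + 1][x + 1]]
            row.append(sum(block) // 4)
        out.append(row)
    return out

# One black pixel (part of a letter stroke) among three white neighbors:
img = [[0,   255],
       [255, 255]]
result = downsample_2x2(img)  # [[191]] -- neither black nor white, just gray
```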
If you are making a copy, the computer doesn't need to know what it is, n'est-ce pas?
No, it doesn't. It would have been better if they'd turned off whatever routines caused these anomalies and just scanned the damn thing as a TIFF.
(It still doesn't explain the "Halos" around each letter either.)
Actually, I think that's an argument for the whole thing being a program process rather than intentional copying and pasting. If the latter, there would be no reason for halos, and certainly no reason that when you hid the text "layers," the background would be white behind them. In fact, I don't see any way to explain that in a copy-and-paste scenario.
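The layered model argued for above can be sketched like this (with invented names, and without claiming this is Adobe's actual pipeline): an MRC-style compressor separates the text into a foreground layer plus a mask, and fills the background *behind* the text with white. Hide the text layer and that white fill is exposed, which is what the copy-and-paste theory cannot explain.

```python
WHITE, BLACK, PAPER = 255, 0, 230   # PAPER = slightly off-white page tone

def composite(background, foreground, mask):
    """Where mask is 1, show the foreground (text) pixel;
    elsewhere, show the background pixel."""
    return [f if m else b for b, f, m in zip(background, foreground, mask)]

mask       = [0, 1, 1, 0]                  # where the letter's pixels sit
background = [PAPER, WHITE, WHITE, PAPER]  # white filled in behind the text
foreground = [0, BLACK, BLACK, 0]

page = composite(background, foreground, mask)
# page -> [230, 0, 0, 230]: the full page as normally rendered.
hidden = composite(background, foreground, [0, 0, 0, 0])
# hidden -> [230, 255, 255, 230]: hiding the text layer exposes white
# behind the letters -- a program process, no pasting required.
```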