Polarik starts out saying that the resolution of an electronic image is based on dots per inch.
However, resolution of an image is simply the number of pixels that it contains.
Should I read past that error?
As Wikipedia points out:

> Note that the use of the word resolution here is misleading. The term "display resolution" is usually used to mean pixel dimensions (e.g., 1280×1024), which does not tell you anything about the resolution of the display on which the image is actually formed (which would typically be given in pixels per inch for digital, or in number of lines per picture height, measured horizontally, for analog).

To confirm this in Photoshop, just try resizing an image: it lists height, width, and resolution, the latter of which is measured in pixels per inch (or similar units).
Actually, resolution is 1/(dots per inch) on the original. Think of it as a sampling length: if two features on the original were separated by less than the resolution, they could not be discerned as separate features.
The number of pixels in an image is (physical size in inches) × (dots per inch). So a physically large document scanned at low resolution could have the same size in pixels as a smaller image scanned/sampled at a higher resolution.
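To make that concrete, here's a small sketch (the document sizes are made-up examples, not from any actual scan):

```python
def pixel_width(size_inches: float, dpi: int) -> int:
    """Pixel count along one axis = physical size (inches) * dots per inch."""
    return round(size_inches * dpi)

# A large page at low resolution...
large_low = pixel_width(8.5, 100)    # 8.5 inches at 100 dpi -> 850 pixels

# ...matches a half-size page at double the resolution.
small_high = pixel_width(4.25, 200)  # 4.25 inches at 200 dpi -> 850 pixels

print(large_low, small_high)  # 850 850
```

So pixel dimensions alone can't tell you the dpi (or physical size) of the original; you need two of the three quantities to recover the third.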
Being able to see a feature smaller than the resolution depends on the details of the sampling/scanning. If the pixel is a true point sample, you may miss such a small feature entirely; more likely it's some kind of average over the area represented by the pixel, in which case a high-contrast small feature may be observable as a pixel of lesser contrast. Depending on the details, it may even "bleed" into other nearby pixels.
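A toy illustration of the averaging case, using a made-up 1-D "scan line" (the values and window size are arbitrary, just for demonstration):

```python
# A scan line of 16 source points: all white (0) except one
# high-contrast feature (255) narrower than the sampling window.
signal = [0] * 16
signal[5] = 255

# Simulate a scanner that averages each 4-point window into one pixel
# (box filtering), rather than taking a single point sample.
window = 4
pixels = [sum(signal[i:i + window]) / window
          for i in range(0, len(signal), window)]

print(pixels)  # [0.0, 63.75, 0.0, 0.0]
```

The 255-value feature survives, but only as a 63.75-value pixel: visible, yet at a fraction of its original contrast. A true point sampler hitting positions 0, 4, 8, 12 would have missed it entirely.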
For images from cameras rather than scanners, the analysis needs to look at the pixels per inch in the "image" plane of the camera.