
Digital Photography Fundamentals: Understanding Resolution and Bit Depth

Adapted from Real World Digital Photography, 3rd Edition (Peachpit Press)

By Katrin Eismann, Sean Duggan, Tim Grey





PPI vs. DPI

Purists will draw a careful distinction between ppi and dpi. Pixels are the dots on your monitor, and dots are, well, the dots on paper. The distinction between the two is subtle, but we feel it’s important to use the clearest and most accurate terminology. Therefore, we use the term ppi when referring to pixels on a digital camera or display device and dpi when referring to dots in printed output.

If you hear people using dpi as a general term for resolution, understand that they may mean ppi when they say dpi (and then recommend that they buy and read this book!).

Resolution is one of the most important concepts to understand in digital imaging and especially in digital photography. The term resolution describes both pixel count and pixel density, and in a variety of circumstances these concepts are used interchangeably, which can add to misunderstanding.

Camera resolution is measured in megapixels (meaning millions of pixels); both image file resolution and monitor resolution are measured in either pixels per inch (ppi) or pixel dimensions (such as 1024 by 768 pixels); and printer resolution is measured in dots per inch (dpi) (see below). In each of these circumstances, different numbers are used to describe the same image, making it challenging to translate from one system of measurement to another. This in turn can make it difficult to understand how the numbers relate to real-world factors such as the image detail and quality or file size and print size.



Different devices use different units for measuring resolution, which can cause some confusion for photographers. Understanding how resolution is represented for each device will help you better understand the capabilities of each in your workflow.

The bottom line is that resolution equals information. The higher the resolution, the more image information you have. If we’re talking about resolution in terms of total pixel count, such as the number of megapixels captured by a digital camera, we are referring to the total amount of information the camera sensor can capture, with the caveat that more isn’t automatically better. If we’re talking about the density of pixels, such as the number of dots per inch for a print, we’re talking about the number of pixels in a given area. The more pixels you have in your image, the larger that image can be reproduced. The higher the density of the pixels in the image, the more likely the image is to exhibit fine detail and high quality.

The biggest question to consider when it comes to resolution is: How much do I really need? More resolution is generally a good thing, but that doesn’t mean you always need the highest resolution available to get the job done. Instead, you should match the capabilities of the digital tools you’re using to your specific needs. For example, if you are a real estate agent who is using a digital camera only for posting photos of houses on a Web site and printing those images at 4 by 6 inches on flyers, you really don’t need a multi-thousand-dollar, 22-megapixel digital camera to achieve excellent results. In fact, a 4–6-megapixel point-and-shoot camera would handle this particular need, although having more image information would be beneficial in the event you needed to crop an image or periodically produce larger output.
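To put rough numbers on that example: a quick calculation, assuming a 300 ppi print target for the 4-by-6 flyers and a typical on-screen size of about 1024 by 768 pixels for the Web photos, shows how little resolution those uses actually demand.

# Rough estimate of the pixels needed for the real estate example.
# Assumptions (ours, for illustration): 4-by-6-inch prints at 300 ppi,
# and Web photos displayed at roughly 1024 by 768 pixels.

print_w_px = 6 * 300   # 1800 pixels on the long edge
print_h_px = 4 * 300   # 1200 pixels on the short edge
print_megapixels = print_w_px * print_h_px / 1_000_000

web_megapixels = 1024 * 768 / 1_000_000

print(f"4x6 print at 300 ppi needs about {print_megapixels:.1f} MP")  # ~2.2 MP
print(f"A 1024x768 Web photo needs about {web_megapixels:.1f} MP")    # ~0.8 MP

Either use is covered comfortably by a 4–6-megapixel camera, which is why the extra pixels of a high-resolution body mainly buy cropping room and occasional larger output.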

Megapixels vs. Effective Megapixels
Digital cameras are identified based on their resolution, which is measured in megapixels. This term is simply a measure of how many millions of pixels the camera’s image sensor captures to produce the digital image. The more megapixels a camera captures, the more information it gathers. That translates into larger possible image output sizes.

However, not all the pixels in an image sensor are used to capture an image. Pixels around the edge are often masked, or covered up. This is done for a variety of reasons, from modifying the aspect ratio of the final image to measuring a black point (where the camera reads the value of a pixel when no light reaches it) during exposure for use in processing the final image. Because all pixels in the sensor aren’t necessarily used to produce the final image, the specifications for a given camera generally include the number of effective megapixels. This indicates the total number of pixels actually used to record the image rather than the total available on the image sensor.

Where Resolution Comes into Play

Resolution is a factor at every step of the photo-editing process. The digital camera you use to record the scene, the monitor you use to view those images, and the printer you use to produce prints all have a maximum resolution that determines how much information they are able to capture, display, or print. Understanding how resolution affects each of these devices will help you determine which tools are best for you and how to use them.

Camera Resolution
Camera resolution defines how many individual pixels are available to record the actual scene. This resolution is generally defined in megapixels, which indicates how many millions of pixels are on the camera sensor that is used to record the scene. The more megapixels the camera offers, the more information is being recorded in the image.

Many photographers think of camera resolution as a measure of the detail captured in an image. This is generally true, but a more appropriate way to think of it is that resolution relates to how large an image can ultimately be reproduced. The table below shows the relationship between a camera’s resolution and the images the camera can eventually produce. If sensor resolution were the only thing that defined image quality and detail, picking a camera would be child’s play, with bigger always being better, but this simple formula will not serve you well. In addition to sensor resolution, image detail and quality are affected by such factors as lens quality, file formats, image processing, and photographic essentials such as proper exposure.



Megapixel Decoder Ring
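
The arithmetic behind a decoder ring like this is simple to sketch. The snippet below assumes a 3:2 sensor aspect ratio and a 300 ppi print target, so its output is an approximation rather than a substitute for the table.

import math

def print_size_inches(megapixels, aspect=3/2, ppi=300):
    """Approximate maximum print size at a given ppi for a sensor of
    the given megapixel count and aspect ratio (width:height)."""
    total_px = megapixels * 1_000_000
    height_px = math.sqrt(total_px / aspect)
    width_px = height_px * aspect
    return width_px / ppi, height_px / ppi

for mp in (6, 10, 12, 16, 22):
    w, h = print_size_inches(mp)
    print(f"{mp:>2} MP -> about {w:.1f} x {h:.1f} inches at 300 ppi")

A 6-megapixel capture works out to roughly 10 by 6.7 inches at 300 ppi, while 22 megapixels reaches roughly 19 by 13 inches, which is why more megapixels translate into larger possible prints rather than automatically better ones.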

Monitor Resolution
The primary factor in monitor resolution is the actual number of pixels the monitor is able to display. For desktop or laptop LCD displays there is a “native resolution” that represents the actual number of physical light-emitting pixels on the display, and this is the optimal resolution to use for the display. Any other setting forces the display to scale the image, which causes degradation in image quality.

The actual resolution is often described not just as the number of pixels across and down, but also with a term that names that resolution. For example, XGA (Extended Graphics Array) is defined as 1024 pixels horizontally by 768 pixels vertically. SXGA (Super Extended Graphics Array) is 1280 by 1024 pixels. There are a variety of such standard resolution settings. In general it’s best to choose a monitor with the highest resolution available so you can see as much of your image as possible at once. However, keep in mind that the higher the resolution, the smaller the screen’s interface elements will appear on your monitor.



The monitor resolution you use determines how much information can be displayed. A high-resolution display of 1920 by 1200 pixels (left) shows more information than a lower-resolution display of 1024 by 768 pixels (right).

72 PPI?
One of the most common misconceptions about monitor resolution is that monitors display at 72 ppi and that all Web graphics need to be set to 72 ppi. This simply isn’t the case. Back in the early days of personal computing, Apple had a 13-inch display that did indeed operate at 72 ppi. Most monitors these days display at a range between about 85 ppi and 125 ppi. The exact number depends on the monitor’s pixel dimensions and its physical size. Again, this number is a measure of pixel density, so it relates to the overall image quality of the display. The higher the ppi value, the crisper the display and the more capable the monitor is of showing fine detail without as much magnification.
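
That pixel density can be worked out for any display from its pixel dimensions and its diagonal measurement. The sketch below uses two hypothetical displays as examples; the formula is the point, not the specific models.

import math

def monitor_ppi(width_px, height_px, diagonal_inches):
    """Pixel density of a display: diagonal pixel count divided by
    the diagonal measurement in inches."""
    diagonal_px = math.sqrt(width_px**2 + height_px**2)
    return diagonal_px / diagonal_inches

# Hypothetical examples, not specific products:
print(f"{monitor_ppi(1920, 1200, 24):.0f} ppi")  # roughly 94 ppi
print(f"{monitor_ppi(1024, 768, 13):.0f} ppi")   # roughly 98 ppi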

In terms of Web display, the pixel dimensions of the image and the monitor used to view the Web page determine how large the image appears; browsers ignore the ppi value stored in the file, meaning you don’t need to make sure that every image you post to the Web is set to 72 ppi.

Printer Resolution
For photographers, the digital image’s defining moment is when ink meets paper. The final print is the culmination of all the work that has gone into the image, from the original concept, the click of the shutter, and the optimization process to the final print. The quality of that final print is partly determined by the printer resolution.

Once again, just to keep things confusing (we mean interesting), there are two numbers that are often labeled “print resolution”: output resolution and printer resolution. The marketing efforts of printer manufacturers only add to the confusion (see the following section, “Marketing Hype”).

Output resolution in the image file is simply a matter of how densely the image’s pixels are spread out over the print, which in turn determines how large the image will print and, to a certain extent, the quality you can obtain in the print.

Printer resolution is the measure of how closely the printer places the dots on paper. This is a major factor in the amount of detail the printer can render—and therefore the ultimate quality of the print. Note that the printer’s resolution is determined by the number of ink droplets being placed in a given area, not by the number of pixels in your image. Therefore, there won’t necessarily be a direct correlation between output and printer resolutions, because multiple ink droplets are used to generate the individual pixels in the image.

Each type of printer, and in fact each printer model, is capable of a different resolution.

Marketing Hype

It’s bad enough that the term resolution is defined in a variety of ways, making it potentially a confusing topic. But to us it seems as if the manufacturers of digital photography tools are trying to confuse everyone even further. It doesn’t help that there aren’t definitive definitions for many of the terms being used to describe various products. For example, the term resolution is used to describe the quality a printer is capable of. However, the number by itself is meaningless because so many other factors go into the final image’s quality. Therefore, the only way to use the number is to compare it with the numbers presented for competing printers and to put those numbers in context with other factors, such as the number of inks, the accuracy of dot placement, the dithering patterns, and more. What manufacturers are really trying to do is convince you that their numbers and products are the best, but as we all know the truth can get lost in the hype.

Digital camera manufacturers are quick to boast about the number of megapixels a camera is able to capture. This is an important factor to consider, but other important factors also impact final image quality. Keep in mind that you may not necessarily need the camera with the most megapixels for your particular needs. Also, you can generally push digital images to large output sizes even if you don’t have the highest resolution to begin with. Other factors, such as lens quality and choices, sensor quality, and special features such as video capture, high ISO performance, or a fast frame rate, are also important when you’re deciding which camera is best for you or whether to upgrade from your current camera.

The most marketing hype seems to revolve around photo inkjet printers. There is a constant barrage of claims of how many dots per inch the latest printer can produce. Repeated testing has convinced us that photographic output above 1440 dpi as set in the printer driver produces no benefit in terms of image quality.

By understanding what the various specifications mean and determining which factors are important to you, you’ll be able to see through the hype and make the best purchasing decision.


Bit depth describes the number of bits used to store a value, and the number of possible values grows exponentially with the number of bits.



Bit Depth

A single bit can store two values (ostensibly zero and one, but for our purposes it is more useful to think of this as black or white), whereas 2 bits can store four possible values (black, white, and two shades of gray), and so on. Digital image files are stored using either 8 or 16 bits for each of the three color (red, green, blue) channels that define pixel values, and HDR (high dynamic range) images are processed and stored as 32-bit images.



As the bit depth increases, the number of possible tonal values grows exponentially.

8 Bit vs. 16 Bit

The difference between an 8-bit and a 16-bit image file is the number of tonal values that can be recorded. (Anything over 8 bits per channel is generally referred to as high bit.) An 8-bit-per-channel capture contains up to 256 tonal values for each of the three color channels, because each bit can store one of two possible values, and there are 8 bits. That translates into two raised to the power of eight, which results in 256 possible tonal values. A 16-bit image can store up to 65,536 tonal values per channel, or two raised to the power of 16. The actual analog-to-digital conversion that takes place within digital cameras supports 8 bits (256 tonal values per channel), 12 bits (4,096 tonal values per channel), 14 bits (16,384 tonal values per channel), or 16 bits (65,536 tonal values per channel) with most cameras using 12 bits or 14 bits. When working with a single exposure, imaging software only supports 8-bit and 16-bit-per-channel modes; anything over 8 bits per channel will be stored as a 16-bit-per-channel image, even if the image doesn’t actually contain that level of information.
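
All of the counts quoted above come from the same relationship: the number of tonal values per channel is two raised to the bit depth, as a quick check confirms.

# Tonal values per channel for common bit depths: 2 raised to the bit depth.
for bits in (1, 8, 12, 14, 16):
    print(f"{bits:>2} bits per channel -> {2**bits:>6} tonal values")
# 1 -> 2, 8 -> 256, 12 -> 4096, 14 -> 16384, 16 -> 65536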

When you start with a high-bit image by capturing the image in the RAW file format, you have more tonal information available when making your adjustments. Even if your adjustments—such as increases in contrast or other changes—cause a loss of certain tonal values, the huge number of available values means you’ll almost certainly end up with many more tonal values per channel than if you started with an 8-bit file. That means that even with relatively large adjustments in a high-bit file, you can still end up with perfectly smooth gradations in the final output.

Working in 16-bit-per-channel mode offers a number of advantages, not the least of which is helping to ensure smooth gradations of tone and color within the image, even with the application of strong adjustments. Because the bit depth of a 16-bit-per-channel image is double that of an 8-bit-per-channel image, the file size is also double. However, since image quality is our primary concern, we feel the advantages of a high-bit workflow far exceed the (relatively low) extra storage costs and other drawbacks, and thus recommend always working in 16-bit-per-channel mode.
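
To see where the doubling comes from, consider the uncompressed size of a hypothetical 12-megapixel (4000 by 3000 pixel) RGB image; compression and metadata change the on-disk numbers, but the two-to-one ratio holds.

def uncompressed_size_mb(width_px, height_px, bits_per_channel, channels=3):
    """Uncompressed image size in megabytes."""
    bytes_total = width_px * height_px * channels * (bits_per_channel // 8)
    return bytes_total / (1024 * 1024)

# Hypothetical 12-megapixel image (4000 x 3000 pixels), RGB:
print(f"8-bit:  {uncompressed_size_mb(4000, 3000, 8):.1f} MB")   # ~34.3 MB
print(f"16-bit: {uncompressed_size_mb(4000, 3000, 16):.1f} MB")  # ~68.7 MB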





Excerpted from Real World Digital Photography, 3rd Edition by Katrin Eismann, Sean Duggan, Tim Grey. Copyright © 2011. Used with permission of Pearson Education, Inc. and Peachpit Press.