8-Bit vs. 16-Bit: What's The Big Deal?

If you’ve used Adobe Photoshop for a while, you know that your files are either 8-bit or 16-bit. But what is a bit, and why does it matter?

Your camera is a very powerful computer. Yes, it has optics to focus the image, but at its core your trusty digital SLR is a powerful, albeit small, computer system. Image files are simply data files that contain all the information your computer needs to generate an image. The smallest unit of data is a bit. Deep inside your computer or camera, that bit is really just a positive or negative electrical charge.

For normal data files, 8 bits are used to represent a single letter, number, or symbol. To produce the letter “A”, for example, the computer uses 8 bits of data. For photographs, we need to remember that images are made up of a collection of pixels. Let’s take a black-and-white image first: 8 bits of data define the tone of each pixel in the image.

Remember that each bit is either + or -, represented by a “0” or a “1”. The total number of unique combinations of 8 bits is 256. (You knew that math from school would come in handy.) The range of tonal values in an 8-bit image begins at black, with all bits set to “0”, and runs all the way to white, with all bits set to “1” - 256 different shades from end to end.
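If you’d like to see that arithmetic for yourself, a couple of lines of Python make the point; the bit patterns shown are just the standard binary representation of the darkest and lightest 8-bit values:

```python
# Each bit is either a 0 or a 1, so 8 bits give 2**8 unique combinations.
print(2 ** 8)              # 256 possible tones per pixel
print(format(0, '08b'))    # 00000000 -> all bits "0": black
print(format(255, '08b'))  # 11111111 -> all bits "1": white
```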

With 256 tones from black to white, we perceive continuous tone. With noticeably fewer than that, we would see bands of distinct tones, a phenomenon called posterization.

If you shoot JPG images, they will be saved as 8-bit files; the JPG file format does not support 16-bit data. If you instead shoot RAW files, the image data is saved at a bit depth greater than 8 bits, although RAW files are not all true 16-bit files. The digital processor inside a DSLR generates RAW data at 10, 12, or 14 bits. These files are converted to 16-bit files by programs like Lightroom, Adobe Camera Raw, or the manufacturers’ proprietary RAW converters.
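As a quick sketch of how the number of available tonal levels grows with bit depth, the figures below are simply 2 raised to the number of bits per channel:

```python
# Distinct tonal values per channel at each common bit depth.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits} bits -> {2 ** bits} levels")
# 8 bits -> 256 ... 14 bits -> 16384, 16 bits -> 65536
```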

So now comes the question: if my lab prints my images on 8-bit printers, why mess with 16-bit files? That’s a good question, and the answer is not intuitively obvious.

You shoot RAW to take advantage of all the additional data collected by the camera. If you convert the file to an 8-bit file, you’re throwing away much of that data. If all you do is prepare images for the Web, then 8 bits is plenty of information, but you’ll want to process in 16 bits for just about any other scenario.

Whenever you make a tonal adjustment, you stretch or compress different parts of the histogram. The more you adjust, the greater the risk of gaps in your tonal range that become visible. By processing 16-bit files, you maintain all the information necessary to ensure quality in all portions of the histogram. Jeff Schewe and Bruce Fraser, in their book Real World Camera Raw, refer to this as “editing headroom”. They also point out that “making a [color space] conversion on a 16-bit image can often avoid problems such as banding in skies or posterization in shadows that suddenly appear after an 8-bit conversion.”
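To see that editing headroom in action, here is a small Python sketch (using NumPy) that simulates a crude round trip of heavy tonal edits - darkening a full gray ramp by half, then brightening it back - once at 8-bit precision and once at 16-bit precision. The halve-then-double edit is a simplified stand-in for real curve adjustments, chosen only for illustration:

```python
import numpy as np

def round_trip(levels):
    """Darken a full gray ramp by half, store it at `levels` steps,
    brighten it back, and count how many distinct 8-bit tones survive."""
    ramp = np.linspace(0.0, 1.0, 256)                            # every 8-bit tone, as 0..1
    darker = np.round(ramp * 0.5 * (levels - 1)) / (levels - 1)  # quantize at the working bit depth
    brighter = np.clip(darker * 2.0, 0.0, 1.0)
    return len(np.unique(np.round(brighter * 255)))

print(round_trip(2 ** 8))   # ~129 tones survive -> gaps in the histogram
print(round_trip(2 ** 16))  # 256 tones survive  -> smooth histogram
```

Editing at 8 bits leaves only about half of the original 256 tones, which shows up as gaps (a comb-shaped histogram); editing at 16 bits and converting down at the very end preserves every tone.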

Finally, it should be pointed out that the greater bit depth of a 16-bit file does not extend the color gamut of an image, nor does it extend the dynamic range. Essentially, it gives you more slices of information within those same ranges.
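One way to picture “more slices, same range”: both bit depths run from the same black to the same white; only the step between adjacent tones changes. A tiny sketch, normalizing both scales to 0.0–1.0:

```python
# Both bit depths span the same black-to-white range (0.0 to 1.0 here);
# the extra bits only make the step between adjacent tones smaller.
print(1 / (2 ** 8 - 1))    # ~0.0039    step size at 8 bits
print(1 / (2 ** 16 - 1))   # ~0.000015  step size at 16 bits
```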

As with many things in photography, file bit depth is a double-edged sword: 16-bit files are considerably larger, but they carry a much greater amount of data.
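For a rough sense of the size difference, here is a back-of-the-envelope calculation for an uncompressed 24-megapixel RGB image (the sensor dimensions are a hypothetical figure chosen only for illustration; real file sizes also depend on compression and metadata):

```python
pixels = 6000 * 4000                 # a hypothetical 24-megapixel sensor
channels = 3                         # R, G, B

size_8bit = pixels * channels * 1    # 1 byte per channel
size_16bit = pixels * channels * 2   # 2 bytes per channel

print(f"{size_8bit / 2**20:.0f} MB")   # ~69 MB uncompressed at 8 bits
print(f"{size_16bit / 2**20:.0f} MB")  # ~137 MB uncompressed at 16 bits
```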

John Paul Caponigro has said that “creating 16-bit files is largely about generating the best 8-bit data”. But our industry is changing rapidly, and we are now seeing printers and monitors that take advantage of information beyond 8 bits. Maintaining a 16-bit workflow also gives you much greater latitude for making changes to tone and color. When you shoot RAW images, your camera provides you with a tremendous amount of information. Don’t throw it away.