RESOLVING RESOLUTION AND COLOR DEPTH

© 2010 Dale Marks

A pixel-count war is raging between still photography camera manufacturers in which each is trying to out-do the other in terms of "quantity" of resolution. The battle has spilled into the digital cinematography arena with hyper-HD cameras, such as the Thomson Viper, the Dalsa Origin, the Arri Alexa, the Vision Research Phantom line, the Elphel 353 and the Red One. Sufficient resolution in a camera sensor is vital for creating sharp images, but, of course, boosting resolution demands greater bandwidth. Hyper-HD strains the resources currently available to many independent productions, in spite of the widespread push for more pixels. Consequently, the minimum resolution/sharpness needed for digital cinematography has recently been the subject of substantial argument. One important side effect of resolution that often gets neglected in these debates is the fact that resolution strongly influences color depth.
COLOR DEPTH = RESOLUTION x BIT DEPTH

Color depth is the number of possible colors in an image. Here's how resolution affects color depth: the higher the resolution, the greater the color depth. This basic relationship between resolution and color depth holds true for all photographic media (digital image sensors, film emulsion, print emulsion, printing, video monitors, etc.). With digital imaging, the "bit depth" also dramatically affects color depth. Bit depth is the possible number of shades from each pixel. In a color camera sensor, a single pixel is normally either red or green or blue. Three adjacent pixels of red, green and blue comprise an "RGB cell."† Because most people cannot discern these individual pixels at normal viewing distances, the differing shades of the red, green and blue pixels actually blend within the viewer's eye, to form the single color of each RGB cell. Bit depth is usually expressed as an exponent of "2." For example, a color camera sensor with a bit depth of 10 bits allows for 2^10 (or 1024) possible shades per pixel (red or green or blue). A 10-bit camera sensor is also referred to as having 10 bits per "color channel" (red or green or blue). Keep in mind that bit depth and color depth are actually two different properties. For the most part, color depth is a product of both bit depth and resolution.
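The arithmetic above can be sketched in a few lines of Python (the function names are illustrative, not from any camera API): shades per channel follow from the bit depth, and cubing them gives the colors available from one RGB cell.

```python
def shades_per_pixel(bit_depth: int) -> int:
    """Number of possible shades for one pixel (one color channel)."""
    return 2 ** bit_depth

def colors_per_rgb_cell(bit_depth: int) -> int:
    """Every R/G/B shade combination is a distinct color, so cube the shades."""
    return shades_per_pixel(bit_depth) ** 3

print(shades_per_pixel(10))     # -> 1024
print(colors_per_rgb_cell(10))  # -> 1073741824
```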
MORE RGB CELLS IN A GIVEN AREA MEANS MORE COLORS

Imagine that we have two sensors with identical dimensions. However, one of the sensors has a higher resolution than the other -- it has squeezed two RGB cells into the same space in which the other one contains only one RGB cell. So, for every single RGB cell in the lower resolution sensor, we have two adjacent RGB cells blending to form a color in the higher resolution sensor. Now, suppose that two adjacent cells in the higher res sensor have RGB shading levels (R:G:B) of, say, 22:0:0 and 23:0:0. These two cells will blend to form a color with an "in-between" RGB shading of 22.5:0:0. The lower res sensor can't produce this in-between color in the same space (in which it has just one RGB cell) -- it can only give either 22:0:0 or 23:0:0. Of course, the pair of adjacent cells in the higher res sensor can still make the 22:0:0 and 23:0:0 colors, so the in-between shade is an extra color. Let's see what happens if we add one more cell to each group of two RGB cells on the higher res sensor. Now, we are blending three RGB cells in the same space in which the low res sensor has only one. Adjacent cells of 22:0:0 and 22:0:0 and 23:0:0 blend to make the RGB color 22.333:0:0, while cells of 22:0:0 and 23:0:0 and 23:0:0 blend to make 22.666:0:0. We have two extra in-between shades per cell group, compared to none with the single cells of the low res sensor. Thus, as the resolution rises, so does the number of these possible in-between colors. More possible colors equals greater color depth. Note that these examples are oversimplified in that we are only blending shades in a single color channel (the red channel). Once we start adding shades from the blue and green channels, the number of new, in-between colors jumps dramatically.
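The blending in these examples can be checked numerically. The sketch below (helper name is illustrative) enumerates every distinct average a group of adjacent cells can produce from the two red-channel shades 22 and 23, reproducing the in-between values above:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def distinct_blends(shades, group_size):
    """All distinct colors the eye can see when `group_size` adjacent
    cells, each holding one of `shades`, blend (average) together."""
    return sorted({Fraction(sum(combo), group_size)
                   for combo in combinations_with_replacement(shades, group_size)})

# One cell: only the original two shades.
print([float(b) for b in distinct_blends([22, 23], 1)])  # 22 and 23
# Two cells blending: one extra in-between shade (22.5).
print([float(b) for b in distinct_blends([22, 23], 2)])
# Three cells blending: two extra in-between shades (22.33..., 22.66...).
print([float(b) for b in distinct_blends([22, 23], 3)])
```

As the group size grows, the count of distinct blends grows with it, which is exactly the resolution-to-color-depth effect the article describes.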
HOW MUCH DOES RESOLUTION INFLUENCE COLOR DEPTH?

This increase in the number of possible colors can be calculated with precision, given the resolution and bit depth. The math is fairly easy, but explaining it is a little involved. For the sake of brevity, a detailed breakdown of the calculations is included separately, below. Suffice it to say, it only takes a small boost in the resolution of a 10-bit camera to yield a dramatic expansion in color depth. Raising the resolution by a factor of only four produces color depth that is 64 times greater. Furthermore, if the increments between pixel shades are uneven rather than uniform, or if the imaging system is analog, a four-times increase in resolution can produce an astronomical increase in color depth. So, resolution can very strongly influence color depth. By the way, color depth is not diminished by optical softness in the capture or viewing of an image. In fact, the color depth might actually increase if the viewing system introduces softness, because it would cause more color blending.
INCREASE RESOLUTION FOR THE SAKE OF COLOR DEPTH?

In practice, the 64-times increase in color depth mentioned above is more subtle than the numbers suggest. This subtlety is due to the fact that a 10-bit camera already starts out with a lot of color depth -- 1,073,741,824 possible colors per RGB cell (see math section, below). So, the question is whether or not it is worthwhile and wise to lift resolution just to obtain this perceptually slight improvement in color depth. The answer is highly subjective, of course, but there are some basic facts to consider. A 4-times increase in the resolution of an HD sensor camera is practical and easily achievable; image sensors with four times the resolution of full HD first appeared several years ago. Unfortunately, sobering trade-offs accompany such an easy hike in resolution: daunting bandwidth demands; reduced dynamic range; and reduced sensitivity. Increasing the resolution by a factor of four quadruples both the bit rate and the video file size. Smaller pixels suffer from a shrunken dynamic range (and higher noise per pixel), and smaller pixel areas catch fewer photons, which results in less sensitivity. Naturally, the bandwidth also swells if we just boost the bit depth to get greater color depth, which is why we usually use only a 10-bit bandwidth from sensors capable of 14-bit depth. However, a higher bit depth does not suffer a reduced dynamic range nor extra noise nor lower sensitivity. Incidentally, this 14-bit-to-10-bit conversion is more efficient if we select and keep the most "desired" shades from the 14-bit range. The "raw" shading information that comes from each pixel in an image sensor is usually evenly distributed on a linear scale, in which the increments between shades are consistent and uniform.
If we redistribute this linear, 14-bit shading information in a logarithmic scheme and fit it into a linear 10-bit range, our 10-bit signal retains a greater number of the useful shades in the darker shadow areas that were present in the 14-bit signal. This transformation technically doesn't give more bit depth than 10 bits, but it does provide more valuable image information where it is needed, which allows a more forgiving dynamic range and greater color-correction flexibility. On the other hand, higher resolution can mean fewer moiré problems, which can reduce the need for low-pass filtering. Having the 1,073,741,824 possible colors from a 10-bit camera is probably more than sufficient for most cinematography, especially if the pixel shades are requantized logarithmically from a 14-bit sensor. A lot of big films and TV shows have been shot with such a setup, and nobody complained about the color. With the current state of imaging technology, it is probably more advantageous to have the freedom that comes from shooting/editing at the lower bandwidth of standard HD. Furthermore, the larger pixels add the benefits of extra sensitivity, wider dynamic range and higher signal-to-noise ratio. In all honesty, if one can't create compelling images with 1,073,741,824 possible colors, perhaps it would be wise to sell one's equipment now, while the prices are still high!
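To make the linear-to-log requantization concrete, here is a toy sketch. It uses a pure logarithmic curve, which is NOT any real camera's transfer function (actual cameras use curves such as log with a black offset); it only illustrates how a log mapping spends far more of the 10-bit output codes on the dark end of the 14-bit linear range.

```python
import math

def linear14_to_log10(x14: int, black: int = 1) -> int:
    """Toy mapping of a linear 14-bit code (0..16383) onto a 10-bit
    (0..1023) logarithmic scale. `black` is an assumed floor that
    keeps log() defined near zero -- a simplification, not a real spec."""
    lo, hi = math.log(black), math.log(16383)
    x = max(x14, black)  # clamp codes below the floor
    return round((math.log(x) - lo) / (hi - lo) * 1023)

print(linear14_to_log10(0))      # -> 0 (clamped to the black floor)
print(linear14_to_log10(16383))  # -> 1023
# The darkest 1/16 of the linear codes (0..1023) occupies well over
# half of the 10-bit output range under this toy curve:
print(linear14_to_log10(1023))
```

The point of the exercise: shadow shades that a straight linear truncation would merge into a handful of 10-bit codes each keep their own code, which is where the extra grading latitude comes from.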
† Most camera sensors currently utilize a Bayer filter pattern, in which an RGB cell contains one red pixel, one blue pixel and two green pixels.
+ + + + +

THE COLOR DEPTH MATH

Before we proceed, let us determine the number of colors possible from a 10-bit RGB cell, just so we have a reference. Each red, green or blue pixel in a 10-bit camera has 2^10 (or 1024) possible shades. In an RGB cell, every combination of pixel shades is unique (no redundancy). So, to find the total number of available colors from one RGB cell, we simply cube the number of shades possible from each pixel:

2^10 x 2^10 x 2^10 = 2^30 = 1,073,741,824

This number is the color depth of a single, 10-bit RGB cell. If we want to squeeze multiple RGB cells into the same space as a single RGB cell, finding the resulting color depth is more complex. The variable that contributes the most to this complexity is the number of unique pixel shade intervals. With linear, uniform shading intervals, the equation is rather simple:

C = (B x R)^N

"C" is the color depth of a single RGB cell group. "B" is the number of shades per pixel (2 raised to the bit depth). "R" is the resolution increase (the number of cells within the given group). "N" is the number of pixels within a single cell (which is always 3 for an RGB cell). Here's what we get if we enter the numbers for a group of four 10-bit RGB cells:

(2^10 x 4)^3 = (1024 x 4)^3 = 4096^3 = 68,719,476,736

So, by increasing the resolution by four times on a 10-bit sensor, we now have 68,719,476,736 possible colors where we once had 1,073,741,824 possible colors. That is a 64-times increase in color depth from only a 4-times increase in resolution. Again, this simple formula applies only to "linear" digital imaging systems that have uniform increments between pixel shades. If the shading intervals are uneven, the formula becomes more complex and the potential color depth increase can be astronomical. However, there do not seem to be any digital imaging systems in which the shading intervals are intentionally uneven.
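The C = (B x R)^N formula is easy to verify directly. A minimal sketch (function name is illustrative), using the article's own numbers:

```python
def color_depth(bit_depth: int, resolution_factor: int, channels: int = 3) -> int:
    """C = (B x R)^N, valid for linear, uniform shade intervals.
    B = 2**bit_depth shades per pixel, R = cells per group,
    N = pixels per RGB cell (3)."""
    shades = 2 ** bit_depth          # B
    return (shades * resolution_factor) ** channels

base = color_depth(10, 1)  # one 10-bit RGB cell
quad = color_depth(10, 4)  # four 10-bit RGB cells in the same space
print(base)        # -> 1073741824
print(quad)        # -> 68719476736
print(quad // base)  # -> 64
```

Note that the 64x ratio falls out algebraically: quadrupling R inside the cube multiplies C by 4^3 = 64, regardless of the bit depth.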
The logarithmic shading system (mentioned earlier) does not qualify as an uneven shading scheme, because shades from the 14-bit signal are merely mapped to new positions on a linear, 10-bit scale -- the interval between adjacent shades remains identical throughout the 10-bit linear scale.
Please send comments to: imaging "at" marks "." org