Introduction

Over the years, the electronics market has gone through many changes. Digital technologies are rapidly replacing their older analog counterparts, and digital cameras are becoming more and more popular with consumers. These cameras no longer consume the film that traditional cameras once used; instead, they convert analog light information into digital data. Like a conventional camera, a digital camera still has a series of lenses that focus the collected light to create an image, but how that light is processed is fundamentally different. This technical report explains how a digital camera works.

Design

A digital camera does not require the mechanical or chemical processing that an analog camera uses to form an image; every digital camera contains a processing computer. When a picture is taken, a series of lenses focuses the light onto a semiconductor device that records the light intensity electronically. The information is then processed to form the tiny colored dots, called pixels, that make up the image. Depending on the camera, images can be stored either on a removable memory card or in built-in memory.

Most digital cameras look fairly similar to traditional film cameras from the outside. Just like the older generation, a digital camera may have several buttons that perform different functions, a flash, and a lens. For many digital cameras, the most obvious difference is the LCD screen: most models on the market today have a built-in LCD screen that lets the user view photos immediately. Despite the slightly different appearance, what truly sets a digital camera apart lies inside the machine.

Inside a Digital Camera

A digital camera may have the same set of lenses found on a conventional camera; their purpose is to collect and focus light when a picture is taken.
Instead of using film, a digital camera has a digital sensor that converts light into electrical charges. The most common type of image sensor is the charge-coupled device (CCD). The sensor transports the charge readings across the chip to an analog-to-digital converter (ADC), where the analog values are turned into digital numbers. A less common type of sensor, the complementary metal-oxide semiconductor (CMOS), uses several transistors at each colored dot (pixel) to move the charge over traditional wires. Since the charges are already transferred as electrical signals, a CMOS sensor does not require an analog-to-digital converter. CCDs are much more commonly used in digital cameras because they create better-quality images and the technology is more mature. Nevertheless, both CCD and CMOS sensors serve the same general purpose: to turn light into electrical signals (Tracy, 2006).

Image Quality and Pixels

The image quality of a digital camera is often specified as a number of megapixels. A pixel is the unit of resolution, representing the amount of detail a camera can capture; each pixel appears as a tiny colored dot in an image. A typical camera on a cellular phone is one megapixel, which can produce a highest resolution of 1216 × 912. In contrast, some high-end cameras can produce images with over 12 million pixels (Tracy, 2006). The digital image on the right shows a low number of pixels: the man's face is formed from individual square pixels, each with its own color.

Color

The colors in a photo are actually all made up of three primary colors: red, green, and blue. The light sensor mentioned earlier can only record the intensity of the light it collects. To obtain the correct colors, most cameras use filtering to determine the intensity of light in each of the three primary colors. A filter for any one of the colors blocks the other two while letting its own color pass through (Tracy, 2006). Since the sensor only needs to collect visible light, there is also a filter that blocks invisible infrared light in order to properly balance the three primary colors. This filter removes most infrared light before the light reaches the color filter and the sensor (Light, July 5, 2006).
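The megapixel figure quoted above is simply the pixel count of the image. Checking the report's numbers for a one-megapixel phone camera:

```python
# Pixel count of the 1216 x 912 resolution quoted for a
# one-megapixel camera.
width, height = 1216, 912
pixels = width * height
print(pixels)                 # 1108992
print(pixels / 1_000_000)     # about 1.1 megapixels
```

So a "one-megapixel" camera in fact records roughly 1.1 million pixels, which is consistent with the quoted resolution.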
The photo on the left has only visible light passing through the filter. The one on the right was taken without an infrared filter, causing the photo to look reddish because a greater amount of infrared light passes through the red filter (Example, July 5, 2006).

There are three methods of color filtering: the beam splitter, rotation, and color filter array methods.

Beam Splitter Method

Many high-quality cameras use three separate sensors, each of which collects one of the three primary colors. When a picture is taken, the beam splitter separates the light into red, green, and blue and directs each color to a different sensor. Each sensor creates an identical image in a different primary color, because its filter responds to only one of the three colors. The three images are then combined to create a final product with accurate colors (Tracy, 2006). Figure 6 below illustrates the beam splitter method.
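The recombination step of the beam splitter method can be sketched as a toy example. The pixel values below are invented for illustration, not taken from any real sensor; each "plane" stands for the single-color image recorded by one of the three sensors.

```python
# Hypothetical readings from the three sensors behind the beam splitter.
# Each sensor sees the same 2x2 scene but records only one primary color.
red_plane   = [[210, 180], [200, 190]]
green_plane = [[ 90, 100], [ 95,  85]]
blue_plane  = [[ 30,  40], [ 35,  25]]

# Recombine the three identically framed images into full-color pixels.
combined = [[(red_plane[r][c], green_plane[r][c], blue_plane[r][c])
             for c in range(2)] for r in range(2)]

print(combined[0][0])  # (210, 90, 30)
```

Because all three sensors image the same scene at the same instant, the merge is a simple per-pixel pairing; this is why the method handles moving subjects better than the rotation method described next.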
Rotation Method

Cameras using the rotation method have only a single sensor. Three primary-color filters rotate rapidly in front of it, each filtering its own color onto the sensor in turn. This method is less practical because the three colors are not recorded at precisely the same time: both the subject and the camera must remain stationary while the picture is taken to maintain accuracy (Tracy, 2006).
Color Filter Array Method

This method places a filter called a color filter array over the sensor. The overall filter is composed of tiny individual filters for the three primary colors. By letting only one color reach each pixel, the camera's processor can form an accurate guess of the overall color at each location (Tracy, 2006). Since a photo is made up of millions of tiny pixels, human eyes cannot see the pattern of the filter. The most common type of color filter array is the Bayer filter, which passes a single color to each pixel in the pattern shown in Figure 8 below (Tracy, 2006).
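What the color filter array does to the sensor data can be sketched in a few lines. The example below assumes one common Bayer layout (red and green on even rows, green and blue on odd rows) and an invented, uniformly colored 4 × 4 scene; a real camera would then interpolate neighboring pixels to fill in the two missing colors at each location.

```python
# A hypothetical 4x4 full-color scene: every pixel is (R, G, B).
image = [[(200, 120, 40) for _ in range(4)] for _ in range(4)]

def bayer_channel(row, col):
    """Which color the filter passes at this pixel (RGGB layout)."""
    if row % 2 == 0:
        return 0 if col % 2 == 0 else 1   # red, green
    return 1 if col % 2 == 0 else 2       # green, blue

# The sensor records only the filtered channel at each pixel,
# producing a single-value-per-pixel mosaic instead of full color.
mosaic = [[image[r][c][bayer_channel(r, c)] for c in range(4)]
          for r in range(4)]

print(mosaic[0][0], mosaic[0][1], mosaic[1][1])  # 200 120 40
```

The mosaic holds one intensity per pixel; reconstructing full color from it (demosaicing) is the "accurate guess" the processor makes from each pixel's neighbors.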
Exposure

A digital camera controls the amount of light that reaches the sensor. Most camera processors automatically determine the amount of light needed to create the best-quality image. Two components control the light that passes through the lenses.

Aperture

The aperture is a gateway that regulates the amount of light passing through by changing the size of its opening. It is located in front of the lens. Some cameras have manually adjustable apertures to give users more options (Tracy, 2006).

Shutter Speed

The shutter speed controls how long light is allowed to pass through the aperture. The digital shutter triggers the aperture to close after a set time has passed. Digital cameras usually have a default shutter speed that can also be changed manually (Tracy, 2006).
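The trade-off between the two exposure controls can be made concrete with the standard photographic exposure value, EV = log2(N² / t), where N is the f-number of the aperture and t the shutter time in seconds. EV is not mentioned in the report itself; it is used here only as a rough sketch of how aperture and shutter speed balance each other.

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Standard exposure value: equal EV means equal light admitted."""
    return math.log2(f_number ** 2 / shutter_seconds)

# A smaller aperture with a longer shutter time admits about the same
# light as a larger aperture with a shorter one.
print(round(exposure_value(8, 1 / 125), 2))    # about 12.97
print(round(exposure_value(5.6, 1 / 250), 2))  # about 12.94
```

This is the calculation a camera's processor effectively performs when it picks an aperture and shutter-speed pair automatically.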
Focus

It is also important to adjust the lens so that it focuses the light onto the sensor. Most digital cameras use autofocus to change the focal length automatically. When focusing, the lens moves to the position where it creates the sharpest image possible; different subject distances require the lens to sit at different distances from the sensor. There are two types of autofocus, active and passive, and some cameras use both methods to estimate the subject's distance as accurately as possible (Brown, 2000).

Active

An active autofocus system sends an infrared light signal toward the targeted object. The camera's computer measures the time it takes for the invisible light to bounce back and uses it to estimate the distance. Once the distance is known, the lens is automatically adjusted to the proper position. The advantage of this method is that it also works in the dark, since infrared light is invisible to human eyes and does not depend on ambient light. It can also be a disadvantage, however, because infrared light occurring naturally in the scene may confuse the camera (Brown, 2000).

Passive

Passive autofocus moves the lens back and forth until the processor finds the best position for the lens to focus. The camera's computer analyzes the contrast in the image to decide the best focal length. Since this method requires the computer to see the contrast of the image, it does not work well without a sufficient amount of light present (Brown, 2000).

Summary

Steps for taking a digital photo:
1. The optics board sets the proper exposure automatically.
2. The lens moves into its proper focus position.
3. Light passes through the lens.
4. A filter blocks out infrared light.
5. The red, green, and blue components of the light are filtered.
6. The filtered RGB light reaches the sensor.
7. The signal-processing board processes the input from the sensor.
8. An image is formed and stored on a memory card.
9. The LCD screen, if present, provides access to view the image.
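The distance estimate behind active autofocus, as described in the Focus section, follows directly from the round-trip time of the infrared signal: the pulse travels to the subject and back, so the one-way distance is half the trip. The timing value below is invented purely for illustration.

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_echo(round_trip_seconds):
    """Estimate subject distance from an infrared pulse's round trip,
    following the time-of-flight description in the report."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~20 nanoseconds puts the subject ~3 m away.
print(round(distance_from_echo(20e-9), 1))  # 3.0
```

The tiny time scales involved are why this measurement is left to dedicated hardware rather than done in software on the camera's main processor.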
Figures

Figure 1. Retrieved March 11, 2007, from Light inside a Digital Camera Web site: http://mvh.sr.unh.edu/mvhtools/light_in_camera.htm
Figure 2. Retrieved March 11, 2007, from Image Generation with MatLAB Web site: http://www.cns.atr.jp/~kmtn/imageMatlab
Figure 3. Retrieved March 11, 2007, from How a Digital Camera Works Web site: http://electronics.howstuffworks.com/digital-camera3.htm
Figure 4. Retrieved March 11, 2007, from Example of Digital Photographs with Different Proportions of Visible and Infrared Light Web site: http://mvh.sr.unh.edu/mvhtools/vis_ir_photo_examples.htm
Figure 5. Retrieved March 11, 2007, from Light inside a Digital Camera Web site: http://mvh.sr.unh.edu/mvhtools/light_in_camera.htm
Figure 6. Retrieved March 11, 2007, from How a Digital Camera Works Web site: http://electronics.howstuffworks.com/digital-camera3.htm
Figure 7. Retrieved March 11, 2007, from How a Digital Camera Works Web site: http://electronics.howstuffworks.com/digital-camera3.htm
Figure 8. Retrieved March 11, 2007, from How a Digital Camera Works Web site: http://electronics.howstuffworks.com/digital-camera4.htm
Figure 9. Retrieved March 11, 2007, from Understanding the Concept of Exposure Web site: http://herron.50megs.com/aperture.htm
Figure 10. Retrieved March 11, 2007, from Light inside a Digital Camera Web site: http://mvh.sr.unh.edu/mvhtools/light_in_camera.htm
Figure 11. Retrieved March 11, 2007, from Light inside a Digital Camera Web site: http://mvh.sr.unh.edu/mvhtools/light_in_camera.htm

References

Brown, G. (April 1, 2000). How Autofocus Cameras Work. Retrieved March 11, 2007, from HowStuffWorks Web site: http://www.howstuffworks.com/autofocus2.htm
Example of Digital Photographs with Different Proportions of Visible and Infrared Light. (July 5, 2006). Retrieved March 11, 2007, from Measuring Vegetation Health Web site: http://mvh.sr.unh.edu/mvhtools/vis_ir_photo_examples.htm
Light inside a Digital Camera. (July 5, 2006). Retrieved March 11, 2007, from Measuring Vegetation Health Web site: http://mvh.sr.unh.edu/mvhtools/light_in_camera.htm
Wilson, T. V., Nice, K., & Gurevich, G. (November 29, 2006). How a Digital Camera Works. Retrieved March 11, 2007, from HowStuffWorks Web site: http://electronics.howstuffworks.com/digital-camera.htm