In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all-points-addressable display device; it is therefore the smallest controllable element of a picture represented on the screen. The address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a two-dimensional grid and are often represented using dots or squares, whereas CRT pixels are determined by the display's timing and beam sweep rates rather than by fixed physical elements.

Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color image systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.

In some contexts (such as descriptions of camera sensors), the term pixel is used to refer to a single scalar element of a multi-component representation (more precisely called a photosite in the camera sensor context, although the neologism sensel is sometimes used to describe the elements of a digital camera's sensor), while in others the term may refer to the entire set of such component intensities for a spatial position. In color systems that use chroma subsampling, the multi-component concept of a pixel can become difficult to apply, since the intensity measures for the different color components correspond to different spatial areas in such a representation.

The word pixel is based on a contraction of pix ("pictures") and el (for "element"); similar formations with el for "element" include the words voxel and texel.


Sampling patterns

For convenience, pixels are normally arranged in a regular two-dimensional grid. By using this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently. Other arrangements of pixels are possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image. For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another.
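
As a minimal sketch of this uniformity, assuming NumPy as the array library and a synthetic grayscale image, the same operation can be applied independently at every grid site:

    import numpy as np

    # A toy 8-bit grayscale image on a regular two-dimensional grid.
    image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

    # Because the grid is regular, a per-pixel operation such as
    # brightness inversion is applied uniformly and independently,
    # with no reference to neighboring pixels.
    inverted = 255 - image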


Examples of different sampling patterns include:

  • LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. Subpixel rendering is a technology which takes advantage of these differences to improve the rendering of text on LCD screens.
  • The vast majority of color digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid.
  • A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy.
  • Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space.
  • The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit.
  • Pixels on computer monitors are normally "square" (that is, having equal horizontal and vertical sampling pitch); pixels in other systems are often "rectangular" (that is, having unequal horizontal and vertical sampling pitch, making them oblong in shape), as are digital video formats with diverse aspect ratios, such as the anamorphic widescreen formats of the Rec. 601 digital video standard (a worked example follows this list).
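
As a worked example of the square versus rectangular distinction, the pixel aspect ratio (PAR) a stored frame needs in order to fill a given display aspect ratio (DAR) is PAR = DAR ÷ (width/height). The figures below use the common simplification that treats the full 720-sample Rec. 601 line as the visible picture:

    from fractions import Fraction

    def pixel_aspect_ratio(width, height, display_aspect):
        # PAR = display aspect ratio divided by the storage aspect ratio.
        return display_aspect / Fraction(width, height)

    # A 625-line Rec. 601 frame (720 x 576) shown on a 4:3 display
    # needs oblong pixels slightly wider than tall:
    print(pixel_aspect_ratio(720, 576, Fraction(4, 3)))   # 16/15

    # The same frame stretched anamorphically to a 16:9 display:
    print(pixel_aspect_ratio(720, 576, Fraction(16, 9)))  # 64/45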

Resolution of computer monitors

Computers can use pixels to display an image, often an abstract image that represents a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer. LCD monitors also use pixels to display an image and have a native resolution; each pixel is made up of triads, and the number of these triads determines the native resolution. On some CRT monitors, the beam sweep rate may be fixed, resulting in a fixed native resolution, but most CRT monitors have no fixed beam sweep rate and thus no native resolution at all; instead, they support a range of resolutions equally well. To produce the sharpest images possible on an LCD, the user must ensure that the display resolution of the computer matches the native resolution of the monitor.

Resolution of telescopes

The pixel scale used in astronomy is the angular distance between two objects on the sky that fall one pixel apart on the detector (CCD or infrared chip). The scale s measured in radians is the ratio of the pixel spacing p to the focal length f of the preceding optics, s = p/f. (The focal length is the product of the focal ratio and the diameter of the associated lens or mirror.) Because s is usually expressed in units of arcseconds per pixel, because 1 radian equals (180/π) × 3600 ≈ 206,265 arcseconds, and because focal lengths are often given in millimeters while pixel sizes are given in micrometers (which yields another factor of 1,000), the formula is often quoted as s = 206 p/f.
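
As a numeric sketch of this unit bookkeeping (the pixel pitch and focal length below are hypothetical illustration values, not tied to any particular instrument):

    import math

    ARCSEC_PER_RADIAN = math.degrees(1) * 3600   # (180/pi) * 3600 ≈ 206,265

    def pixel_scale_arcsec(pitch_um, focal_length_mm):
        # s = p / f in radians, converted to arcseconds per pixel.
        # The micrometer/millimeter mismatch supplies the factor of 1,000
        # behind the often-quoted s = 206 p / f.
        return ARCSEC_PER_RADIAN * (pitch_um * 1e-6) / (focal_length_mm * 1e-3)

    print(pixel_scale_arcsec(9.0, 2000.0))   # ≈ 0.93 arcsec per pixel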

Bits per pixel

The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses one bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:

  • 1 bpp, 2¹ = 2 colors (monochrome)
  • 2 bpp, 2² = 4 colors
  • 3 bpp, 2³ = 8 colors
  • 8 bpp, 2⁸ = 256 colors
  • 16 bpp, 2¹⁶ = 65,536 colors ("Highcolor")
  • 24 bpp, 2²⁴ = 16,777,216 colors ("Truecolor")

For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits each for red and blue and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each for red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).
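
As a minimal sketch of the 16 bpp highcolor layout just described (pack_rgb565 is an illustrative name, not a standard API):

    def pack_rgb565(r, g, b):
        # Pack 8-bit components into 16 bits: 5 bits red, 6 bits green,
        # 5 bits blue. Green keeps the extra bit because the eye is most
        # sensitive to errors in green.
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

    print(hex(pack_rgb565(255, 0, 0)))       # 0xf800: pure red
    print(hex(pack_rgb565(255, 255, 255)))   # 0xffff: white
    # Each added bit per pixel doubles the palette: 2**16 == 65536 colors.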



Subpixels

Many display and image-acquisition systems are, for various reasons, not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance. In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels. For example, LCDs typically divide each pixel horizontally into three subpixels. When the square pixel is divided into three subpixels, each subpixel is necessarily rectangular. In display industry terminology, subpixels are often referred to as pixels, since from a hardware standpoint they are the basic addressable elements, and the corresponding driver circuitry is accordingly called pixel circuits rather than subpixel circuits. Most digital camera image sensors also use single-color sensor regions, for example using the Bayer filter pattern, and in the camera industry these are likewise known as pixels, not subpixels.

For systems with subpixels, two different approaches can be taken:

  • The subpixels can be ignored, with full-color pixels being treated as the smallest addressable imaging element; or
  • The subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases.

This latter approach, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately, producing a slight increase in the apparent resolution of color displays. CRT displays also use red, green, and blue phosphor areas, whose positions are dictated by a mesh grid called the shadow mask; however, aligning these with the displayed pixel raster would require a difficult calibration step, so CRTs do not currently use subpixel rendering.

The concept of subpixels is related to samples.



Megapixels

A megapixel (MP) is one million pixels, and is a term used not only for the number of pixels in an image, but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera that makes a 2048 × 1536 pixel image typically uses a few extra rows and columns of sensor elements and is commonly said to have "3.2 megapixels" or "3.4 megapixels", depending on whether the number reported is the "effective" or the "total" pixel count (as opposed to the 2048 × 1536 = 3,145,728 finished image pixels).
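
The arithmetic behind these figures, as a quick check (the "effective" and "total" counts come from the manufacturer and are not derivable from the finished image size):

    def megapixels(width, height):
        return width * height / 1e6

    print(megapixels(2048, 1536))   # 3.145728: the finished image's ~3.1 MP
    # The advertised "3.2" (effective) or "3.4" (total) megapixels also
    # count sensor rows and columns that never reach the finished file.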

Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement, so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though each records only one channel (red, green, or blue) of the final color image. Thus, two of the three color channels for each sensor element must be interpolated, and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. As a result, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement).
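
A deliberately simplified demosaicing sketch, assuming NumPy, an RGGB Bayer phase, and nearest-neighbor interpolation; production cameras use more sophisticated (for example bilinear or edge-aware) interpolation:

    import numpy as np

    def demosaic_nearest(raw):
        # raw: 2D (H, W) Bayer mosaic with even H and W, RGGB phase:
        # red at even/even sites, blue at odd/odd, green elsewhere.
        r = raw[0::2, 0::2]   # red samples
        g = raw[0::2, 1::2]   # one of the two greens per 2x2 cell
        b = raw[1::2, 1::2]   # blue samples
        # Replicate each cell's single sample over the whole 2x2 cell,
        # i.e. nearest-neighbor interpolation of the two missing channels.
        up = lambda c: np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)
        return np.dstack([up(r), up(g), up(b)])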

DxO Labs invented the Perceptual MegaPixel (P-MPix) to measure the sharpness that a camera produces when paired with a particular lens, as opposed to the MP a manufacturer states for a camera product, which is based only on the camera's sensor. The new P-MPix has been praised as a more accurate and relevant value for photographers to consider when weighing up camera sharpness. As of mid-2013, the Sigma 35mm F1.4 DG HSM mounted on a Nikon D800 had the highest measured P-MPix. However, at a value of 23 MP, this still discards more than one-third of the D800's 36.3 MP sensor.

A camera with a full-frame image sensor and a camera with an APS-C image sensor may have the same pixel count (for example, 16 MP), but the full-frame camera may allow better dynamic range, less noise, and improved low-light performance than an APS-C camera. This is because the full-frame camera has a larger image sensor than the APS-C camera, so each pixel is larger and can capture more light. A full-frame camera that shoots photographs at 36 megapixels has roughly the same pixel size as an APS-C camera that shoots at 16 megapixels.
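
A rough pitch comparison under representative geometries (the sensor widths and pixel counts below are typical assumed values, not the specifications of any particular camera):

    def pixel_pitch_um(sensor_width_mm, pixels_across):
        # Approximate pixel pitch: sensor width divided by horizontal pixels.
        return sensor_width_mm / pixels_across * 1000.0

    print(pixel_pitch_um(36.0, 7360))   # 36 MP full frame: ≈ 4.9 um
    print(pixel_pitch_um(23.6, 4928))   # 16 MP APS-C: ≈ 4.8 um, about the same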
