Explained: What pixel binning means for smartphone photography

So what’s the problem? Why hasn’t pixel binning been more common?

For decades, astrophotographers using dedicated astroimaging cameras have been able to switch pixel binning on and off in their camera or software settings.

So why didn’t consumer cameras offer pixel binning until recently?

The answer is that astroimaging cameras are traditionally monochrome, while consumer cameras capture color – and pixel binning becomes more complicated with color cameras because of the way they record color data.

Traditionally, astrophotographers would create a color image by combining images taken with a red filter, a green filter, and a blue filter. But since a filter wheel isn’t exactly a practical accessory for an everyday camera, most cameras capture color data by incorporating a pattern of red, green, and blue color filters that cover each photosite (or pixel) on the image sensor.

In other words, all color cameras are monochrome cameras underneath their filter arrays.

Until recently, almost all color digital cameras used alternating red/green and blue/green rows that form a checkerboard-like pattern called a Bayer array, named after Bryce Bayer, who invented it while working for Kodak in the 1970s. (A notable exception is Fujifilm’s X-Trans sensor, used in many of its X-series mirrorless cameras, which uses an unconventional 6×6 pattern.)

In a Bayer array, each 2×2 pixel group contains two green pixels, located diagonally to each other, plus one red pixel and one blue pixel. To determine the color at a particular point in the image, the camera’s internal processor samples the surrounding pixels to interpolate full RGB color data for each pixel, in a process called demosaicing.
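To make that concrete, here is a minimal sketch in Python with NumPy, assuming the common RGGB tile layout and made-up sample values. It builds a tiny Bayer mosaic and runs a deliberately naive demosaic that fills each 2×2 tile from its own red, green, and blue samples; real cameras use far more sophisticated interpolation.

```python
import numpy as np

# Hypothetical 4x4 RGGB Bayer mosaic: each sensor pixel records only one channel.
# Rows alternate R G R G / G B G B (the classic 2x2 RGGB tile).
bayer = np.array([
    [ 10, 200,  12, 198],   # R G R G
    [201,  55, 199,  57],   # G B G B
    [ 11, 202,  13, 197],   # R G R G
    [200,  56, 198,  58],   # G B G B
], dtype=float)

def demosaic_nearest(mosaic):
    """Very naive demosaic: fill each 2x2 tile with the single red, single blue,
    and averaged green samples it contains."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            tile = mosaic[y:y+2, x:x+2]
            r = tile[0, 0]                      # top-left is red in an RGGB tile
            g = (tile[0, 1] + tile[1, 0]) / 2   # the two diagonal greens
            b = tile[1, 1]                      # bottom-right is blue
            rgb[y:y+2, x:x+2] = [r, g, b]
    return rgb

print(demosaic_nearest(bayer)[0, 0])  # top-left pixel -> roughly [10, 200.5, 55]
```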

But a Bayer array makes pixel binning much more complicated. The obvious approach – summing the data from each 2×2 group (one red, two green, and one blue pixel) into a single value – discards the color information and produces a monochrome image. A more complex algorithm could bin each color separately, but that introduces artifacts and significantly reduces the gain in sensitivity.
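As a rough illustration of why that is, the snippet below (using the same hypothetical RGGB mosaic as above) sums every 2×2 block regardless of filter color; each output value mixes one red, two green, and one blue sample, so the result is effectively a monochrome image.

```python
import numpy as np

# Same hypothetical RGGB mosaic as in the earlier sketch.
bayer = np.array([
    [ 10, 200,  12, 198],
    [201,  55, 199,  57],
    [ 11, 202,  13, 197],
    [200,  56, 198,  58],
], dtype=float)

def naive_bin_2x2(mosaic):
    """Sum every 2x2 block into one value, ignoring the filter colors.
    On a Bayer sensor each block mixes R + 2G + B, so color is lost."""
    h, w = mosaic.shape
    return mosaic.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

print(naive_bin_2x2(bayer))
# Each output value is R + G + G + B from one tile: a luminance-like,
# monochrome image that no demosaicer can recover color from.
```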

To allow color cameras to bin pixels more effectively, camera sensor manufacturers have rearranged the filter grid into a pattern called a quad-Bayer (or quad-pixel, in Apple’s preferred terminology) array. In this pattern, each 2×2 group shares the same color, forming a larger Bayer pattern with four pixels in each superpixel.
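A brief sketch of why this helps, again with made-up values: on a hypothetical quad-Bayer layout, averaging each same-color 2×2 block yields an ordinary (lower-resolution) Bayer mosaic that the existing demosaicing pipeline can process as usual.

```python
import numpy as np

# Hypothetical 4x4 quad-Bayer mosaic: each 2x2 block shares one filter color,
# and the blocks themselves form a larger RGGB pattern.
quad = np.array([
    [ 10,  12, 200, 198],   # R R G G
    [ 11,  13, 201, 199],   # R R G G
    [202, 200,  55,  57],   # G G B B
    [198, 197,  56,  58],   # G G B B
], dtype=float)

def bin_quad_bayer(mosaic):
    """Average each same-color 2x2 block into one 'superpixel'.
    The result is a standard Bayer mosaic at a quarter of the resolution."""
    h, w = mosaic.shape
    return mosaic.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(bin_quad_bayer(quad))
# [[ 11.5  199.5 ]    <- R  G
#  [199.25  56.5 ]]   <- G  B : an ordinary 2x2 RGGB tile
```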
