Recently, we’ve been bombarded with smartphones sporting ever larger megapixel ratings. In the past, companies would “upscale” their sensors to seem more impressive, say from 8MP to 13MP, which never served any purpose other than confusing people into thinking they were getting something they weren’t.
Now, this is not what is happening here. Recently Samsung, Huawei and Sony have released or announced ultra-high-megapixel cameras designed for phones. As recently as the 12th of August, Samsung announced its ISOCELL Bright HMX sensor, a whopping 108MP sensor optimised for taking 27MP images. That second bit is what is causing some confusion, and is leading people to ask whether this is just upscaling and whether Samsung is lying. Well, I have good news and bad news for you. The good news is that they’re not lying, but the bad news is that you’re not going to want to take 108MP images with it.
First, a refresher on what a pixel is and how a camera sensor is usually designed. A pixel is something that can reproduce or capture light; in a camera sensor, it’s capturing light. A pixel is made up of 4 sub-pixels, usually Red, Green and Blue (which is where RGB comes from), since all colours can be digitally reproduced from different combinations of these three. They are arranged in a 2×2 grid of RGGB, because the human eye is very sensitive to green and we model camera sensors on our own physiology. Huawei recently shook things up with the SuperSpectrum sensor on its P30 Pro by replacing the green sub-pixels with yellow. This made it harder to reproduce other colours (a burden Huawei overcame very well), but yellow is even more sensitive to light than green, and the more light each pixel can take in, the brighter the overall image will be.
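That RGGB arrangement is easy to picture in code. Here’s a minimal sketch (my own illustration, not any vendor’s implementation) that generates the repeating 2×2 RGGB tile of a standard Bayer mosaic, where each photosite captures just one colour:

```python
# A minimal sketch of a standard RGGB Bayer mosaic: each photosite
# captures only one colour, arranged in repeating 2x2 RGGB tiles.
# Full-colour pixels are later interpolated from neighbouring sites.

def bayer_colour(row, col):
    """Return the colour filter over the photosite at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def mosaic(rows, cols):
    return [[bayer_colour(r, c) for c in range(cols)] for r in range(rows)]

for line in mosaic(4, 4):
    print(" ".join(line))
# R G R G
# G B G B
# R G R G
# G B G B
```

Note how half of all photosites are green, matching how sensitive our eyes are to that part of the spectrum.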
But there are other ways to create a brighter image. The most common one is simply to use a bigger sensor, so each pixel is larger, but in a smartphone space is already at a premium, so shoving a bigger sensor into a phone isn’t all that practical. What companies started doing instead was oversampling. This meant a phone would effectively quarter its resolution but double its light sensitivity: it would take the data from 4 pixels next to each other and create one larger “pixel”. The problem is that the sensor isn’t designed to plop four pixels together, so colours suffer, and there is only so much that software can do. Introducing: the quad Bayer filter.
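The oversampling step itself is simple. Here’s a rough sketch (an illustration under my own assumptions, not a real camera pipeline) of 2×2 binning: four adjacent photosite readings are combined into one value, quartering the resolution while gathering light from four times the area:

```python
# A rough sketch of 2x2 pixel binning ("oversampling"): four adjacent
# photosite readings are averaged into one binned pixel, quartering
# the resolution in exchange for light gathered over 4x the area.

def bin_2x2(sensor):
    """Average each 2x2 block of readings into a single binned pixel."""
    rows, cols = len(sensor), len(sensor[0])
    binned = []
    for r in range(0, rows, 2):
        binned_row = []
        for c in range(0, cols, 2):
            block = (sensor[r][c] + sensor[r][c + 1] +
                     sensor[r + 1][c] + sensor[r + 1][c + 1])
            binned_row.append(block / 4)
        binned.append(binned_row)
    return binned

readings = [[10, 12, 50, 52],
            [14, 16, 54, 56],
            [90, 92, 30, 32],
            [94, 96, 34, 36]]
print(bin_2x2(readings))  # [[13.0, 53.0], [93.0, 33.0]]
```

On a conventional RGGB sensor, each such block mixes two greens, a red and a blue, which is exactly why colours suffer; the quad Bayer layout exists to fix that.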
As I mentioned earlier, one of the ways to improve low-light sensitivity is to increase the pixel size, but how do you know whether the one you’ve got is larger or smaller than normal? Well, luckily for you, I’m here. The pixels on most high-resolution (40MP+) sensors are roughly 0.8µ (microns) across, and that is tiny. Before these sensors, the smallest we’d usually see was 1.12µ on cheap, lower-end devices. The Google Pixels have sensors with 1.4µ pixels, and the big daddy, the king of large pixel sizes, was the HTC One M7: its 4 “Ultrapixel” camera had 2µ pixels, meaning it had a sensor of comparable size to a 16MP sensor but only 4 megapixels on it. These modern quad Bayer sensors usually land at an effective 1.6µ size. What’s a normal camera sensor like? Well, the Panasonic GH5 has a Four Thirds sensor with 20MP, and each of those pixels covers roughly 11 square microns (about 3.3µ on a side), so yeah, these phone sensors are tiny.
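You can sanity-check pixel sizes yourself with some back-of-the-envelope arithmetic. The 11.09 figure often quoted for the GH5 is the pixel *area* in square microns; the side length works out to roughly 3.3µ. Here’s a sketch, using approximate published sensor dimensions:

```python
# Back-of-the-envelope pixel pitch from sensor dimensions and
# megapixel count. Sensor sizes are approximate published figures;
# this is a sketch, not an exact optical calculation.

def pixel_pitch_microns(width_mm, height_mm, megapixels):
    """Approximate the side length of one square pixel in microns."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000)  # sensor area in square microns
    per_pixel = area_um2 / (megapixels * 1_000_000)    # area per pixel
    return per_pixel ** 0.5

# Panasonic GH5: Four Thirds sensor, roughly 17.3 x 13.0 mm, 20MP
print(round(pixel_pitch_microns(17.3, 13.0, 20), 2))  # 3.35
```

Run the same numbers for a 1/2″-class 48MP phone sensor and you land near 0.8µ, which is why binning to an effective 1.6µ matters so much.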
What is a quad Bayer filter? Put simply, it is a way to design a sensor with the resolution benefits of a higher-megapixel sensor (more detail) and the benefit of larger pixels (more light sensitivity). These sensors have a given number of megapixels, but they are arranged to make the best of oversampling, which is why you’ll usually see the phone take a picture at a quarter of the resolution the camera is marketed at. You weren’t lied to, but marketing is always a bit tricky.
Take Samsung’s new ISOCELL Bright HMX I mentioned earlier. This sensor really does have 108 million pixels in it (if you wanted to count them all, you could), but it is designed to take great 27MP images instead, which it will. That means it isn’t optimised for full-resolution shots, so Samsung (and others) have to employ a fancy trick called re-mosaicing, which is almost the reverse of oversampling; Samsung has a very good graphic about it on their site.
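To make re-mosaicing concrete, here’s a toy sketch of just the colour-layout part. A quad Bayer sensor groups same-colour photosites in 2×2 blocks, and within each 4×4 tile, swapping one pair of columns and one pair of rows recovers the standard Bayer arrangement. Real re-mosaicing algorithms also interpolate the values, which this deliberately omits; it’s my illustration, not Samsung’s method:

```python
# A toy sketch of the layout step in re-mosaicing: a quad Bayer tile
# groups same-colour photosites in 2x2 blocks. Swapping columns 1 and 2
# and rows 1 and 2 of each 4x4 tile yields standard Bayer order.
# Real re-mosaicing also interpolates values; this sketch does not.

def remosaic_layout(grid):
    """Rearrange a square quad-Bayer grid (size divisible by 4) into Bayer order."""
    n = len(grid)
    out = [row[:] for row in grid]
    for base in range(0, n, 4):          # swap columns 1 and 2 of each tile
        for r in range(n):
            out[r][base + 1], out[r][base + 2] = out[r][base + 2], out[r][base + 1]
    for base in range(0, n, 4):          # swap rows 1 and 2 of each tile
        out[base + 1], out[base + 2] = out[base + 2], out[base + 1]
    return out

quad = [["R", "R", "G", "G"],
        ["R", "R", "G", "G"],
        ["G", "G", "B", "B"],
        ["G", "G", "B", "B"]]
for row in remosaic_layout(quad):
    print(" ".join(row))
# R G R G
# G B G B
# R G R G
# G B G B
```

The catch is that the shuffled pixels are no longer where a true Bayer sensor would have sampled them, which is where the software magic (and its limits) comes in.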
But just as putting 4 pixels together when they weren’t meant to be has its issues, taking a group of 4 same-colour sub-pixels and using software magic to turn them into traditional pixels has its issues too: whilst you do get an increased resolution, you don’t get the quadrupling of detail you might expect.
The example I’ve been using in this piece is actually a bit strange, as it does use a quad Bayer filter (which Samsung calls “Tetracell”) but it is also a physically large sensor. Most sensors using this technique are (relatively) small, such as the 1/2.25″ sensor of the Samsung ISOCELL Bright GM2, Samsung’s 48MP sensor, or the 1/2″ sensor of the Sony IMX586 (also 48MP). The ISOCELL Bright HMX is a whopping 1/1.33″; for comparison, that is getting awfully close to the 1″ sensors used in premium point-and-shoot cameras.
So back to the title of this piece: whilst megapixels themselves aren’t changing, the way we define a megapixel is. It is still a million pixels, but how we arrange them on a sensor is changing. This isn’t upscaling; it’s an innovative and fundamental change to the hardware design of a sensor to overcome the issues of smartphone cameras. When you can’t go bigger, you have to go smarter. Quad Bayer filters aren’t magic, and they can’t create detail where there is none, but they are an innovative solution to a very real problem.