Let’s talk pixels. Specifically, the pixels of the iPhone 14 Pro. Because while the headline change is that the latest Pro models offer a 48MP sensor instead of a 12MP one, that’s hardly the biggest improvement Apple has made to this year’s camera.
Indeed, of the four biggest changes this year, the 48MP sensor is the least important to me. But stick with me, because there’s a lot to unpack before I can explain why I think the 48MP sensor is much less important than:
- Sensor size
- Pixel binning
- The Photonic Engine
One 48MP sensor, two 12MP
Colloquially, we talk about the iPhone camera in the singular, then refer to three different lenses: main, ultra-wide, and telephoto. We do it because it’s familiar (that’s how DSLRs and mirrorless cameras work: one sensor, multiple interchangeable lenses), and because that’s the illusion Apple creates in the Camera app, for simplicity.
The reality is of course different. The iPhone actually has three camera modules. Each module is separate, and each has its own sensor. When you tap, say, the 3x button, you’re not just selecting a telephoto lens; you’re switching to a different sensor. When you slide the zoom control, the Camera app automatically and invisibly selects the appropriate camera module, then does any necessary cropping.
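To make the idea concrete, here is a purely illustrative sketch of that module-plus-crop logic. The module names, thresholds, and function are my own inventions for illustration, not Apple’s actual implementation:

```python
# Illustrative sketch: pick the camera module for a requested zoom factor,
# then make up the difference with a digital crop. All values hypothetical.

MODULES = {
    "ultra-wide": 0.5,  # native zoom factor of each module's lens
    "main": 1.0,
    "telephoto": 3.0,
}

def select_module(zoom: float) -> tuple[str, float]:
    """Return (module, digital_crop) for a requested zoom factor."""
    # Use the longest lens whose native zoom doesn't exceed the request,
    # then crop digitally for the remainder.
    best = max((m for m in MODULES if MODULES[m] <= zoom),
               key=lambda m: MODULES[m])
    return best, zoom / MODULES[best]

print(select_module(2.0))  # ('main', 2.0) -- main sensor, 2x crop
print(select_module(3.0))  # ('telephoto', 1.0) -- no crop needed
```

The point of the sketch: “2x zoom” on such a system is really the main sensor plus a 2x crop, while 3x is a clean switch to the telephoto module.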
Only the main camera module has a 48MP sensor; the other two modules still have 12MP.
Apple was quite upfront about this when introducing the new models, but it’s an important detail that some may have missed (emphasis added):
For the very first time, the Pro range offers a new 48MP main camera with a quad-pixel sensor that adapts to the captured photo and features second-generation sensor-shift optical image stabilization.
48MP sensor works part-time
Even when using the main camera, with its 48MP sensor, you still only take 12MP photos by default. Again, Apple:
For most photos, the quad-pixel sensor combines all four pixels into one large quad pixel.
The only time you shoot at 48 megapixels is when:
- You are using the main camera (not the telephoto or ultra-wide)
- You shoot in ProRAW (which is disabled by default)
- You shoot in decent light
If you want to do that, here’s how. But most of the time, you won’t be…
Apple’s approach makes sense
You might be wondering: why give us a 48MP sensor and then not use it?
Apple’s approach makes sense because, in truth, there are very few occasions when shooting 48MP is better than shooting 12MP. And since 48MP creates much larger files, eating up your storage with a voracious appetite, it makes no sense to make it the default.
I can only think of two scenarios where shooting a 48MP image is a useful thing to do:
- You intend to print the photo, in large format
- You have to crop the image very heavily
This second reason is somewhat moot anyway, because if you need to crop that heavily, you might be better off using the 3x telephoto camera instead.
Now let’s talk about sensor size
When comparing any smartphone camera to a high-quality DSLR or mirrorless camera, there are two big differences.
One of them is the quality of the lenses. Stand-alone cameras can have much better lenses, both due to physical size and cost. It’s not uncommon for a professional or avid amateur photographer to spend a four-figure sum on a single lens. Smartphone cameras, of course, cannot compete with this.
The second is sensor size. All things being equal, the larger the sensor, the better the image quality. Smartphones, by the very nature of their size, and all the other technologies they need to integrate, have much smaller sensors than standalone cameras. (They also have limited depth, which puts another substantial limitation on sensor size, but we don’t need to go into detail.)
A smartphone-sized sensor limits image quality and also makes it harder to get a shallow depth of field – that’s why the iPhone simulates it artificially, with Portrait mode for photos and Cinematic mode for video.
Apple’s large sensor + limited megapixel approach
While there are obvious and less obvious limits to the sensor size you can use in a smartphone, Apple has always used larger sensors than other smartphone brands – which is part of why the iPhone has long been considered the go-to phone for camera quality. (Samsung later decided to do this as well.)
But there is a second reason. If you want the best-quality images possible from a smartphone, you also want each pixel to be as big as possible.
That’s why Apple has religiously stuck to 12MP, while brands like Samsung have crammed up to 108MP into similarly sized sensors. Squeezing that many pixels into a small sensor dramatically increases noise, which is especially noticeable in low-light photos.
Ok, it took me a while to get there, but I can finally say why I think the larger sensor, pixel binning, and the Photonic Engine are way more important than the 48MP sensor…
#1: iPhone 14 Pro/Max sensor is 65% larger
This year, the main camera sensor of the iPhone 14 Pro/Max is 65% larger than that of last year’s model. Obviously, that’s still nothing compared to a standalone camera, but for a smartphone camera, that’s (pun intended) huge!
But, as mentioned above, if Apple squeezed four times as many pixels into a sensor that is only 65% larger, the result would actually be lower quality! This is exactly why you’ll mostly be shooting 12MP photos. And that’s thanks to…
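The arithmetic behind that claim is simple enough to spell out. Using only the two numbers from the article (4× the pixels, a sensor 1.65× larger):

```python
# Back-of-the-envelope arithmetic from the article's own figures:
# 4x the pixels on a sensor only 1.65x larger shrinks each native pixel.
last_year_pixels = 12e6
this_year_pixels = 48e6
area_ratio = 1.65  # new sensor area relative to last year's

relative_pixel_area = area_ratio / (this_year_pixels / last_year_pixels)
print(f"{relative_pixel_area:.2f}")  # 0.41 -- each native pixel has ~41% of last year's area

# Binning 2x2 recovers it: one virtual pixel spans four native pixels,
# so its area is 4 * 0.41x = 1.65x last year's pixel.
print(f"{relative_pixel_area * 4:.2f}")  # 1.65
```

So without binning, each pixel would be less than half the size of last year’s; with binning, each virtual pixel enjoys the full 65% size advantage.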
#2: Pixel binning
To shoot 12-megapixel images on the main camera, Apple uses a technique called pixel binning. Data from four adjacent pixels is combined into one virtual pixel (by averaging the values), so the 48MP sensor is mostly used as a larger 12MP sensor.
What does this mean in practice? Pixel size is measured in microns (millionths of a meter). Most high-end Android smartphones have pixels measuring between 1.1 and 1.8 microns. The iPhone 14 Pro/Max, when using the sensor in 12MP mode, effectively has pixels measuring 2.44 microns. That’s a really significant improvement.
Without pixel-binning, the 48MP sensor would – most of the time – be a downgrade.
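As a minimal sketch of the averaging idea, here is 2×2 binning on a toy grid of sensor values. (Real binning happens on the sensor/ISP, typically on the quad-Bayer mosaic rather than finished RGB values; the values and function here are illustrative only.)

```python
# Minimal sketch of 2x2 pixel binning: average each non-overlapping
# 2x2 block of sensor values into one virtual pixel.
def bin_2x2(sensor):
    """Average 2x2 blocks; width and height must be even."""
    h, w = len(sensor), len(sensor[0])
    return [
        [
            (sensor[y][x] + sensor[y][x + 1]
             + sensor[y + 1][x] + sensor[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

raw = [  # a toy 4x4 "48MP-style" readout
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(raw))  # 2x2 "12MP-style" output: [[13.0, 23.0], [33.0, 43.0]]
```

Each output pixel aggregates the light gathered by four physical pixels, which is why the binned image behaves like it came from a sensor with much larger (2.44-micron) pixels.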
#3: Photonic Engine
We know that smartphone cameras of course cannot compete with standalone cameras in terms of optics and physics, but where they can compete is in computational photography.
Computational photography has been used in SLRs for decades. When you change metering modes, for example, you instruct the computer inside your DSLR to interpret the raw sensor data in a different way. Similarly, in consumer DSLRs and all mirrorless cameras, you can choose from a variety of photo modes, which again tell the processor how to adjust the sensor data to achieve the desired result.
So, computational photography already plays a much bigger role in standalone cameras than many realize. And Apple is very, very good at computational photography. (Okay, it’s not great at cinematic video yet, but give it a few years…)
The Photonic Engine is the image pipeline that powers Apple’s Deep Fusion approach to computational photography, and I can already see a huge difference in the dynamic range of photos. (Examples to follow in an iPhone 14 Diary article next week.) Not just the range itself, but the smart decisions made about which shadows to bring out and which highlights to tame.
The result is significantly better photos, which owe as much to software as to hardware.
- A considerably larger sensor (in smartphone terms) is a very big deal when it comes to image quality.
- Pixel binning means Apple has effectively created a much larger 12MP sensor for most photos, realizing the benefits of that larger sensor.
- The Photonic Engine – dedicated image processing – is already showing real benefits.
More to follow in an iPhone 14 Diary post, when I put the camera through more extensive testing over the next few days.
FTC: We use income earning auto affiliate links.
Check out 9to5Mac on YouTube for more Apple news: