That New Yorker article on computational photography

The article in question is Have iPhone cameras become too smart?, written by Kyle Chayka for The New Yorker. It was brought to my attention first because both Nick Heer and John Gruber linked to it and shared their thoughts, but also because some readers — perhaps remembering my stance on computational photography (here and here) — thought I might be interested in reading a somewhat similar take.

What’s interesting to me is that a lot of people seem to have missed the central point of Chayka’s piece, including Gruber. Almost everyone who wrote to me about this article asked me (probably rhetorically): Does an iPhone 7 really take better photos than an iPhone 12 Pro?

The answer is — it depends. It depends on what better means to you. It depends on what photography means to you.

From what I understood by reading the article, Chayka’s main question could be paraphrased like this: Are all good-looking photos good photos? Or: are all professional-looking photos professionally taken photos? The answer here is complex, and cannot be objective.

If you’re someone using your smartphone as your sole camera, and your photographic intent is just to capture memories by taking instant snaps, then you’ll appreciate any computational photography advancement iPhones have offered over the past four years or so. You’ll want an iPhone 12 Pro over an iPhone 7 because, for your purposes, it’ll take better-looking photos most of the time.

I have worded that last sentence carefully: the iPhone will take ‘better-looking’, more eye-pleasing photos for you, because with this level of computational photography, your agency is basically limited to choosing what to frame and when. The iPhone does the rest of the work.

This is why computational photography’s advancements tend to be praised by those who have a more utilitarian approach to photography, and tend to be ignored or criticised by those who have a more artistic and, er, human-centred approach. Both are valid approaches, don’t get me wrong. The wrong attitude, perhaps, is to consider your approach better than the other.

But let’s go back to Chayka’s article. The most thought-provoking point, in my opinion, is the emphasis given to one specific aspect of the newer iPhones’ computational photography — the mechanisation, the automation, the industrial pre-packaging of a ‘good-looking’ or ‘professional-looking’ photo. Much like processed foods produced on an industrial scale, which all look and taste the same, computational photography applies a set of formulas to what the camera sensor captures in order to produce consistently good-looking results. The article’s header animation summarises this clearly: a newer iPhone passes by a natural-looking still life with flowers in a vase, and for a moment you can see how the iPhone sees and interprets that still life, returning a much more vibrant, contrasty scene. Certainly more striking than the scene itself, but also more artificial and less faithful to what was actually there.
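To make the ‘set of formulas’ idea concrete, here is a deliberately crude sketch in Python. This is my own toy illustration, not anything resembling Apple’s actual image pipeline: one fixed recipe (lift the shadows, then bump saturation and contrast by the same amounts) applied to every photo, no matter what is in front of the camera.

```python
# Toy illustration only: a fixed, scene-independent "look", loosely mimicking
# the idea of a pre-packaged formula for consistently good-looking results.
# This is NOT Apple's pipeline; it is a deliberately naive stand-in.
import numpy as np
from PIL import Image, ImageEnhance

def prepackaged_look(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    arr = np.asarray(img).astype(np.float32) / 255.0

    # Lift shadows with a fixed tone curve (gamma < 1), "recovering" detail
    # that may not have been visible in the actual scene.
    arr = arr ** 0.8

    out = Image.fromarray((arr * 255).clip(0, 255).astype(np.uint8))

    # Boost saturation and contrast by fixed amounts, regardless of content.
    out = ImageEnhance.Color(out).enhance(1.3)
    out = ImageEnhance.Contrast(out).enhance(1.15)
    return out

# Every image run through this gets the same vibrant, contrasty treatment:
# prepackaged_look("still_life.jpg").save("still_life_processed.jpg")
```

The real pipelines are of course vastly more sophisticated (multi-frame capture, semantic analysis, local tone mapping), but the point of the analogy stands: the recipe is decided in advance, not by the photographer.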

That’s why, in Chayka’s view, his iPhone 7 took ‘better’ photos than his iPhone 12 Pro. It’s not a matter of technical perfection or superiority. Both the camera and the image signal processor of the iPhone 7 are clearly technically much less capable than the iPhone 12 Pro’s, and Chayka is not arguing otherwise:

On the 7, the slight roughness of the images I took seemed like a logical product of the camera’s limited capabilities. I didn’t mind imperfections like the “digital noise” that occurred when a subject was underlit or too far away, and I liked that any editing of photos was up to me. On the 12 Pro, by contrast, the digital manipulations are aggressive and unsolicited.

In other words, in Chayka’s eyes, the camera of the iPhone 7 allowed him to be more creative and more in control of the photographic process precisely because it is ‘less smart’ and less overwhelmingly ‘full machine auto’ than the camera array of the 12 Pro. And he’s not alone in that:

David Fitt, a professional photographer based in Paris, also went from an iPhone 7 to a 12 Pro, in 2020, and he still prefers the 7’s less powerful camera. On the 12 Pro, “I shoot it and it looks overprocessed,” he said. “They bring details back in the highlights and in the shadows that often are more than what you see in real life. It looks over-real.”

What Fitt says here is something I only noticed recently when taking evening and night shots with a loaned iPhone 13 Pro. When I first shared my thoughts on computational photography, I wrote:

Smartphone cameras have undoubtedly made noticeable progress with regard to image fidelity, and […] soon we’ll reach a point where our smartphones achieve WYSIWYG — or rather, What You Get Is Exactly What You Saw — photography.

But it’s not that at all. Especially with low-light photography, what these newer iPhones (but also the newer Pixels and Samsung flagships) return are not the scenes I was actually seeing when I took the shot. They are enhancements that often reveal what is there even when you can’t see it. Sometimes the image is brightened up so much that it doesn’t even look like a night shot — more like something you would normally obtain with a very long exposure. And again, some people like this. They want a good capture of that great night at a pub in London or at a bistro in Paris, and they want their phone to capture every detail. I have a different photographic intent, and prefer night shots to look like night shots, even if it means losing shadow detail, even if it means film or digital grain.

Another great quote in Chayka’s article is here (emphasis mine):

Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal. Gregory Gentert, a friend who is a fine-art photographer in Brooklyn, told me, “I’ve tried to photograph on the iPhone when light gets bluish around the end of the day, but the iPhone will try to correct that sort of thing.” A dusky purple gets edited, and in the process erased, because the hue is evaluated as undesirable, as a flaw instead of a feature. The device “sees the things I’m trying to photograph as a problem to solve,” he added.

Again, it’s clear that computational photography is polarising: people who want to be more in control of their photographic process loathe the computational pre-packaging of the resulting photos. Happy snappers, whose sole goal is to get the shot and have their shots look consistently nice, are very much unbothered by computational photography. It means less work, less editing, and in some cases better results than they could achieve with a traditional camera.

The problem, as far as I’m concerned, is the approach of those who happily take advantage of all the capabilities of computational photography but want to pass off the resulting photos as a product of their creative process. They always shoot on ‘full machine auto’, yet they have artistic ambitions. As Chayka points out, We are all pro photographers now, at the tap of a finger, but that doesn’t mean our photos are good.

He continues (emphasis mine):

After my conversations with the iPhone-team member, Apple loaned me a 13 Pro, which includes a new Photographic Styles feature that is meant to let users in on the computational-photography process. Whereas filters and other familiar editing tools work on a whole image at once, after it is taken, Styles factors the adjustments into the stages of semantic analysis and selection between frames.
[…]
The effects of these adjustments are more subtle than the iPhone’s older post-processing filters, but the fundamental qualities of new-generation iPhone photographs remain. They are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning.

Of course, there is no fixed formula or recipe to classify a photo as artistic or not. For some, the more manual intervention there is in the photographic process, the more artistic the result can claim to be. I’m not necessarily against some form of automated facility when taking photos with artistic intent or ambition. Things like autofocus and even shooting in Program mode can be crucial when engaging in street photography, for example. But even with a film camera in Program mode, where the camera has full control over the exposure, the final look is always up to the photographer, who chooses what film to use and how to ‘push’ it when taking photos or afterwards in the darkroom. A modern iPhone’s computational photography capabilities do much more than this. The phone is responsible for practically everything in a photo taken as-is without any editing, from the photo’s overall look to the addition of otherwise imperceptible details.

Again, thinking about those low-light shots I took with an iPhone 13 Pro, the only things I did were frame the scene and decide when to shoot. What came out was a nice photo to look at, but it didn’t feel ‘mine’, if you know what I mean. Maybe shooting ProRAW and then editing the photo to my taste would have felt more artisanal, if you will, but I always go back to this article by Kirk McElhearn, Apple’s new ProRAW Photo Format is neither Pro nor RAW. And for the way I do my photography, my iPhone is like an instant camera, no more, no less. If I have to shoot RAW, I’d rather use one of my many digital cameras (or film cameras, for that matter).

Not long ago, a photographer friend of mine succinctly remarked: All the photos taken with current flagship phones look like stock photos to me. And stock photos are great, perfect for their purpose, but you won’t find them hanging in an art gallery.

I’ll reiterate. If you’ve read Chayka’s piece and your takeaway is that he argues that an iPhone 7 is better than an iPhone 12 Pro at taking photos, you’re missing the point. He’s saying that, in a way, the limitations of the iPhone 7’s camera were more effective in stimulating his creativity and gave him more control over the final photo, while the iPhone 12 Pro behaves more like a photographic know-all, thanks to all the machine-learning smarts that come built in. That’s why he asks whether iPhone cameras have become too smart. He doesn’t necessarily advocate making ‘less smart’ iPhones, but iPhones that let users disable those smart camera features if they so choose. I agree with the sentiment, and I very much agree with Nick Heer when he notes:

Right now, the iPhone’s image processing pipeline sometimes feels like it lacks confidence in the camera’s abilities. As anyone who shoots RAW on their iPhone’s camera can attest, it is a very capable lens and sensor. It can be allowed to breathe a little more.
