Some musings just before WWDC 2022

Tech Life

It’s that time of the year again. Apple’s Worldwide Developers Conference is about to begin. Rumour sites share rumours and might-happens with varying degrees of trustworthiness. And tech pundits share their wishes. I have been thinking about what to write for at least two weeks. My first half of 2022 has been much busier than expected, both work-wise and in my private life, and the months of March, April, and May have been particularly messy, as you may have guessed from the scarcity of updates here. Add to this the fact that, as of late, I’ve been finding writing about tech harder than usual, and you won’t have trouble believing that I was very close to not writing anything about the upcoming WWDC at all.

But why has writing about tech been harder than usual? Because as time goes on, the gap between what I want from tech and what tech companies are doing and prioritising just keeps widening. With Apple, it’s largely no different. That’s why writing a piece about my ‘wishes’ for what Apple will present at the WWDC keynote felt utterly pointless — because what I want from Apple is something the company doesn’t seem interested in pursuing, or can’t do because it would imply a course correction requiring too much effort on their part.

Mac OS and iOS/iPadOS

While I have been for the most part pleased with what Apple has done with the hardware since the Apple Silicon transition started, their software remains underwhelming at best. It’s not just software quality or questionable design decisions in their operating systems’ UIs. It’s their — what to call it? — lack of platform vision, maybe.

The core feature of Apple’s ecosystem, what has made me choose Apple’s devices over decades and stick with them, is what Apple has generally done better than the competition: hardware-software integration. Apple’s advantage, of course, is that unlike most PC manufacturers the company builds both the hardware and the software.

This integration used to be tight, and it was the main reason behind the It Just Works motto. But over the past… hmmm, ten years maybe? Over the past ten years, it’s as if there have been two distinct companies inside Apple, Apple Hardware and Apple Software, which have communicated with each other less and less frequently and less and less effectually. Apple Hardware has been accelerating over time, sometimes making mistakes but apparently willing to learn from them and correct them. And with the innovation of Apple Silicon, they have done an excellent job of delivering breathtakingly powerful Macs, iPads, and iPhones.

Apple Software hasn’t kept up with that pace at all. It has been moving in circles. It has been trying to fix what was not broken. It has introduced regressions in the user interface. It has seemingly un-learnt some good lessons of the past; lessons, I should point out, imparted by Apple Software themselves… or rather, by the old guard, people who clearly understood better the importance of software and the role it plays in powering a platform.

So here we are today, with insanely powerful Macs and iPads driven by inadequate operating systems. And as I wrote in Raw power alone is not enough, not only has Apple not been a source of software inspiration and innovation in years, but with their overbearingly protective attitude they’ve been stifling many third-party developers, especially on iOS/iPadOS. They have been — inadvertently or not — obstructing innovation in software. What’s the point of having iPad Pros that are more powerful than a lot of non-Apple PCs, then making third-party developers jump through so many hoops and restricting their movements so much that the apps they eventually create are miraculous constructions, balancing so many things internally that sometimes even a minor OS update is enough to cause disruption?

Platform trajectory

A perhaps unpopular opinion I’ve held for a while is that the convergence of Mac OS and iOS/iPadOS as platforms has been a bad idea, and that ideally it should be rethought. I’ve been saying this for years: to have the best of Mac OS and the best of iPadOS, Apple should focus on the particular strengths of each platform. The focus should be to double down on the differences between a Mac and an iPad, so that one day you can provide both the best computer experience and the best tablet experience. All these attempts at homogenising Mac OS and iPadOS for the sake of ‘a familiar environment’ have only been hurting the usability of each operating system.

Remember the iPad in its heyday, when it was essentially a consumption device on which you could consume content like you would on an iPhone, but more comfortably thanks to the iPad’s bigger screen? At that point in time, iOS worked perfectly as the operating system for a device with that purpose.

But it was only natural to want to do more with a tablet. It was a device that just asked for creativity and creation. Things came to a crossroads, and in my opinion Apple took the wrong road. Instead of creating ‘iPadOS’ then (let’s say around the time iOS reached version 6) and starting to work on a truly tablet-oriented OS, they let the software stagnate on the iPad. For years. For years, what ran on an iPad was big iPhone OS and little more. Until the pressure from the more creative and iPad-first users became unbearable and iPadOS was officially created in 2019. And at that point, after going nowhere for years, what can you do to make the iPad a more versatile and creative device? Have its OS ape Mac OS, essentially. Maybe not quite in a literal or slavish manner, but certainly conceptually.

The Newton’s operating system didn’t want to be ‘Mac OS on a PDA’. NewtonOS was/is an OS built for and around the hardware it ran on. iPadOS, by contrast, seems destined to borrow increasingly from Mac OS. This is misguided. And not because I don’t want the iPad to get Mac OS features. That’s not the point.

But this is what you get when you let a potentially computing-changing device stagnate for years, OS-wise. Now its operating system can only go from being big iPhone OS to being little Mac OS, because it’s too late to make a U‑turn and rethink the whole software paradigm. “Ooh, but multitasking on the iPad is getting better and will get better!” — Yeah, it only took 12 years to be able to run three apps simultaneously. As I said on Twitter recently, even giving the iPad a better multitasking UI doesn’t do it much justice: it’s still a type of multitasking whose execution is very computer-like, crammed into a tablet’s interface. The lazy thinking here is, It’s intuitive because it’s like on computers; people are used to that. But imagine making a tablet OS that’s really built around the characteristics, the form factor, and the applications (use cases) of a tablet. Imagine a tablet OS that fully embraces touch but also the stylus/pencil, with gestures and paradigms borrowed from writing and drawing. It could be just as intuitive, but it naturally would require more effort at the design and execution stages. Yet you might end up with a groundbreaking OS that treats the iPad like the tablet it is and builds on its strengths as a not-computer, or as an untraditional computer, if you like.

Instead, we’ll soon face a sort of ‘OS confusion’ and conflation, and the differences and distinctions will become more superficial, i.e. driven by hardware. iPads will increasingly become touchscreen Macs. And if Apple one day introduces a Mac with a touchscreen, what kind of differences will we be able to appreciate between, say, a 12.9‑inch touchscreen Mac and a 12.9‑inch iPad Pro with a Magic Keyboard? Not many, I’d say. And that’s disappointing. Try this thought experiment: if Apple were to introduce a convertible 2‑in‑1 ‘MacPad’, with touch screen and Pencil support, would you still buy an iPad? This MacPad would theoretically have it all: the same touch capabilities as current iPads, and a Mac OS that can effectively run both Mac and iOS apps. Some nerds may even think this is the best of both worlds, while it’s probably going to be the worst of both worlds, mainly because of the compromises at the user-interface level when you ultimately mix a traditional computer interface with a touch interface.

Based on what I’m seeing today, it’s hard not to think that Mac OS and iPadOS as platforms are on a path to becoming Apple’s version of Microsoft’s Surface ecosystem. Yes, it’s possible that Apple could end up doing a better job of it, but it’s disappointing to think that the future of the iPad is to resemble something Microsoft did about ten years ago. Not because what Microsoft did and is doing with the Surface is a bad thing, not at all. It’s that doing a similar thing now doesn’t strike me as particularly innovative or groundbreaking.

Here’s one thing I’d like Apple to do: give third-party developers more Mac-like access to iPadOS. Yes, I think Apple should start differentiating policies between iOS and iPadOS. iPadOS shouldn’t be locked down like iOS. It should allow a little more breathing room, like Mac OS does. Keeping tighter control over what iOS allows on an iPhone could still make sense in order to protect users from malware and the like, given that the smartphone user base is all over the tech-savvy spectrum. But iPad users are generally a more tech-savvy bunch, with more sophisticated needs and creative demands. They could only benefit from a more open iPadOS. It would have the same security protections Mac OS has, and things certainly wouldn’t become ‘the Wild West’ Apple fears so much. If Apple has little to offer, software-wise, to push innovation on the iPad, they should at least avoid standing in the way of developers who potentially could. What’s the point of telling developers, Here’s this new amazing iPad, we can’t wait to see what you can do with it, if the reality actually translates into, We can’t wait to see what you can do with it, provided you don’t do this, and this, and this, and this, and this?

Last-minute additions before wrapping up

With all this talk about the iPad, I was almost forgetting the Mac. The thing I’d love to see in the next iteration of Mac OS is something I mentioned at the beginning of this piece: I wish Apple’s UI designers would stop messing with the user interface and stop confusing ‘simple UI’ with ‘dumb UI’. I wish they would start loving usability more than minimalism. Look at this tweet from Mario Guzman and its follow-up. The progress indicator is small and unhelpful as it is. The fact that you can obtain more information by clicking on it doesn’t make things better: such information should be shown by default, because it’s meant to be glanceable. I should be able to keep Photos in a window on the side and check on the import progress while doing something else, without having to click anywhere.

And don’t get me started on the icons themselves: at first glance, I didn’t even realise that part of the UI was from Photos. Conversely, you could screenshot any part of the old iPhoto or Aperture main interface and you would recognise where you were and what you were looking at. The art of making things discoverable and intuitive is to visually state the obvious, whatever the environment. And to know whether something is obvious enough (an icon, a slider, a control), you ought to test it with users outside your design team and its collective hallucinations.

The other, more specific thing I’d love to see on Mac OS is “Time Machine 2”. Time Machine is a great feature that truly made backups easier for regular folks. But since its introduction back in the Mac OS X 10.5 Leopard era, the Time Machine interface (and, I daresay, its performance) has essentially remained the same. I’m not asking for much here, I think — just a less black-box‑y interface, something a bit more interactive, so that you’re not left wondering whether the entire backup process has frozen or is still in progress when all you get is Preparing backup…. Also, it wouldn’t hurt to have the option of more easily checking older backups and trashing them manually if you need or want to. And performance-wise, well, maybe speedier backups wouldn’t be bad either. Not too long ago I witnessed a Time Machine backup of a Mac’s internal SSD to an external SSD backup drive, both APFS-formatted, and the experience was more underwhelming than expected.

Wrapping up

Now you understand why it felt pointless to write a pre-WWDC ‘list of wishes’. Simply put, Apple is moving in a direction I feel less and less compatible with, generally speaking. The more I want Apple to slow down and do fewer things, but better and with a sharper focus, the more Apple seems to do exactly the opposite. I always hope to be proven wrong one day, and that Apple can surprise everyone with some unexpected left-field idea. The way I’d love Apple to operate is perhaps too developer- and consumer-friendly, maybe too countercurrent in relation to the tech landscape surrounding us today. But that’s how Apple themselves taught me to think, back when they were the industry’s brilliant underdog and Steve Jobs was still around.

So long, iPod. You’ll be missed.

Tech Life

iPod-love

Today, Apple announced the discontinuation of the last available iPod, the seventh-generation iPod touch (introduced in May 2019). 

It is the end of an era. Those who know me are, at this point, probably expecting a long-winded tirade about how Apple is leaving behind yet another important piece of its history, or about how it’s just another money-making tactic to drive sales of iPhones and HomePods, whatever.

You’re going to be disappointed. For the way a lot of people consume music today (yes, I chose consume purposefully), a device like the iPod touch doesn’t make much sense anymore. In fact, I’m genuinely surprised the iPod touch has lasted this long under Cook’s administration.

The iPod has been around for a little more than 20 years, a very respectable run for a device that truly changed the way people listen to music, and that managed to stay interesting and fun by morphing into different shapes over the years. It was probably the first Apple device to be loved both by die-hard Apple fans and by non-Apple users. Our household is full of iPods, which both I (a long-time Apple user) and my wife (a long-time non-Apple user) have enjoyed over the years. My first iPod was a 10 GB third-generation model I purchased in 2003. My wife’s first iPod was a 4 GB blue iPod mini (2004). The hard drives of these two iPods both failed a few years ago, but I managed to upgrade them with CompactFlash cards, so now my third-generation iPod has 16 GB of storage and the mini has 8 GB. They still have their original batteries, and on a full charge they still manage about 2–3 hours of non-continuous playback.

The first-generation iPod touch has a special place in our household. When the first iPhone was introduced, it wasn’t available in many countries outside the US. I had to wait until September 2008 to get my first iPhone, and it was the iPhone 3G. But the first-generation iPod touch, launched in 2007, was indeed available here in Europe, and so when my brother-in-law gifted it to my wife, it was our very first hands-on experience with the Multi-touch interface and the operating system of the iPhone.

The iPod shuffle and the nano were two other lines we’ve loved and still love a lot: I have a second- and a third-generation shuffle, and a third- and a seventh-generation nano; my wife has two second-generation shuffles, and a sixth- and a seventh-generation nano. And my brother-in-law even has an iPod Hi-Fi.

Today, like many other people, my family enjoys music on mobile devices mainly via streaming services (Spotify, in our case); and yet, we still use these smaller iPods when out and about. And here comes the only point I wanted to make with this brief piece.

As I said at the beginning, a device like the iPod touch is rather redundant for the way we consume music nowadays. However, I think a device like the iPod shuffle still makes a lot of sense. Its main characteristics, what made it an ingenious and very successful device back then, still make it an interesting and appealing device today:

  • Its size and weight
  • Its design: the iPod shuffle is effectively an unobtrusive, wearable device
  • Its lack of UI and the concept of filling it with songs you then listen to randomly (or in sequence, if you prefer)
  • Its low price

Listening to music with an iPod shuffle is still (and can still be) a fun experience. You can create the digital equivalent of a mixtape, load it onto your shuffle, clip the shuffle to your shirt/jeans/jacket, and then go out and listen to music without even having to touch the device, unless you need to change the volume or skip a track. It’s basically a hands-free device that disappears on you. If Apple made a new iPod shuffle with Bluetooth, the invisibility factor would be even higher, since you wouldn’t even have the earphones’ cable around you to remind you that you’re wearing an iPod. It would still be a nice device for commuting, jogging, or working out.

Sure, you might say that these uses have now been taken over by the Apple Watch or other smartwatches, but for an Apple Watch you’ll pay from a minimum of $199 up to more than $1,000. An iPod shuffle would be a $50 device. If you’re a casual user who just wants some music while out and about, jogging, etc., and you don’t use a smartwatch, a little wearable device like the iPod shuffle could still be your cup of tea. But maybe wanting a fun, inexpensive, wearable, colourful device from today’s Apple is asking too much. Here, have an AirTag instead.

Raw power alone is not enough

Software

Nick Heer, a few days ago, posed a question:

This is a good and wide-ranging interview that dances around a question I have been thinking about for a while now: what capabilities do high-performance products like these [the Mac Studio] unlock for a creative professional? It is great to see how much faster they are at compiling applications or rendering video, but I wonder what new things people will attempt on machines like these which may have been too daunting before. 

New applications, new endeavours, are certainly made possible by technological advancements in hardware, chip design, and engineering. I’m looking at my Power Mac G4 Cube on this other desk. It was introduced 22 years ago; it has a 450 MHz CPU, 1.5 GB of RAM, and a 60 GB spinning hard drive. Its graphics card has 16 MB (megabytes) of memory. When you look at the specs of an M1 Ultra Mac Studio, you get a pretty good idea of the progress made in 22 years when it comes to storage, memory, graphical and computational power, and overall speed and responsiveness. A rendering job that takes a new Mac Studio a couple of minutes would probably take this poor G4 Cube a whole day to compute — provided it could even do it in the first place.

But there’s another crucial thing to consider: software. There’s always a car analogy when talking about computers, and this time it’s no different — software is the fuel in this analogy. You can have an astoundingly powerful, astoundingly energy-efficient engine that takes the car to 300 km/h in 2 seconds. But without fuel, the car won’t go anywhere.

However, software in a computer system does more than just make the engine run. It also gives the system a purpose, a direction. It gives the system applications, both in the sense of software programs and in the sense of uses for a machine.

Without innovation in software, all we’re doing with these new powerful machines is essentially what we were doing 20 years ago on PowerPC G4 and G5 computers, only faster and more conveniently. Granted, that is progress, especially in those fields involving CPU- and GPU-intensive tasks, which greatly benefit from having lots and lots of calculations done in the shortest possible time.

But progress can’t be just about quantitative aspects of computing, as great and beneficial as they are. What new applications can an amazing M1-Ultra-powered Mac Studio unlock, if there are no new types of software applications that could provide new directions and uses?

This is the personal beef I have with tech innovation today, which to me still revolves around the concept of ‘reinventing the wheel and making it spin faster’. I might be wrong on this, and it might just be an inaccurate subjective impression, but today I feel a distinct dearth of vision when it comes to what a computer can do. While the sheer raw power of computers has increased by orders of magnitude in the last 30 years, the range of applications (in both senses) for a computer hasn’t increased or spread in a comparable way.

(If you’re thinking, But what about AR/VR and the Metaverse, for example? — you know that these concepts are decades old, right? And that their applications are only underwhelmingly better than what was produced in the 1990s? And that the user interface and interaction hurdles to make these concepts work really seamlessly haven’t changed that much since?)

This reflection ties in with what I was talking about in my two pieces (see here and the follow-up here) on Mac software stagnation. These past few years — after a period of Mac hardware stagnation and hardware design fiascos like the MacBook butterfly keyboard and the 2013 Mac Pro — Apple has got back on track and has really, positively pushed the envelope with their in-house-designed systems on a chip, first on mobile devices and then finally on Macs. What an iPad Air, an iPad Pro, and even a base M1 Mac can achieve with their M‑class chips is remarkable in terms of raw power (and efficiency). But I’m not seeing the same kind of advancement in software.

Apple’s first-party applications included with Mac OS are mediocre at best. Their pro apps appear to be maintained rather than developed with the aim of advancement, with the possible exception of Final Cut Pro (video professionals, feel free to chime in). Apps that were previously good-quality, powerful, and versatile have been neutered into ‘just okay’ or ‘good enough’. The Utilities folder in Mac OS has been slowly but surely depopulated over time. iOS apps with an ingenious premise, like Music Memos, are left behind as flashes in the pan. The consensus with iTunes was that Apple should have split it into different apps, so that each could handle specific tasks better than the old monolithic media manager. Apple eventually did split iTunes into different apps, but forgot the second part of the assignment. The result is that I still go back to a Mac with iTunes to handle my media, and I’m not the only one.

Aperture was, overall, a better application than Adobe Lightroom when the two coexisted. Apple could have kept improving Aperture and kept making it better than Lightroom. Instead, they gave up. We now have Photos as the sole ‘sophisticated’ Apple photo tool. Which is neither fish (iPhoto) nor flesh (Aperture).

And then there are two applications I must mention because I’m still profoundly annoyed by their discontinuation: iWeb and iBooks Author. Have I made you raise an eyebrow? Good. Hear me out. 

iWeb certainly had its flaws. It was the typical app with a good premise that was never cultivated properly, never really optimised, never made better, and just left to wither. But let’s look at iWeb in a broader context: it’s 2022 — shouldn’t we have a powerful yet simple-to-use WYSIWYG tool to craft a website? Sure, there are accessible platforms that let you set up a blog with relative ease, and there are simple-enough tools to set up a static site, but a non-tech-savvy person will still find these tools complicated enough to be a bit off-putting.

The Web has been around for thirty years now; why do HTML, CSS, etc., still exist? That’s hyperbole, but hopefully you get my point. Why aren’t there standardised tools to create online spaces in a perfectly accessible, WYSIWYG way? Why do regular people still have to struggle with strings of code and magical syntax to make trivial customisations to the websites they’ve patiently managed to create?

iWeb could have been a great tool, because it had this spirit — the Macintosh spirit — of helping people do hard things in a simple, visual, intuitive way.

iBooks Author wasn’t perfect either, and it had some glaring omissions (an ebook-authoring tool without proper facilities for handling footnotes is laughable), but it had the potential to become a good application for creating books. By the way, do you know of any good-quality application for making ebooks that is sophisticated, relatively easy to use, well designed overall, and has a good UI? On the Mac, only Vellum comes to mind. On other platforms I honestly have no idea, but I’m not terribly optimistic. Even Vellum needs you to install Kindle Previewer if you intend to publish using Amazon’s formats for the Kindle platform.

iBooks Author could have been overhauled and developed further, but apparently the only professionals Apple knows are in the audio/video departments. What about professional tools for authors and writers? The Pages app? Because that’s what Apple suggested using when they discontinued iBooks Author in 2020 (an app that was already on life support by then). Come on.

I’m not saying that there are absolutely no tools available today for Web development or book design. What I’m saying is that software, as an abstract concept, has aged worse than hardware over the history of computing. Software today still comes with much more friction than it should, given the general technological advancement of the past 40 years or so. Most programming languages are old. The old foundations are becoming more and more impractical for handling modern applications (uses), while the new foundations and new programming tools are still too immature to be an effectual replacement or successor.

And don’t get me wrong — I’m not blaming third-party and indie developers here. They’re working as hard and as well as they can, given the increasingly difficult conditions they’re put in, especially those developing for Apple platforms. It’s a maddening scenario: with their unnecessarily tight restrictions in the name of security (theatre), and with their capricious and petty App Review checkpoints, Apple seems to be actively obstructing innovation in software. And the company isn’t even doing it to push aside third-party solutions in order to show off their own software innovations and breakthroughs, because those are increasingly rare sights.

So, again, we have absurdly powerful machines like the Mac Studio, and soon we’ll have the even more mind-boggling Apple silicon Mac Pro, and what kind of software will they run? A handful of professional apps which, hopefully, will take advantage of these machines’ capabilities to do the same things professional Macs did twenty years ago, ten years ago, but better and faster. The question remains: what kind of software innovation will these impossibly powerful Macs unlock or facilitate? What kinds of new applications (uses) will these Macs allow? I have no idea. And I have no idea whether we’ll see anything moving in this direction.

Apple’s chip and hardware advancements have inspired the competition (Intel) to do better, and that’s a great thing. On the software side, I’ve seen very little from Apple that could be considered remotely inspirational. What I’ve seen are platform-management techniques that push things like subscriptions and lock-in, and a generally toxic gatekeeping behaviour. What I’ve seen is an operating system like Mac OS — based on strong UNIX foundations and rigorous, well-thought-out human interface guidelines — become a brittle, hollow shell, with questionable UI design choices and bugs that get dragged from one iteration to the next. When Apple’s own software has generally worsened over time; when they treat third-party developers as a necessary nuisance to be begrudgingly dealt with on a regular basis — instead of, you know, actually celebrating them and inspiring them to write even better software for the Apple ecosystem; when their insistence on security through lock-down and lock-in leads to an ecosystem whose overall thriving is stifled at worst and corralled at best… how can Apple be an inspirational force in software?

That New Yorker article on computational photography

Handpicked

The article in question is Have iPhone cameras become too smart?, written by Kyle Chayka for The New Yorker. It was brought to my attention firstly because both Nick Heer and John Gruber linked to it and shared their thoughts, but also because some readers — perhaps remembering my stance on computational photography (here and here) — thought I might be interested in reading a somewhat similar take.

What’s interesting to me is that a lot of people seem to have missed the central point of Chayka’s piece, including Gruber. Almost everyone who wrote me and mentioned this article asked me (probably rhetorically): Does an iPhone 7 really take better photos than an iPhone 12 Pro?

The answer is — it depends. It depends on what better means to you. It depends on what photography means to you.

From what I understood from the article, Chayka’s main question could be paraphrased like this: Are all good-looking photos good photos? Or: Are all professional-looking photos professionally taken photos? The answer here is complex, and cannot be objective.

If you’re someone who uses your smartphone as your sole camera, and your photographic intent is just to capture memories by taking instant snaps, then you’ll appreciate any computational-photography advancement provided by iPhones over the past four years or so. You’ll want an iPhone 12 Pro over an iPhone 7 because, for your purposes, it’ll take better-looking photos most of the time.

I have worded that last sentence carefully: the iPhone will take ‘better-looking’, more eye-pleasing photos for you, because with this level of computational photography, your agency is basically limited to choosing what to frame and when. The iPhone does the rest of the work.

This is why computational photography’s advancements tend to be praised by those who have a more utilitarian approach to photography, and tend to be ignored or criticised by those who have a more artistic and, er, human-centered approach to photography. Both are valid approaches, don’t get me wrong. The wrong attitude is, perhaps, to consider your approach better than the other.

But let’s go back to Chayka’s article. The most thought-provoking point, in my opinion, is the emphasis given to one specific aspect of the newer iPhones’ computational photography — the mechanisation, the automation, the industrial pre-packaging of a ‘good-looking’ or ‘professional-looking’ photo. Much like processed foods produced on an industrial scale, which all look and taste the same, computational photography applies a set of formulas to what the camera sensor captures in order to produce consistently good-looking results. The article’s header animation summarises this clearly: a newer iPhone passes in front of a natural-looking still life with flowers in a vase, and for a moment you can see how the iPhone sees and interprets that still life, returning a much more vibrant, contrasty scene. Certainly more striking than the scene itself, but also more artificial and less faithful to what was actually there.

That’s why, in Chayka’s view, his iPhone 7 took ‘better’ photos than his iPhone 12 Pro. It’s not a matter of technical perfection or superiority. Both the camera and the image signal processor of the iPhone 7 are clearly technically much less capable than the iPhone 12 Pro’s, and Chayka is not arguing otherwise:

On the 7, the slight roughness of the images I took seemed like a logical product of the camera’s limited capabilities. I didn’t mind imperfections like the “digital noise” that occurred when a subject was underlit or too far away, and I liked that any editing of photos was up to me. On the 12 Pro, by contrast, the digital manipulations are aggressive and unsolicited.

In other words, in Chayka’s eyes, the camera of the iPhone 7 allowed him to be more creative and more in control of the photographic process precisely because it was ‘less smart’ and less overwhelmingly ‘full machine auto’ than the camera array of the 12 Pro. And he’s not alone in that:

David Fitt, a professional photographer based in Paris, also went from an iPhone 7 to a 12 Pro, in 2020, and he still prefers the 7’s less powerful camera. On the 12 Pro, “I shoot it and it looks overprocessed,” he said. “They bring details back in the highlights and in the shadows that often are more than what you see in real life. It looks over-real.”

What Fitt says here is something I only noticed recently when taking evening and night shots with a loaned iPhone 13 Pro. When first sharing my initial thoughts on computational photography, I wrote:

Smartphone cameras have undoubtedly made noticeable progress with regard to image fidelity, and […] soon we’ll reach a point where our smartphones achieve WYSIWYG — or rather, What You Get Is Exactly What You Saw — photography.

But it’s not that at all. Especially with low-light photography, what these newer iPhones (but also the newer Pixels and Samsung flagships) return are not the scenes I was actually seeing when I took the shot. They are enhanced renditions that often reveal details that were there even though you couldn’t see them. Sometimes the image is so brightened up that it doesn’t even look like a night shot; it looks more like something you would normally obtain with very long exposures. And again, some people like this. They want a good capture of that great night at a pub in London or a bistro in Paris, and they want their phone to record every detail. I have a different photographic intent, and I prefer night shots to look like night shots, even if it means losing shadow detail, even if it means film or digital grain.

Another great quote in Chayka’s article is here (emphasis mine):

Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal. Gregory Gentert, a friend who is a fine-art photographer in Brooklyn, told me, “I’ve tried to photograph on the iPhone when light gets bluish around the end of the day, but the iPhone will try to correct that sort of thing.” A dusky purple gets edited, and in the process erased, because the hue is evaluated as undesirable, as a flaw instead of a feature. The device “sees the things I’m trying to photograph as a problem to solve,” he added.

Again, it’s clear that computational photography is polarising: people who want to be more in control of their photographic process loathe the computational pre-packaging of the resulting photos. Happy snappers whose sole goal is to get the shot and have their shots consistently nice-looking are very much unbothered by computational photography. It’s less work, less editing, and in some cases better than what they could achieve if given a traditional camera.

The problem, as far as I’m concerned, is the approach of those who happily take advantage of all the capabilities of computational photography but want to pass off the resulting photos as the product of their creative process. They always shoot on ‘full machine auto’, yet they have artistic ambitions. As Chayka points out, We are all pro photographers now, at the tap of a finger, but that doesn’t mean our photos are good.

He continues (emphasis mine):

After my conversations with the iPhone-team member, Apple loaned me a 13 Pro, which includes a new Photographic Styles feature that is meant to let users in on the computational-photography process. Whereas filters and other familiar editing tools work on a whole image at once, after it is taken, Styles factors the adjustments into the stages of semantic analysis and selection between frames.
[…]
The effects of these adjustments are more subtle than the iPhone’s older post-processing filters, but the fundamental qualities of new-generation iPhone photographs remain. They are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning.

Of course, there is no fixed formula or recipe for classifying a photo as artistic or not. For some, the more manual intervention in the photographic process, the more artistic the result can claim to be. I’m not necessarily against some form of automated assistance when taking photos with artistic intent or ambition. Things like autofocus, and even shooting in Program mode, can be crucial when engaging in street photography, for example. But when shooting with a film camera in Program mode, the camera may have full control over the exposure, yet the final look is still up to the photographer, who chooses what film to use and how to ‘push’ it while shooting or afterwards in the darkroom. A modern iPhone’s computational photography capabilities do much more than this. The phone is responsible for practically everything in a photo taken as-is without any editing, from the photo’s overall look down to the addition of otherwise imperceptible details.

Again, thinking about those low-light shots I took with an iPhone 13 Pro, the only thing I did was frame the scene and decide when to shoot. What came out was a nice photo to look at, but it didn’t feel ‘mine’, if you know what I mean. Maybe shooting ProRAW and then editing the photo to my taste would have felt more artisanal, if you will, but I always go back to this article by Kirk McElhearn, Apple’s new ProRAW Photo Format is neither Pro nor RAW. The way I do my photography, my iPhone is like an instant camera, no more, no less. If I have to shoot RAW, I’d rather use one of my many digital cameras (or film cameras, for that matter).

Not long ago, a photographer friend of mine succinctly remarked, All the photos taken with current flagship phones look like stock photos to me. And stock photos are great, perfect for their purpose, but you won’t find them hanging in an art gallery.

I’ll reiterate. If you’ve read Chayka’s piece and your takeaway is that he argues that an iPhone 7 is better than an iPhone 12 Pro at taking photos, you’re missing the point. He’s saying that, in a way, the limitations of the iPhone 7’s camera were more effective at stimulating his creativity and gave him more control over the final photo, while the iPhone 12 Pro behaves more like a photographic know-all, thanks to all the machine learning smarts that come built in. That’s why he asks whether iPhone cameras have become too smart. He doesn’t necessarily advocate making ‘less smart’ iPhones, but making iPhones that let users disable the smart camera features if they so choose. I agree with the sentiment, and I very much agree with Nick Heer when he notes:

Right now, the iPhone’s image processing pipeline sometimes feels like it lacks confidence in the camera’s abilities. As anyone who shoots RAW on their iPhone’s camera can attest, it is a very capable lens and sensor. It can be allowed to breathe a little more.

First impressions after the ‘Peek Performance’ Apple event

Tech Life

Sometimes Apple’s one-hour recorded events feel a bit like compressed archives, with lots of stuff to unpack. And several past events over the last few years contained some controversial element that made me write paragraph after paragraph of ranting criticism (the terrible keyboards that plagued Mac laptops for four years; the first appearance of a notch, with the iPhone X; the unnecessarily thin design of the M1 24-inch iMac; the second appearance of a notch, this time on the new 14- and 16-inch MacBook Pros; and so on). But this ‘Peek Performance’ event was the first in a long time where I felt there was nothing ‘wrong’ or controversial, at least for me.

Apple TV+

I’m sure there are going to be great films and series in there. But that all-encompassing trailer montage was so packed it ended up not telling me anything or piquing my interest in particular (save maybe for Macbeth). And when they mention sports, especially baseball, I just tune out. Sorry, baseball fans, nothing personal.

The third-generation iPhone SE

Awarded the title of The Meh Phone by basically every tech YouTuber, this is actually my favourite iPhone at the moment. The design is still the same as the second-generation iPhone SE’s, and the older iPhone 8’s, and I frankly don’t get the hate. This is not the iPhone line where Apple is innovative. This is the iPhone line where Apple is price-competitive, and where Apple still pleases people who love the smaller size and the conservative design. Like yours truly.

I’m still using an iPhone 8 as my main phone, and in 2020 I was very close to getting a second-generation iPhone SE, but it still felt too early to upgrade, and even today the iPhone 8 is plenty for my needs. If you don’t rely on an iPhone for your photography, and just use it for taking quick snaps, it’s hard to see why you should invest so much money in a flagship iPhone whose camera array and photo/video features are essentially what make it a flagship.

I very much appreciate that Apple is still using the design of the iPhone 8 for the SE line. I don’t care for FaceID and much prefer TouchID for authentication, and I very much enjoy an iPhone without a notch. So, since it now has an A15 Bionic chip, 5G connectivity, a better camera, a slightly better battery, and will be supported for many years, it’s extremely likely that the iPhone SE 3 will be my next phone. 

The speed-bumped iPad Air

The new iPad Air is essentially the same as the previous iPad Air, but it’s now equipped with an M1 chip, just like the more expensive iPad Pro. If you’re in the market for an iPad right now, then it’s hard not to consider this new fifth-generation iPad Air. It’s still $599 in its base configuration (64 GB of storage, Wi-Fi only), while the base 11-inch iPad Pro (128 GB, Wi-Fi only) is $799. And as I’m reading the feature comparison between these two devices on Apple’s site, there are only a handful of features the iPad Air lacks compared to the Pro:

  • It only has one back camera module, and the front camera lacks TrueDepth technology
  • It doesn’t have ProMotion
  • Its 5G connectivity doesn’t support mmWave
  • It’s only available in two storage capacities, 64 and 256 GB
  • Maximum brightness is 500 nits (versus the 600 of the iPad Pro)
  • It’s only available with 8 GB of RAM (no 16 GB RAM option)
  • It doesn’t feature ‘Audio zoom’ (whatever that is) and Stereo recording
  • Its USB‑C connector doesn’t support Thunderbolt/USB 4
  • Its front camera neither supports Portrait mode with advanced bokeh and Depth Control, nor Portrait Lighting
  • It doesn’t feature FaceID (has TouchID instead)
  • It has two speakers versus the four speakers in the iPad Pro

It looks like a long list at first glance, but I’m sure many people will be fine with the iPad Air’s camera system, its ‘simple’ USB‑C connector, and its 8 GB of RAM.

Above I said that this iPad Air, when compared with the iPad Pro, is a good deal right now, because if you look at the game Apple is playing with chips, devices, and performance, it’s clear that when the company introduces the next-generation iPad Pro it will feature an even faster processor.

But still, as I wrote on Twitter, I wonder what they’re going to do with the next iPad Pro. An even faster chip? Do we need even faster iPads for what they do? Then I added: I’m waiting for the moment when you’ll go to an Apple Store, choose a chip, then choose the shape of the device you want the chip in. Because the shape will be the only differentiating factor. I was being a little hyperbolic here, but the fact is that Apple’s chips across the iOS and Mac lineups are delivering a degree of speed and performance that is very rapidly reaching a point where differences can only be measured with specific benchmark tools. In sci-fi terms, it’s like being on a ship that is always travelling at faster-than-light velocity.

And just as I was mulling over these thoughts, Apple introduced the new M1 Ultra chip, which is essentially two M1 Max SoCs fused together. Just going over its specifications, the projected performance is basically unfathomable. In everyday use, you essentially interact with a computing environment where everything is instant.

But this is the hardware side. On the software side, we have the current iterations of iOS and Mac OS. And frankly, iPadOS doesn’t even know where to begin harnessing that sheer hardware power. Many videographers could easily edit their stuff just using an iPad Pro or even this new iPad Air… but there is no Final Cut Pro for iPadOS. You’re stuck with iMovie. It’s a bit like having a PlayStation 5 where you can basically play only Pong and other old Atari 2600 games. 

At least nobody on stage said that annoying phrase Apple executives have often recycled, that goes like, We can’t wait to see what you’ll create with this device. Because by now the retort — from developers and users alike — comes easily: We could do a lot more stuff if you were less capricious with your App Store rejections…

The Mac Studio

I didn’t really heed rumours of a new desktop Mac that wasn’t the iMac or the Mac mini, perhaps because rumours of such a Mac have been spreading for several years and nothing had materialised. If I hadn’t seen a last-minute news article hinting at this new Mac with the working name ‘Mac Studio’, the reveal yesterday would have really caught me by surprise.

But hey, it’s here, it materialised. Like Marques Brownlee, I too was wondering whether we should consider the Mac Studio a ‘mini Mac Pro’ or a ‘Pro Mac mini’, but since John Ternus (Apple’s Senior VP of Hardware Engineering) hinted at the very end of his segment that there’s still one Mac left, a new Mac Pro, before the Apple Silicon transition is complete, I guess that means the Mac Studio is indeed a Pro Mac mini.

I like everything about the Mac Studio, from its form factor to the abundance of ports (including two USB‑A ports!), from its performance to its price — yes, even the higher-tier configuration starting at $3,999 seems a good value considering the beyond-astounding performance afforded by the new M1 Ultra chip. 

As with the iPhone 8 I mentioned earlier, my current Mac setup is still serving my needs quite well, and for now there’s nothing pressing me to upgrade. My desktop Mac is a 2017 21.5‑inch 4K iMac, and when I’m out and about I’ll take either my 2015 13-inch retina MacBook Pro or my 2013 11-inch MacBook Air. The Mac Studio is absolutely overkill for my needs and workflows, for which the current M1 Mac mini or 24-inch iMac would probably suffice. But since I don’t upgrade Macs frequently, when I do I tend to look for a machine with a certain amount of future-proofing, so that it can last me many years. Given its specifications, the Mac Studio is the perfect candidate, even in its entry-level configuration. And when the need for an Apple Silicon Mac laptop arises down the road, I can always get a second-hand M1 MacBook Air.

But back to what John Ternus said:

And they [the Mac Studio and Studio Display] join the rest of our incredible Mac lineup with Apple Silicon, making our transition nearly complete, with just one more product to go — Mac Pro. But that is for another day. 

This made me wonder. If the only Mac left to complete the architecture transition is the Mac Pro, what about the 27-inch iMac/iMac Pro? 

On the one hand, if Apple sees the Mac Studio plus the 27-inch Studio Display as the natural replacement for the 27-inch iMac and the 27-inch iMac Pro, then it’s kind of weird that the M1 iMac is still referred to as the 24-inch iMac. It could be a way to differentiate it from the previous 21.5- and 27-inch Intel models, but all those iMacs, including the iMac Pro, are now unavailable for purchase on Apple’s website. In practice, the 24-inch iMac is the only iMac you can buy today.

On the other hand, it could be safe to assume that Apple may consider a 27-inch Apple Silicon iMac as a variant of the 24-inch. Since I believe that with the M1 24-inch iMac they’ve painted themselves into yet another corner, design-wise, I’m starting to think that yes, a new 27-inch iMac might appear, but it won’t be an iMac Pro replacement. The Mac Studio is the iMac Pro replacement. The 27-inch iMac that might appear will essentially be a bigger iMac, maybe with a faster ’M’ chip, and that’s it. 

Why did I say that Apple has painted itself into another corner, design-wise? Because when you produce an incredibly thin 24-inch iMac, chances are that a 27-inch variant will have to be equally thin and retain the same form factor and design choices, for consistency. And space inside that 24-inch iMac is at a premium. Sure, a 27-inch chassis is bigger, but not by that much. So the question is: can a new 27-inch iMac offer pro performance and capabilities in a shape as thin as the 24-inch iMac? I’m not sure, especially from a thermal standpoint.

So, to sum up: for now my theory is that yes, we may see a new 27-inch iMac someday, but it won’t be an iMac Pro, just a prosumer, better version of the current M1 24-inch model. The Mac Studio is the new iMac Pro. And a base-configuration Mac Studio paired with a base-configuration Studio Display ends up costing less than what the base iMac Pro cost in 2017.

Studio Display

Putting aside the esoteric beast that is the 2019 Pro Display XDR, the 27-inch 5K Apple Studio Display presented yesterday is the first affordable standalone display Apple has produced since the Thunderbolt Display in 2011, eleven years ago. After the Mac Studio, it was another nice surprise. 

If you want the specs dump, here they are. The details that stood out most for me were the presence of an A13 Bionic chip inside, which allows the display to offer a high-quality 12-megapixel ultrawide camera with the Centre Stage feature, a speaker system that supports Spatial Audio, and “Hey Siri”. Another interesting thing is that the Studio Display is equipped with three USB‑C ports and one Thunderbolt 3 port capable of charging any connected MacBook, and even fast-charging the 14-inch MacBook Pro.

One slightly puzzling detail for me is the stand options. You can order the display with the ‘default’ tilt-adjustable stand, or opt for a tilt- and height-adjustable stand, or choose no stand at all and order the Studio Display with just a VESA mount if you plan to use it with a monitor arm.

Now, the tilt- and height-adjustable stand works similarly to the stand of the Pro Display XDR, and choosing this option at purchase increases the price of the Studio Display by $400. I understand that it’s a more complex piece of machinery than the regular tilt-adjustable stand, but if you’re looking for better and more flexible tilt and height adjustability, a more pragmatic option could be to choose the VESA-mount configuration at no additional cost and attach the Studio Display to a monitor arm. There are decent arms that cost less than $400.

I think that, all in all, this is a good display that fills a void an increasing number of users had been feeling. Not that the LG UltraFine 5K Display was a bad solution, but it seems that the extra $300 you pay for the Studio Display is well spent, given its features and the integration you get with a first-party product. Its base price of $1,599 doesn’t seem that expensive; as they were presenting it yesterday, I honestly thought the starting price would be more like $1,899.

However, as I was watching Dave Lee’s video about his first impressions of the Mac Studio and Studio Display, he had this to say about the Display in his conclusion:

So, this display is 60 Hz, there’s no ProMotion, it’s not mini-LED, it’s got no HDR, it can’t get super bright like the XDR Display that can hit 1000 nits sustained — 1600 at the top end. This is a 600-nits panel; I don’t know if it can go higher, but that’s the listed [value] 600 nits. And that means no real HDR. And on a creative display — like they’re showing in the marketing materials where they got people making movies on this thing — I felt like they would have gone brighter with this panel. […]

A 60 Hz panel, with no HDR, no FaceID, for $1,600 without even height adjustability… hmmm, that’s steep. 

Given that Dave is a creative who works with video at a professional level, he’s certainly more qualified than I am to make this kind of observation and he may have a point here. Of course Apple had to make some compromises, as it wouldn’t have been possible to have a $1,600 27-inch display with the same characteristics as the Pro Display XDR, and I suspect the Studio Display will be ‘good enough’ for photo/video professionals who don’t need the high-end Pro Display XDR. 

One last thing

It felt a bit strange that Apple didn’t say a word about the war in Ukraine. Maybe it was too late to add a recorded statement, but a mention of their support could have been inserted as an intro or outro slide. Just a thought.