So long, iPod. You’ll be missed.

Tech Life

iPod-love

Today, Apple announced the discontinuation of the last available iPod, the seventh-generation iPod touch (introduced in May 2019). 

It is the end of an era. Those who know me are probably expecting, at this point, a long-winded tirade about how Apple is leaving behind yet another important piece of its history, or about how this is just another money-making tactic to drive the sales of iPhones and HomePods, and so forth.

You’re going to be disappointed. For the way a lot of people consume music today (yes, I chose consume purposefully), a device like the iPod touch doesn’t make much sense anymore. In fact, I’m genuinely surprised the iPod touch has lasted this long under Cook’s administration.

The iPod has been around for a little more than 20 years, which is a very respectable run for a device that truly changed the way people listen to music, and that managed to keep its appeal and fun factor by morphing into different shapes over the years. It was probably the first Apple device to be loved by die-hard Apple fans and non-Apple users alike. Our household is full of iPods, which both I (a long-time Apple user) and my wife (a long-time non-Apple user) have enjoyed over the years. My first iPod was a 10 GB third-generation model I purchased in 2003. My wife’s first iPod was a 4 GB blue iPod mini (2004). The hard drives of both these iPods failed a few years ago, but I managed to upgrade them with CompactFlash cards, so now my third-generation iPod has 16 GB of storage and the mini has 8 GB. They still have their original batteries, and on a full charge they still manage about 2–3 hours of non-continuous playback. 

The first-generation iPod touch has a special place in our household. When the first iPhone was introduced, it wasn’t available in many countries outside the US. I had to wait until September 2008 to get my first iPhone, and it was the iPhone 3G. But the first-generation iPod touch, launched in 2007, was indeed available here in Europe, and so when my brother-in-law gifted it to my wife, it was our very first hands-on experience with the Multi-touch interface and the operating system of the iPhone.

The iPod shuffle and the nano were two other lines we’ve loved and still love a lot: I have a second- and third-generation shuffle, and a third- and a seventh-generation nano; my wife has two second-generation shuffle models, and a sixth- and seventh-generation nano. And my brother-in-law even has an iPod Hi-Fi.

Today, like many other people, my family enjoys music on mobile devices mainly via streaming services (Spotify, in our case); and yet, we still use these smaller iPods when out and about. And here comes the only point I wanted to make with this brief piece.

As I said at the beginning, a device like the iPod touch is rather redundant for the way we consume music nowadays. However, I think a device like the iPod shuffle still makes a lot of sense. Its main characteristics, what made it an ingenious and very successful device back then, still make it an interesting and appealing device today:

  • Its size and weight
  • Its design: the iPod shuffle is effectively an unobtrusive, wearable device
  • Its lack of UI and the concept of filling it with songs you then listen to randomly (or in sequence, if you prefer)
  • Its low price

Listening to music with an iPod shuffle is still (and can still be) a fun experience. You can create the digital equivalent of a mixtape, load it on your shuffle, clip the shuffle to your shirt/jeans/jacket, and then you can go out and listen to music without even having to touch the device, unless you need to change volume or skip a track. It’s basically a hands-free device that disappears on you. If Apple made a new iPod shuffle with Bluetooth, the invisibility factor would be even higher, since you wouldn’t even have the earphones’ cable around you to remind you that you are wearing an iPod. It would still be a nice device for commuting, or jogging, or during a workout.

Sure, you might say that these uses have now been taken over by the Apple Watch or other smartwatches, but an Apple Watch will cost you anywhere from $199 to more than $1,000. An iPod shuffle would be a $50 device. If you’re a casual user who just wants some music while out and about, jogging, etc., and doesn’t use a smartwatch, a little wearable device like the iPod shuffle could still be your cup of tea. But maybe wanting a fun, inexpensive, wearable, colourful device from today’s Apple is asking too much. Here, have an AirTag instead.

Raw power alone is not enough

Software

Nick Heer, a few days ago, posed a question:

This is a good and wide-ranging interview that dances around a question I have been thinking about for a while now: what capabilities do high-performance products like these [the Mac Studio] unlock for a creative professional? It is great to see how much faster they are at compiling applications or rendering video, but I wonder what new things people will attempt on machines like these which may have been too daunting before. 

New applications, new endeavours, are certainly made possible by technological advancements in hardware, chip design and engineering. I’m looking at my Power Mac G4 Cube on this other desk. It was introduced 22 years ago; it has a 450 MHz CPU, 1.5 GB of RAM, and a 60 GB spinning hard drive. Its graphics card has 16 MB (megabytes) of memory. When you look at the specs of an M1 Ultra Mac Studio, you get a pretty good idea of the progress made in 22 years when it comes to storage, memory, graphical & computational power, and overall speed and responsiveness. A rendering job that takes a new Mac Studio a couple of minutes would probably take this poor G4 Cube a whole day to compute — provided it could even do it in the first place.

But there’s another crucial thing to consider: software. There’s always a car analogy when talking about computers, and this time it’s no different — software is the fuel in this analogy. You can have an astoundingly powerful, astoundingly energy-efficient engine that takes the car to 300 km/h in 2 seconds. But without fuel, the car won’t go anywhere.

However, software in a computer system does more than just make the engine run. It also gives the system a purpose, a direction. It gives the system applications, both in the sense of software programs and in the sense of uses for the machine.

Without innovation in software, all we’re doing with these new powerful machines is essentially the same things we were doing 20 years ago on PowerPC G4 and G5 computers, but faster and more conveniently. Granted, that is progress, especially in fields involving CPU- and GPU-intensive tasks that benefit greatly from having lots and lots of calculations done in the shortest possible time.

But progress can’t be just about quantitative aspects of computing, as great and beneficial as they are. What new applications can an amazing M1-Ultra-powered Mac Studio unlock, if there are no new types of software applications that could provide new directions and uses?

This is the personal beef I have with tech innovation today, which I feel still revolves around the concept of ‘reinventing the wheel and making it spin faster’. I might be wrong on this, and it might just be an inaccurate subjective impression, but today I sense a distinct dearth of vision when it comes to what a computer can do. While the sheer raw power of computers has increased by orders of magnitude in the last 30 years, the range of applications (in both senses) for a computer hasn’t increased or spread in a comparable way. 

(If you’re thinking, But what about AR/VR and the Metaverse, for example? — you know that these concepts are decades old, right? And that their applications are only underwhelmingly better than what was produced in the 1990s? And that the user interface and interaction hurdles to make these concepts work really seamlessly haven’t changed that much since?)

This reflection ties in with what I was talking about in my two pieces (see here and the follow-up here) on Mac software stagnation. These past few years — after a period of Mac hardware stagnation and hardware design fiascos like the MacBook butterfly keyboard and the 2013 Mac Pro — Apple has got back on track and has really, positively pushed the envelope with its in-house designed systems on a chip, first on mobile devices and then finally on Macs. What an iPad Air, an iPad Pro, and even a base M1 Mac can achieve with their M‑class chips is remarkable in terms of raw power (and efficiency). But I’m not seeing the same kind of advancement in software. 

Apple’s first-party applications included with Mac OS are mediocre at best. Their pro apps appear to be maintained rather than developed with advancement in mind, with the possible exception of Final Cut Pro (video professionals, feel free to chime in). Apps that were once good-quality, powerful, and versatile have been neutered into ‘just okay’ or ‘good enough’. The Utilities folder in Mac OS has been slowly but surely depopulated over time. iOS apps with an ingenious premise, like Music Memos, are left behind as flashes in the pan. The consensus about iTunes was that Apple should split it into different apps, so that each could handle specific tasks better than the old monolithic media manager. Apple eventually did split iTunes into different apps, but forgot the second part of the assignment. The result is that I still go back to a Mac with iTunes to handle my media, and I’m not the only one.

Aperture was overall a better application than Adobe Lightroom when the two apps coexisted. Apple could have kept improving Aperture and kept it ahead of Lightroom. Instead they gave up. We now have Photos as the sole ‘sophisticated’ Apple photo tool. Which is neither fish (iPhoto) nor flesh (Aperture).

And then there are two applications I must mention because I’m still profoundly annoyed by their discontinuation: iWeb and iBooks Author. Have I made you raise an eyebrow? Good. Hear me out. 

iWeb certainly had its flaws. It was the typical app with a good premise that was never cultivated properly, never really optimised, never made better, just left to wither. But let’s look at iWeb within a broader context: it’s 2022 — shouldn’t we have a powerful yet simple-to-use WYSIWYG tool for crafting a website? Sure, there are accessible platforms that let you set up a blog with relative ease, and there are simple-enough tools to set up a static site, but a non-tech-savvy person will still find these tools complicated enough to be a bit off-putting. 

The Web has been around for thirty years now; why do HTML, CSS, etc., still exist? That’s hyperbole, but hopefully you get my point. Why aren’t there standardised tools to create online spaces in a perfectly accessible WYSIWYG way? Why do regular people still have to struggle with strings of code and magical syntax to make trivial customisations to the websites they’ve patiently managed to create?

iWeb could have been a great tool, because it had this spirit — the Macintosh spirit — of helping people do hard stuff in a simple, visual, intuitive way. 

iBooks Author wasn’t perfect either, and had some glaring omissions (an ebook authoring tool without proper facilities for handling footnotes is laughable), but it had the potential to become a good application for creating books. By the way, do you know of any good-quality application for making ebooks that is sophisticated, relatively easy to use, with a good UI, and well designed overall? On the Mac, only Vellum comes to mind. On other platforms I honestly have no idea, but I’m not terribly optimistic. Even Vellum requires you to install Kindle Previewer if you intend to publish using Amazon’s formats for the Kindle platform.

iBooks Author could have been overhauled and further developed, but apparently the only professionals Apple knows are in the audio/video departments. What about professional tools for authors and writers? The Pages app? Because that’s what Apple suggested using when it discontinued iBooks Author in 2020 (by which point the app was already on life support). Come on.

I’m not saying that there are absolutely no tools available today for Web development or book design. What I’m saying is that software as an abstract concept has aged worse than hardware in the history of computing. Software today still comes with much more friction than it should, given the general technological advancement of the past 40 years or so. Most programming languages are old. The old foundations are getting more and more impractical for handling modern applications (uses), while the new foundations and new programming tools are still too immature to be an effective replacement or successor. 

And don’t get me wrong — I’m not blaming third-party and indie developers here. They’re working as hard and as well as they can given the increasingly difficult conditions they’re put in, especially those developing for Apple platforms. It’s a maddening scenario: with its unnecessarily tight restrictions in the name of security (theatre), and its capricious and petty App Review checkpoints, Apple seems to be actively obstructing innovation in software. And the company isn’t even doing it to push aside third-party solutions and show off its own software innovations and breakthroughs, because those are increasingly rare sights.

So, again, we have absurdly powerful machines like the Mac Studio, and soon we’ll have the even more mind-boggling Apple silicon Mac Pro — and what kind of software will they run? A handful of professional apps which will hopefully take advantage of these machines’ capabilities to do the same things professional Macs did twenty years ago, ten years ago, but better and faster. The question remains: what kind of software innovation will these impossibly powerful Macs unlock or facilitate? What kinds of new applications (uses) will these Macs allow? I have no idea. And I have no idea whether we’ll see anything moving in this direction.

Apple’s chip and hardware advancements have inspired the competition (Intel) to do better, and that’s a great thing. On the software side, I’ve seen very little from Apple that could be considered remotely inspirational. What I’ve seen are platform management techniques that push things like subscriptions and lock-in, and generally toxic gatekeeping behaviour. What I’ve seen is an operating system like Mac OS — based on strong UNIX foundations and rigorous, well-thought-out human interface guidelines — become a brittle, hollow shell, with questionable UI design choices and bugs that get dragged from one iteration to the next. When Apple’s own software has generally worsened over time; when they treat third-party developers as a necessary nuisance to be begrudgingly dealt with on a regular basis — instead of, you know, actually celebrating them and inspiring them to write even better software for the Apple ecosystem; when their insistence on security through lock-down and lock-in leads to an ecosystem whose overall thriving is stifled at worst and corralled at best… how can Apple be an inspirational force in software?

That New Yorker article on computational photography

Handpicked

The article in question is Have iPhone cameras become too smart?, written by Kyle Chayka for The New Yorker. It was brought to my attention first because both Nick Heer and John Gruber linked to it and shared their thoughts, but also because some readers — perhaps remembering my stance on computational photography (here and here) — thought I might be interested in reading a somewhat similar take.

What’s interesting to me is that a lot of people seem to have missed the central point of Chayka’s piece, including Gruber. Almost everyone who wrote to me about this article asked (probably rhetorically): Does an iPhone 7 really take better photos than an iPhone 12 Pro?

The answer is — it depends. It depends on what better means to you. It depends on what photography means to you.

From what I understood by reading the article, Chayka’s main question could be paraphrased like this: Are all good-looking photos good photos? Or: Are all professional-looking photos professionally taken photos? The answer here is complex, and cannot be objective.

If you’re someone using your smartphone as your sole camera, and your photographic intent is just to capture memories by taking instant snaps, then you’ll appreciate any computational photography advancement provided by iPhones for the past four years or so. You’ll want an iPhone 12 Pro over an iPhone 7 because, for your purposes, it’ll take better looking photos for you most of the time.

I have carefully worded that last sentence: the iPhone will take ‘better looking’, more eye-pleasing photos for you, because with this level of computational photography, your agency is basically limited to choosing what to frame and when. The iPhone does the rest of the work.

This is why computational photography’s advancements tend to be praised by those who have a more utilitarian approach to photography, and tend to be ignored or criticised by those who have a more artistic and, er, human-centered approach to photography. Both are valid approaches, don’t get me wrong. The wrong attitude is, perhaps, to consider your approach better than the other.

But let’s go back to Chayka’s article. The point that is the most thought-provoking, in my opinion, is the emphasis given to one specific aspect of the newer iPhones’ computational photography — the mechanisation, the automation, the industrial pre-packaging of a ‘good-looking’ or ‘professional-looking’ photo. Much like with all processed foods produced on an industrial scale, which all look and taste the same, computational photography applies a set of formulas to what the camera sensor captures, in order to produce consistently good-looking results. The article’s header animation summarises this clearly: a newer iPhone passes by a natural looking still life with flowers in a vase, and for a moment you can see how the iPhone sees and interprets that still life, returning a much more vibrant, contrasty scene. Certainly more striking than the scene itself, but also more artificial and less faithful to what was actually there.

That’s why, in Chayka’s view, his iPhone 7 took ‘better’ photos than his iPhone 12 Pro. It’s not a matter of technical perfection or superiority. Both the camera and the image signal processor of the iPhone 7 are clearly technically much less capable than the iPhone 12 Pro’s, and Chayka is not arguing otherwise:

On the 7, the slight roughness of the images I took seemed like a logical product of the camera’s limited capabilities. I didn’t mind imperfections like the “digital noise” that occurred when a subject was underlit or too far away, and I liked that any editing of photos was up to me. On the 12 Pro, by contrast, the digital manipulations are aggressive and unsolicited.

In other words, in Chayka’s eyes the camera of the iPhone 7 allowed him to be more creative and more in control of the photographic process precisely because it was ‘less smart’ and less overwhelmingly ‘full machine auto’ than the camera array of the 12 Pro. And he’s not alone in that:

David Fitt, a professional photographer based in Paris, also went from an iPhone 7 to a 12 Pro, in 2020, and he still prefers the 7’s less powerful camera. On the 12 Pro, “I shoot it and it looks overprocessed,” he said. “They bring details back in the highlights and in the shadows that often are more than what you see in real life. It looks over-real.”

What Fitt says here is something I only noticed recently when taking evening and night shots with a loaned iPhone 13 Pro. When first sharing my initial thoughts on computational photography, I wrote:

Smartphone cameras have undoubtedly made noticeable progress with regard to image fidelity, and […] soon we’ll reach a point where our smartphones achieve WYSIWYG — or rather, What You Get Is Exactly What You Saw — photography.

But it’s not that at all. Especially with low-light photography, what these newer iPhones (but also the newer Pixels and Samsung flagships) return are not the scenes I was actually seeing when I took the shot. They are enhancements, often showing what is there even when you couldn’t see it. Sometimes the image is so brightened up that it doesn’t even look like a night shot — more like something you would normally obtain with a very long exposure. And again, some people like this. They want a good capture of that great night at a pub in London or at a bistro in Paris, and they want their phone to capture every detail. I have a different photographic intent, and prefer night shots to look like night shots, even if it means losing shadow detail, even if it means film or digital grain.

Another great quote in Chayka’s article is here (emphasis mine):

Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal. Gregory Gentert, a friend who is a fine-art photographer in Brooklyn, told me, “I’ve tried to photograph on the iPhone when light gets bluish around the end of the day, but the iPhone will try to correct that sort of thing.” A dusky purple gets edited, and in the process erased, because the hue is evaluated as undesirable, as a flaw instead of a feature. The device “sees the things I’m trying to photograph as a problem to solve,” he added.

Again, it’s clear that computational photography is polarising: people who want to be more in control of their photographic process loathe the computational pre-packaging of the resulting photos. Happy snappers whose sole goal is to get the shot and have their shots consistently nice-looking are very much unbothered by computational photography. It’s less work, less editing, and in some cases better than what they could achieve if given a traditional camera.

The problem, as far as I’m concerned, is the attitude of those who happily take advantage of all the capabilities of computational photography but want to pass off the resulting photos as a product of their creative process. They always shoot on ‘full machine auto’, yet they have artistic ambitions. As Chayka points out, We are all pro photographers now, at the tap of a finger, but that doesn’t mean our photos are good.

He continues (emphasis mine):

After my conversations with the iPhone-team member, Apple loaned me a 13 Pro, which includes a new Photographic Styles feature that is meant to let users in on the computational-photography process. Whereas filters and other familiar editing tools work on a whole image at once, after it is taken, Styles factors the adjustments into the stages of semantic analysis and selection between frames.
[…]
The effects of these adjustments are more subtle than the iPhone’s older post-processing filters, but the fundamental qualities of new-generation iPhone photographs remain. They are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning.

Of course, there is no fixed formula or recipe for classifying a photo as artistic or not. For some, the more manual intervention in the photographic process, the more artistic the result can claim to be. I’m not necessarily against some form of automated facility when taking photos with artistic intent or ambition. Things like autofocus, and even shooting in Program mode, can be crucial when engaging in street photography, for example. But even when shooting with a film camera in Program mode, the camera may have full control over the exposure, yet the final look is always up to the photographer, who chooses which film to use and how to ‘push’ it when taking photos or afterwards in the darkroom. With a modern iPhone, computational photography does much more than this. The phone is responsible for practically everything in a photo taken as-is without any editing, including the photo’s look and the addition of otherwise imperceptible details.

Again, thinking about those low-light shots I took with an iPhone 13 Pro, the only thing I did was frame the scene and decide when to shoot. What came out was a nice photo to look at, but it didn’t feel ‘mine’, if you know what I mean. Maybe shooting ProRAW and then editing the photo to my taste would have felt more artisanal, if you will, but I always go back to this article by Kirk McElhearn, Apple’s new ProRAW Photo Format is neither Pro nor RAW. For the way I do my photography, my iPhone is like an instant camera, no more, no less. If I have to shoot RAW, I’d rather use one of my many digital cameras (or film cameras, for that matter).

Not long ago, a photographer friend of mine succinctly remarked, All the photos taken with current flagship phones look like stock photos to me. And stock photos are great, are perfect for their purposes, but you won’t find them hanging in an art gallery.

I’ll reiterate: if you’ve read Chayka’s piece and your takeaway is that he argues an iPhone 7 is better than an iPhone 12 Pro at taking photos, you’re missing the point. He’s saying that, in a way, the limitations of the iPhone 7’s camera were more effective at stimulating his creativity and gave him more control over the final photo, while the iPhone 12 Pro behaved more like a photographic know-it-all, thanks to all the machine learning smarts that come built in. That’s why he asks whether iPhone cameras have become too smart. He doesn’t necessarily advocate making ‘less smart’ iPhones, but iPhones whose smart camera features can be disabled if the user so chooses. I agree with the sentiment, and I very much agree with Nick Heer when he notes:

Right now, the iPhone’s image processing pipeline sometimes feels like it lacks confidence in the camera’s abilities. As anyone who shoots RAW on their iPhone’s camera can attest, it is a very capable lens and sensor. It can be allowed to breathe a little more.

First impressions after the ‘Peek Performance’ Apple event

Tech Life

Sometimes Apple’s one-hour recorded events feel a bit like compressed archives, with lots of stuff to unpack. And several past events over the last few years have contained some controversial element that made me write paragraphs and paragraphs of ranting criticism (the terrible keyboards that plagued Mac laptops for four years; the first appearance of a notch with the iPhone X; the unnecessarily thin design of the M1 24-inch iMac; the second appearance of a notch, this time on the new 14- and 16-inch MacBook Pros; etc.). But this ‘Peek Performance’ event was the first in a long time where I felt there was nothing ‘wrong’ or controversial — for me, at least.

Apple TV+

I’m sure there are going to be great films and series in there, but that all-encompassing trailer montage was so packed it ended up not telling me anything or particularly piquing my interest (save maybe for Macbeth). And when you mention sports, especially baseball, I just tune out. Sorry, baseball fans, nothing personal.

The third-generation iPhone SE

Awarded the title of The Meh Phone basically by all tech YouTubers, this is actually my favourite iPhone at the moment. The design is still the same as the second-generation iPhone SE and as the older iPhone 8, and I frankly don’t get the hate. This is not the iPhone line where Apple is innovative. This is the iPhone line where Apple is price competitive. And where Apple still pleases people who love the smaller size and the conservative design. Like yours truly.

I’m still using an iPhone 8 as my main phone. In 2020 I came very close to getting a second-generation iPhone SE, but it felt too early to upgrade, and even today the iPhone 8 is plenty for my needs. If you don’t rely on an iPhone for your photography, and just use it for taking quick snaps, you have to wonder why you should invest so much money in a flagship iPhone whose camera array and video/photo features are essentially what make it a flagship.

I very much appreciate that Apple is still using the design of the iPhone 8 for the SE line. I don’t care for FaceID and much prefer TouchID for authentication, and I very much enjoy an iPhone without a notch. So, since it now has an A15 Bionic chip, 5G connectivity, a better camera, a slightly better battery, and will be supported for many years, it’s extremely likely that the iPhone SE 3 will be my next phone. 

The speed-bumped iPad Air

The new iPad Air is essentially the same as the previous iPad Air, but it’s now equipped with an M1 chip, just like the more expensive iPad Pro. If you’re in the market for an iPad right now, then it’s hard not to consider this new fifth-generation iPad Air. It’s still $599 in its base configuration (64 GB of storage, Wi-Fi only), while the base 11-inch iPad Pro (128 GB, Wi-Fi only) is $799. And as I’m reading the feature comparison between these two devices on Apple’s site, there are only a handful of features the iPad Air lacks compared to the Pro:

  • It only has one back camera module, and the front camera lacks TrueDepth technology
  • It doesn’t have ProMotion
  • Its 5G connectivity doesn’t support mmWave
  • It’s only available in two storage capacities, 64 and 256 GB
  • Maximum brightness is 500 nits (versus the 600 of the iPad Pro)
  • It’s only available with 8 GB of RAM (no 16 GB RAM option)
  • It doesn’t feature ‘Audio zoom’ (whatever that is) or stereo recording
  • Its USB‑C connector doesn’t support Thunderbolt/USB 4
  • Its front camera neither supports Portrait mode with advanced bokeh and Depth Control, nor Portrait Lighting
  • It doesn’t feature FaceID (has TouchID instead)
  • It has two speakers versus the four speakers in the iPad Pro

It looks like a long list at first glance, but I’m sure many people will be fine with the iPad Air’s camera system, its ‘simple’ USB‑C connector, and its 8 GB of RAM.

Above I said that this iPad Air, when compared with the iPad Pro, is a good deal right now: if you look at the game Apple is playing with chips, devices, and performance, it’s clear that when the company introduces the next-generation iPad Pro, it will feature an even faster processor.

But still, as I wrote on Twitter, I wonder what they’re going to do with the next iPad Pro. An even faster chip? Do we need even faster iPads for what they do? Then I added: I’m waiting for the moment you’ll go to an Apple Store, choose a chip, then choose the shape of the device you want the chip in. Because the shape will be the only differentiating factor. I was being a little hyperbolic, but the fact is that Apple’s chips across the iOS and Mac lineups are delivering a degree of speed and performance that is very rapidly reaching a point where it can only be measured and differentiated with specific benchmark tools. In sci-fi terms, it’s like being on a ship that is always travelling at faster-than-light velocity.

And just as I was mulling over these thoughts, Apple introduced the new M1 Ultra chip, which is essentially two M1 Max SoCs fused together. Just going over its specifications, the projected performance is basically unfathomable. In everyday use, you essentially interact with a computing environment where everything is instant.

But this is the hardware side. On the software side, we have the current iterations of iOS and Mac OS. And frankly, iPadOS doesn’t even know where to begin harnessing all that sheer hardware power. Many videographers could easily edit their material using just an iPad Pro or even this new iPad Air… but there is no Final Cut Pro for iPadOS. You’re stuck with iMovie. It’s a bit like having a PlayStation 5 on which you can basically play only Pong and other old Atari 2600 games. 

At least nobody on stage said that annoying phrase Apple executives have often recycled, that goes like, We can’t wait to see what you’ll create with this device. Because by now the retort — from developers and users alike — comes easily: We could do a lot more stuff if you were less capricious with your App Store rejections…

The Mac Studio

I didn’t really pay attention to rumours of a new desktop Mac that wasn’t the iMac or the Mac mini, perhaps because rumours of such a Mac had been circulating for several years and nothing had materialised. If I hadn’t seen a last-minute news article hinting at this new Mac under the working name ‘Mac Studio’, yesterday’s reveal would have really caught me by surprise.

But hey, it’s here; it materialised. Like Marques Brownlee, I too was wondering whether we should consider the Mac Studio a ‘mini Mac Pro’ or a ‘Pro Mac mini’, but since John Ternus (Apple’s Senior VP of Hardware Engineering) hinted at the very end of his segment that there’s still one Mac — a new Mac Pro — to come before the Apple Silicon transition is complete, I guess that means the Mac Studio is indeed a Pro Mac mini.

I like everything about the Mac Studio, from its form factor to the abundance of ports (including two USB‑A ports!), from its performance to its price — yes, even the higher-tier configuration starting at $3,999 seems a good value considering the beyond-astounding performance afforded by the new M1 Ultra chip. 

As with what I was saying about the iPhone 8 earlier, my current Mac setup is still serving my needs quite well, and for now there’s nothing pressing me to upgrade. My desktop Mac is a 2017 21.5‑inch 4K iMac, and when I’m out and about I’ll take either my 2015 13-inch Retina MacBook Pro or the 2013 11-inch MacBook Air. The Mac Studio is absolutely overkill for my needs and workflows, for which the current M1 Mac mini or 24-inch iMac would probably suffice. But since I don’t upgrade Macs frequently, when I do I tend to look for a machine with a certain amount of future-proofing, so that it can last me many years. Given its specifications, the Mac Studio is the perfect candidate, even in its entry-level configuration. And when the need for an Apple Silicon Mac laptop arises down the road, I can always get a second-hand M1 MacBook Air.

But back to what John Ternus said:

And they [the Mac Studio and Studio Display] join the rest of our incredible Mac lineup with Apple Silicon, making our transition nearly complete, with just one more product to go — Mac Pro. But that is for another day. 

This made me wonder. If the only Mac left to complete the architecture transition is the Mac Pro, what about the 27-inch iMac/iMac Pro? 

On the one hand, if Apple sees the Mac Studio + the 27-inch Studio Display as the natural replacement for the 27-inch iMac and the 27-inch iMac Pro, then it’s kind of weird that the M1 iMac is referred to as the 24-inch iMac. It could be to differentiate it from the previous 21.5- and 27-inch Intel models, but all those iMacs, including the iMac Pro, are now unavailable for purchase on Apple’s website. In practice, the 24-inch iMac is the only iMac you can buy today.

On the other hand, it could be safe to assume that Apple may consider a 27-inch Apple Silicon iMac as a variant of the 24-inch. Since I believe that with the M1 24-inch iMac they’ve painted themselves into yet another corner, design-wise, I’m starting to think that yes, a new 27-inch iMac might appear, but it won’t be an iMac Pro replacement. The Mac Studio is the iMac Pro replacement. The 27-inch iMac that might appear will essentially be a bigger iMac, maybe with a faster ’M’ chip, and that’s it. 

Why did I say that Apple has painted itself into another corner, design-wise? Because when you produce an incredibly thin 24-inch iMac, chances are that a 27-inch variant will have to be equally thin and retain the same form factor and design choices, for consistency. And space inside that 24-inch iMac is at a premium. Sure, a 27-inch chassis is bigger, but not by that much. So the question is: can a new 27-inch iMac offer pro performance and capabilities in a shape as thin as the 24-inch iMac? I’m not sure, especially from a thermal standpoint.

So, to sum up — for now my theory is that yes, we may see a new 27-inch iMac someday, but it won’t be an iMac Pro, just a prosumer, better version of the current M1 24-inch model. The Mac Studio is the new iMac Pro. And its base configuration, paired with a base-configuration Studio Display, ends up costing less than what the base iMac Pro cost in 2017.

Studio Display

Putting aside the esoteric beast that is the 2019 Pro Display XDR, the 27-inch 5K Apple Studio Display presented yesterday is the first affordable standalone display Apple has produced since the Thunderbolt Display in 2011, eleven years ago. After the Mac Studio, it was another nice surprise. 

If you want the specs dump, here they are. The details that stood out most for me were the presence of an A13 Bionic chip inside, which allows the display to offer a high-quality 12-megapixel ultrawide camera with the Centre Stage feature, a speaker system that supports Spatial Audio, and “Hey Siri”. Another interesting thing is that the Studio Display is equipped with three USB‑C ports and one Thunderbolt 3 port that is capable of charging any connected MacBook and even fast-charging the 14-inch MacBook Pro.

One slightly puzzling detail for me is the stand options. You can order the display with the ‘default’ tilt-adjustable stand, opt for a tilt- and height-adjustable stand, or choose not to have a stand at all and order the Studio Display with just a VESA mount if you plan to use it with a monitor arm.

Now, the tilt- and height-adjustable stand works in a similar way to the stand of the Pro Display XDR, and choosing this option at purchase increases the price of the Studio Display by $400. I understand that it’s a more complex piece of machinery than the regular tilt-adjustable stand, but if you’re looking for better, more flexible tilt and height adjustability, a more pragmatic option could be to choose the VESA mount configuration at no additional cost and hook the Studio Display to a monitor arm. There are decent arms that cost less than $400.

I think that, all in all, this is a good display, and importantly it fills a void in this space that an increasing number of users were feeling. Not that the LG UltraFine 5K Display was a bad solution, but the extra $300 you pay for the Studio Display seems well spent, given its features and the integration you get with a first-party product. Its base price of $1,599 doesn’t seem that expensive; as they were presenting it yesterday, I honestly thought the starting price would be more like $1,899.

However, as I was watching Dave Lee’s video about his first impressions of the Mac Studio and Studio Display, he had this to say about the Display in his conclusion:

So, this display is 60 Hz, there’s no ProMotion, it’s not mini-LED, it’s got no HDR, it can’t get super bright like the XDR Display that can hit 1000 nits sustained — 1600 at the top end. This is a 600-nits panel; I don’t know if it can go higher, but that’s the listed [value] 600 nits. And that means no real HDR. And on a creative display — like they’re showing in the marketing materials where they got people making movies on this thing — I felt like they would have gone brighter with this panel. […]

A 60 Hz panel, with no HDR, no FaceID, for $1,600 without even height adjustability… hmmm, that’s steep. 

Given that Dave is a creative who works with video at a professional level, he’s certainly more qualified than I am to make this kind of observation and he may have a point here. Of course Apple had to make some compromises, as it wouldn’t have been possible to have a $1,600 27-inch display with the same characteristics as the Pro Display XDR, and I suspect the Studio Display will be ‘good enough’ for photo/video professionals who don’t need the high-end Pro Display XDR. 

One last thing

It felt a bit strange that Apple didn’t say a word about the war in Ukraine. Maybe it was too late to add a recorded statement, but a mention of their support could have been inserted as an intro or outro slide. Just a thought.

Reactions and feedback on my brief reflection on Mac software stagnation

Software

In general, when I feel I have something to say on a certain subject, I tend to write articles with the goal of being as thorough as I can, at least within the scope I’ve established for the article. This is a habit I’ve formed over time primarily because

  1. Too often I stumble on posts or articles that seem to just share an idea, impression or opinion without elaborating much on it. They’re frustrating reads, because it feels as if the author never wants to commit to the next step — why they think it’s a good idea, where this impression comes from, why their audience should take their opinion on the matter into consideration. A casual example could be a post that goes like, “Apple should implement this feature”, where the ultimate reason is that such a feature would be handy for how the author uses their Apple products. Tunnel vision galore. When people read my articles, I really want to avoid giving them the impression that, technically speaking, I’m talking out of my ass.
  2. I want to be as clear as possible with regard to the subject I’m elaborating on. If sometimes my tone feels pedantic, it’s because I want people to understand exactly what I mean. I still don’t always succeed, but in some cases it turns out that it’s because some people are just incapable of reading long-form pieces, or are so prejudiced about a topic that they’ll never accept my viewpoint no matter how clearly I explain it.

There are times, however, when I feel that I have enough observations on a subject to write an article but that, past a certain threshold, I can’t be as thorough as I’d like, because I’ve hit the ceiling of my knowledge and venturing further would clearly show that I’m out of my depth.

This is what happened with my previous article, A brief reflection on Mac software stagnation, where I could provide a few impressions as a power user and observer, try to explain where those impressions came from, and then stop there, because to venture further without being a developer, without having the technical know-how and perspective of a developer, just felt like a bad idea. Still, I hoped to receive some feedback from developers, so that I could understand whether my impressions were on the right track or not.

The feedback

I noticed some very good reactions and received valuable feedback, which I wanted to share here so that it doesn’t get lost in Twitter’s ephemerality.

1.

Sam Rowlands, a long-time Mac developer, reacted with this short thread on Twitter:

It’s my belief, the App Store has caused irrevocable harm to the Mac software industry. There is ‘sideloading’, but the Mac Media is just a shell of its former self after Apple gutted it, with a bait & switch campaign of affiliate links.

Many indie developers can’t afford the kind of advertising that Apple’s “preferred” developers can. So we’re forced to adopt a Minimize Risk attitude, which reduces indie devs’ incentives to allocate years into building a complex, complete, great Mac application.

Not to mention that [when you] support more than one year macOS version [I think Sam means supporting backwards for more than just the previous year’s Mac OS version], your code ends up littered with minor performance degradation as you need to use this API on this version, that API on that version, and flip the result for version, sometimes these need to done for point releases of the macOS, not just major versions.

All in all, developing for the macOS is not as great as it was 10 years ago, it’s become expensive to maintain a macOS application, which eats into the time an indie can be creative, and eats away at them emotionally.

I believe I’ve watched more indie give up in the last few year, than I’ve met new excited macOS developers, who’re ready to tear up the rule book and bring the Mac back into the spotlight.

I believe Apple needs a CEO upgrade.
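As a non-developer aside, the version-gated branching Sam describes is typically done in Swift with `#available` checks. What follows is just a minimal, hypothetical sketch of the pattern he’s lamenting — the function name and version cut-offs are invented for illustration, not taken from any real app:

```swift
// Hypothetical sketch: one feature, three code paths depending on the
// macOS version the app is running on. Multiply this by every
// OS-sensitive call in an app and maintenance costs grow quickly.
func refreshSidebar() {
    if #available(macOS 13, *) {
        // Use the current API introduced in the latest release…
    } else if #available(macOS 11, *) {
        // …fall back to the older call on the couple of versions before it…
    } else {
        // …and on anything earlier, use the legacy API, sometimes
        // inverting its result so behaviour matches the newer versions.
    }
}
```

Every one of those branches has to be written, tested, and kept working across releases, which is exactly the maintenance burden Sam is describing.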

Sam also pointed me to three relevant articles he has written on his blog over time:

  • No one downloads, no one sees — Sam told me: “[Here is] where I discuss how I feel the App Store has changed in a way to the detriment of indie developers”.
  • Improving Mac app exposure — “My second article was trying to put a positive spin on it, and list some things that indies could do to help promote their apps”.
  • The Mac App Store in 2022 — “The third article is about the changes I feel Apple could make, which would make the App Store a better environment for indie developers and customers”.

In the first of these articles, among other things, Sam says, “While a great many things changed over the next decade, negatively affecting Apple’s indie developer industry, the massive reduction in exposure is a major problem”.

This has got me thinking about how I used to discover Mac software before the Mac App Store was launched in January 2011. Essentially, there were four main channels that led to discovery:

  • Word of mouth, both online and offline. It could be a friend’s recommendation, an email from one of my readers, a message in a mailing list, a mention on Twitter, etc.
  • Computer magazines. It could be an in-depth review, a mention within a particular, larger feature (e.g. The best utilities for compressing files), or simply a freeware or shareware app included on the magazine’s CD-ROM.
  • Online reviews: in Mac-oriented tech websites (like the digital counterparts of the same computer magazines I used to buy — Macworld, Mac User, MacFormat, etc.) or personal blogs. By the way, there used to be many more blogs and sites specifically dedicated to app reviews, curation, and discovery. I’ll always lament the loss of places like AppStorm and AppShopper. (I’ve linked to archived pages so that you get an idea of what I’m talking about.)
  • Web search: a bit of a last resort, but still useful for discovering new applications. If I stumbled onto something interesting and wanted to know more before risking a download or purchase, I could always perform another search for reviews of that app.

Now here’s what I’m thinking. Given that app discovery is currently still terrible in the Mac App Store (the sheer crappiness of App Store search after more than a decade since its launch never ceases to baffle me); given that App Store exposure is still problematic for a lot of indie developers, for the reasons Sam outlined in his article; and finally, given that at least three of the four ways to discover new apps I’ve listed above are still perfectly valid to discover apps today, I’m really wondering if, for a developer, publishing their app on the Mac App Store is worth all the trouble.

Think about it: as a user, how many useful apps have you discovered only via the Mac App Store? I’m not talking about apps prominently featured on the App Store’s home page, of course, but apps you found by searching or actively browsing the App Store. In my case, I think it was just one. I needed an aspect ratio calculator and was pressed for time, so I did a quick search in the App Store; the app I eventually found was mediocre, but it did the job I needed it to do at the time. One app in eleven years of the Mac App Store’s existence is just ludicrous. And honestly, it would still be ludicrous if it were five apps, or even ten.

Go read Sam’s articles now — he makes good points I very much agree with.

2.

Another reaction worth pointing out is Maciej’s, who told me on Twitter:

The sad part is that Catalyst and native iOS apps running on macOS aren’t even the largest offenders. That title goes easily to all those who push “apps” written in Electron and similar. Compared to those, Catalyst is marvellous.

There’s also the matter of crappy documentation. I’m not a developer but this has been a recurring theme for years. Introduction of APFS is a great example. Major change with piss-poor support articles. How can you write advanced software with crap like this?

There’s the lack of effort on Apple’s part too. Why would third parties be interested in developing exceptional software when even the OS maker doesn’t seem to be that interested? Music [the app] is a great example here. There are exceptions (iWork) but that’s what they are, exceptions.

[…] The Mac App Store has been a failure and Apple has largely failed on improving it in any meaningful way. Despite the influx of new Mac users the software discovery process seems to be worse than ever. The crap (normal) people get from the App Store is incredible.

There’s also Big Sur: a visual redesign that nobody asked for. And one I think nobody was super-positive about. If I were a small dev who actually cared about this I would have been seriously annoyed.

Here’s how this process looks like for laypeople, who are completely new to the platform:

  • get a Mac;
  • open Mac App Store;
  • type the name of their favorite iPhone app into the search box;
  • get some terrible knock-off version.

When even die-hard Mac companies refuse to put their products in the App Store (or do so and remove them after some time or put a castrated version in MAS) you know something’s seriously broken. And it’s often not even about the money.

I think that (all faults aside) Setapp has done more good in terms of curating and giving easy access to quality Mac software than Apple over the past few years. If I had a new user in front of me I’d rather they sourced their apps from there rather than MAS, less overall crap.

Again, I pretty much agree with everything Maciej points out here. The Music app (at least for how I typically organise and listen to music on my Macs) is so bad for me that Apple has managed to make me feel nostalgic about iTunes. In fact, I still use iTunes to manage my local music library, and I do so on Macs with older versions of Mac OS. For instance, on my PowerPC Macs running Leopard, iTunes 10.6.3 is a rock-solid app. iTunes 9.2.1 on my PowerPC Macs running Tiger is another very good, very stable version.

It’s funny how we always complained that iTunes had become a bloated, mammoth piece of software, that it simply had too much stuff to manage for its own good, that it needed to be split into several smaller apps to better tackle certain tasks; and yet we ended up with a series of single-task apps that are just mediocre when taken separately, and still of lesser quality than iTunes when taken collectively.

I also agree with Maciej about Setapp, at least when it comes to app curation. I tend to avoid subscription services, but I’m sure that Setapp’s model and pricing are a really good deal for users and participating developers. I’m not a subscriber simply because, figuratively speaking, it’s a buffet that offers much, much more than I can eat.

3.

I confess I was nervous about how developers could respond to my article. I always triple-check every time I mention technical aspects of software development, because I’m not a developer and I’m afraid of basing an impression or an opinion on something that turns out to be technically wrong or misunderstood on my part.

I was particularly honoured by Tyler Hall, who not only appreciated my initial article but also took the time to write a response: Half-assed Mac Apps. It’s a really great piece, and I felt that Tyler added the necessary knowledge and insights to pick up my sort-of unfinished reasoning and bring it to completion.

4.

Just as I was finishing this piece, I noticed another reaction on Twitter, by user teknisktsett:

For the English-speaking market, the Mac app market covers most users’ needs in terms of commercial apps like MS Office, Adobe or Affinity apps and other productivity apps, as well as Utilities. What’s missing is catering to the international market.

When was the last time you made a Mac app to serve the needs of users in India, Thailand, Greece or Italy? Just to name a few examples. Not being in US or UK, I’m pretty tired of yet another clone of a US-catered leisure app measuring how many cups or gallons of water I should drink, or how many miles I ran yesterday. Who knows? I measure in metric kilometers, so to make me a user, I need the metric system support.

I care about carefully-designed native apps, but it’s not the only thing to consider: localization (not AI-translated), cultural diff, metric and imperial — and properly native in Cocoa (using Objective‑C or Swift) or SwiftUI.

Localisation is an important point, and I feel especially bad for not having brought it up myself, given that I’m a translator and localisation specialist. In the section Why choosing a professional translator is important on my Services page, I even say, “A translator helps clients expand their audience: multilanguage versions of texts, websites and localised applications can thus reach a broader, international audience, which is always an advantage”. So I absolutely agree with what teknisktsett is saying here. In a sense, staying English-only limits your user base abroad to those people who know enough English to use your apps, which generally means a demographic largely made up of younger people and/or people with higher education.

However, what I’ve noticed in my 10 years of experience localising applications is that in at least 80% of cases, only medium-to-big software studios can afford the investment of having their application(s) professionally localised. Most of the times I’ve reached out to indie developers proposing a collaboration where I could localise their app into Italian or Spanish (the second most-spoken language in the US), the reaction I got was always the same: I’d love to, but can’t afford your services. And this is not because my fees are particularly demanding. I think it’s more related to that ‘Minimise Risk’ attitude Sam Rowlands mentioned when he responded to me on Twitter.

Because while I’m sure that many developers would be thrilled to offer their apps in more languages than English — the potential for extending their user base is undeniably there — investing money in that is still a risk for them, because the issue of poor discoverability and exposure remains.

Update — About the localisation aspect, Jeff Johnson responded on Twitter:

  1. I can’t provide customer support in anything except English, which makes me wary of localization.
  2. I can’t just localize once and be done. I’m constantly updating my software, web site, and documentation. So it feels like I’d need a permanent translator on staff.

Fair enough, and it further corroborates my experience, that in most cases only medium-to-big software companies can afford the investment of professional localisation.

 

I think that’s it for now. I want to thank all those who have given me valuable feedback on the topic so far, and I’ll definitely be adding other personal thoughts and external contributions in the future. This is a subject that’s particularly close to my heart, because at this stage I think that Mac users — especially power users — do need more than just professional, ultra-fast Macs. Great hardware without great software isn’t enough to make a platform thrive or continue to thrive.