Buttons and dials are not the problem

Handpicked

Kirk McElhearn, in his DPReview article Opinion: Do we really need all those buttons and dials?:

Today’s cameras are computers with lenses, and like computers, they have a plethora of features, far more than any film camera. As with any computer, we need to be able to adjust these many settings. There are menus that allow us to enable, disable, and tweak the many features available, and buttons and dials give us quick access.

But with many modern cameras now offering a dozen or more control points – some customizable with no obvious markings – there’s a risk of overwhelming certain users. More importantly, the sheer complicatedness of digital cameras can get in the way of taking photos.

[…]

There will always be complicated cameras available for those who want the utmost control. But having a plethora of control points doesn’t necessarily make a camera any better – for some, it makes it worse.

I have too many cameras. Still more film cameras than digital cameras, but recently, thanks to a mixture of very good deals found on eBay, and the generosity of some photographer acquaintances and friends, I’ve also been adding quite a few digital cameras to my collection.

Shooting digital is certainly more practical, more instantaneous, and more affordable today, but one of the things I’ve always preferred about film photography is just how well-designed many film cameras are, whether we’re talking of simpler point-and-shoot cameras, rangefinders, or professional SLRs.

Many film cameras have buttons, dials, and controls that are so well-placed you can easily adjust everything manually and take your shot without even moving your eye away from the viewfinder.

The problem with too many digital cameras is not the sheer number of buttons and dials — it’s their user interface in general. It’s their menus and settings. It’s the balance between physical controls and virtual controls. It’s the way user interaction is designed.

With all the photography literature and expertise accumulated so far, it should be easier to design a camera’s user interface; instead, I keep seeing unnecessary complexity.

Buttons and dials should be used for all basic functions: everything a photographer needs to adjust quickly and intuitively. Setting ISO speed, changing white balance, adjusting exposure compensation, locking focus, selecting shooting modes, etc. — all these are functions the user should be able to change without having to hunt for them in a sprawling menu hierarchy.

Yet I’ve used a lot of digital cameras where the manufacturer chose menus over physical controls, either to offer a more minimalistic camera design or perhaps to save costs. The result is cameras that look simpler to use but ultimately are not. Users tend to get more confused by several pages of menu options and settings (often cryptically abbreviated due to space constraints) than by the number of buttons and dials.

An annoying design choice with some digital cameras is that you have both a button/dial and a menu option to access the same function or adjust the same parameter. This just increases confusion and clutters the on-screen interface.

An increasing number of digital cameras today feature a touchscreen, and their main interface relies heavily on touch input. While I do think some touch controls can be handy, especially when filming yourself with a camera with a fully articulating screen, this is another instance where I greatly prefer buttons and dials.

There’s a sort of obsession with touch interfaces today, and manufacturers seem to want to put them everywhere in their products; but they’re not a panacea. They may look cool and futuristic, but from a sheer usability standpoint they’re simply terrible. Cameras and cars are the first examples that come to mind, and they have one rather important thing in common: in both cases you want the operator to be able to perform as many tasks as possible without averting their gaze, from the viewfinder and from the road ahead respectively.

It’s semantically ironic how touch controls, despite the name, are the ones lacking tactile feedback. So, every time you want to change something, move a virtual switch, activate or deactivate a feature, you need to be looking at the screen first. And the problem with many touchscreen cameras is that these controls are generally tiny and not easy to select. Or they’re crammed together, and you may end up changing some other parameter by mistake. Even in the best-case scenario, they slow you down noticeably. Which do you think is more convenient for changing ISO or white-balance settings: repeatedly tapping a small target in the corner of a 3‑inch touchscreen, or turning a dial with your left hand while the updated information appears in the viewfinder and your right index finger stays on the shutter button, ready to take the shot?

So yes, I think we need all those buttons and dials — provided they’re laid out thoughtfully, and provided their arrangement is part of a greater interface design that includes the menu layout and navigation, the balance between direct physical controls and what you set up by navigating menus and submenus, and a clear hierarchy of priorities. Buttons and dials should always prioritise frequently used and frequently changed controls, and access to those functions should always be as direct as possible. You want to push a button or turn a dial and have the associated parameter change on the fly. You don’t want to push a button or turn a dial, have a mini-menu appear, and then have to navigate that menu with the arrow pad and press OK to confirm.

When the arrangement of buttons and dials is well designed and their functions are clearly marked (or very easily memorised), you end up with a better overall interface, which is much preferable to cameras that are heavily menu-based and whose few buttons necessarily have to serve multiple purposes. The latter gives you an interface that slows you down and is harder to memorise.

 

A headroom so high you’ll never see it again

Handpicked

In a linked-list item called Max Headroom, Nick Heer quotes this bit from TechCrunch’s Matthew Panzarino interview with Greg Joswiak and John Ternus about the new iPad Pro:

One of the stronger answers on the ‘why the aggressive spec bump’ question comes later in our discussion but is worth mentioning in this context. The point, Joswiak says, is to offer headroom. Headroom for users and headroom for developers.

“One of the things that iPad Pro has done as John [Ternus] has talked about is push the envelope. And by pushing the envelope that has created this space for developers to come in and fill it. When we created the very first iPad Pro, there was no Photoshop,” Joswiak notes. “There was no creative apps that could immediately use it. But now there’s so many you can’t count. Because we created that capability, we created that performance — and, by the way, sold a fairly massive number of them — which is a pretty good combination for developers to then come in and say, I can take advantage of that. There’s enough customers here and there’s enough performance. I know how to use that. And that’s the same thing we do with each generation. We create more headroom to performance that developers will figure out how to use.

“The customer is in a great spot because they know they’re buying something that’s got some headroom and developers love it.”

Nick then comments:

I buy this argument, particularly as the iPad is the kind of product that should last years. Since the first-generation iPad Pro, iPads have seemed to be built for software and workflows that are two or three years down the road. But the question about the iPad for about that same length of time is less can you? and more would you want to?, and I hope the answer to that comes sooner than a few years out.

I mulled over this bit for a while and have produced a few observations.

1.

I kind of buy that argument too, in the sense that it’s the only possible argument Apple can make at this point. But this headroom Joswiak and Ternus are talking about is getting so ridiculously high that I truly wonder whether the whole thing is starting to lose meaning. This is the absolute polar opposite of planned obsolescence. It’s a form of utter future-proofing that, theoretically, begins to transcend the device itself and the user’s needs.

Suppose a car maker comes out with a new electric car with a battery that lasts one month before you need to charge it, and allows you to easily reach speeds of 500 km/h (310 mph). Wonderful, yes? Imagine how fast you could travel, imagine the performance, the efficiency of this car. The problem is, of course, where on earth could you actually drive this car to maximise that performance? There is an (infra)structural issue that has little to do with the car, with the device, itself. The car does indeed have great power and great efficiency, but nowhere to — realistically, practically — demonstrate it.

And suppose that the context, the infrastructure, the applications (= uses) for that car all do get updated years later, so that you can take advantage of special fast lanes to travel around in that car. Would you buy that car today? Would you keep it for years, using it at maybe 20% of its potential, just waiting for the right opportunity, application, or status-quo update to see it truly shine, to finally get what you paid for? I don’t know. I wouldn’t. Especially if it’s a comparatively costly investment.

2.

Let’s focus again on this bit:

“When we created the very first iPad Pro, there was no Photoshop,” Joswiak notes. “There was no creative apps that could immediately use it. But now there’s so many you can’t count. Because we created that capability, we created that performance…”

Nah. When Procreate (and many other creative apps) was first released — in 2011, ten years ago! — it could capably run on an iPad 2 or iPad 3. I still have (and use) a bunch of drawing/painting apps on my old iPad 3, and they run fairly smoothly, all things considered. I’ve drawn and sketched a lot on that iPad using apps like Paper by FiftyThree (now Paper by WeTransfer), Bamboo Paper, or Penultimate. Pixelmator for iPad was launched in October 2014, in the iOS 8 era, a year before the first iPad Pro was introduced. Pixelmator is such a good app that I can use it on one of my iPhone 4S units running iOS 8.4.1.

Apple may have ‘created that capability’, and certainly all the high-profile creative apps currently available now have the opportunity to introduce features that take advantage of all this sheer hardware power. But the fact that there was no Photoshop before Apple created the first iPad Pro is not because Photoshop wouldn’t have been possible without the first iPad Pro. Adobe could have released Photoshop for the iPad years before any iPad Pro appeared. But they couldn’t be bothered; maybe they thought iOS was still too immature an operating system for such an application. Maybe they were too busy focusing their efforts (and their many apps) on traditional computers. I don’t know.

But I don’t think that Apple’s introducing the first iPad Pro was enough to make Adobe go, Yeah, that’s more like it. That’s what we were waiting for. The first iPad Pro was announced in September 2015. Photoshop for iOS was introduced in late 2019, if I’m not mistaken. Taking four years to release an iOS app which, while powerful, still doesn’t have feature parity with Photoshop for Mac or Windows doesn’t strike me as Adobe scrambling to make it available as soon as the iPad Pro appeared.

Apologies if I’m getting unbearably pedantic here, but I do think Apple’s narrative amounts to You know, the chicken did indeed come before the egg, while I’m rather certain the opposite is true. Creative apps and iOS developers never really waited for Apple; I’ve been purchasing creative apps for iOS since 2008, and what I’ve noticed is that developers in general, and developers of creative apps especially, have always tried to stay ahead of the curve. And none of the iPads I’ve handled in the past ten years have ever really struggled when running such creative apps.

3.

I want to expand on one thing I wrote in my first observation above, when I said that the M1 iPad Pro’s hardware capabilities are a form of utter future-proofing that, theoretically, begins to transcend the device itself and the user’s needs. And this ties into Nick Heer’s observation when he says, But the question about the iPad for about that same length of time is less can you? and more would you want to?.

Hardware-wise, an M1 iPad Pro is essentially a Mac with a touch interface. Software-wise, this incredibly powerful iPad is as capable as a 2014 iPad Air 2 (the oldest iPad model that can run iPadOS 14). There is still, in my opinion, a substantial software design gap preventing iPads from being as flexible as they are powerful. Software-wise, iPadOS still lacks flow. And don’t wave Shortcuts in my face as an objection. Shortcuts are a crutch. A good one, no doubt, but a crutch nonetheless. Software automation can do great things for an operating system, but if an operating system comes to depend on it to be usable, then maybe you have to rethink a thing or two.

iPad is still a device that mostly appeals to people who embraced it as their primary device a while ago. People who have by now grown accustomed to its software quirks. People who have patiently built custom workflows to avoid jumping in and out of three apps, tapping and swiping around, just to complete a task that on a Mac takes two or three keyboard shortcuts, without your hands ever leaving the keyboard.

A creative professional trying to switch from traditional computers to a device like the iPad Pro is — I believe — more interested in the power of the iPad’s interface than in the weight its CPU can metaphorically lift. (Not that the latter doesn’t matter, but I think you know what I’m trying to say here.) iPad has extreme portability and multi-touch on its side. iPad has Apple Pencil on its side, and the immediacy of letting you draw — sketch — paint directly on its surface.

All these are real advantages over a computer aided by a graphics tablet. But if these advantages aren’t paired with an operating system and a user interface that give at the very least the same freedom of movement the user is afforded in Mac OS, for example, then things get awkward. Because it’s not just GPU and CPU power that makes the experience fast and enjoyable, or the device compelling. An important part of the equation is how good, smooth, and seamless the interface is. How well it allows interoperation among apps. How well the whole interaction makes the workflow… flow.

The computer mainframes of the 1950s and 1960s had, for the time, amazing computational capabilities, but to operate them, to feed them the data to work with and to extract meaningful use from their output, you had to put up with abstruse and counterintuitive interfaces. Those were different times, and people really didn’t think of workflows the way we do today. Those machines were meant to be operated that way, and there were typically multiple people assigned to one specific task. Efficiency was a concerted effort.

But back to the iPad. The paradox now is that the path of least resistance to make iPadOS more usable is to keep borrowing from Mac OS. But the more you make the iPad experience similar to a Mac’s, the more you blur the lines, and you make the iPad become essentially a touch-enabled Mac with a good stylus. You make it less distinctive. You make it just a bit less compelling. Ah, but no, Apple executives say, we don’t intend for the iPad and the Mac to converge. Then I do hope iPadOS is in for a deep redesign, to make the experience truly effortless and reduce this friction I keep noticing whenever I switch from a Mac to an iPad. (I really look forward to the preview we’ll hopefully get at the WWDC.)

It’s not enough to have an impressive chip, RAM and storage space on the hardware side, and just say, Here you are. Now you just need a sprinkle of amazing third-party apps, and you will have an astounding device and experience. Having amazing third-party apps is fantastic, but without the connective tissue of an optimised OS underneath all that, you end up with siloed experiences, lack of interoperability, and fragmented workflows.

Apple is certainly betting on this iPad Pro’s future. I’m interested in seeing how many users are willing to invest and bet on a device that — as Nick observes — seems to be built for software and workflows that are two or three years down the road. I’ll never be able to obtain a meaningful statistic out of this, though, because many will purchase the iPad Pro for ‘the potential’; many will purchase it because it’ll seem a meaningful upgrade to them; many because they can, and because they unquestioningly purchase the latest and greatest device with an Apple logo on it. And so forth. As things stand today, Apple is selling a device that promises a lot, and by the time it fully delivers on what it promises today, more years and more iPad iterations will have come and gone, while its hardware power remains out of reach, perhaps even for its own OS. Meanwhile, this ultra-powerful iPad Pro, another chapter of an endless work in progress, becomes a feedback loop of potential and promise, of forward-looking-ness. And Apple will monetise the hell out of it.

Apple's Spring Loaded event: a few observations

Tech Life

 

Looking back at the April 20 Spring Loaded Apple event, of all the things that have been introduced, the new 24-inch colourful iMacs are what piqued my interest the most. And as usual with Apple in recent years, I got my regular dose of excitement mixed with frustration. Of course, some of the things I find annoying may be non-issues for many other people. But the event was more than just new iMacs, so let’s go over the main products.

1. AirTags

Finally, the long-rumoured accessory was unveiled. It’s an inexpensive tracking device with a user-replaceable battery, two features I didn’t expect to associate with an Apple product. I’m assuming you saw the event and know already how AirTags work. You can put them in or on the personal items you want to keep track of, and use the Find My app to track them. One cool privacy-related feature is that you can’t use an AirTag to surreptitiously track someone else. As Apple’s AirTag page explains:

AirTag is designed to discourage unwanted tracking. If someone else’s AirTag finds its way into your stuff, your iPhone will notice it’s traveling with you and send you an alert. After a while, if you still haven’t found it, the AirTag will start playing a sound to let you know it’s there.
Of course, if you happen to be with a friend who has an AirTag, or on a train with a whole bunch of people with AirTag, don’t worry. These alerts are triggered only when an AirTag is separated from its owner.

AirTags are nifty gadgets. I don’t really need one, but given the low price I wanted to buy one anyway. Sometimes it’s only after you own an accessory that you stumble on a use case for it. Unfortunately, the coolest aspect of the AirTag experience — Precision Finding, the interface that points you to the exact location of your AirTag — is made possible by Ultra Wideband technology, which is currently available only on the iPhone 11 and iPhone 12.

I guess that if you own an older device, or another device without the U1 chip, you can still track an AirTag via the Find My app, just as you can track your other devices. But you’re definitely not getting the best experience.

2. Updated Apple TV 4K

If you want a short and sweet summary of what’s new, Ars Technica is your friend. Highlight №1 is that it’s powered by an A12 Bionic chip (the previous model featured an A10X), which offers increased GPU and video performance, making it a better device for playing games, something Apple Arcade subscribers will be happy to hear. Highlight №2 is that it comes with a redesigned remote, which is probably something 90% of Apple TV owners will be happy to hear, given the amount of criticism the previous touchpad remote design received.

I personally care very little about the Apple TV ecosystem. To make the most of an Apple TV, first I would have to buy a much better TV than the one I currently own. And given how little time my wife and I spend watching TV, both a new TV set and an Apple TV 4K (or HD) would just be a waste of money.

At $179 for the 32 GB model, and $199 for the 64 GB model, Apple TV remains one of the most expensive products in its category. Apple insists on offering a premium device with sophisticated features (High Frame Rate HDR, colour calibration using the iPhone, etc.), and that’s fine, yet I can’t help but feel that these only appeal to a niche audience. Most regular folks I know [Content warning: anecdotal evidence] simply look for inexpensive, get-the-job-done devices they can connect to their TV sets to watch something on Prime Video, Netflix, Hulu.

I didn’t particularly mind the old remote design. I held one once or twice and I remember thinking it didn’t look or feel that bad. But the new design, from what I can see on Apple’s site, seems better thought-out.

3. The new, colourful, 24-inch M1 iMacs

I’m not going to lie: when I first saw the introductory video during the Spring Loaded event, I had a big smile on my face. Finally Apple was bringing back something from their past that was worth recapturing: computers that come in a varied selection of vivid, beautiful colours. A touch of whimsical fun after years of basically silver-and-black Macs.

As the new iMacs were presented in more detail, though, that big smile progressively faded. Mind you, I’m not entirely disappointed, and part of me still wants to put one of these on my desk (at the moment I’m torn: yellow, orange, or purple?). But when you look past the bright colours and sleek design, you start to encounter a few details which — for me — are irritating compromises.

Thin or bust

Apple has created what’s possibly the thinnest all-in-one desktop computer, and I’m sure they’re still patting themselves on the back. But was it necessary to produce a computer that is so thin that its power supply has to be external? I find that rather inelegant. My 21.5‑inch 4K retina iMac is less than 3 cm thicker at its thickest point, and has an internal power supply. I believe that Apple could have easily built a new iMac with this new, flat design, by making it 2.5 cm thicker and putting the power supply inside. It still would have retained a thin and elegant profile, and Apple would have spared us the external brick.

The year of MagSafe on the desktop

I find it a bit ironic that these new iMacs have what is essentially a MagSafe-like proprietary power connector, while current MacBooks (where such a magnetic connector would certainly prove more useful) do not. I suppose they chose a magnetic connector for the new iMacs because the chassis is so thin that inserting a regular power plug would exert too much force on it. (Also, a regular power plug may simply be too deep for these very thin iMacs.) On the other hand, I hope those magnets are strong, because in the case of an accidental pull on the power cable, iMacs are not laptop computers with a battery — they would be abruptly switched off.

I know, I know: this connector was designed to be strong enough precisely because it’s for a desktop computer, and therefore wasn’t meant to behave like the MagSafe connector on Mac laptops. Still, I wouldn’t have minded some kind of locking mechanism to keep the connector in place when plugged in, like the breech-lock system for mounting camera lenses, where an outer ring rotates to secure the lens to the camera mount.

Port austerity

Connection-wise, these new iMacs offer a surprisingly (or unsurprisingly) restricted selection. At best, you get two Thunderbolt/USB 4 ports, two USB 3 ports (with USB‑C connectors), and Gigabit Ethernet (on the external power brick, in case you were wondering). At worst, you get two Thunderbolt/USB 4 ports, and that’s it. No SD card slot, not even a single USB‑A port. I suspect that’s because such connections would require a bigger motherboard or even a slightly thicker chassis. And these machines are essentially MacBooks in a desktop form factor. At this point I’m almost surprised they still have a 3.5 mm headphone jack, to be honest.

From stripped-down to mmkay

The new iMacs come in three configurations, but I wouldn’t call them good, better, and best.

  • Rather, the cheap-and-atrocious $1,299 low-tier configuration is available in fewer colours, has 8 GB of RAM and 256 GB of flash storage, and the GPU in its M1 SoC is limited to 7 cores, like the M1 MacBook Air’s. It has only two Thunderbolt/USB 4 ports. And it comes with just a regular Magic Keyboard, not even one with Touch ID.
  • The barely-good-enough middle tier $1,499 iMac has an 8‑core GPU, two more USB 3 ports, Gigabit Ethernet, and a Magic Keyboard with Touch ID, but still has 8 GB and 256 GB of flash storage.
  • The finally-decent top-tier $1,699 iMac has a configuration that, just a few years ago, would have made it the middle-tier machine. It’s exactly like the $1,499 iMac, except that you get a 512 GB SSD.

Put another way, the $200 difference between the low-tier and middle-tier iMacs buys you one GPU core, three more ports, Touch ID on the included keyboard, and more colour choices. The $200 difference between the middle-tier and top-tier iMacs buys you… 256 GB more internal storage. As you can see, when it comes to base RAM and storage, once again we’re reunited with good old stingy Apple.

I know there is a market for the low-tier iMac: people on a restricted budget, or maybe bulk purchases in an education environment. And the monolithic nature of the M1 SoC doesn’t appear to leave much room for meaningful differentiation when it comes to offering various configuration options for the same machine.

That’s why these three iMac tiers feel so contrived to me. It’s not an additive configuration model, where you start with a decent base machine (the good tier), add meaningful features to create the better tier, and then add some more perks to end up with the best tier. With the new iMac you have a subtractive configuration model: you start with what is essentially a reasonable configuration by 2021 standards — the most expensive, $1,699 iMac — and then remove functionality to offer two lower tiers which do cost less, yes, but also leave you with a machine with the bare minimum of ports, performance, memory, and storage — and neither of the latter two is upgradable down the road.

I realise that computers are de facto household appliances by now, but this trend towards devices whose innards are immutable once you pick a configuration at the time of purchase still feels annoying and ridiculous to me. Computers aren’t devices you replace every year, and your needs may change over time. If there’s one thing you constantly need more of as time goes by, it’s storage space. Internal drives should always be upgradable. Yes, yes, You can always add external storage. And have a USB or Thunderbolt port permanently or semi-permanently occupied? That wouldn’t be so bad if Apple made computers with more ports. Memory should always be upgradable too, but I’m afraid that cause is lost for good today, what with the internal structure of the M1 SoC.

The bang, the buck, and the performance

If you look at the technical specifications, the new iMacs are essentially M1 Mac minis with a good-quality display attached. If you already own a good-quality display, and if you can look past the unquestionable attractiveness of the new M1 iMacs, then probably the rational choice is to get an M1 Mac mini. A base $699 Mac mini has the same 8‑core CPU, 8‑core GPU M1 chip; the same 8 GB of base RAM and 256 GB base storage; the same two Thunderbolt/USB 4 ports, two USB 3 ports (but with the still-versatile USB‑A connector) and Gigabit Ethernet as the $1,499 M1 iMac, plus of course an HDMI port.

Sure, the new iMacs come in an attractive package, and perhaps you can’t find a 24-inch 4.5K display with the same quality and integrated 1080p HD webcam as the one in the iMac for $800. But if you need a machine that delivers the same performance, and you already own a display you’re comfortable with, then the base M1 Mac mini very much looks like a better deal.

Less blurred lines

It is abundantly clear by now that the transition from Intel to Apple Silicon architecture is being carried out by releasing a first wave of refreshed products all revolving around the M1 SoC, and that such products are to be considered consumer-oriented lineups. At the moment, though, I’m even more intrigued by what Apple hasn’t released yet. It doesn’t take much acuity to predict that the second wave of Mac computers will be the professional line, and will revolve around an even more powerful Apple-designed SoC.

And another thought: how much more powerful? Consider: the performance of an M1 ‘consumer’ Mac is already well beyond that of many Intel machines in the ‘pro’ market. Apple has to produce a chip that outperforms the M1 by a perceptible margin — CPU-wise, GPU-wise, or both — otherwise why bother getting a ‘pro’ machine in the first place?

So perhaps, when the Intel-to-Apple-Silicon transition is over, we may again see the clear distinction between consumer and professional Macs we first witnessed in the early 2000s, with PowerPC G3 Macs in the consumer slot and G4/G5 CPUs powering the more professional Macs. And as in the PowerPC era, we’ll be back to a simpler, more clear-cut distinction between consumer-class Macs and pro Macs based on chip class and denomination: these Macs have M1 chips, so they belong to this tier; those Macs have (M1X? M2?) chips, so they’re in a higher tier. People won’t really have to analyse spec sheets to assess how powerful a Mac is and what it can really do for them.

4. The new iPad Pros

Unless I’ve missed something, the main new features of the new iPad Pros are:

  • They’re now powered by an M1 chip.
  • The 12.9‑inch model is equipped with a much improved display with technology derived from the Pro Display XDR.
  • The iPad’s USB‑C connector has been upgraded to support Thunderbolt 3 and USB 4.
  • The cellular models offer 5G wireless data.

So fast, it’s ridiculous

For me, the iPad Pro is like that friend who is already so affluent you don’t know what to gift them for their birthday. What do you give to those who already have everything? In the case of the iPad Pros, the answer apparently is, More of what they already have.

In Faster than its own OS, the article I published after the release of the 2018 iPad Pros, I wrote:

But ever since I started hearing this argument [that these iPad Pros are so powerful they can outperform most PCs and even MacBook Pros], a question that’s been nagging me — a question I still haven’t found a satisfactory answer to — is this: All this staggering performance… to do what, exactly?

And:

Many power iPad users seem to want this device to become more like a Surface. A 2‑in‑1 device. A shape-shifting tablet that can act like a real laptop when needed. They require a versatility that the iPad, in my opinion, cannot provide yet because it still has an operating system that treats it like a big iPhone, most of the time. iPad is this hybrid entity with the hardware capabilities of a traditional computer, and the mind (software) of a smartphone. That’s one part of the identity crisis. The other part is that iPad still doesn’t know what it wants to be when it grows up. And on the hardware front, it has definitely grown up now.

iOS, in its current form, is still inadequate for the iPad. To truly shine as a tablet, as a portable powerhouse, iPad needs adequate software. It needs apps that really take advantage of all these unparalleled tech specs and hardware capabilities. But it also needs an operating system that’s more thoughtfully tailored to the iPad’s format and user experience. Sure, iOS has become more iPad-friendly with the last two releases, but it just can’t keep up with the hardware. The iPad should have started to have its iOS branch or flavour years ago — I’d say around the time of the third- or fourth-generation iPad at the latest. Just as it makes sense for the AppleTV to have tvOS and for the Watch to have watchOS, it would make even more sense if the iPad had its custom iOS with controls, behaviours, user interface, that treated it like the powerful tablet that it is.

Apologies for the extended quote, but after seeing the presentation of these new M1-powered iPad Pros, these very same thoughts and observations came to mind again. After three years we now have iPad Pros with yet another astounding hardware boost, and essentially the same software capabilities as three years ago.

I feel that, with the iPad, Apple is increasingly painting themselves into a corner. There comes a moment when they feel compelled to come out with an upgrade, and since hardware is what Apple does best, here we have new iPad Pros with significant hardware improvements. But in terms of perceived performance in real-life use, we’ve already reached a ceiling with the iPad. I could be wrong, but I’d love to see a use case, a set of circumstances, where a user really notices the difference between the previous A12Z Bionic chip in the 2020 iPad Pros and the current M1 chip.

What the iPad needs is a leap forward in the software department. This is where a fundamental redesign is needed, in my opinion. An iOS-based variant that starts treating the iPad more like a touch-based traditional computer and less like a big smartphone. I still stand by the conclusion I wrote in that 2018 article quoted above:

I think that a good set of apps and a more versatile, more tightly integrated OS with these new iPad Pros should be a given. Instead, we’re still ‘getting there’, while the hardware (design and tech specs) is already beyond there, waiting for the software to catch up. This combination still requires a fair amount of versatility on the user’s part to be effective. This software/hardware gap is especially frustrating when you have iPad Pro configurations that cost more than a traditional, high-end laptop.

This is why I’m really, really looking forward to WWDC this year. I want to see a preview of iPadOS 15 going in this direction, aimed at harnessing all this incredible potential afforded by truly powerful hardware specs. I’m tired of using the term potential when talking about the iPad.

The M1 chip in an iPad! This must mean…

Yes, I thought about that as soon as Apple did the reveal at the event. It was more of a reflex than a real thought. What if Apple’s next move is making iPad Pros run Mac OS!? I mean, on a technical level it’s not exactly impossible. If you’re cynical enough you could even say that this could certainly be a nice shortcut on Apple’s part to give the iPad Pro immediate access to a lot of truly professional applications… the ones made for Mac OS.

But if you stop and think, this whole picture would be a bit of a mess. Sure, Mac OS on an Apple Silicon Mac can run iOS/iPadOS apps (with the debatable results and utility we’ve all seen so far), and generally speaking, when you put apps designed for Multi-touch interaction on a traditional computer with keyboard and mouse/trackpad input, the inherently more precise input system of the latter won’t give you too much trouble interacting with such apps. When you do the opposite, though, things get tricky fast.

It’s not completely unfeasible, mind you. If you take a Microsoft Surface device, you’re basically interacting with apps designed for traditional computers on a device that lets you interact with them with your fingers or a stylus. But the experience is not that great, and you tend to favour a more traditional keyboard and trackpad interaction, maybe using your fingers to scroll inside windows or tap the occasional target, and the stylus to draw more precisely in certain areas.

In other words, after the initial What if!? moment, I don’t think that the M1 chip in an iPad necessarily means that Mac OS and iOS are going to become one hybrid system, or that iPad Pros are going to become Mac OS devices. I think that,

  1. The time to release an iPad Pro update had come, and that at this point the only viable successor to both the A12Z Bionic and the A14 Bionic chips (you don’t release an iPad Pro that has the same processor as the ‘lesser’ iPad Air…) was the M1, so Apple put it in the new iPad Pros, and that’s that, end of story.
  2. The presence of an M1 could potentially mean that more Mac-like features could be more easily implemented for these iPad Pros with the next version of iPadOS. Or that iPadOS 15 on an M1-equipped iPad could run not the entire Mac OS verbatim, but single Mac apps, by adapting certain UI elements to be more iPad- and touch-friendly. The ultimate goal would be ‘universal apps’ that get optimised according to the system running them. It would be sort of merging things and not merging them at the same time, if you know what I mean. Not one hybrid operating system, but hybrid uses across different devices.

Stray observations

1.

During the event I tweeted: Apple’s current hardware design language: “Everything is an iPad.” Some asked me privately what I meant by that. Well, it’s rounded rectangles all over the place again. Look at the iPhone 12 line, look at the iPad Pro and Air, look at the new iMacs… It’s the same concept, the same primitive, in different sizes. This is not inherently a bad thing. A certain design language is much of what makes a brand recognisable. You see those lines, you see that æsthetic, and you know it’s an Apple product even if it doesn’t have an Apple logo on the front.

On the other hand, what’s next? Apple is currently exploring colour again to make their hardware design striking, appealing, recognisable; but I also see a lot of recycled & remixed design details, while the last impactful design changes and refreshes were made when Steve Jobs was still alive. Maybe it’s just me, but I’d like to see another design leader like Jonathan Ive directing Apple’s Design division. Someone who could come up with an unexpected, opinionated design like the original iMac or the clamshell iBook.

2.

I love Centre Stage for video calls; it’s a very cool feature implemented in that seamless way I still like about Apple. In case you missed what it is, Apple explains it on the iPad Pro’s page:

Center Stage. The all‑new Center Stage uses the Ultra Wide camera and machine learning to change the way you participate in video calls. As you move around, it automatically pans to keep you centered in the frame. When others join in or leave the call, the view expands or zooms in. Center Stage works with FaceTime and other video conferencing apps for an even more engaging experience.

3.

When I first saw the new 24-inch iMacs appear on screen, I admit I was puzzled by the clear bezels. When I look at my 21.5‑inch 4K retina iMac, I really love that it has a black bezel around the display. But the more I look at these new colourful iMacs, the more I think that the clear/white bezel is a better match to their general æsthetic. It reinforces their visual message of ‘lightness’ and casual fun, so to speak.

The logo-less ‘chin’ on the front of the iMac, however, still perplexes me. Oh well, if I ever purchase one of these iMacs, I’ll put one of the old ‘Rainbow Apple Computer logo’ stickers I still have around.

4.

I like that there are now Magic Keyboards, mice, and trackpads that come in colours to match the iMacs. And I like that even desktop Macs now get Touch ID via the new Magic Keyboard. Too bad Apple has decided to maintain that awful arrow-key design instead of going back to the ‘inverted T’ layout like they’ve done with the MacBooks.

→ Softdroid: Interview with Riccardo Mori

Et Cetera

About a month ago, I was contacted by the owner of Softdroid, a Russian website focused primarily on data recovery and reviews of software applications for Android, iOS, and Windows. He told me he found my blog via Hacker News, and that he particularly enjoyed one of my recent articles, The reshaped Mac experience. But before sharing it with his website’s audience, he asked me if I was willing to answer a few questions, just to give his readers some background about me — you know, who I am and where I come from.

I thought it was a great idea and, after a brief email exchange, here’s the resulting interview (in English). I hope you’ll enjoy it.

While I understand that most of my readers don’t speak Russian, in case you want to check out the website, Google Translate does a pretty good job at translating the contents at Softdroid.net. Maybe you’ll find something interesting or some valuable tips & tricks.

My thanks to Vladislav for featuring me on Softdroid.

A brief ode to Stickies

Software

As I was reflecting some more about my favourite features of Mac OS X over the 20 years of its history, I realised that I needed to add a very special mention to the list — the Stickies app.

And as soon as I thought of Stickies, I remembered that it’s an application that’s even older than Mac OS X. The first version of Stickies was written in 1994 by Jens Alfke, and debuted as part of System 7.5 in the same year. He developed the application in his spare time while working at Apple. Stickies was originally called Antler Notes, and in the classic Mac OS version of the app, there’s a nice Easter Egg reminding you of its origins.

By the way, during his time at Apple, Alfke contributed to many important features of the Mac operating system. Between 1991 and 1993 he was part of the team that developed AppleScript; he specifically helped to design the Open Scripting Architecture and created the Script Editor. Between 1993 and 1997 he worked on the development of the OpenDoc framework.

Later, in 2000, he started developing an instant messaging client that became iChat for Mac OS X, which was first introduced with Mac OS X 10.2 Jaguar. He and his team later worked on expanding iChat’s features, resulting in the release of iChat AV in 2003. After leaving the iChat project, in late 2003 he joined the Safari team and worked on what then became Safari RSS, the RSS/Atom news reader and aggregator built into Apple’s Safari, which debuted in Safari 2.0, released with Mac OS X 10.4 Tiger. This feature was then removed with Safari 6.0 in OS X 10.8 Mountain Lion.

Back to Stickies, the amazing thing about this application is that it has remained essentially unchanged for the past 27 years. In Mac OS X, its icon stayed the same from version 10.0 to version 10.15, and was redesigned in Big Sur to better fit its æsthetic:

Stickies app icon up to Mac OS 10.15 Catalina

Stickies app icon in Mac OS 11 Big Sur

As for the app’s interface, apart from slight changes from the classic Mac OS to Mac OS X, it’s always been the same, in appearance and fundamental behaviour. Just to show you a few examples, here’s Stickies in Mac OS 7.6.1:


The About Stickies dialog box.


Note information


Stickies’ Preferences.


When you quit Stickies, this dialog appears.

Here’s Stickies in Mac OS X 10.5.8 Leopard (PPC):

Here’s Stickies in OS X 10.11.6 El Capitan:

And here’s Stickies in Mac OS 11 Big Sur. As you can see, the default notes have stayed the same over the years:

As far as note taking goes, Stickies is a bit of an unsung hero among Mac applications. Often overlooked or dismissed, it’s an app I’ve been using on a rather constant basis probably since Mac OS 8.6 on my iMac G3 back in 1999. The amazing thing is that, backup after backup, migration after migration, when I now launch Stickies on my iMac, I can see all the notes I’ve been retaining for the past 20 years or so. ‘Sticky notes’ indeed.

Stickies are really versatile when you need to write down something quickly, and at the same time you want to keep notes right next to what you’re working on, exactly like their physical counterpart.

  • They support formatted text, so you can write using different fonts and styles.
  • You can easily create bulleted/numbered lists within a note.
  • You can import images in notes.
  • You can (of course) choose different colours for the notes, and choose to have notes always in front of other windows or keep them semi-transparent.
  • You can search text within a note or all the notes by pressing ⌘-F.
  • To save space, notes can be minimised by double-clicking on their title bars, just like you could collapse Finder windows in System 7.5 with the WindowShade feature.
  • More importantly, sticky notes are persistent. You don’t need to use a Save command — you never needed to. All the notes you create are retained when quitting the app. All the notes are there when you reopen it. Stickies keeps all information in a self-contained database, which can be easily backed up and migrated by copying the file StickiesDatabase, which is located in your Library (~/Library/StickiesDatabase).
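Since all the notes live in that single StickiesDatabase file, backing them up before a migration can be as simple as copying it. Here’s a minimal sketch of such a backup, assuming the classic ~/Library/StickiesDatabase location mentioned above (the Backups destination folder is just an example, and on more recent versions of Mac OS the database may live inside the app’s sandbox container instead):

```shell
#!/bin/sh
# Copy the Stickies database to a dated backup file, if one exists.
# SRC follows the classic path described above; adjust if your system
# stores the database elsewhere.
SRC="$HOME/Library/StickiesDatabase"
DEST_DIR="$HOME/Backups"
DEST="$DEST_DIR/StickiesDatabase.$(date +%Y-%m-%d)"

if [ -f "$SRC" ]; then
  mkdir -p "$DEST_DIR"
  cp "$SRC" "$DEST"
  echo "Backed up Stickies database to $DEST"
else
  echo "No Stickies database found at $SRC"
fi
```

Restoring is the reverse operation: copy the file back into place while Stickies isn’t running, and all your notes reappear on the next launch.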

I used to take advantage of Stickies’ flexibility especially during the first years of my career as a freelance translator, when I had to translate entire manuals, and needed to maintain a consistent technical glossary. I started with paper notebooks but, especially when working out of the home office, it was handier to have all my notes on the Mac, right beside the word processor or whatever application I was using to carry out my translations.

So cheers, Stickies! I’m glad you’re still around. And cheers to Jens Alfke — what a legacy.