The Mac’s next transition: further considerations

Tech Life


 

In a couple of months or so, Mac OS 10.15 Catalina will be released, and the next Mac transition will begin, perhaps slowly, but surely. I’m trying to put some pieces together to form an early picture of how things will go. So far I’m not really thrilled by what I’m seeing, but I’m doing my best to stay mildly optimistic.

Howard Oakley, talking about SwiftUI, observes:

The real sting comes in system requirements: the moment that you choose to use SwiftUI, your app can’t run on Mojave or earlier, it’s committed to Catalina. macOS users are generally rather slower to upgrade to the latest operating system than are iOS users. Read the comments to articles on this blog and you’ll find that there are plenty of advanced users here, with Macs perfectly capable of running Mojave, who are still running High Sierra or Sierra, and a few who are back on El Capitan or earlier. This isn’t because Mac users are stick-in-the-muds, but because many are in complex situations which have to wait for key software and other requirements to change before they can upgrade.

Assuming that half of you here were to upgrade to Catalina by the start of 2020, which could be optimistic, would I want to invest my time and effort into building apps which the other half couldn’t run until they too had upgraded?

If there were other compelling reasons to upgrade, as there are in Mojave’s much-improved APFS, maybe. But for most users, the major trade-off coming with Catalina is going to be whether its better security and speed outweigh the loss of all 32-bit software. I’m not convinced that will lure many to upgrade early. Are you?

On the hardware side, it’s worth sharing Pierre Dandumont’s observations. The original post is in French, but I did my best in translating and summarising the essential parts as follows:

  • To release a Mac of this type in 2019 [the new Mac Pro] implies at least one thing: even if ARM-based Macs arrive soon, the two architectures may coexist for a while.
  • Currently, Macs seem to be at a technological stalemate for a variety of reasons. First, because Intel has big problems: the architecture has not really changed since Skylake in 2015. A 2019 MacBook Pro has more cores and a higher CPU frequency than a 2016 Mac, but these are just superficial adjustments: it’s the same processor, with a slightly better graphics part. To offer better performance, you either increase the number of CPU cores or the processor’s frequency, or both, and in all these cases you end up with increased power consumption, which is bad. In practice, Apple can’t offer a really competitive MacBook because Intel doesn’t have a meaningfully better CPU than it had in 2015.
  • Macs also suffer from a GPU problem, but here it’s Apple’s fault. The company keeps using AMD GPUs that evolve little and consume a lot of energy. Nvidia makes chips that are faster and consume less, yet Apple stays with AMD. I don’t know why this is (we will surely discover the reason one day); we can assume that Nvidia is part of the problem, but the fact is that staying with AMD limits a lot of things. Even with the work that Apple does on the drivers and the custom cards that AMD makes for Apple, that’s still a problem. From what we know about Navi (RDNA, Radeon RX 5700, coming out soon), the new architecture is not the solution.
  • Finally, Macs suffer from some of Intel’s choices, especially when it comes to memory support. Some Macs still use LPDDR3 or DDR4, whereas they should use LPDDR4. It’s Intel’s fault. Some developments that seem natural on the iPad don’t happen on the Mac because Intel can’t (or won’t) make them possible: things such as variable refresh rate screens, high resolutions, native 10 Gb/s USB support, and so on.
  • The transition to ARM would solve some of the problems. Even if we consider just the Apple A12 (from 2018), Apple could offer a faster CPU (and in some cases, a noticeably faster one) at the same frequency as the Skylake CPUs, with a year-over-year evolution that would bring significant gains and the integration of modern technologies (LPDDR4, etc.). All this on consumer platforms, with much lower power consumption than what Intel offers.
  • The only unknown currently is the scalability of these chips. An A12X consumes much less than a MacBook Pro CPU for at least equivalent performance (and higher in many cases), but it’s a CPU with four cores (the low-power cores are negligible) and a GPU that is excellent with regard to power consumption, yet limited in absolute terms.

    I don’t know if the same chip with eight cores and a GPU at least doubled (to reach the level of the Radeon Polaris in the 15-inch MacBook Pro) would maintain the same power consumption advantages. I also don’t know whether Apple can easily provide a version with many more cores, because an A12X has the downside of being rather big: 122 mm² at 7 nm for four cores (eight in reality, but again the low-power cores can be ignored) and a GPU, when a modern Core i7 has the same size at 14 nm.

    In any case, from a consumer standpoint, the components of a MacBook Pro or Air could perfectly well be replaced while maintaining comparable performance and significantly improving battery life.

  • The biggest downside of a transition to the ARM architecture is software compatibility. Changing architecture involves recompiling all programs (Apple seems to be working on this) and being able to run all the old programs efficiently, which switching to ARM wouldn’t allow. The example of laptops running Windows on ARM speaks volumes: while vaguely usable with ARM software, they are horrible to use with x86 software. In addition, switching to ARM effectively eliminates virtualisation and the ability to install Windows, which remains something that reassures many people and is very useful to many others.

If we put together both these software- and hardware-related concerns, we can see how the appearance of a new Mac with an ARM processor and running Mac OS 10.15 Catalina would effectively represent a point of reboot for the Mac platform. (The 12-inch retina MacBook could be an excellent candidate, by the way, and it would be an interesting reappearance). On the outside, it would be Apple’s wet dream: an extremely power-efficient Mac with decent performance, instant operation like an iOS device, ridiculously long battery life, and a tightly controlled software environment, because only new ARM-compatible apps, or older-but-updated apps would run on it.

The PowerPC-to-Intel transition was announced in mid-2005, and Macs with Intel chips started appearing in 2006. By the end of 2006, all product lines had been updated with Intel chips. Software-wise, things went gently and smoothly, from what I remember. Mac OS X 10.4 Tiger shipped in April 2005, officially for the PowerPC platform only (but at WWDC 2005 in June, in revealing the transition to Intel, Jobs said that they had been developing and testing an Intel version of Tiger internally). When the first Intel Macs were introduced, they were running Mac OS X 10.4.4, and Tiger carried on until its final minor update, 10.4.11, which was available for both PowerPC and Intel Macs.

Mac OS X 10.5 Leopard shipped in October 2007. Despite appearing more than a year and a half after the official switch to Intel, it still supported both architectures, and 10.6 Snow Leopard, the first Intel-only version of Mac OS X, would appear at the end of August 2009, still supporting PowerPC software via the built-in Rosetta translation layer. The PowerPC platform was ultimately left behind with Mac OS X 10.7 Lion, introduced in July 2011. If you think about it, it was an extremely generous grace period. This way, people who purchased a new PowerPC Mac in 2005 could at least use it with sufficiently updated software until 2009, and those who purchased expensive PowerPC software could use it at least until 2011.

I have the feeling that with the Intel-to-ARM transition things will go differently. Assuming Apple develops custom ARM chips that deliver optimal performance for Macs, it’s likely the hardware transition will happen like a wave, starting with the more consumer-oriented Macs (like the 12-inch MacBook and the MacBook Air, maybe even a base configuration of the Mac mini), then the Pro laptops and maybe a base configuration of the iMac. But I expect an undefined period of time where we’ll have all-ARM Macs except the high-end Pro machines (iMac Pro and Mac Pro), which will probably still rely on Intel processors for sheer multi-core performance.

If the hardware side of the transition happens this way, perhaps we’ll have a reasonably long grace period, software-wise, with Mac OS running in both x86 and ARM flavours. A period where, while new SwiftUI apps (and SwiftUI-updated older apps) can be properly developed, older x86 (64-bit) apps can still work and be used on Intel Macs with an updated version of Mac OS (just like PowerPC Macs that ran Leopard from 2007 to 2009).

So, keeping the speculation levels high, if we assume the first ARM-based Mac appears sometime in 2020, it’s possible this transition will take at least two years to complete from both a hardware (consumer and semi-pro) and a software standpoint. This could mean at least three Mac OS versions supporting the Intel platform (10.15, released in 2019; 10.16, released in 2020; and 10.17, released in 2021), where by ‘supporting’ I mean that they can be installed on all Intel machines available now or introduced from now on, and that they can run existing, traditional 64-bit Intel applications.

At the same time, those same versions of Mac OS could also run on ARM-based Macs, and be the future-facing side of the transition.

This is sort of a best-case, hopeful scenario. I hope the next transition can be carried out this smoothly, as smoothly as the previous one. What I fear is that today’s Apple could turn out to be much more impatient, and rush things so that backwards compatibility isn’t retained for very long. Today’s Apple seems very focused on the future-facing side of things, even more than before. In a sense, the official introduction of the Mac Pro this autumn is my one hope that the next transition can happen at a reasonable pace for all users and can be handled thoughtfully by Apple.

Addendum: The future of Mac OS, security, and new problems

As I was finally about to publish this piece, Pierre Dandumont wrote another very interesting article on his blog. Translating from the French, its title should be The future of Mac OS, security, and the problems that will emerge. Here are the essential points:

  • Mac OS Catalina, the next Mac OS, will bring a lot of changes under the hood, and for Pierre (someone who likes to tinker and hack), things look pretty annoying.
  • Catalina completely drops support for 32-bit apps, and this poses two problems: first, obviously, all your 32-bit-only apps will no longer work. It may be an old application, like iPhoto, or even EyeTV. Second, and perhaps a bit less obvious, all those applications that rely on 32-bit software will no longer work either, or will only partially work. Pierre gives the example of apps that still use the QuickTime 7 frameworks to decode video or audio. “In fact, a lot of old codecs (and old software) depend heavily on QuickTime 7, unfortunately”, he writes.
  • Catalina also drops support for HFS (aka “Mac OS Standard”), so if you, like me and Pierre, still need to access old read-only media like CD-ROMs, it’s a bit of a hindrance (my workaround here is to use an older Mac, of course). Support for AFP file sharing should also be dropped in Catalina (according to Pierre it still works for now).
  • The most annoying changes are related to security. Pierre emphasises a key distinction: “Let me be clear: these are more annoying for an advanced user who likes to hack. For a normal daily use of macOS, such changes shouldn’t be a nuisance and will actually improve security”. And then he adds: “But it makes macOS closer and closer to iOS, and from my point of view it’s not necessarily the macOS I want”. And oh yes, I feel the same.
  • Changes in Gatekeeper: in early versions, when it came to allowing software to run on your machine, it was possible to choose between unsigned apps (no protection), signed apps, or Mac App Store apps. With Yosemite, the first option got disabled periodically (and automatically). With Sierra, it was hidden. In a future version of Mac OS, it is going to disappear entirely. The wording [see the screenshot on Pierre’s blog, taken from the presentation Advances in macOS Security (701)] indicates that it should still be possible to enable the option to launch unsigned applications, “but that by default macOS will prevent it… like iOS”.

    Pierre then observes: “This is frankly a problem: it will become impossible to develop (or compile) a program without paying [an Apple developer’s fee] and with anything other than Xcode (as I’m told, Xcode signs programs for the user), and many open (and not open) source applications will not launch anymore. Sure, this is clearly a security advancement, but it’s not flawless: there is signed malware, you know”.

  • Another annoying thing is that in Mac OS Catalina, the system will be read-only. “Catalina uses APFS to separate a data volume from the volume containing the system, in a completely transparent way. The volume containing the system will be read-only, which will prevent modification of the OS. Again, this is interesting for security, but not for all the different macOS hacks”. Pierre also notes that, for now, disabling SIP (System Integrity Protection) removes this function, but Apple’s presentation (What’s New in Apple Filesystems — 710) is very clear on this point: it will be impossible to do so permanently. Disabling SIP will make the system volume writable only until the next restart. And that’s it. It will then go back to being read-only.
  • Finally, another change Pierre finds potentially problematic is related to the new driver format (System Extensions and DriverKit — 702). He writes: “Specifically, Apple is pushing a new generation of drivers for certain types of devices: HID devices (in general), USB devices, serial devices (more specifically USB-to-serial adapters), and everything network-related. I’m simplifying a bit, but basically, if you have a device that communicates via serial with a built-in controller (for example an Arduino), a Thunderbolt or USB network card or a USB device like a TV card, the drivers will have to be rewritten. Not in the very short term, since Catalina is still going to support kexts, but in the short term nonetheless, as Catalina’s successor will leave them behind for devices that fall into these categories.”

    Pierre observes: “Quite frankly, seeing how poorly drivers are often tracked and maintained under macOS when a new version changes little things — if you have a Wacom tablet, you’ll understand — I think it’s going to be a nightmare for users. And I see only one solution for now: trying to use only devices with native macOS drivers. It’s not that easy, unfortunately: in the case of serial adapters, for example, Apple natively supports only FTDI models, and they are expensive. Same thing for network cards: hopefully, manufacturers will follow suit, but it’s not a given.”
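One concrete way to see the 32-bit cut-off Pierre describes is at the level of individual binaries: the first four bytes of a Mach-O executable declare its architecture. Here is a minimal Python sketch; the magic numbers are the standard ones from Apple’s mach-o headers, and note that a ‘fat’ result would still need its slices inspected to know whether a 64-bit one is present:

```python
import struct

# Standard Mach-O magic numbers (first four bytes of the file)
MH_MAGIC    = 0xFEEDFACE  # 32-bit, big-endian
MH_CIGAM    = 0xCEFAEDFE  # 32-bit, little-endian (byte-swapped)
MH_MAGIC_64 = 0xFEEDFACF  # 64-bit, big-endian
MH_CIGAM_64 = 0xCFFAEDFE  # 64-bit, little-endian (byte-swapped)
FAT_MAGIC   = 0xCAFEBABE  # universal (fat) binary
FAT_CIGAM   = 0xBEBAFECA  # universal (fat) binary, byte-swapped

def macho_kind(path):
    """Classify a Mach-O file as '32-bit', '64-bit', 'fat', or 'unknown'."""
    with open(path, 'rb') as f:
        raw = f.read(4)
    if len(raw) < 4:
        return 'unknown'
    magic = struct.unpack('>I', raw)[0]  # read the magic as a big-endian int
    if magic in (MH_MAGIC, MH_CIGAM):
        return '32-bit'
    if magic in (MH_MAGIC_64, MH_CIGAM_64):
        return '64-bit'
    if magic in (FAT_MAGIC, FAT_CIGAM):
        return 'fat'
    return 'unknown'
```

On a real system you would point this at an app bundle’s Contents/MacOS executable; anything reported as ‘32-bit’ simply won’t launch under Catalina.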

If we go back to the now-trite analogy Steve Jobs made when he talked about the Post-PC era, with traditional computers as trucks and mobile devices as cars, I think it could still make sense, but only if trucks are allowed to continue carrying out their truck duties, so to speak. Putting all these pieces together is a tedious task, but when it comes to the future of Mac OS and the Mac platform, the picture I’m starting to make out is of a severely locked-down, exceedingly streamlined system. And if you start transforming trucks to make them behave more like big cars, then what’s the point?

Yes, Mac OS is not the bulletproof platform it once was, security-wise, but it’s still far more secure than most of what’s out there. And while aiming for a bulletproof system is a good thing nowadays, I believe it shouldn’t be done at the expense of the inherent, historical flexibility of such a system. Why have UNIX underpinnings only to utterly dumb down whatever is built on top of them? Nor should it be done at the expense of the platform’s third-party developers, who are expected to jump through increasingly narrow hoops to create software that works as it should and simultaneously complies with Apple’s progressively stifling demands. This is the part that keeps me apprehensive as this next excruciating transition approaches.

 

The little MacBook that couldn’t

Tech Life

MacBook

In April 2015, Apple introduces the 12-inch MacBook with retina display. It’s a marvel of thinness and lightness. It’s also a display of ‘courageousness’ that predates the headphone jack removal in the iPhone 7. Because this laptop, while indeed retaining the headphone jack, has only one other port, and it’s USB‑C. And it’s for everything, including charging. But the pressing question is, Is this the MacBook Air killer?

No, not really. For about a year and a half, until October 2016, Apple still features both the 11- and 13-inch MacBook Air models alongside the 12-inch MacBook in the lineup. But essentially the only thing this MacBook has to win people over is the retina display. The MacBook Air is the better proposition for literally everything else: more ports, MagSafe, a better keyboard, better performance, equally great battery life, a negligible difference in size and weight (especially the 11-inch model), and a more affordable price.

The 11-inch MacBook Air gets discontinued in October 2016. But not the 13-inch model. The 13-inch model gets discontinued today, in 2019. But so does the 12-inch MacBook.

 

But as far as compact laptops with retina display go, surely the 12-inch MacBook is the best proposition?

Not really. When the 12-inch MacBook is first introduced in April 2015, the 13-inch retina MacBook Pro (refreshed one month earlier) is a better machine for the money. The little MacBook wins in thinness and lightness, but the 13-inch retina MacBook Pro is better in every other regard: better screen resolution, more ports, a better keyboard, much better performance, even better battery life (at least on paper), and the base model costs $1,299, exactly like the 12-inch MacBook (for that price you only get 128 GB of storage in the 13-inch retina MacBook Pro, versus 256 GB in the MacBook, but still).

In 2016, even the redesigned 13-inch MacBook Pro (without Touch Bar) is an overall better choice than the 12-inch MacBook.

 

The 12-inch retina MacBook hasn’t been an iPad killer, either. Not that it was ever meant to be, mind you, but for regular people who don’t need specific Mac OS software for their work, a 12.9‑inch iPad Pro of the same vintage as the 12-inch MacBook was an overall better value than the little laptop. Even that iPad Pro’s Smart Keyboard is probably better than the MacBook’s butterfly keyboard.

If anything, the 12-inch MacBook has lost to the iPad Pro. By removing first the 11-inch MacBook Air and now the 12-inch MacBook, Apple has made sure that if you’re looking for an Apple ‘ultrabook’ today, you take a good look at the iPad Pro. Not that the new 13-inch retina MacBook Air is a giant heavyweight, but the 11-inch iPad Pro is svelter, more compact, and some people will no doubt rush to add that it’s also more versatile and the future of everything.

Neither fish nor flesh

When I think about the 12-inch MacBook, it’s really an odd one. With Apple devices, my reactions have historically been very clear-cut: I’ve either loved them or hated them. This little MacBook is perhaps the first really ‘meh’ device, if you’ll forgive the highly technical jargon here. More seriously, I find it rather representative of Apple’s vague product focus under Tim Cook’s guidance. The 12-inch MacBook is a proof-of-design machine. The more I look back at it, the more its sole raison d’être seems to be: We made this because we could, or, This is how thin and light we can go. Strategically, though? I’m sorry to say, but it’s been little more than luggage.

(On a last personal note, this MacBook was remarkably useful to me for one thing: when I tested one for a few days back in 2015, it made me realise just how bad the then-new keyboard design was, and has spared me a bunch of unnecessary headaches ever since, as I’ve carefully avoided all MacBooks with that terribly-designed keyboard.)

Post-WWDC thoughts

Tech Life

The WWDC 2019 keynote was interesting and juicy. For once, it felt well organised and with an enjoyable pace. I liked a lot of what was showcased. I hated the audience, seemingly cheering for whatever was said on stage, but I actually cheered myself when they presented the Sign in with Apple initiative[1]. And the new font management features in iOS made me blurt a Finally! as I was watching the event.

The Mac Pro

The Mac Pro introduction made me happy. Apple could have screwed up the Mac Pro redesign in hundreds of ways. The company — whew — did the Right Thing.

Of course, according to many, Apple screwed up one aspect of the redesign: the redesigned price. I have only two complaints here: one, 256 GB as base storage, for such a machine, and for the entry price of $6,000, is insultingly ungenerous. It’s not a MacBook Pro, Apple. Two, $999 for the Pro Display stand is ludicrous. Okay, you’re a premium brand, but even, say, a battery conditioner for a Rolls-Royce Phantom doesn’t cost that much (it’s about $580, if you’re curious). As others have said, it would have been better to mask that price by including it in the total price of the Pro Display.

As for the rest, no, I don’t believe the 2019 Mac Pro is an expensive machine. And neither is the Pro Display XDR. For their intended audience, they’re priced quite reasonably. Check out this video by Jonathan Morrison about the Pro Display for an informed perspective on the matter.

A lot of words have already been spent about putting the Mac Pro pricing in context, but just look at this progression:

  • In 2012, an eight-core Mac Pro cost $3,499 (and a twelve-core was $4,999 by the way).
  • In 2013/2014, an eight-core Mac Pro configuration wasn’t available, but the six-core variant cost $3,999.
  • In 2017/2018, an eight-core iMac Pro cost $4,999.

Honestly, I was expecting a price tag of $5,999. Maybe a sweeter pill to swallow (especially with that meagre 256 GB SSD) would have been an entry-level Mac Pro priced at $4,999 and an iMac Pro reduced to either $3,999 or $4,499. I still think Apple should make the iMac Pro a little more affordable now that there’s a new, powerful Mac Pro back in the lineup. As for the display, I agree with those who’d like to see a standalone version of the iMac’s 27-inch 5K panel. That would be an excellent complement to any kind of desktop setup, in conjunction with a MacBook Pro, a Mac mini, or even a Mac Pro for those who don’t need the esoteric performance of the Pro Display XDR.

Mac OS

As you may recall, in the days preceding the WWDC, I was still apprehensive about the Mac. After the WWDC, I’m… conflicted. And I realise my conflict is directly related to what’s happening to the Mac platform: hardware and software are becoming two very different beasts. Apple is still capable of coming up with impressive hardware (the Mac Pro and the Pro Display XDR are obvious examples) — and that’s what’s making me a bit more optimistic. But on the software side, things couldn’t be more disappointing — and that’s what’s still fuelling my pessimism. Whatever few new features are introduced in Mac OS 10.15 Catalina, in my eyes they are outweighed by what’s being taken away (or locked down, or made unnecessarily complicated):

  • “Scripting language runtimes such as Python, Ruby, and Perl are included in macOS for compatibility with legacy software. In future versions of macOS, scripting language runtimes won’t be available by default, and may require you to install an additional package.” [From the Xcode 11 beta release notes | Commentary on Michael Tsai’s blog]
  • Notarising command-line tools: I’m no developer, but when I read this piece by Howard Oakley, I almost felt pangs in my stomach. Again, check the associated commentary on the always-excellent Michael Tsai’s blog.
  • From the Mac OS Catalina Preview page:
    • Dedicated system volume. macOS Catalina runs in its own read-only volume, so it’s separate from all other data on your Mac, and nothing can accidentally overwrite your system files. And Gatekeeper ensures that new apps you install have been checked for known security issues before you run them, so you’re always using good software.
    • Data. Apps must now get your permission before directly accessing files in your Documents and Desktop folders, iCloud Drive, and external volumes, so you’re always in control of your data. And you’ll be prompted before any app can capture keyboard activity or a photo or video of your screen.

    I know many won’t agree with me, but these security measures — while understandable on paper — are cumulatively overkill, and there should really be a simple switch for power users to disable at least all the folder authentication madness. The user experience here is starting to resemble Windows and its barrage of confirmation dialog boxes. (For the related discussion, see Security & Privacy in macOS 10.15 Beta on Michael Tsai’s blog.)
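As an aside on the dedicated read-only system volume mentioned above: whether a given volume is mounted read-only can be verified from a script by looking at the filesystem mount flags. A minimal, portable Python sketch follows; that Catalina’s system volume at / will actually report as read-only is an assumption based on Apple’s description, not something I’ve tested:

```python
import os

def is_read_only(mount_point):
    """Return True if the filesystem at mount_point is mounted read-only."""
    st = os.statvfs(mount_point)          # query the filesystem statistics
    return bool(st.f_flag & os.ST_RDONLY) # ST_RDONLY bit = read-only mount
```

On Catalina, is_read_only('/') should report True for the system volume, while the separate data volume stays writable.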

These are just quick examples. But my general impression about where Mac OS is going is that Apple wants to turn it into a sort of low-maintenance system. The pretext is security: lock down this and that because it could be exploited; remove this and that because it’s code we can’t be bothered to update or optimise, it could potentially represent a vector for an attack, blah blah. Meanwhile, let’s also use these security measures to make the life of the already stressed-out Mac developers even harder. 

In 30 years as a Mac power user, what I have appreciated about Mac software is the ability to think and act outside the box, so to speak. In recent times, Apple seems hell-bent on keeping Mac software inside the box. The walled-garden model and paranoid security definitely made, and still make, more sense on mobile systems. I appreciate being able to look for and install apps on my iPhone that won’t mess with my device or present a security risk for the operating system or for me as a user (although Apple hasn’t done a great job at keeping scams away from the App Store); but on the Mac I want more freedom of movement. I’m an expert user; I know the risks involved. Let me tinker. Give the option of a locked-down Mac to novice users who expect to use it like an appliance, or in the same way they use their phones and tablets. Leave the ‘root’ door open for those who know what they’re doing.

iPad and the Mac

You certainly know this rather famous Steve Jobs quote: I think Henry Ford once said, “If I’d asked customers what they wanted, they would have told me, ‘A faster horse!’ ” People don’t know what they want until you show it to them. I look at what iPad is becoming and I see ‘a faster horse.’ 

Many seem happy that Apple is now listening. No doubt about that. But I also see it as kind of a bad thing. This might be completely off the mark, but I feel that today’s Apple pays too much attention to the input of an élite of tech pundits who are also iPad power users. On the one hand, it’s nice that Cook’s Apple is more receptive to external suggestions. On the other, lately the company has seemed a bit too keen on pleasing the aforementioned élite. Sometimes I even get the feeling that the iPad’s path is pretty much designed by committee.

Steve Jobs was less receptive to external input, probably because he knew what he wanted and typically had clearer ideas about the path ahead. (Again, no, I don’t think he was infallible. I simply preferred his leading style.) 

Anyway. A lot of iPad fans misunderstand one important thing about where I stand as a Mac user. I don’t want iPad to fail. I want all Apple platforms to succeed. But I don’t believe Apple’s homogenisation plan is a good way of achieving that. It may be a convenient way, for Apple and perhaps for developers too. But the various platforms have their unique strengths and unique strings to pull to make each of them progress healthily. Dedicated differentiation is hard, apparently: a multi-billion-dollar company seemingly can’t afford enough resources to develop two major platforms concurrently, prioritising what’s best for each platform and for its users.

So we have a Mac platform that was doing fine until it was basically put on hold because the iPad had to grow, evolve, be revolutionary. The iPad was course-corrected to become more pro. Meanwhile the Mac was neglected, and the iPad has been slower to catch up than originally planned. Think about the time that has been wasted for this and because of this. It hasn’t been good for either the Mac or the iPad. Sure, maybe all is well now, and I’m worrying too much; yet I can’t help thinking it could have been different, and better.

If iPadOS just becomes a Mac OS clone, that’s not progress, however you look at it. And at the moment I’m not really trusting Apple when it comes to having a clear plan to make iOS on the iPad evolve and shine. Adding Mac-like features is the easy way out. What’s next? That’s hard.

I’ve been upset with Apple for all the time the company wasted ‘pushing’ the iPad from a marketing/lifestyle standpoint, instead of concentrating on building a truly ‘pro’ variant. iPad Pro should have been a new device with a different iOS flavour/fork rethought from the ground up at the time iOS 7 came out. Instead they started doing something around iOS 9/10.

Another pain point in some discussions I’ve had with iPad fans is when I mention my general disappointment in the iPad as a system. What they always believe is that I’m making a direct comparison with the Mac, implying that the Mac is better. It’s. Not. That.

My disappointment is in the general lack of evolution at the operating system level. I don’t have any problem recognising the iPad as a ‘real computer’. Of course it is. That’s precisely why it’s also not a groundbreaking innovation. Let’s put aside all its hardware advantages for a moment: extreme portability, instant operation, a magnificent display, desktop-class peak performance. The software it runs is conceptually old. The way things happen when you interact with the operating system and applications and files is the same way we’ve been seeing on traditional computers since the Xerox Alto and Star, since the Apple Lisa, back in the early 1980s. Yes, it got much better visually. Of course. It’s the least one could expect after thirty years! In the medium term, the iPad will reach a stage where using it will be like using a Mac that also has Multi-Touch support. And while cool, it will still be anchored to decades-old paradigms and metaphors.

The post-PC era I’ve had in mind since Jobs introduced the concept is something else. From a user-interface, user-interaction standpoint, I expected (perhaps unrealistically) a different plan for iOS on the iPad. Hiding the filesystem in the first versions of iOS made me hopeful: let’s use the iPad as a tabula rasa for the computing experience. Let’s give people a tool to do ‘computery stuff’ on it without even realising they have a computer in their hands. John Gruber had the best insight in all of recent punditry when he said It’s the heaviness of the Mac that allows iOS to remain light.

Let’s look at the whole paragraph: 

The bigger reason, though, is that the existence and continuing growth of the Mac allows iOS to get away with doing less. The central conceit of the iPad is that it’s a portable computer that does less — and because it does less, what it does do, it does better, more simply, and more elegantly. Apple can only begin phasing out the Mac if and when iOS expands to allow us to do everything we can do on the Mac. It’s the heaviness of the Mac that allows iOS to remain light.

I’ve always thought that a better plan would have been to keep the Mac around (always refining it, always keeping its power up-to-date and relevant), while using iOS and the iPad to push the computing envelope. When I say that the iPad isn’t the future, iPad fans get upset because they think I’m looking down on it, or outright dissing it. I’m not. I look at it, and see traditional computing; maybe done a bit differently, maybe done with a cooler veneer, by touching a screen instead of using a mouse[2], but still pretty traditional. 

Hiding the filesystem and having users interact with applications and documents in a different way — in a fashion that made both applications and documents sort of get out of the way, disappear as constructs because you have a full-screen environment and a series of actions to handle whatever you’re doing with the device — was an excellent starting point, in my book. But then things started to stagnate here, more complex workflows made a lot of collateral friction emerge, and now, in iOS 13, handling files on an iPad is pretty much the same as on a Mac. It’s a practical victory, but a theoretical defeat.

Criticising this stuff is hard, because there comes a point where I’m asked So, what do you propose, instead? I don’t have a clear solution or alternative fully designed and ready to be implemented here. (I recently shared some ideas about a kind of tablet I’d be eager to use). What I would love to see is more research to achieve a different and more evolved computing experience, one that is capable of letting go of old metaphors and paradigms so that people can interact with these tools even more naturally and in more immediate ways, instead of visualising the computer workspace as an eternal office. 

Some observations on iPadOS and its gestures

1.

The new way of selecting text looks simpler and more straightforward than the old way. It’s like you’re pointing at the text you want selected. I currently don’t have the means or opportunity to test this in person, but I’m curious to know about how efficient this method is for selecting large blocks of text, especially when they’re longer than what’s displayed on screen at the beginning of the selection gesture. I’m also curious to know about the efficiency of this new method when you want to make a precise selection of just a couple of sentences or words inside a paragraph. The old method wasn’t necessarily clunky per se; the problem was that it worked inconsistently. Sometimes it worked like a charm in Safari or Mail, but not so much in other text-based apps like RSS readers or PDF viewers.

2.

The 3‑finger pinch to copy, 3‑finger spread to paste, 3‑finger swipe to undo all looked like cool new gestures in the pre-recorded bits Federighi was showing on the big screen, but to me they feel like unnecessary additions to an ever-expanding gesture lexicon, and I also wonder about their precision — copy and paste in particular. What happens when you have selected the text bit you want to copy, and one of your fingers touches the screen and deselects the text (and maybe also re-selects a single word or another unwanted portion of the text) before the 3‑finger pinch copy gesture is completed? And don’t get me started on the ‘cut’ gesture: two consecutive 3‑finger pinches? Come on.

If you think I’m splitting hairs here, rewatch the first moments of the Apple Pencil demo by Toby Paterson (Apple’s Senior Director in charge of the iPad system experience): 

…And then to move the cursor, you’ll just grab it with your finger… whoops… and he tries again. 

And then, shortly after: Now, to select text, just hold your finger on a word… Hold your fing— aah, sorry…

He’s clearly struggling with these gestures, and while I concede he must be nervous given the context, other gestures like dragging out the virtual keyboard to turn it into a compact keyboard are clearly easier and less hit-and-miss.

3.

Tap &amp; drag in iPadOS
Window management and multitasking in iPadOS are clearly borrowing heavily from the Mac, but since the gestures on the iPad have to be keyboard-independent, there is a lot of tapping & dragging involved. Curiously, when there is indeed a keyboard attached to the iPad, there doesn’t seem to be a fallback set of keyboard shortcuts to make things easier. 

And as I was watching Federighi tapping, dragging, moving, and split-viewing on an iPad Pro propped on the table in landscape orientation with a Smart Keyboard attached, I was reminded of what Federighi himself said about not wanting to introduce a MacBook with a touch screen; he brought up usability reasons, and the fact that it’s not a great user experience because having to raise your arm to directly manipulate the screen gets tiring quickly. And it’s true! Yet that is exactly what’s happening when you’re working with an iPad set up this way. And you can tell me But with the iPad it’s different all you want: it is exactly the same experience, but suddenly those legitimate usability concerns have vanished.

4.

Safari shortcuts on iPadOS

Speaking of shortcuts, I was about to leave this screenshot here without comment, but I have to point out those terrible hybrid shortcuts that involve one or two keys and a tap on the screen. They look unnecessarily counterintuitive, and I can’t believe there wasn’t a better option. There is a keyboard — just make shortcuts that involve only keys. Better yet, use the same shortcuts as in Safari on the Mac. What’s the problem?

5.

In general, if you count the new gestures you do with your fingers, and the new gestures you perform with the Pencil, there isn’t much that can be intuitively discovered without at least a brief tutorial at the Apple Store when you’re purchasing an iPad. And even if all this is well explained in an online guide, or by an Apple retail employee, I wonder how many of these gestures are going to stick with users. This is just an observation, and maybe I’m wrong. Maybe all these gestures end up being far more intuitive than they seem to me at first glance. My worry, of course, is that all this increasing complexity accumulates to a point where there’s a thin, yet persistent layer of friction when using an iPad, which inevitably brings frustration. One of the key differentiators of Apple devices is not just their software, but the fluidity of their experience. That’s what may convince a prospective customer (with no particular affiliation to a platform) to buy an iPad over a Microsoft Surface.

What about the rest?

The rest was good. I liked it. I don’t have an Apple TV or an Apple Watch, and I’m not really interested in having either, but the new features are nice, and I like where these two platforms are going (though tvOS is the slowest-advancing platform I’ve ever seen). iOS 13 looks like a very promising release, and I look forward to upgrading later this year. Apologies for not being exhaustive about everything announced at WWDC 2019, but what’s really capturing my attention at the moment is how Apple is handling Mac OS and iOS on the iPad.

 


  • 1. During a recent trip to Italy, I had to use the Sign in with Google option on a site (it was the lesser evil), and since then I’ve been getting an average of 20 more spam messages per day in my Gmail account. So, unlike others, I don’t really care about the behind-the-scenes of Sign in with Apple; I just want to see it widely implemented as soon as possible. ↩︎
  • 2. Oh, wait… ↩︎

 

Still apprehensive about the Mac

Handpicked

I usually take my time to ponder things before publishing a post here. But this time I just wanted to write down a few brief, raw thoughts before WWDC. I’m leaving for a short trip in a few hours, and I probably won’t have time to write anything else before June 3.

 

Brent Simmons has written a succinct, spot-on reaction to Steve Troughton-Smith’s piece (Don’t Fear) The Reaper.

So, knowing how this has worked out in the past, why do I fear the reaper?

Because bringing UIKit brings no new power. If anything, it subtracts power. UIKit apps — at least so far — are all sandboxed and available only via the App Store. They don’t offer everything AppKit offers.

[…]

Getting the Mac OS X transition right was a priority for the company: if it failed, the company would fail. But with this? Not the same story at all. [Emphasis mine here.]

Much of the debate surrounding Marzipan so far has focused on the fear of a decline in Mac software quality. What veteran Mac users are afraid of is a new wave of Mac apps that are little more than crude iOS ports, apps that neither look nor behave like Mac apps.

It’s an understandable concern, and one I share. I’m especially wary of iOS-only developers who limit their use of the Mac to the bare minimum (coding their apps and little more). How can they provide a good experience in the Mac apps they’ll develop via Marzipan if they have little familiarity with Mac OS, its interface and — for lack of a better term — its flow?

They’re not entirely to blame: Apple itself certainly isn’t leading by example on this front lately. Home, Stocks, News, and Voice Memos look as if they were assembled over the course of a few days by a novice iOS developer or a group of interns at Apple.

But I have other fears.

I fear that Apple’s plan for the Mac is to further close the platform down, so that — like on iOS — the Mac App Store becomes the only source for Mac software. That would be unfortunate to say the least. I want the freedom to purchase, download, and install Mac apps from wherever. I want to be able to give my support directly to a developer by buying their software from their website.

Also, as a consequence, I fear that the Mac App Store is going to become more like iOS’s App Store in every way — with thousands of crappy apps, and terrible pricing trends. By ‘terrible pricing trends’ I mean the race to the bottom on the one hand, and on the other an increase in subscriptions as the only payment method even for simple utilities and single-purpose apps. (I hope more people realise that subscriptions aren’t sustainable on a large scale for customers.)

I fear that iOS is going to become the model that dictates how the Mac user interface has to behave. That Macs are going to be considered just ‘big iPads’, and that paradigms and behaviours tailored for iOS, that belong to iOS, will come to replace the paradigms, principles, and behaviours that made the Mac’s user interface great.

Though of course not all at once, I fear this is going to happen eventually, because I have the feeling that Apple — while maybe not reaching the point of merging the two systems completely — wants to somehow ‘unify’ iOS and Mac OS visually and behaviourally in the name of ecosystem homogeneity and the ‘seamless experience’. Whereas I believe both platforms should maintain their own specific strengths, their different ways to be simple and user-friendly, and their different ways to be powerful and versatile.

I’ve said it again and again — I’m not necessarily afraid of change, but I’m afraid of change for change’s sake. I’m all for change if it brings unequivocal progress. But I’m afraid that Mac OS is getting repurposed and repackaged more to fit inside an agenda than to keep thriving as a platform with its history, characteristics, and unique features. 

I’ve experienced firsthand all the transitions the Mac platform has gone through, and this is the one that’s leaving me the most apprehensive. Because all past transitions brought clear advantages to the Mac, either from a hardware or software standpoint. The signals were of progress for the Mac platform; or, at the very least, of having to take a step sideways to then take two steps forward. This time it feels that things have to change simply to benefit the advancement of another platform. 

Never before have I hoped so much to be completely wrong about something. As Simmons concludes, I hope so very badly that I’m wasting my time with my worries.

 

Further reading

My kind of tablet

Tech Life

An opinion I’ve held for a long time is that Apple so far has done a mediocre job in turning the iPad into a ‘pro’ device. The hardware is fine, the current specifications for the iPad Pro models are more than fine. But the software — and to a certain extent the user interface and usability — is the weak spot. I won’t repeat myself about this; I think I said enough in Faster than its own OS back in November.

At the end of January 2010, this is how Steve Jobs introduced the iPad:

…And so all of us use laptops and smartphones now. Everybody uses a laptop and/or a smartphone. And a question has arisen lately: is there room for a third category of device in the middle? Something that’s between a laptop and a smartphone? And of course we pondered this question for years as well. The bar is pretty high. In order to really create a new category of devices, those devices are going to have to be far better at doing some key tasks. They’re gonna have to be far better at doing some really important things: better than the laptop, better than the smartphone. 

What kind of tasks? Well, things like browsing the Web. That’s a pretty tall order; something that’s better at browsing the Web than a laptop? Okay. Doing email. Enjoying and sharing photographs. Watching videos. Enjoying your music collection. Playing games. Reading eBooks. If there’s going to be a third category of device, it’s going to have to be better at these kinds of tasks than a laptop or a smartphone, otherwise it has no reason for being. 

Now, some people have thought that that’s a netbook. The problem is netbooks aren’t better at anything. They’re slow, they have low-quality displays, and they run clunky old PC software. So they’re not better than a laptop at anything, they’re just cheaper; they’re just cheap laptops. And we don’t think they’re a third category device. But we think we’ve got something that is.

I could use this quote to emphasise how all the tasks Jobs enumerates are consumption-related; that the drive to create such a device came from the need for hardware that was more convenient and more capable at delivering certain content for people to enjoy. The creative angle came later. In retrospect, it’s crucial to notice that the iPad was not conceived as a creation tool: Jobs didn’t mention production or creative tasks when he described the thought process leading to this ‘third category device’. I’m sure he was aware that, with the right applications, the iPad could be more than just a vehicle for content consumption. Still, that doesn’t seem to have been a priority.

Instead, what I want to emphasise in this quote is this part: In order to really create a new category of devices, those devices are going to have to be far better at doing some key tasks. They’re gonna have to be far better at doing some really important things: better than the laptop, better than the smartphone.

Far better at doing some key tasks. Better than the laptop (but let’s just say better than a Mac or any other traditional computer), and better than the smartphone. Think about that.

For the first few iterations of its existence, the iPad and iOS delivered on their mission. In 2010 I had a brand-new MacBook Pro and I was still making the most of my iPhone 3G, but I couldn’t wait to get an iPad. I wanted to use it especially for reading, so I waited very patiently for an iPad with a retina display. And in 2012, with iOS 5, the iPad was still a great device for everything it was designed to do: a fast device with an intuitive operating system and an extremely low learning curve. Some apps for more creative tasks had appeared, and with the addition of a Wacom stylus I had fun drawing and painting.

Then some people got very excited about the iPad, and another question arose: why can’t we use the iPad for all kinds of tasks?

That’s when things started to go awry, in my opinion. 

Because while it’s technically still true that an iPad is better at doing some key tasks — better than a traditional computer and better than a smartphone — it’s not better at doing everything.

The integration between hardware and software Apple is renowned for means that the software running on an Apple device is (ideally) optimised to give the user the best experience of what the device has been designed to accomplish. “The iPad is just a big iPhone” was a common criticism back then, and it was a misguided remark, because a few of the iPad’s key strengths came precisely from it being like a big iPhone. The familiar gestures people had quickly learnt to master the iPhone’s user interface still worked very well on an iPad. At the time, there were no significant functional differences between how iOS 5 or iOS 6 worked on an iPhone and on an iPad. Apps optimised for the iPad needed a bit of user-interface retouching and rethinking, but as far as user interaction was concerned, there was nothing particularly disruptive. Things worked well. Users didn’t need additional training or additional attention to master an iPad.

But in order to accomplish additional tasks — especially complex tasks that require a certain degree of interoperability among apps and services — just resorting to third-party ingenuity was not enough. The iPad’s operating system needed changes and improvements. Which of course, inescapably, meant an added layer of complexity. As I observed back in 2016:

In iOS’s software and user interface, the innovative bit happened at the beginning: simplicity through a series of well‐designed, easily predictable touch gestures. Henceforth, it has been an accumulation of features, new gestures, new layers to interact with. The system has maintained a certain degree of intuitiveness, but many new features and gestures are truly intuitive mostly to long‐time iOS users. Discoverability is still an issue for people who are not tech‐savvy.

[See also Tap swipe hold scroll flick drag drop]

I don’t mean to dismiss the efforts Apple has made to make iOS work better on supposedly ‘pro’ iPads, but it’s undeniable that iOS has matured very slowly on this front. On iPhones, I believe it’s still a great operating system, because it still delivers on what you’re supposed to accomplish with a smartphone. The hardware/software integration is tighter there. On the iPad, my impression is that things have been messier, less focused, less optimised to make the most of the device. Now, if you caught me in a more exasperated mood, I’d probably put the blame on Apple, saying that they could have done a better job, etc.

But the thing is, a touch interface can only do so much. There are still a lot of tasks for which a traditional computer is better and more versatile, and there are tasks for which a smartphone is better, because (among other things) certain touch gestures are simply more effective on its smaller screen. Some will undoubtedly insist that an iPad today can do anything a traditional computer can, and I may even agree on a theoretical level, but the fact is: just because an iPad is better than a computer or a smartphone at certain tasks, it’s not necessarily better at doing everything these other two kinds of devices were designed to do.

While successful, the iPad hasn’t been as revolutionary as many hoped (including some Apple executives, I presume), and in recent years Apple has made repeated efforts to turn it into a revolutionary device, perhaps paying too much attention to some hardcore iPad fans in the tech sphere. Apple has even neglected the Mac in the process, but so far the outcome has been underwhelming on both fronts. We have a generally weaker Mac, with serious hardware design flaws in its laptop line, and an operating system that hasn’t really evolved since probably Mac OS X 10.9 Mavericks. Then we have an iPad platform that hasn’t really improved all that much — the main differentiator between a regular iPad and an iPad Pro is essentially their technical specifications; it’s a hardware thing. Not a revolutionary new user interface or paradigm. Not even a tighter hardware/software integration (if anything, we’re in for yet another layer of complexity and new gestures and actions to memorise).

21st Century tablet

My habits and preferences betray my somewhat long history with computers and technology. I didn’t grow up with smartphones and tablets. My first home computer was a Commodore VIC-20. I was 27 when I first used a mobile phone. Despite what some people may think, I’m not averse to change and my brain is still flexible enough to pick up new habits or change old ones. What happens when you get older, though, is you tend to consider more often whether changing a habit or rethinking a workflow is actually worth it. And what I’ve always said about the iPad in this regard is this: if I’m faster, more efficient, more productive with a Mac (or, in certain fringe cases, with an iPhone), why should I learn a more convoluted path to be able to do the same thing — but more slowly and less efficiently — on an iPad? 

This state where you can simply have an iPad that does everything you need, without compromises, and does it better than any other class of device, is still pretty much an ideal. Unless we witness a major hardware or software redesign, the trajectory the iPad is following means this device is going to progressively resemble a Mac with a touch interface. We’ll ultimately have a device whose operating system reflects a general reinvention of the wheel, feature- and functionality-wise, and whose distinctive traits are its touch interface and its extreme portability, and… that’s it? Where exactly is the progress in this scenario? You may tell me I’m simply not considering all the amazing new technologies that can still be added to the iPad in the coming years. Okay. But for now I look at what we’ve got. And what we’ve got is a tablet that at its very core is still the same iPad of almost ten years ago. Sure, it has become cooler to use and more powerful than the original 2010 model. But as someone who looks at technology as a forest and not at this or that tree, I see the iPad as an enormous waste of potential.

While having a tablet as the iPad was originally intended to be — a convenient consumption device — has been a great addition, I feel that a general mistake on the whole industry’s part (allow me a bold statement every now and then) has been to focus on the iPad paradigm with too much tunnel vision, and not consider other ways to approach the idea of a tablet, both from a functional standpoint and from a user interface standpoint. Other manufacturers just followed Apple’s example and now we have a lot of mediocre alternatives that look and feel just like big phones and try to ape traditional computers for certain tasks. We have a third category of device that, instead of evolving into being something distinctive and even independent from the other two, has become a mix of smartphone and traditional computer envy. (I’m generalising and I’m aware there are exceptions in some of the iPad’s features.)

When it comes to ambitions for a tablet device, I keep thinking that the Newton was on a way more intriguing path than the iPad has been for the past nine years. I know that the technology had limitations. But don’t just put a Newton MessagePad and an iPad side by side and compare the two. Of course it’s going to be an unfair comparison — there’s a technology gap of about 15 years between them. I’m talking about vision. Just take one of the Newton’s basic features: handwriting recognition. Yes, on the first generation of Newton devices it wasn’t great, you may remember the jokes, and so on and so forth. Few people seem to be aware that it got much, much better with NewtonOS 2.x and on later, more powerful devices. I’ve been a Newton user since 2001, and to this day I can turn on my MessagePad 2100, create a new note or document, start writing on the device with its stylus as if it were a paper notebook, and the Newton will correctly understand and translate 99% of my scribbles into legible typewritten characters. It’s something I still can’t do on an iPad. 

And that’s because one day the industry decided that pen computing had no future. So, while using a stylus to draw, paint, or as an input device in certain specialised settings and applications is considered normal and natural, apparently writing on a flat surface with a stylus — something humans have been doing for at least 7000 years — is not. 

Well, if I had to describe my kind of tablet, a tablet I might consider using for productive tasks, I think it would be some sort of Newton on steroids at its core, with an input interface that would use touch where appropriate and the stylus where appropriate. That includes gestures: imagine splitting the screen between two open applications simply by drawing a line with the stylus, instead of memorising some finger sequence that has to be performed in exactly the right way or it isn’t registered.

It would have an amazing handwriting recognition engine: so fast and accurate that it would make a virtual keyboard redundant. Mistranslated words could be corrected with a tap, and the tablet’s autocorrect would learn from those mistakes. Machine learning finally put to good use.

It could easily be connected to a Mac or PC, and you could use it as a giant trackpad, as a graphics tablet, as an additional display, even as a backup device for sensitive data and projects, which could be encrypted on the fly if needed by using biometric identification such as Touch ID or Face ID. The exchange of files and documents would, of course, be seamless.

The user interface would feature a healthy selection of ‘drawing gestures’ and certain drawn elements could be smartly interpreted and subsequently rendered by the OS. Imagine you’re putting together a report and you’re making a draft with a series of items that will have to be organised in a table. You start handwriting the items and the associated data in different columns, just like you would do on a paper notebook. Once finished, you would draw lines along and across the items and the OS would ask you if you want to create a table; you would confirm and you’d end up with a perfectly laid out table you can drag inside the document (if it’s a separate object, otherwise it would already be part of it and you could drag it around to fit in the document’s layout). Once a series of items has been transformed into a table, the system could also handle it with its built-in spreadsheet feature, or you could export it to your favourite application for further editing and refinements.

As you may have guessed, I’m a fan of the old document-centric approach. The application-centric model has its advantages, of course, but I believe that an ideal tablet with an enhanced pen-based input interface could use some document-centric paradigms and it would feel very natural. The tablet’s OS could have a series of core functionalities (or services) that are invoked by what you intend to do. You create a new document and it could be a letter, an email, a financial report, the chapter of a novel, the page design for a magazine, a post on your preferred social network, a spreadsheet, a new webpage for your site, a new post for your blog, etc., and the tablet — via a series of ‘smart agents’ — would either understand what you’re doing or you’d simply tell it via a Create as… / Save as… command once you’re done. The OS could have some basic built-in services (e.g. an HTML/CSS editor for when you’re creating a new webpage), but you could also integrate third-party apps and services to have a richer experience and achieve more specialised results.
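Purely to make the ‘smart agents’ idea above a little more concrete, here is a minimal sketch assuming naive content heuristics; every name and rule in it is illustrative, my own invention, and not a proposal for an actual implementation:

```python
# Hypothetical sketch of a document-centric 'smart agent'.
# All names and heuristics are illustrative, not a real API.

def guess_document_type(text):
    """Return a best guess at the document type from what the user has written."""
    heuristics = [
        # (document type, predicate deciding whether the content matches)
        ("webpage",     lambda t: "<html" in t.lower() or "</p>" in t.lower()),
        ("email",       lambda t: t.lstrip().lower().startswith(("dear", "hi", "hello"))),
        ("spreadsheet", lambda t: any("\t" in line for line in t.splitlines())),
    ]
    for doc_type, matches in heuristics:
        if matches(text):
            return doc_type
    return "note"  # fall back to a plain document

print(guess_document_type("Dear Ms. Rossi,\n..."))  # → email
```

A real system would of course rely on handwriting strokes, layout, and machine learning rather than string matching, and the Create as… command would let the user override whatever the agent guessed.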

Visually, you would have a sort of desktop, but think of it more as a workspace than as a container of apps and files. A workspace where you can create things from scratch directly, or invoke/import things to ‘consume’. But even when you’re consuming content, imagine having an intermediate, invisible layer that lets you manipulate whatever you’re reading or watching or looking at, in case you find something you need. You’re reading an amazing article on a website and want to save that insightful quote for yourself, or to reuse in one of your articles? You highlight it with the stylus, either by underlining words or simply by enclosing the quote in a rectangle, and now you have a clipping you can reuse (the system could also save the original URL as metadata, so that the source is always retained). This could work with different kinds of content: text, audio, video, still images, etc. You could use your finger or the stylus as an eyedropper tool when you see a particular colour you want to save or use for a project. These are just a few basic examples, but you get the idea.

I’ve been thinking of an interface and operating system like these for years, and I confess I was excited when the Microsoft Courier research project surfaced back in 2009 [you can still find concept videos on YouTube, like this one or this one (truncated, sadly); also check this video about Microsoft’s Codex prototype, which predates Courier]; and I still think that some of its gestures and user interface ideas are more innovative — at least more intuitive — than what Apple has done with iOS on the iPad. Courier ended up being little more than an investigation, a concept, but it treated the tablet as a tablet, not as a wannabe traditional computer with a multi-touch interface on top of it.

With this approach, my ideal tablet would certainly have a potentially complex interface, but by including more robust stylus input and gestures that borrow heavily from the fundamentals of drawing when it comes to manipulating content and indicating intention, I think a lot of the user interface and interactions would be easier to grasp and master. There could even be a ‘tutorial mode’ the user could toggle: when it’s enabled, certain parts of the tablet’s interface would be subtly highlighted, and by tapping on them the user would be presented with labels or tooltips explaining how to interact with that particular element.

More importantly, the tablet would share part of the burden when a user wants to accomplish a task — imagine something like predictive text, but applied to many other different actions. Instead of being confused as to how to perform a certain action, the user could start doing something with the stylus, and the OS could offer some suggestions about which actions can be carried out from there. Or, if all else fails, it could ask the user if they want additional help. This, of course, should be a last-resort scenario, because ideally the interface would be so intuitive and discoverable that users wouldn’t need help or tutorials — but at least help and tutorials would be planned and included, and people wouldn’t be left on their own to figure out how to do something. 

In case my examples haven’t been clear enough, my kind of tablet would be strongly focused on applications and services interoperability. Precise, rigorous interface guidelines would ensure a great integration with third-party solutions. Developers could write standalone apps, but also services and system extensions to expand the tablet’s functionality and scope, ultimately contributing to its overall flexibility. In a model like this, workflows would have less friction because you would be adding functionalities and ‘actions’ made available either by the manufacturer or by third parties. 

If this is getting too abstract, imagine an even more reliable and ‘hardwired’ version of what iOS currently offers with Siri Shortcuts or with the older Workflow app. You download/purchase additional modules to accomplish specific tasks. Once added to the system, these modules or ‘actions’ (or whatever you want to call them) would in turn be available and accessible to third-party applications. For example, imagine you could add a “Markdown to HTML” module to the OS. From then on, that action would be available to the built-in text editor, but also to any third-party text editor you may get in the future. If a third-party developer wanted to write a text editor using their own Markdown-to-HTML converter, they could do so, and the user could choose which to use by changing a preference setting. But if a third-party developer wanted to write a certain kind of text editor that is more focused on beautiful typography or other specific aspects, they could do that without feeling the need to also offer text converters. Again, these are just crude examples off the top of my head. Perhaps a few user interface mockups would tell a clearer story, but I hope you’re getting my drift nonetheless.
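To make the module idea above a little more concrete, here is a minimal sketch, in Python for brevity, of a shared action registry: a ‘Markdown to HTML’ action is registered once, becomes available to every editor, and a user preference decides which provider wins when several exist. Every name here (ActionRegistry, the action id, the toy converter) is hypothetical, not a real API.

```python
# Hypothetical sketch of system-wide, shareable 'actions' or modules.
# All names are illustrative inventions, not a real OS API.

class ActionRegistry:
    """System-wide registry: each action id maps to one or more providers."""
    def __init__(self):
        self._providers = {}   # action id -> {provider name: callable}
        self._preferred = {}   # action id -> provider name chosen by the user

    def register(self, action_id, provider, func):
        self._providers.setdefault(action_id, {})[provider] = func

    def set_preferred(self, action_id, provider):
        self._preferred[action_id] = provider

    def invoke(self, action_id, *args):
        providers = self._providers[action_id]
        name = self._preferred.get(action_id, next(iter(providers)))
        return providers[name](*args)

registry = ActionRegistry()

# A system-provided module, available to every text editor once installed
# (the converter is a toy stand-in for a real Markdown engine):
registry.register("markdown-to-html", "system",
                  lambda text: text.replace("**bold**", "<b>bold</b>"))

# A third-party editor ships its own converter; a preference decides which wins:
registry.register("markdown-to-html", "FancyEditor",
                  lambda text: "<p>" + text + "</p>")
registry.set_preferred("markdown-to-html", "system")

print(registry.invoke("markdown-to-html", "**bold**"))  # → <b>bold</b>
```

The point of the sketch is the indirection: editors ask the system for an action by name instead of bundling their own implementation, which is what would let a typography-focused editor skip shipping converters entirely.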

I think that a tablet with this kind of OS, one that prioritises modularity, tasks, and app integration, and with a user interface that treats the tablet as a tablet and lets you interact with it by ‘speaking a tablet language’, would make for a versatile device with a good degree of extensibility, and a good degree of independence. You could attach any kind of accessory to it to make your life easier, such as an external keyboard, but the idea is that all you’d need is the main device and its stylus. And if you wanted to use such a tablet as a mere ancillary device, you could do so by seamlessly connecting it to your computer, and the tablet would become an accessory or extension as needed.

There is nothing particularly sci-fi in my ideal tablet — just perhaps a rearrangement of a few conceptual pieces — but I understand if some of my ideas sound weird or unfamiliar or unfeasible, especially if you’re satisfied with the way the iPad and iOS-on-the-iPad work today. I think the time has come for Apple to either embrace the interface limitations of the iPad and try to make the best hardware/software integration within those limitations, or to start designing something new from scratch with the express purpose of being a creation-/production-oriented device and operating system.

Let me know what you think, if you like.