The little MacBook that couldn’t

Tech Life

MacBook

In April 2015, Apple introduces the 12-inch MacBook with retina display. It’s a marvel of thinness and lightness. It’s also a display of ‘courageousness’ that predates the headphone jack removal in the iPhone 7. Because this laptop, while indeed retaining the headphone jack, has only one other port, and it’s USB‑C. And it’s for everything, including charging. But the pressing question is, Is this the MacBook Air killer?

No, not really. For about a year and a half, until October 2016, Apple still features both the 11- and 13-inch MacBook Air models alongside the 12-inch MacBook in the lineup. But the only real selling point this MacBook has to win people over is the retina display. The MacBook Air is the better proposition for literally everything else: more ports, MagSafe, better keyboard, better performance, equally great battery life, negligible difference in size and weight (especially the 11-inch model), and a more affordable price.

The 11-inch MacBook Air gets discontinued in October 2016. But not the 13-inch model. The 13-inch model gets discontinued today, in 2019. But so does the 12-inch MacBook.

 

But as far as compact laptops with retina display go, surely the 12-inch MacBook is the best proposition?

Not really. When first introduced in April 2015, the 13-inch retina MacBook Pro (introduced one month earlier) is a better machine for the money. The little MacBook wins in thinness and lightness. But the 13-inch retina MacBook Pro is better in every other regard. Better screen resolution, more ports, better keyboard, much better performance, even better battery life (at least on paper), and the base model costs $1,299, exactly like the 12-inch MacBook (for that price you only get 128 GB of storage in the 13-inch retina MacBook Pro, versus the 256 GB in the MacBook, but still).

In 2016, even the redesigned 13-inch MacBook Pro (without Touch Bar) is an overall better choice than the 12-inch MacBook.

 

The 12-inch retina MacBook hasn’t been an iPad killer, either. Not that it was ever meant to be, mind you, but for regular people who don’t need specific Mac OS software for their work, a 12.9‑inch iPad Pro of the same vintage as the 12-inch MacBook was an overall better value than the little laptop. Even that iPad Pro’s Smart Keyboard is probably better than the MacBook’s butterfly keyboard.

If anything, the 12-inch MacBook has lost to the iPad Pro. By removing the 11-inch MacBook Air first, and the 12-inch MacBook now, Apple has made sure that if you’re looking for an Apple ‘ultrabook’ today, you take a good look at the iPad Pro. Not that the new 13-inch retina MacBook Air is a giant heavyweight, but the 11-inch iPad Pro is svelter, more compact, and some people will no doubt rush to add that it’s also more versatile and the future of everything.

Neither fish nor flesh

When I think about the 12-inch MacBook, it’s really an odd one. With Apple devices, my reactions have historically been very clear-cut: I’ve either loved them or hated them. This little MacBook is perhaps the first really ‘meh’ device, if you’ll forgive the highly technical jargon here. More seriously, I find it rather representative of Apple’s vague product focus under Tim Cook’s guidance. The 12-inch MacBook is a proof-of-design machine. The more I look back at it, the more its sole raison d’être seems to be: We made this because we could, or: This is how thin and light we can go. Strategically, though? I’m sorry to say, but it’s been little more than luggage.

(On a last personal note, this MacBook was remarkably useful to me for one thing: when I tested one for a few days back in 2015, it made me realise just how bad the then-new keyboard design was, and it has spared me a bunch of unnecessary headaches ever since, as I’ve carefully avoided all MacBooks with that terribly designed keyboard.)

Post-WWDC thoughts

Tech Life

The WWDC 2019 keynote was interesting and juicy. For once, it felt well organised and with an enjoyable pace. I liked a lot of what was showcased. I hated the audience, seemingly cheering for whatever was said on stage, but I actually cheered myself when they presented the Sign in with Apple initiative[1]. And the new font management features in iOS made me blurt a Finally! as I was watching the event.

The Mac Pro

The Mac Pro introduction made me happy. Apple could have screwed up the Mac Pro redesign in hundreds of ways. The company — whew — did the Right Thing.

Of course, according to many, Apple screwed up one aspect of the redesign: the redesigned price. I have only two complaints here: one, 256 GB as base storage, for such a machine, and for the entry price of $6,000, is insultingly ungenerous. It’s not a MacBook Pro, Apple. Two, $999 for the Pro Display stand is ludicrous. Okay, you’re a premium brand, but even, say, a battery conditioner for a Rolls-Royce Phantom doesn’t cost that much (it’s about $580 if you’re curious). As others have said, it would have been better to mask that price by including it in the total price of the Pro Display.

As for the rest, no, I don’t believe the 2019 Mac Pro is an expensive machine. And neither is the Pro Display XDR. For their intended audience, they’re priced quite reasonably. Check out this video by Jonathan Morrison about the Pro Display for an informed perspective on the matter.

A lot of words have already been spent about putting the Mac Pro pricing in context, but just look at this progression:

  • In 2012, an eight-core Mac Pro cost $3,499 (and a twelve-core was $4,999 by the way).
  • In 2013/2014, an eight-core Mac Pro configuration wasn’t available, but the six-core variant cost $3,999.
  • In 2017/2018, an eight-core iMac Pro cost $4,999.

Honestly, I was expecting a price tag of $5,999. Maybe a sweeter pill to swallow (especially with that meagre 256 GB SSD) would have been an entry-level Mac Pro priced at $4,999 and an iMac Pro reduced to either $3,999 or $4,499. I still think Apple should make the iMac Pro a little bit more affordable now that there’s also a new, powerful Mac Pro back in the lineup. As for the display, I agree with those who’d like to see a standalone version of the iMac’s 27-inch 5K panel. That would be an excellent complement to any kind of desktop setup, in conjunction with a MacBook Pro, a Mac mini, or even a Mac Pro for those who don’t need the esoteric performance of the Pro Display XDR.

Mac OS

As you may recall, in the days preceding the WWDC, I was still apprehensive about the Mac. After the WWDC, I’m… conflicted. And I realise my conflict is directly related to what’s happening to the Mac platform: hardware and software are becoming two very different beasts. Apple is still capable of coming up with impressive hardware (the Mac Pro and the Pro Display XDR are obvious examples) — and that’s what’s making me a bit more optimistic. But on the software side, things couldn’t be more disappointing — and that’s what’s still fuelling my pessimism. Whatever few new features are introduced in Mac OS 10.15 Catalina, in my eyes they are outweighed by what’s being taken away (or locked down, or made unnecessarily complicated):

  • “Scripting language runtimes such as Python, Ruby, and Perl are included in macOS for compatibility with legacy software. In future versions of macOS, scripting language runtimes won’t be available by default, and may require you to install an additional package.” [From the Xcode 11 beta release notes | Commentary on Michael Tsai’s blog]
  • Notarising command-line tools: I’m no developer, but when I read this piece by Howard Oakley, I almost felt pangs in my stomach. Again, check the associated commentary on Michael Tsai’s always-excellent blog.
  • From the Mac OS Catalina Preview page:
    • Dedicated system volume. macOS Catalina runs in its own read-only volume, so it’s separate from all other data on your Mac, and nothing can accidentally overwrite your system files. And Gatekeeper ensures that new apps you install have been checked for known security issues before you run them, so you’re always using good software.
    • Data. Apps must now get your permission before directly accessing files in your Documents and Desktop folders, iCloud Drive, and external volumes, so you’re always in control of your data. And you’ll be prompted before any app can capture keyboard activity or a photo or video of your screen.

    I know many won’t agree with me, but these security measures — while understandable on paper — are cumulatively overkill, and there should really be a simple switch for power users to disable at least all the folder authorisation madness. The user experience here is starting to resemble Windows and its barrage of confirmation dialog boxes. (For the related discussion, see Security & Privacy in macOS 10.15 Beta on Michael Tsai’s blog.)
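
Incidentally, the fate of those bundled runtimes is easy to check from Terminal. Here’s a quick sketch that should work on any Unix-like system (in Catalina the interpreters are still present, merely flagged as legacy); it’s just my own curiosity script, not an official Apple tool:

```shell
#!/bin/sh
# For each historically bundled scripting runtime, report whether it is
# still reachable in the PATH, and which version responds.
for interpreter in python ruby perl; do
  if command -v "$interpreter" >/dev/null 2>&1; then
    # Some interpreters (e.g. Python 2) print their version on stderr,
    # hence the 2>&1 redirection.
    echo "$interpreter: $("$interpreter" --version 2>&1 | head -n 1)"
  else
    echo "$interpreter: not installed"
  fi
done
```

On a stock Mojave or Catalina system this reports the bundled Python 2.7, Ruby, and Perl; once Apple drops the runtimes, the same script would show them as missing (or pick up whatever you’ve installed yourself from Homebrew or python.org).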

These are just quick examples. But my general impression about where Mac OS is going is that Apple wants to turn it into a sort of low-maintenance system. The pretext is security: lock down this and that because it could be exploited; remove this and that because it’s code we can’t be bothered to update or optimise, it could potentially represent a vector for an attack, blah blah. Meanwhile, let’s also use these security measures to make the lives of already stressed-out Mac developers even harder.

In 30 years as a Mac power user, what I have always appreciated about Mac software is the ability to think and act outside the box, so to speak. In recent times, Apple seems hell-bent on keeping Mac software inside the box. The walled-garden model and paranoid security have always made, and still make, more sense on mobile systems. I appreciate being able to look for and install apps on my iPhone that won’t mess with my device or present a security risk for the operating system or for me as a user (although Apple hasn’t done a great job at keeping scams away from the App Store); but on the Mac I want to have more freedom of movement. I’m an expert user; I know the risks involved. Let me tinker. Offer a locked-down Mac option for novice users who expect to use it like an appliance, or in the same way they use their phones and tablets. Leave the ‘root’ door open for those who know what they’re doing.

iPad and the Mac

You certainly know this rather famous Steve Jobs quote: I think Henry Ford once said, “If I’d asked customers what they wanted, they would have told me, ‘A faster horse!’ ” People don’t know what they want until you show it to them. I look at what iPad is becoming and I see ‘a faster horse.’ 

Many seem happy that now Apple is listening. No doubt about that. But I also see it as kind of a bad thing. This might be completely off the mark, but I feel that today’s Apple pays too much attention to the input from an élite of tech pundits who are also iPad power users. On the one hand it’s nice that Cook’s Apple is more receptive to external suggestions. On the other, lately the company has seemed a bit too keen on pleasing the aforementioned élite. Sometimes I even get the feeling that the iPad’s path is pretty much designed by committee.

Steve Jobs was less receptive to external input, probably because he knew what he wanted and typically had clearer ideas about the path ahead. (Again, no, I don’t think he was infallible. I simply preferred his leadership style.)

Anyway. A lot of iPad fans misunderstand one important thing about where I stand as a Mac user. I don’t want iPad to fail. I want all Apple platforms to succeed. But I don’t believe Apple’s homogenisation plan is a good way of achieving that. It may be a convenient way, for Apple and perhaps for developers too. But each platform has its unique strengths and unique strings to pull to make it progress healthily. Dedicated differentiation, apparently, is hard: a multi-billion-dollar company seemingly can’t afford enough resources to develop two major platforms concurrently, prioritising what’s best for each platform and for the users of each platform.

So we have a Mac platform that was doing fine until it was basically put on hold because the iPad had to grow, evolve, be revolutionary. iPad was course-corrected to become more pro. Meanwhile the Mac was neglected, and iPad has been slower to catch up than originally planned. Think about the time that has been wasted for this and because of this. It hasn’t been good for either the Mac or the iPad. Sure, maybe all is well now, and I’m worrying too much; yet I can’t help thinking it could have been different — and better.

If iPadOS just becomes a Mac OS clone, that’s not progress, however you look at it. And at the moment I’m not really trusting Apple when it comes to having a clear plan to make iOS on the iPad evolve and shine. Adding Mac-like features is the easy way out. What’s next? That’s hard.

I’ve been upset with Apple for all the time the company wasted ‘pushing’ the iPad from a marketing/lifestyle standpoint, instead of concentrating on building a truly ‘pro’ variant. iPad Pro should have been a new device with a different iOS flavour/fork, rethought from the ground up, at the time iOS 7 came out. Instead, Apple only started doing something about it around iOS 9/10.

Another pain point in some discussions I’ve had with iPad fans is when I mention my general disappointment in the iPad as a system. What they always believe is that I’m making a direct comparison with the Mac, implying that the Mac is better. It’s. Not. That.

My disappointment is in the general lack of evolution at the operating system level. I don’t have any problem recognising the iPad as a ‘real computer’. Of course it is. That’s precisely why it’s also not a groundbreaking innovation. Let’s put aside all its hardware advantages for a moment — extreme portability, instant operation, magnificent display, desktop-class peak performance. The software it runs on is conceptually old. The way things happen when you interact with the operating system and applications and files is the same way we’ve been seeing on traditional computers since the Xerox Alto and Star, since the Apple Lisa, back in the early 1980s. Yes, it got much better visually. Of course. It’s the least that one could expect after thirty years! In the medium term, the iPad will reach a stage where it will be like using a Mac that also has Multi-Touch support. And while cool, it will still be anchored to decades-old paradigms and metaphors. 

The post-PC era I’ve had in mind since Jobs introduced the concept is something else. From a user-interface, user-interaction standpoint, I expected (perhaps unrealistically) a different plan for iOS on the iPad. Hiding the filesystem in the first versions of iOS made me hopeful: let’s use the iPad as a tabula rasa for the computing experience. Let’s give people a tool to do ‘computery stuff’ on it without even realising they have a computer in their hands. John Gruber had the best insight in all the recent history of punditry when he said It’s the heaviness of the Mac that allows iOS to remain light.

Let’s look at the whole paragraph: 

The bigger reason, though, is that the existence and continuing growth of the Mac allows iOS to get away with doing less. The central conceit of the iPad is that it’s a portable computer that does less — and because it does less, what it does do, it does better, more simply, and more elegantly. Apple can only begin phasing out the Mac if and when iOS expands to allow us to do everything we can do on the Mac. It’s the heaviness of the Mac that allows iOS to remain light.

I’ve always thought that a better plan would have been to keep the Mac around (always refining it, always keeping its power up-to-date and relevant), while using iOS and the iPad to push the computing envelope. When I say that the iPad isn’t the future, iPad fans get upset because they think I’m looking down on it, or outright dissing it. I’m not. I look at it, and see traditional computing; maybe done a bit differently, maybe done with a cooler veneer, by touching a screen instead of using a mouse[2], but still pretty traditional. 

Hiding the filesystem and having users interact with applications and documents in a different way — in a fashion that made both applications and documents sort of get out of the way, disappear as constructs because you have a full-screen environment and a series of actions to handle whatever you’re doing with the device — was an excellent starting point, in my book. But then things started to stagnate here, more complex workflows made a lot of collateral friction emerge, and now, in iOS 13, handling files on an iPad is pretty much the same as on a Mac. It’s a practical victory, but a theoretical defeat.

Criticising this stuff is hard, because there comes a point where I’m asked So, what do you propose, instead? I don’t have a clear solution or alternative fully designed and ready to be implemented here. (I recently shared some ideas about a kind of tablet I’d be eager to use). What I would love to see is more research to achieve a different and more evolved computing experience, one that is capable of letting go of old metaphors and paradigms so that people can interact with these tools even more naturally and in more immediate ways, instead of visualising the computer workspace as an eternal office. 

Some observations on iPadOS and its gestures

1.

The new way of selecting text looks simpler and more straightforward than the old way. It’s like you’re pointing at the text you want selected. I currently don’t have the means or opportunity to test this in person, but I’m curious to know about how efficient this method is for selecting large blocks of text, especially when they’re longer than what’s displayed on screen at the beginning of the selection gesture. I’m also curious to know about the efficiency of this new method when you want to make a precise selection of just a couple of sentences or words inside a paragraph. The old method wasn’t necessarily clunky per se; the problem was that it worked inconsistently. Sometimes it worked like a charm in Safari or Mail, but not so much in other text-based apps like RSS readers or PDF viewers.

2.

The 3‑finger pinch to copy, 3‑finger spread to paste, 3‑finger swipe to undo all looked like cool new gestures in the pre-recorded bits Federighi was showing on the big screen, but to me they feel like unnecessary additions to an ever-expanding gesture lexicon, and I also wonder about their precision — copy and paste in particular. What happens when you have selected the text bit you want to copy, and one of your fingers touches the screen and deselects the text (and maybe also re-selects a single word or another unwanted portion of the text) before the 3‑finger pinch copy gesture is completed? And don’t get me started on the ‘cut’ gesture: two consecutive 3‑finger pinches? Come on.

If you think I’m splitting hairs here, rewatch the first moments of the Apple Pencil demo by Toby Paterson (Apple’s Senior Director in charge of the iPad system experience): 

…And then to move the cursor, you’ll just grab it with your finger… whoops… and he tries again. 

And then, shortly after: Now, to select text, just hold your finger on a word… Hold your fing— aah, sorry…

He’s clearly struggling with these gestures, and while I concede he must be nervous given the context, other gestures like dragging out the virtual keyboard to turn it into a compact keyboard are clearly easier and less hit-and-miss.

3.

Window management and multitasking in iPadOS are clearly borrowing heavily from the Mac, but since the gestures on the iPad have to be keyboard-independent, there is a lot of tapping & dragging involved. Curiously, when there is indeed a keyboard attached to the iPad, there doesn’t seem to be a fallback set of keyboard shortcuts to make things easier. 

And as I was watching Federighi tapping, dragging, moving, and split-viewing on an iPad Pro propped on the table in landscape orientation with a Smart Keyboard attached, I was reminded of what Federighi himself said about not wanting to introduce a MacBook with a touch screen; he brought up usability reasons, and the fact that it’s not a great user experience because having to raise your arm to directly manipulate the screen gets tiring quickly. And it’s true! Yet that is exactly what’s happening when you’re working with an iPad set up this way. And you can tell me But with the iPad it’s different all you want: it is exactly the same experience, but suddenly those legitimate usability concerns have vanished.

4.

Safari shortcuts on iPadOS

Speaking of shortcuts, I was about to leave this screenshot here without comment, but I have to point out those terrible hybrid shortcuts that involve one or two keys and a tap on the screen. They look unnecessarily counterintuitive, and I can’t believe there wasn’t a better option. There is a keyboard — just make shortcuts that involve only keys. Better yet, use the same shortcuts as in Safari on the Mac. What’s the problem?

5.

In general, if you count the new gestures you do with your fingers and the new gestures you perform with the Pencil, there isn’t much that can be intuitively discovered without at least a brief tutorial at the Apple Store when you’re purchasing an iPad. And even if all this is well explained in an online guide, or by an Apple retail employee, I wonder how many of these gestures are going to stick with users. This is just an observation, and maybe I’m wrong. Maybe all these gestures end up being far more intuitive than they seem to me at first glance. My worry, of course, is that all this increasing complexity accumulates to the point where there’s a thin, yet persistent, layer of friction when using an iPad, which inevitably brings frustration. One of the key differentiators of Apple devices is not just their software, but the fluidity of the experience. That’s what may convince a prospective customer (with no particular affiliation to a platform) to buy an iPad over a Microsoft Surface.

What about the rest?

The rest was good. I liked it. I don’t have an Apple TV or an Apple Watch, and I’m not really interested in having either, but the new features are nice, and I like where these two platforms are going (though tvOS is the slowest-advancing platform I’ve ever seen). iOS 13 looks like a very promising release and I look forward to upgrading later this year. Apologies for not having been exhaustive regarding everything that was announced at the WWDC 2019, but what’s really capturing my attention at the moment is how Apple is handling Mac OS and iOS on the iPad.

 


  • 1. During a recent trip to Italy, I had to use the Sign in with Google option on a site (it was the lesser evil), and since then I’ve been getting an average of 20 more spam messages per day in my Gmail account. So, unlike others, I don’t really care about the behind-the-scenes of Sign in with Apple, I just want to see it widely implemented as soon as possible. ↩︎
  • 2. Oh, wait… ↩︎

 

Still apprehensive about the Mac

Handpicked

I usually take my time to ponder things before publishing a post here. But this time I just wanted to write down a few brief, raw thoughts before the WWDC. I’m leaving for a short trip in a few hours, and I probably won’t have time to write anything else before June 3.

 

Brent Simmons has written a succinct, spot-on reaction to Steve Troughton-Smith’s piece (Don’t Fear) The Reaper.

So, knowing how this has worked out in the past, why do I fear the reaper?

Because bringing UIKit brings no new power. If anything, it subtracts power. UIKit apps — at least so far — are all sandboxed and available only via the App Store. They don’t offer everything AppKit offers.

[…]

Getting the Mac OS X transition right was a priority for the company: if it failed, the company would fail. But with this? Not the same story at all. [Emphasis mine here.]

Much of the debate surrounding Marzipan has so far focused on the fear of a decline in Mac software quality. What veteran Mac users are afraid of is a new wave of Mac apps that are little more than crude iOS ports, that don’t look and don’t behave like Mac apps.

It’s an understandable concern, and one I share as well. I’m especially wary of iOS-only developers who limit their use of the Mac to the bare minimum (coding their apps and little more). How can they provide a good experience in the Mac apps they’ll develop via Marzipan if they have little familiarity with Mac OS, its interface and — for lack of a better term — its flow?

They’re not entirely to blame: Apple itself certainly isn’t leading by example on this front lately. Home, Stocks, News, and Voice Memos are apps that look as if they were assembled over the course of a few days by a novice iOS developer or a group of interns at Apple.

But I have other fears.

I fear that Apple’s plan for the Mac is to further close the platform down, so that — like on iOS — the Mac App Store becomes the only source for Mac software. That would be unfortunate to say the least. I want the freedom to purchase, download, and install Mac apps from wherever. I want to be able to give my support directly to a developer by buying their software from their website.

Also, as a consequence, I fear that the Mac App Store is going to become more like iOS’s App Store in every way — with thousands of crappy apps, and terrible pricing trends. Where by ‘terrible pricing trends’ I mean the race to the bottom on the one hand, and on the other hand an increase in subscriptions as the only payment method even for simple utilities and single-purpose apps. (I hope more people realise how subscriptions aren’t sustainable on a large scale for customers). 

I fear that iOS is going to become the new model that dictates how the Mac user interface has to behave. That Macs are going to be considered just ‘big iPads’, and that paradigms and behaviours that are tailored for iOS and belong to iOS will come to replace the paradigms, principles, and behaviours that made the Mac’s user interface great.

Though of course not all at once, I fear this is going to happen eventually, because I have the feeling that Apple — while maybe not reaching the point of merging the two systems completely — wants to somehow ‘unify’ iOS and Mac OS visually and behaviourally in the name of ecosystem homogeneity and the ‘seamless experience’. Whereas I believe both platforms should maintain their own specific strengths, their different ways to be simple and user-friendly, and their different ways to be powerful and versatile.

I’ve said it again and again — I’m not necessarily afraid of change, but I’m afraid of change for change’s sake. I’m all for change if it brings unequivocal progress. But I’m afraid that Mac OS is getting repurposed and repackaged more to fit inside an agenda than to keep thriving as a platform with its history, characteristics, and unique features. 

I’ve experienced firsthand all the transitions the Mac platform has gone through, and this is the one that’s leaving me the most apprehensive. Because all past transitions brought clear advantages to the Mac, either from a hardware or software standpoint. The signals were of progress for the Mac platform; or, at the very least, of having to take a step sideways to then take two steps forward. This time it feels that things have to change simply to benefit the advancement of another platform. 

Never before have I hoped so much to be completely wrong about something. As Simmons concludes, I hope so very badly that I’m wasting my time with my worries.

 

Further reading

My kind of tablet

Tech Life

An opinion I’ve held for a long time is that Apple so far has done a mediocre job in turning the iPad into a ‘pro’ device. The hardware is fine, the current specifications for the iPad Pro models are more than fine. But the software — and to a certain extent the user interface and usability — is the weak spot. I won’t repeat myself about this; I think I said enough in Faster than its own OS back in November.

At the end of January 2010, this is how Steve Jobs introduced the iPad:

…And so all of us use laptops and smartphones now. Everybody uses a laptop and/or a smartphone. And a question has arisen lately: is there room for a third category of device in the middle? Something that’s between a laptop and a smartphone? And of course we pondered this question for years as well. The bar is pretty high. In order to really create a new category of devices, those devices are going to have to be far better at doing some key tasks. They’re gonna have to be far better at doing some really important things: better than the laptop, better than the smartphone. 

What kind of tasks? Well, things like browsing the Web. That’s a pretty tall order; something that’s better at browsing the Web than a laptop? Okay. Doing email. Enjoying and sharing photographs. Watching videos. Enjoying your music collection. Playing games. Reading eBooks. If there’s going to be a third category of device, it’s going to have to be better at these kinds of tasks than a laptop or a smartphone, otherwise it has no reason for being. 

Now, some people have thought that that’s a netbook. The problem is netbooks aren’t better at anything. They’re slow, they have low-quality displays, and they run clunky old PC software. So they’re not better than a laptop at anything, they’re just cheaper; they’re just cheap laptops. And we don’t think they’re a third category device. But we think we’ve got something that is.

I could use this quote to emphasise how all the tasks Jobs enumerates are consumption-related; that the drive to create such a device came from the need for hardware that was more convenient and more capable at delivering certain content for people to enjoy. The creative angle came later. In retrospect, it’s crucial to notice that the iPad was not conceived as a creation tool: Steve Jobs didn’t mention production tasks or creative tasks when he was talking about the thought process leading to the creation of this ‘third category device’. I’m sure Jobs was aware that, with the right applications, the iPad could do more than just serve as a vehicle for content consumption. Still, that didn’t seem to have been a priority.

Instead, what I want to emphasise in this quote is this part: In order to really create a new category of devices, those devices are going to have to be far better at doing some key tasks. They’re gonna have to be far better at doing some really important things: better than the laptop, better than the smartphone.

Far better at doing some key tasks. Better than the laptop (but let’s just say better than a Mac or any other traditional computer), and better than the smartphone. Think about that.

For the first few iterations of its existence, the iPad and iOS delivered on their mission. In 2010 I had a brand-new MacBook Pro and I was still making the most of my iPhone 3G, but I couldn’t wait to get an iPad. I wanted to use it especially for reading, so I waited very patiently for an iPad with a retina display. And in 2012, with iOS 5, the iPad was still a great device for doing everything it was designed for: a fast device with an intuitive operating system and an extremely low learning curve. Some apps for more creative tasks had appeared, and with the addition of a Wacom stylus I had fun drawing and painting some stuff.

Then some people got very excited about the iPad, and another question arose: why can’t we use the iPad for all kinds of tasks?

That’s when things started to go awry, in my opinion. 

Because while it’s technically still true that an iPad is better at doing some key tasks — better than a traditional computer and better than a smartphone — it’s not better at doing everything.

The integration between the hardware and the software Apple is renowned for means that the software running on an Apple device is (ideally) optimised to grant the user the best experience of what the device has been designed to accomplish. “The iPad is just a big iPhone” was a common criticism back then, and it was a misguided remark, because a few of the iPad’s key strengths came exactly from it being like a big iPhone. The familiar gestures people had quickly learnt to master the iPhone’s user interface still worked very well to operate an iPad. At the time, there weren’t any significant functional changes in how iOS 5 or iOS 6 worked on an iPhone and on an iPad. Apps optimised for the iPad needed a bit of user interface retouching and rethinking, but as far as the user interaction was concerned, there was nothing particularly disruptive. Things worked well. Users didn’t need additional training or additional attention to master an iPad.

But in order to accomplish additional tasks — especially complex tasks that require a certain degree of interoperability among apps and services — just resorting to third-party ingenuity was not enough. The iPad’s operating system needed changes and improvements. Which of course, inescapably, meant an added layer of complexity. As I observed back in 2016:

In iOS’s software and user interface, the innovative bit happened at the beginning: simplicity through a series of well‐designed, easily predictable touch gestures. Henceforth, it has been an accumulation of features, new gestures, new layers to interact with. The system has maintained a certain degree of intuitiveness, but many new features and gestures are truly intuitive mostly to long‐time iOS users. Discoverability is still an issue for people who are not tech‐savvy.

[See also Tap swipe hold scroll flick drag drop]

I don’t mean to dismiss the efforts Apple has made to make iOS work better on supposedly ‘pro’ iPads, but it’s undeniable that iOS has matured very slowly on this front. On iPhones, I believe it’s still a great operating system, because it still delivers on what you’re supposed to accomplish with a smartphone. The hardware/software integration is tighter there. On the iPad, my impression is that things have been messier, less focused, less optimised to make the most of the hardware. Now, if you caught me in a more exasperated mood, I’d probably put the blame on Apple, saying that they could have done a better job, etc.

But the thing is, a touch interface can only do so much. There are still a lot of tasks for which a traditional computer is better and more versatile, and there are tasks for which a smartphone is better, because (among other things) certain touch gestures are simply more effective on its smaller screen. Some will undoubtedly insist that an iPad today can do anything a traditional computer can, and I may even agree on a theoretical level, but the fact is: just because an iPad is better than a computer or a smartphone at certain tasks, it’s not necessarily better at doing everything these other two kinds of devices were designed to do.

While successful, the iPad hasn’t been as revolutionary as many hoped (including some Apple executives, I presume), and in recent years Apple has made repeated efforts to turn it into a revolutionary device, perhaps paying too much attention to some hardcore iPad fans in the tech sphere. Apple has even neglected the Mac in the process, but so far the outcome has been underwhelming on both fronts. We have a generally weaker Mac, with serious hardware design flaws in its laptop line, and an operating system that hasn’t really evolved since probably Mac OS X 10.9 Mavericks. Then we have an iPad platform that hasn’t really improved all that much — the main differentiator between a regular iPad and an iPad Pro is essentially their technical specifications; it’s a hardware thing. Not a revolutionary new user interface or paradigm. Not even a tighter hardware/software integration (if anything, we’re in for yet another layer of complexity and new gestures and actions to memorise).

21st Century tablet

My habits and preferences betray my somewhat long history with computers and technology. I didn’t grow up with smartphones and tablets. My first home computer was a Commodore VIC-20. I was 27 when I first used a mobile phone. Despite what some people may think, I’m not averse to change and my brain is still flexible enough to pick up new habits or change old ones. What happens when you get older, though, is you tend to consider more often whether changing a habit or rethinking a workflow is actually worth it. And what I’ve always said about the iPad in this regard is this: if I’m faster, more efficient, more productive with a Mac (or, in certain fringe cases, with an iPhone), why should I learn a more convoluted path to be able to do the same thing — but more slowly and less efficiently — on an iPad? 

This state, where you can simply have an iPad that does everything you need, without compromises, and does it better than any other class of device, is still very much an ideal rather than a reality. Unless we witness a major hardware or software redesign, the trajectory the iPad is following suggests a device that will progressively resemble a Mac with a touch interface. We’ll ultimately have a device whose operating system will reflect a general reinvention of the wheel, feature- and functionality-wise, and whose distinctive features will be its touch interface and its extreme portability, and… that’s it? Where exactly is the progress in this scenario? You may tell me I’m simply not considering all the amazing new technologies that can still be added to the iPad in the coming years. Okay. But for now I look at what we’ve got. And what we’ve got is a tablet that at its very core is still the same iPad of almost ten years ago. Sure, it has become cooler to use and more powerful than the original 2010 model. But as someone who looks at technology as a forest and not at this or that tree, I see the iPad as an enormous waste of potential.

While having a tablet as the iPad was originally intended to be — a convenient consumption device — has been a great addition, I feel that a general mistake on the whole industry’s part (allow me a bold statement every now and then) has been to focus on the iPad paradigm with too much tunnel vision, and not consider other ways to approach the idea of a tablet, both from a functional standpoint and from a user interface standpoint. Other manufacturers just followed Apple’s example and now we have a lot of mediocre alternatives that look and feel just like big phones and try to ape traditional computers for certain tasks. We have a third category of device that, instead of evolving into being something distinctive and even independent from the other two, has become a mix of smartphone and traditional computer envy. (I’m generalising and I’m aware there are exceptions in some of the iPad’s features.)

When it comes to ambitions for a tablet device, I keep thinking that the Newton was on a way more intriguing path than the iPad has been for the past nine years. I know that the technology had limitations. But don’t just put a Newton MessagePad and an iPad side by side and compare the two. Of course it’s going to be an unfair comparison — there’s a technology gap of about 15 years between them. I’m talking about vision. Just take one of the Newton’s basic features: handwriting recognition. Yes, on the first generation of Newton devices it wasn’t great, you may remember the jokes, and so on and so forth. Few people seem to be aware that it got much, much better with NewtonOS 2.x and on later, more powerful devices. I’ve been a Newton user since 2001, and to this day I can turn on my MessagePad 2100, create a new note or document, start writing on the device with its stylus as if it were a paper notebook, and the Newton will correctly understand and translate 99% of my scribbles into legible typewritten characters. It’s something I still can’t do on an iPad. 

And that’s because one day the industry decided that pen computing had no future. So, while using a stylus to draw, paint, or as an input device in certain specialised settings and applications is considered normal and natural, apparently writing on a flat surface with a stylus — something humans have been doing for at least 7000 years — is not. 

Well, if I had to describe my kind of tablet, a tablet I may consider using for productive tasks, I think it would be some sort of Newton on steroids at its core, with an input interface that would use touch where appropriate, and stylus where appropriate. That includes gestures: imagine splitting the screen between two open applications simply by drawing a line with the stylus, instead of memorising some finger sequence that has to be performed in just the right way or it won’t register.

It would have an amazing handwriting recognition engine: so fast and accurate that it would make a virtual keyboard redundant. Mistranslated words could be corrected with a tap, and the tablet’s autocorrect would learn from those mistakes. Machine learning finally put to good use.

It could be easily connected to a Mac/PC, and you could use it as a giant trackpad, as a graphics tablet, as an additional display, even as a backup device for sensitive data and projects, which could be encrypted on the fly if needed by using biometric identification such as Touch ID or Face ID. The exchange of files and documents would, of course, be seamless.

The user interface would feature a healthy selection of ‘drawing gestures’ and certain drawn elements could be smartly interpreted and subsequently rendered by the OS. Imagine you’re putting together a report and you’re making a draft with a series of items that will have to be organised in a table. You start handwriting the items and the associated data in different columns, just like you would do on a paper notebook. Once finished, you would draw lines along and across the items and the OS would ask you if you want to create a table; you would confirm and you’d end up with a perfectly laid out table you can drag inside the document (if it’s a separate object, otherwise it would already be part of it and you could drag it around to fit in the document’s layout). Once a series of items has been transformed into a table, the system could also handle it with its built-in spreadsheet feature, or you could export it to your favourite application for further editing and refinements.

As you may have guessed, I’m a fan of the old document-centric approach. The application-centric model has its advantages, of course, but I believe that an ideal tablet with an enhanced pen-based input interface could use some document-centric paradigms and it would feel very natural. The tablet’s OS could have a series of core functionalities (or services) that are invoked by what you intend to do. You create a new document and it could be a letter, an email, a financial report, the chapter of a novel, the page design for a magazine, a post on your preferred social network, a spreadsheet, a new webpage for your site, a new post for your blog, etc., and the tablet — via a series of ‘smart agents’ — would either understand what you’re doing or you’d simply tell it via a Create as… / Save as… command once you’re done. The OS could have some basic built-in services (e.g. an HTML/CSS editor for when you’re creating a new webpage), but you could also integrate third-party apps and services to have a richer experience and achieve more specialised results.

Visually, you would have a sort of desktop, but think of it as more of a workspace, not as a container of apps and files. A workspace where you can create things from scratch directly, or invoke/import things to ‘consume’. But even when you’re consuming content, imagine having this intermediate, invisible layer that lets you manipulate whatever you’re reading or watching or looking at, in case you find something you need. You’re reading an amazing article on a website and want to save that insightful quote for yourself or to reuse in one of your articles? You highlight it with the stylus, either by underlining words or simply by enclosing the quote in a rectangle, and now you have a clipping you can reuse (the system could also save the original URL as metadata, so that the source is always retained). This could work with different kinds of content: text, audio, video, still images, etc. You could use your finger or the stylus as an eyedropper tool when you see a particular colour you want to save or use for a project. These are just a few basic examples, but you get the idea.
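To make the clipping idea a bit more concrete, here is a minimal sketch of what such an object might look like. Everything in it — the names, the fields, the rendering method — is invented for illustration; it doesn’t describe any real system:

```python
# A minimal sketch of the 'clipping' idea: highlighted content becomes a
# reusable object that automatically retains its source URL as metadata.
# All names and fields here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Clipping:
    content: str        # the highlighted text (or a reference to media)
    source_url: str     # retained automatically, so the source is never lost
    kind: str = "text"  # could also be "audio", "video", "image", "colour"
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def as_citation(self) -> str:
        """Render the clipping together with its provenance."""
        return f'"{self.content}" (source: {self.source_url})'

quote = Clipping(
    content="Pen computing has no future.",
    source_url="https://example.com/article",
)
print(quote.as_citation())
```

The point of the sketch is simply that provenance travels with the content: wherever the clipping ends up, its source comes along for free.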

I’ve been thinking of an interface and operating system like these for years, and I confess I was excited when the Microsoft Courier research project surfaced back in 2009 [you can still find concept videos on YouTube, like this one or this one (truncated, sadly); also check this video about Microsoft’s Codex prototype, which predates Courier]; and I still think that some of its gestures and user interface ideas are more innovative — at least more intuitive — than what Apple has done with iOS on the iPad. Courier ended up being little more than an investigation, a concept, but it treated the tablet as a tablet, not as a wannabe traditional computer with a multi-touch interface on top of it.

With this approach, my ideal tablet would certainly have a potentially complex interface, but by including a more robust stylus input and gestures that heavily borrow from the fundamentals of drawing when it comes to manipulating content and indicating intention, I think a lot of the user interface and interactions would be easier to grasp and master. There could even be a ‘tutorial mode’ the user could toggle, and when it’s enabled, certain parts of the tablet’s interface would be subtly highlighted; by tapping on them, the user could be presented with labels or tooltips explaining how to interact with that particular element.

More importantly, the tablet would share part of the burden when a user wants to accomplish a task — imagine something like predictive text, but applied to many other different actions. Instead of being confused as to how to perform a certain action, the user could start doing something with the stylus, and the OS could offer some suggestions about which actions can be carried out from there. Or, if all else fails, it could ask the user if they want additional help. This, of course, should be a last-resort scenario, because ideally the interface would be so intuitive and discoverable that users wouldn’t need help or tutorials — but at least help and tutorials would be planned and included, and people wouldn’t be left on their own to figure out how to do something. 

In case my examples haven’t been clear enough, my kind of tablet would be strongly focused on applications and services interoperability. Precise, rigorous interface guidelines would ensure a great integration with third-party solutions. Developers could write standalone apps, but also services and system extensions to expand the tablet’s functionality and scope, ultimately contributing to its overall flexibility. In a model like this, workflows would have less friction because you would be adding functionalities and ‘actions’ made available either by the manufacturer or by third parties. 

If this is getting too abstract, imagine an even more reliable and ‘hardwired’ version of what iOS currently offers with Siri Shortcuts or with the older Workflow app. You download/purchase additional modules to accomplish specific tasks. Once added to the system, these modules or ‘actions’ (or whatever you want to call them) would in turn be available and accessible to third-party applications. For example, imagine you could add a “Markdown to HTML” module to the OS. From then on, that action would be available to the built-in text editor, but also to any third-party text editor you may get in the future. If a third-party developer wanted to write a text editor using their own Markdown-to-HTML converter, they could do so, and the user could choose which to use by changing a preference setting. But if a third-party developer wanted to write a certain kind of text editor that is more focused on beautiful typography or other specific aspects, they could do that without feeling the need to also offer text converters. Again, these are just crude examples off the top of my head. Perhaps a few user interface mockups would tell a clearer story, but I hope you’re getting my drift nonetheless.
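The module idea can be sketched in a few lines of code. To be clear, everything below — the registry, the module names, the toy converter — is invented purely for illustration; it doesn’t reflect any real operating system or API:

```python
# A rough sketch of the modular 'actions' idea: a system-wide registry where
# a module such as "markdown-to-html" is installed once and then becomes
# available to every app, built-in or third-party. All names are hypothetical.

class ActionRegistry:
    """The OS-level catalogue of installed modules ('actions')."""

    def __init__(self):
        self._actions = {}

    def install(self, name, func):
        # e.g. a module the user downloaded/purchased from a store
        self._actions[name] = func

    def available(self):
        return sorted(self._actions)

    def run(self, name, payload):
        if name not in self._actions:
            raise KeyError(f"No module installed for {name!r}")
        return self._actions[name](payload)

# The user adds a Markdown-to-HTML module once, system-wide.
system = ActionRegistry()
system.install("markdown-to-html", lambda text: f"<p>{text.strip()}</p>")  # toy converter

# From then on, any app can reuse it...
html = system.run("markdown-to-html", "Hello, tablet.")

# ...unless it prefers its own converter, selectable via a preference setting.
class ThirdPartyEditor:
    def __init__(self, registry, use_system_converter=True):
        self.registry = registry
        self.use_system_converter = use_system_converter

    def export_html(self, text):
        if self.use_system_converter:
            return self.registry.run("markdown-to-html", text)
        return f"<article>{text}</article>"  # the app's own converter
```

The design choice worth noting is that the module lives in the system, not in any one app, so a typography-focused text editor could ship without any converters of its own and still export HTML.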

I think that a tablet with this kind of OS that prioritises modularity, tasks, and app integration, and with a user interface that treats the tablet as a tablet and lets you interact with it by ‘speaking a tablet language’, would make for a versatile device with a good degree of extensibility, and a good degree of independence. You could attach any kind of accessory to it to make your life easier, such as an external keyboard, but the idea is that all you’d need to have is the main device and its stylus. And if you wanted to use such a tablet as a mere ancillary device, you could do so by seamlessly connecting it to your computer, and the tablet would become an accessory or extension as needed.

There is nothing particularly sci-fi in my ideal tablet — just perhaps a rearrangement of a few conceptual pieces — but I understand if some of my ideas sound weird or unfamiliar or unfeasible, especially if you’re satisfied with the way the iPad and iOS-on-the-iPad work today. I think the time has come for Apple to either embrace the interface limitations of the iPad and try to make the best hardware/software integration within those limitations, or to start designing something new from scratch with the express purpose of being a creation-/production-oriented device and operating system.

Let me know what you think, if you like.

How the MacBook’s keyboard fiasco has reshuffled my whole upgrade path

Tech Life

In mid-2018, continuing to use my 2009 15-inch MacBook Pro wasn’t feasible anymore. That Mac had started manifesting serious reliability problems, including the inability to switch between graphics cards without crashing, faulty thermal sensors, a failing battery, and the main internal SSD randomly going unrecognised at boot. I wanted to keep using a laptop as my main machine, but purchasing a new MacBook Pro with that terrible keyboard design was out of the question. Also, my available budget at the time would have been enough to get a base configuration of the 13-inch MacBook Pro without Touch Bar, which wouldn’t have been ideal for my needs anyway. If I wanted a Mac laptop with the good old keyboard design, my options were essentially two: either purchase the 13-inch MacBook Air, or a used 2015 MacBook Pro.

My concern was that, while they were both decent candidates, both clearly having much better specs than my ageing MacBook Pro, neither represented a ‘future-proof’ option. I don’t need extreme CPU power for my work, but I also don’t upgrade my main Mac every year. I needed a solution that would last me a few years. That’s when I decided — not without mulling over it for weeks — to change strategy completely and get a 21.5‑inch 4K retina iMac. 

It was a good purchase and I really love the iMac. Having a 21.5‑inch retina display was the best gift I could give my eyes. But the workaround plan wasn’t over, since I still needed a laptop for when I had to work away from home. My original idea was to get the 2009 MacBook Pro fixed as soon as possible, and keep using it as my mobile workstation. The iMac has enough power for my more complex photo editing, my occasional dabbling in video editing (still learning the basics), those work sessions that need particularly extensive multitasking, and the occasional gaming. The 2009 MacBook Pro would still be powerful enough to handle work assignments when out and about.

But then I started wondering: what if the repairs end up being expensive enough that it actually makes more sense to look for a used, newer MacBook? Since it wouldn’t be my main Mac anyway, display size isn’t an issue, so I could search for a more compact MacBook. Long story short, I found a 2013 11-inch MacBook Air in great condition and at a bargain price. And, five months later and with hindsight, I can definitely say it has felt like adding the final piece to a giant, complex puzzle.

I know, when I tell people I had to upgrade my MacBook Pro, and to do that I purchased an iMac and a MacBook Air, it does sound a bit like overkill. But this solution has cost me less money than a new 15-inch MacBook Pro, and it has solved several problems at once. Yes, it’s handy to have just one machine that can turn into a desktop workstation when at home, and that can also be carried around everywhere I need to use it. But with the iMac and the MacBook Air, I can have power and comfort at home, and extreme portability with still enough power (and connections!) when on the go.

It’s already been a win-win, but in the past months I’ve noticed something else. Last year I was quite worried because it had come to a point where I needed to upgrade all my devices: 

  • I couldn’t keep using my iPhone 5 as my main phone because, while there wasn’t and isn’t anything wrong with it, and it’s still a capable device today, it can’t be updated past iOS 10.3.3, and it’s a 32-bit device. For work I sometimes have to test iOS apps, so I needed to upgrade to a newer iPhone.
  • With regard to the Mac situation, you know the story by now.
  • I also felt the need to get a newer iPad, partly because of the same reasons I needed a newer iPhone, but also because my third-generation iPad is a device that, sadly, hasn’t aged as well as the iPhone 5, despite both being 2012 devices.

And when your budget isn’t great, you don’t want to end up in a situation like that. Not having €4,000 to invest in this general multi-device upgrade, I had to prioritise, and the Mac had to come first, the iPhone second, and the iPad third. Several months have passed, and after upgrading my Mac setup and later my iPhone, I have realised that I’m in no particular hurry to upgrade my iPad anymore.

Why? Mostly because of the 11-inch MacBook Air. When I first talked about it, I made the joke about it being my 11-inch iPad Pro. As time passed, that went from joke, to half joke, to actual truth. 

In recent times I have often asked myself why I should get a new iPad, and I haven’t found a compelling reason to spend money on a new iPad or iPad Air, much less an iPad Pro. It keeps feeling like just a ‘nice to have’ device.

While I have much respect for those who have successfully managed to use an iPad as sole computing device for work and leisure, I still believe the iPad has important user interface limitations that prevent it from becoming as versatile and ‘scalable’ as a Mac, unless its operating system is transformed to a point where it essentially becomes ‘Mac OS with touch support’. 

I use my first- and third-generation iPads as consumption devices 90% of the time. While sometimes the iPad 3 gets frustratingly sluggish, it’s still capable of playing videos, surfing the Web, being an ebook reader and a tool for some photo retouching and even creative work. The big question here is: would purchasing a more powerful iPad lead me to use it in more ‘Pro’ ways? I have thought about this for weeks, I kid you not. In the end, the answer is no. Of course I’d buy one if I had money to burn. iPads are great devices, hardware-wise. (iOS is a whole other story. The operating system is on a path of greater complexity and poorer usability, and that is another factor which has dampened the enthusiasm I used to feel about the iPad in its first years). 

But since purchasing the 11-inch MacBook Air, I already have an ultraportable pro device, with a fantastic battery life and the best keyboard it can have. It doesn’t have a retina display, but that (to my great astonishment, I’ll add) hasn’t been a big deal at all.

In fact, since purchasing the 11-inch MacBook Air, I’ve almost stopped bringing an iPad with me when I go out. My iPads are now mostly household devices used for basic-to-intermediate tasks. A new iPad would simply be employed to do the same tasks, just in a faster, smoother fashion. Would this make things better? Yes. Would this be enough to justify the expense? Probably not. At this point, the only compelling reason for me to upgrade to a newer iPad would be strictly work-related, or if my current iPads stopped being useful (third-party apps/services ceasing to work) or stopped functioning (hardware failure).

Tech nerds, especially iOS fundamentalists, love to talk about this amazing Post-PC era we’re all supposedly living in today. I know I can’t use my personal habits and preferences as evidence to the contrary, but I’m writing this in a university library, in a room with at least 350 other people. I look around and I see laptops everywhere. If I had to estimate how many people are using tablets here, I’d say no more than 30. The guy sitting next to me has both: an old MacBook Air and an old iPad mini. Again, it could be anecdotal. I’ve been doing a lot of photo walks as of late. I’ve observed a lot of people in public places. I hardly see tablets, but I do see a lot of smartphone usage. While for some this is enough to validate their ‘Post-PC era’ stance, I still have the distinct impression that traditional computers aren’t going away anytime soon; and neither are smartphones. Are tablets, perhaps, the ones destined to turn into niche devices?