If iOS were a game franchise, its eleventh iteration, coming in a few months, would perhaps be called iOS 11: A Release for Geeks. I want to make something clear from the start: I do think iOS 11 is one of the most interesting and exciting versions of iOS I’ve seen, at least since iOS 7. And particularly for the iPad, it’s possibly the best iOS version ever. I like that the iPad’s potential is finally being recognised in software as well, through the addition of iPad-specific features.
But what I also noticed, both in the WWDC 2017 keynote demos and on Apple’s iOS 11 Preview page, is a new layer of complexity, one that makes me wonder whether, with iOS 11, we’re witnessing the start of a new phase that may ultimately bring a less intuitive, or at least less immediate, iOS user interface. Especially for new users.
Before I proceed, I want to recap a few things I’ve been pointing out over time.
In Rebuilding the toy box, an article I wrote in March 2012 in the era of iOS 5, I observed:
Now, in my opinion, Apple has managed to do something incredibly difficult: on one side it had to make the toy box [iOS’s user interface, the ‘container’] more complex to accommodate new features and keep it manageable as users pour more applications into it; on the other, Apple has been able to maintain the user interface as simple and as consistent as possible. Compared to the first-generation iPhone, very few new gestures or commands have appeared during these years. The learning curve has remained consistently low. Yet, feature-wise, if you put the first iPhone with iPhone OS 1.x and an iPhone 4S with iOS 5 side by side, there’s an abyss between them.
In my final observations in The Mac is just as compelling (February 2016), I wrote:
iOS is praised but at the same time you hear that it needs certain refinements — especially when it comes to the iPad Pro — to fully take advantage of the hardware and the possibilities it opens. As I said previously, to achieve that, to become a more powerful, less rigid system, iOS will have to get a bit more complex. If you want a higher degree of freedom when multitasking in iOS, things will have to behave in a more Mac-like way. And that’s ironic, is it not? A few days back I saw someone posting a design for a drag-and-drop interface between windows in iOS, and the general reaction seemed to be: That’s a cool idea. I thought: Of course, it’s like on the Mac. It’s one of the simple things of the Mac. You can tell me it’s great to move files from an app to another by using share sheets. Dragging and dropping is just as simple, and works basically everywhere.
In Trajectories (December 2016), I wrote:
How is iOS supposed to evolve to become as mature and versatile a platform as the Mac?
If how iOS has evolved until now is any indication, the trajectory points towards the addition of Mac-like features and behaviours to the operating system. For example, iPads have become better tools for doing ‘serious work’ by adding more (and more useful) keyboard shortcuts, and by improving app multitasking with features like Slide Over, Split View, and Picture in Picture.
I may be wrong about this but my theory is that, in order for iOS to become more powerful and versatile, its user interface and user interaction are bound to get progressively more complex. The need may arise to increase the number of specialised, iPad-only features, features that would make little sense on the iPhone’s smaller footprint, or for the way people use iPhones versus iPads.
And further on:
In iOS’s software and user interface, the innovative bit happened at the beginning: simplicity through a series of well-designed, easily predictable touch gestures. Henceforth, it has been an accumulation of features, new gestures, new layers to interact with. The system has maintained a certain degree of intuitiveness, but many new features and gestures are truly intuitive mostly to long-time iOS users. Discoverability is still an issue for people who are not tech-savvy.
I think this becomes even more evident when you look at the new iPad-specific features and gestures introduced in iOS 11. In the first incarnations of iOS, what truly amazed me was how much you could accomplish with a relatively bare-bones gesture vocabulary, and how many of the core gestures of iOS’s multi-touch interaction were so intuitive, so instinctive, that regular users of any age could pick them up with surprising ease and speed, and handle an iOS device in no time. Think of gestures like pinch-to-zoom, swiping to browse items, tap-and-hold to select text, and so on. When the iPhone was introduced, Apple produced a ‘welcome’ instructional video to teach people how to interact with the new device. It was very well made, but I was surprised that many non-techies in my circle of friends and acquaintances found it mostly redundant. Apple had managed to introduce a completely new interface and interaction model that needed very little explanation to be grasped by regular people. Exciting times, indeed. Revolutionary, even.
Then things necessarily got more complex with each iOS iteration. Still, as I noted in my past articles, the way Apple managed to introduce more complexity while maintaining an intuitive interface and interaction was truly admirable. The core gestures and behaviours remained consistent for several iterations, but then an increasing number of panels, controls, screens, and interactions made matters more complex. Sometimes the added complexity was mitigated by predictable behaviour: when Notification Centre was introduced in iOS 5, invoking its panel was rather straightforward: you pulled it down from the top edge by swiping downwards. No prior gesture was associated with that edge, and the matte panel with its linen motif was visually distinctive; it felt like pulling down a curtain, a virtual cover for the Springboard. It added depth as intuitively as possible.
Other times the added complexity went hand in hand with inconsistency, or with the introduction of gestures that interfered with previously learnt, ingrained ones. Take Spotlight search: since its introduction in iOS 3, it was presented as a separate screen virtually positioned to the left of the Home screen (and indicated by a small loupe icon instead of a dot, so you knew you were accessing a different kind of screen, not just another screen of apps), and it was easily accessible by swiping right. Then, starting with iOS 7, that separate Spotlight screen disappeared, and you had to drag downwards from any of your screens of apps to reveal a Spotlight search field, a gesture that somewhat interfered with the ‘swipe down from the top edge to invoke Notification Centre’ gesture. (I know, I only have anecdotal data, but I witnessed many regular people struggle with this new gesture back then.) This behaviour lasted from iOS 7 to iOS 9, and remains in iOS 10 with a twist: you can also access Spotlight search by swiping right from the Home screen, just like old times, but in iOS 10 that also brings up the Today View.
As iOS’s interface got more complex in recent versions, with the addition of new layers, UI elements, behaviours, and even hardware features (3D Touch), the gesture vocabulary has grown richer. On the one hand, it’s fascinating how individual gestures (or chains of gestures) have remained relatively simple in themselves, considering the sophistication of the commands and instructions they carry out. On the other, it has become harder to remember all these gestures, or to learn new gestures that replace old, ingrained ones, like Press Home to open replacing the tried-and-trusted Slide to Unlock. And new users getting acquainted with iOS today have to be shown many of these gestures, otherwise it’s unlikely they’d come naturally to them (another thing I personally witnessed in an Apple Store). It’s a very different scenario from the early days, when regular people picked up a lot about handling an iOS device simply by playing with it for a while.
Let’s take some of iOS 11’s new features for the iPad. The Dock has been expanded to be more like Mac OS’s Dock, both in its appearance in the Springboard and in its multitasking-related behaviour. Now, when you’re inside an app, you can invoke the Dock simply by swiping up from the bottom of the screen. Up to now, by the way, that gesture was reserved for summoning Control Centre. Yes, you can still access Control Centre from the bottom, together with the new app switcher, but you have to keep swiping up. As you have probably guessed, I’m not a fan of this kind of gesture overlap.
From the Dock, you can do things like this:
which is still a relatively simple thing to do: swipe up, tap and hold, then select. (I’m sure the interface won’t look as simple if more files or options are displayed when you tap and hold on different apps.)
But take this series of gestures:
Not exactly something that would come to you naturally if no one showed you how to do it, and something you’d probably have to practise a few times before you could perform it fluently.
Selecting multiple objects to drag to another app open in Split View, or even across spaces, is another gesture that’s cleverly implemented considering what it achieves without a mouse and a pointer, but it takes a bit of practice, and it certainly needs to be performed with the iPad resting on a stable surface. (See Craig Federighi demoing it at about the 1:10:32 mark in the WWDC 2017 keynote video.) It’s a gesture that makes sense once you stop and think about the dynamics, but it can be physically intricate when you want to multi-select and drag, say, more than five objects. I wonder if it couldn’t be made more straightforward, especially when multi-selecting files in the new Files app. For example: you tap and hold on the first item, selection controls appear next to all the other items, you tick all the items you want, and then you tap and drag the selection to the intended destination. It sounds more complex when described, but I think it’s simpler and less fatiguing in execution.
Then there are gestures I frankly don’t get, such as the way QuickType is supposed to improve your typing experience with the new virtual keyboard. The relevant paragraph on the iOS 11 Preview page reads: “Letters, numbers, symbols, and punctuation marks are now all on the same keyboard — no more switching back and forth. Just flick down on a key to quickly select what you need.”
I don’t find this method quick at all. You may save a tap or two when inserting the occasional symbol, but when writing long passages or sentences containing a mix of letters, numbers, and symbols, I find that this gesture actually slows down typing, and it feels a bit impractical in general. (Maybe it’s something you need time to get accustomed to; in my admittedly quick test at an Apple Store, I often had to stop and think, and flicking down didn’t exactly come naturally while typing.)
A couple of stray observations
- On the iPhone, Apple should really stop fiddling with the Lock screen and Notification Centre for a bit. iOS 11 introduces subtle new behaviours when invoking notifications: you swipe down as usual, but you’re actually pulling down the Lock screen, which displays the most recent notification; you can then pull up to get a list of all recent notifications, and of course you can swipe right to access good old Today View. And I keep wondering what was wrong with iOS 10’s implementation of Notification Centre. I don’t see any meaningful improvement in iOS 11’s implementation, just a few redundant gestures added on top of an interface that’s getting progressively more layered, from a cognitive standpoint, with each new iteration.
- 3D Touch continues to puzzle me as far as gestures are concerned. When Federighi showed how the redesigned Control Centre works, you could see that basically all the controls can be explored further, revealing additional settings, by using 3D Touch on them. On Twitter, I sarcastically remarked that iPhone SE owners are going to love this (the iPhone SE doesn’t feature 3D Touch, and neither do the iPhone 5s, iPhone 6, and 6 Plus, all devices that will be capable of running iOS 11). I soon learnt that, on devices that lack 3D Touch, the same can be achieved by tapping and holding. And that in turn made me wonder: why can’t the tap-and-hold gesture act system-wide as a ‘poor man’s 3D Touch’ on iOS devices that lack the feature? Further: why not just use tap-and-hold instead of 3D Touch everywhere? As it is, 3D Touch looks to me more like a technology showcase, innovation for innovation’s sake, than a truly useful, irreplaceable gesture. It adds complexity, a new gesture to memorise, mainly to invoke contextual menus, options, and settings. It is exclusive to a subset of iOS devices (only recent iPhones), and if you own a 3D Touch-equipped iPhone as well as an iPad, I wonder how many times you’ve tried 3D Touching on the iPad too, out of habit. If your objection is, But simply tapping and holding on apps to replace 3D Touch will interfere with the very old gesture for rearranging apps on the Springboard, I will say that ‘Rearrange apps’ could very well become an option in the invoked contextual menu. Again, simpler to implement than to describe.
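Incidentally, the hardware check that would make such a fallback possible already exists in UIKit, and individual apps can use it today to offer tap-and-hold as a stand-in on non-3D Touch devices. A minimal sketch of the idea (the class and method names are hypothetical; the capability check and the long-press recogniser are standard UIKit as of iOS 9/10):

```swift
import UIKit

class QuickActionsViewController: UIViewController {
    let iconView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        iconView.isUserInteractionEnabled = true

        if traitCollection.forceTouchCapability == .available {
            // 3D Touch hardware: pressure-based interactions can be offered
            // here (e.g. peek and pop via registerForPreviewing(with:sourceView:)).
        } else {
            // No 3D Touch (iPhone SE, 5s, 6, or any iPad):
            // fall back to tap-and-hold for the same contextual options.
            let longPress = UILongPressGestureRecognizer(
                target: self, action: #selector(showQuickActions(_:)))
            iconView.addGestureRecognizer(longPress)
        }
    }

    @objc func showQuickActions(_ recognizer: UILongPressGestureRecognizer) {
        guard recognizer.state == .began else { return }
        // Present the same menu of options 3D Touch would reveal.
    }
}
```

Whether Apple could make this fallback work system-wide is, of course, the open question; the point of the sketch is simply that both the capability check and the fallback gesture are already there.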
Not long ago, after a series of posts where I was particularly critical of Apple, I received some private feedback that essentially amounted to: It seems that you’re always trying to find things to complain about when it comes to Apple — meaning that I was simply turning into a contrarian about Apple, their hardware and software, and so forth. I could quickly respond by saying that if I didn’t care deeply about Apple, I wouldn’t write articles of more than 2,400 words, such as the one you’re reading now. As I said at the beginning, I’m not belittling Apple’s efforts with regard to iOS 11. I really think it’s a feature- and technology-packed release, and it’s great that we finally have iPad-specific features that make certain tasks much easier and certain workflows smoother and more Mac-like.
In the process, I’m simply observing how the added functionality and Mac-likeness inevitably bring new, more complex gestures to memorise and master, something I predicted would happen some time ago. iOS devices have made computing much more accessible to many tech-averse people, and they achieved that thanks to their simplicity. Touch and multi-touch interaction have played an important part in delivering such simplicity, immediacy, and intuitiveness, but we shouldn’t forget that the graphical user interface has played an equally essential part. iOS apps are more approachable than traditional computer apps, with more discoverable commands and options, and sometimes even a clearer, more direct way of presenting information and acting on it.
As features and capabilities are added, part of me fears that we’ll lose a bit of the initial simplicity and intuitiveness that attracted so many people to iOS in the first place. At this stage, a lot of gestures and UI paradigms assume familiarity with Mac OS, or a long-time familiarity with iOS. People, especially tech-savvy people, who started with the first iPhone ten years ago and have been iOS users ever since, have evolved in their use of iOS step by step, in parallel with the progression of iOS itself. But today, every time I see people approach iOS for the first time, I often notice a mild bewilderment (“How do I do this?”; “Oh, I didn’t know you could do that”; and so on) that I didn’t see before.
I’m not necessarily saying that Apple is doing something wrong in this regard. For the iPad, complexity in software is inevitable, if only to fully harness its impressive hardware and power. I only hope that, down the road, the interface won’t become unnecessarily complex and too similar to a traditional computer’s UI, with all the problems that involves, such as poor feature discoverability or overcrowded, badly designed interfaces. And I hope that, if the gesture vocabulary is destined to grow, it grows while remaining as consistent as possible across iOS releases, to avoid confusion, unnecessary cognitive load, and unnecessary retraining. When that can’t be achieved, obscurity has to be avoided at all costs: the interface should guide users to discover new gestures, especially gestures that are at once essential and complex, with visual cues and well-designed buttons and controls.
In the afore-linked Trajectories, I argued that if we zoom out and consider the big picture, the revolution in personal computing brought by iOS feels (to me) more like a reinvention of the wheel than a tangible progression. On the one hand, I find devices like the new iPad Pros really exciting, with incredible hardware and, in iOS 11, a more versatile operating system. On the other, when I try to picture what’s next, all I can see for now are devices and an operating system that, to become even more versatile, will have to implement features and paradigms we’ve already seen in traditional computers and systems. And that doesn’t really strike me as revolutionary, conceptually speaking.