Back to the smartphone experience of 2006

Around this week ten years ago, the iPhone went on sale. It revolutionised everything about the mobile phone landscape. Year after year, iteration after iteration — both in the iPhone hardware and iOS — we have been getting so accustomed to the ‘new order’ and to the revolution brought by the iPhone that, understandably, there are a lot of things we now take for granted.

Just two days ago, I had a sobering reminder of what life with a smartphone was like before the iPhone. My wife and I noticed a discarded Nokia E61 near her workplace. The phone looked a bit battered. The battery, while present, appeared damaged in one corner. I decided to bring it home anyway, as I do with any discarded electronic device I find interesting, the idea being, Let’s see if it works. If it ends up not working, I can always throw it away myself. My wife used to own a Nokia 6630 and I remembered we still had its accessories around (charger, USB data cable, earphones). Now, Nokia is one of the best when it comes to accessory compatibility across their older phones, and I was happy to see that everything could be easily connected to this E61.

After a short while connected to the charger, surprise surprise, the E61 came alive:

Nokia E61 - alive

(Photo taken with an iPhone 4 in terrible lighting conditions — apologies for the quality.)

What’s even more surprising is that, despite the physical damage, the battery works and still holds an impressive charge.

So, I got lucky. Then I started fiddling with the phone while looking for more information about it. I remember chasing after this series of Nokia smartphones in the pre-iPhone era. They looked ‘professional’ and capable. Now its tech specs definitely make us chuckle. It was 2006, after all. Still, thanks to the iPhone’s arrival, smartphones like this feel even older now. To be fair, the E61 is capable enough: it connects to WPA wireless networks; it easily paired via Bluetooth with my Mac; it is especially good at handling email; and it has decent calendar/organiser apps, like many other similar smartphones/PDAs of the era. Even the physical keyboard is tolerable (better, say, than the Palm Pre’s), and the joystick is better than certain mushy ‘pads’ I’ve operated on other dumbphones and pre-iPhone smartphones. Oh, and the speaker is surprisingly loud and crisp: music obviously sounds richer through the earphones, but it is otherwise quite listenable. The speaker is positioned on the side of the phone, so the music doesn’t come out muffled when you leave the phone on any kind of surface.

Nokia E61

The real shock in going back to a phone that existed before the iPhone and iOS concerns the interface and the user interaction. Trivial tasks such as connecting to a wireless network involve convoluted trips through menus in different areas of the phone. It took me a while to figure things out, because nowadays I just turn Wi-Fi on, tell my iPhone which network to connect to, and can easily see in the status bar if and when the iPhone is connected to a wireless network. On this Nokia, I had to revert to the older concept of connectivity ‘on demand’; I had to remember that logic to understand why it was necessary to first create a sort of profile designating, say, my home wireless network as the access point to connect to when needed. You don’t just point at your network in the list of available networks and tell the phone to connect to it, like you do now.

As you can see in the picture above, the UI of these phones was more folder-based than app-based, and the folder structure a bit more rigid than what we’re accustomed to now. It wasn’t worse per se; it made sense for how you used such phones, but it certainly was more of a pain to navigate.

Returning to a physical keyboard, even one as decent as this, reminded me how much I don’t miss having one and how grateful I am for the iPhone’s virtual keyboard. Sure, at this point I’d probably have to spend a week using only the Nokia E61 to properly get accustomed to its keyboard and gain typing speed, but again, I had forgotten how tedious and frustrating the ‘hunt for a symbol or diacritic’ game was. At least it’s a full keyboard and I didn’t have to go back to T9 input.

Then of course there’s the matter of the screen. It’s okay, and has enough contrast. The pixels are visible, but not as conspicuously as the photo above may suggest. Still, it’s QVGA: 320×240 pixels, and browsing the Web is… well… not really pleasant. I managed to install the Opera Mini browser, which is more usable, but still: websites (those gracious enough to adjust to this kind of vintage device) are hard to navigate and difficult to read. Checking my Gmail account was tolerable, as was handling email in general on this phone, but again, fonts were small and legibility low. You just miss a retina display badly.

As I’m sure you understand, I’m not writing a review of the Nokia E61. It wouldn’t be fair and wouldn’t make any sense; but checking out this smartphone from 2006 is useful for the perspective it gives. It’s fascinating to have this kind of throwback experience and be reminded of the sheer magnitude of the iPhone’s impact from that point onward. I had to spend a couple of days with a previous concept of smartphone to fully realise how many details and user interface/interaction paradigms we take for granted today.

Category: Tech Life

Tap swipe hold scroll flick drag drop

If iOS were a game franchise, its eleventh iteration coming in a few months would perhaps be called iOS 11 — A Release for Geeks. I want to make something clear from the start: I do think iOS 11 is one of the most interesting and exciting versions of iOS I’ve seen, at least since iOS 7. And for the iPad in particular, it’s possibly the best iOS version ever. I like how the iPad’s potential is finally being recognised in software, through the addition of iPad-specific features.

But what I also noticed, both in the WWDC 2017 keynote demos and on Apple’s iOS 11 Preview page, is a new layer of complexity which makes me wonder if maybe with iOS 11 we’re witnessing the start of a new phase that may ultimately bring a less intuitive — or at least less immediate — iOS user interface. Especially for new users.

Before I proceed, I wanted to recap a few things I’ve been pointing out over time.

In Rebuilding the toy box, an article I wrote in March 2012 in the era of iOS 5, I observed:

Now, in my opinion, Apple has managed to do something incredibly difficult: on one side it had to make the toy box [iOS’s user interface, the ‘container’] more complex to accommodate new features and keep it manageable as users pour more applications into it; on the other, Apple has been able to maintain the user interface as simple and as consistent as possible. Compared to the first-generation iPhone, very few new gestures or commands have appeared during these years. The learning curve has remained consistently low. Yet, feature-wise, if you put the first iPhone with iPhone OS 1.x and an iPhone 4S with iOS 5 side by side, there’s an abyss between them.

In my final observations in The Mac is just as compelling (February 2016), I wrote:

iOS is praised but at the same time you hear that it needs certain refinements — especially when it comes to the iPad Pro — to fully take advantage of the hardware and the possibilities it opens. As I said previously, to achieve that, to become a more powerful, less rigid system, iOS will have to get a bit more complex. If you want a higher degree of freedom when multitasking in iOS, things will have to behave in a more Mac-like way. And that’s ironic, is it not? A few days back I saw someone posting a design for a drag-and-drop interface between windows in iOS, and the general reaction seemed to be: That’s a cool idea. I thought: Of course, it’s like on the Mac. It’s one of the simple things of the Mac. You can tell me it’s great to move files from an app to another by using share sheets. Dragging and dropping is just as simple, and works basically everywhere.

In Trajectories (December 2016), I wrote:

How is iOS supposed to evolve to become as mature and versatile a platform as the Mac?

If how iOS has evolved until now is of any indication, the trajectory points towards the addition of Mac-like features and behaviours to the operating system. For example, iPads have become better tools for doing ‘serious work’ by adding more (and more useful) keyboard shortcuts, and by improving app multitasking with features like Slide Over, Split View, and Picture in Picture.

I may be wrong about this but my theory is that, in order for iOS to become more powerful and versatile, its user interface and user interaction are bound to get progressively more complex. The need may arise to increase the number of specialised, iPad-only features, features that would make little sense on the iPhone’s smaller footprint, or for the way people use iPhones versus iPads.

And further on:

In iOS’s software and user interface, the innovative bit happened at the beginning: simplicity through a series of well-designed, easily predictable touch gestures. Henceforth, it has been an accumulation of features, new gestures, new layers to interact with. The system has maintained a certain degree of intuitiveness, but many new features and gestures are truly intuitive mostly to long-time iOS users. Discoverability is still an issue for people who are not tech-savvy.

I think this is even more evident by taking a look at the new iPad-specific features and gestures introduced in iOS 11. In the first incarnations of iOS, what truly amazed me was how much you could accomplish with a relatively bare-bones gesture vocabulary, and how many of the core gestures of iOS’s multi-touch interaction were so intuitive, so instinctual, that a lot of regular users of any age could pick them up with surprising ease and speed, and be able to handle an iOS device in no time. Think gestures like pinch-to-zoom, swiping to browse items, tap-and-hold to select text, etc. When the iPhone was introduced, Apple produced a ‘welcome’ instructional video to teach people how to interact with the new device. It was very well made, but I was surprised that many non-techies in my circle of friends and acquaintances found it mostly redundant. Apple had managed to introduce a completely new interface and user interaction that needed very little explanation to be grasped by regular people. Exciting times, indeed. Revolutionary, even.

Then things necessarily got more complex with each iOS iteration. Still, as I noted in my past articles, it was truly admirable how Apple managed to introduce more complexity while maintaining intuitiveness in the user interface and interaction. The core gestures and behaviours remained consistent for several iterations, but an increasing number of panels, controls, screens, and interactions made matters more complex. The added complexity was sometimes mitigated by predictable behaviour. When Notification Centre was introduced in iOS 5, invoking its panel was rather straightforward: you pulled it down from the top by swiping downwards. No gesture had been associated with that motion before, and the matte panel with its linen motif was visually distinctive — it felt like pulling down a curtain or a virtual cover over the Springboard. It added depth as intuitively as possible.

Other times the added complexity went hand in hand with inconsistency, or with the introduction of gestures that interfered with previously-learnt, ingrained ones. Take for example Spotlight search: since its introduction in iOS 3, it was presented as a separate screen virtually positioned to the left of the Home screen (and indicated by a small loupe icon instead of a dot, so that you knew you were accessing a different kind of screen, not just another screen of apps), and it was easily accessible by swiping right. Then, starting with iOS 7, that separate Spotlight screen disappeared, and you had to drag downwards from any of the screens of apps on your device to reveal a Spotlight search text field — a gesture that somewhat interfered with the ‘swipe down from the top edge to invoke Notification Centre’ gesture. (I know, I only have anecdotal data, but I witnessed many regular people struggle with this new gesture back then.) This behaviour lasted from iOS 7 to iOS 9, and remains in iOS 10 with a twist: you can also access Spotlight search by swiping right from the Home screen, just like old times, but that swipe now brings up the Today View as well.

As iOS’s interface has got more complex in recent versions, with the addition of new layers, UI elements, behaviours, even hardware features (3D Touch), the gesture vocabulary has grown richer. On the one hand, it’s fascinating how single gestures (or chains of gestures) have been kept relatively simple in themselves, considering the level of sophistication of the commands and instructions they carry out. On the other, it has become a bit more difficult to remember all these gestures — or to learn new gestures that substitute old, ingrained ones, like Press Home to open replacing the tried-and-trusted Slide to Unlock. And new users who start getting acquainted with iOS today have to be shown many of these gestures, otherwise it’s unlikely they’d come naturally to them (another thing I personally witnessed in an Apple Store). It’s a very different scenario from the early days, when regular people picked up a lot about handling an iOS device simply by playing with it for a while.

Let’s take some of iOS 11’s new features for the iPad. The Dock has been expanded to be more like Mac OS’s Dock, both in its appearance in the Springboard and in its multitasking-related behaviour. Now, when you’re inside an app, you can invoke the Dock by simply swiping up from the bottom of the screen — a gesture that until now was reserved for summoning Control Centre, by the way. Yes, you can still access Control Centre from the bottom, together with the new app switcher, but you have to keep swiping up. As you have probably guessed, I’m not a fan of this kind of gesture overlap.

From the Dock, you can do things like this:

 
which is still a relatively simple thing to do. Swipe up, tap and hold, then select. (I’m sure the interface won’t appear as simple if more files or options are displayed when you tap and hold on different apps).

But take this series of gestures:

 

Not exactly an example of something that would come to you naturally if no one told you how to do it, and something you’d probably have to practice a few times to pick up and perform fluently.

Selecting multiple objects to drag to another app open in Split View, or even across spaces, is another gesture that’s cleverly implemented if you consider what it achieves without a mouse and pointer, but it takes a bit of practice, and it certainly needs to be performed with the iPad placed on a stable surface. (See Craig Federighi demoing it at about the 1:10:32 mark in the WWDC 2017 keynote video.) It’s a gesture that makes sense once you stop and think about the dynamics, but it can be physically intricate when you want to multi-select and drag, say, more than five objects. I wonder if it couldn’t be made more straightforward — especially when multi-selecting files in the new Files app — for example like this: you tap and hold on the first item, a series of radio buttons appears next to all the other items, you select all the items you want, and then you tap and drag the selection to the intended destination. It sounds more complex when described, but I think it’s simpler and less fatiguing in its execution.

Then there are gestures I frankly don’t get, such as how QuickType is supposed to improve your typing experience with the new virtual keyboard. The relevant paragraph on the iOS 11 Preview page reads: ‘Letters, numbers, symbols, and punctuation marks are now all on the same keyboard — no more switching back and forth. Just flick down on a key to quickly select what you need.’

 

I don’t find this method quick at all. You may save a tap or two when inserting the occasional symbol, but when writing long passages or sentences containing a mix of letters, numbers, symbols, I find this gesture to actually slow down typing, and a bit impractical in general. (Maybe it’s something you need some time to get accustomed to — in my admittedly quick test at an Apple Store, I often had to stop and think, and that flicking down didn’t exactly come naturally while typing).

A couple of stray observations

  • On the iPhone, Apple should really stop fiddling with the Lock screen and Notification Centre for a bit. iOS 11 introduces subtle new behaviours when invoking notifications (you swipe down as usual, but you’re actually pulling down the Lock screen, which displays the most recent notification; you can then pull up to get a list of all recent notifications; and of course you can swipe right to access good old Today View…). And I keep wondering what was wrong with iOS 10’s implementation of Notification Centre. I don’t see any meaningful improvement in iOS 11’s implementation, just a few redundant gestures added on top of an interface that’s getting progressively more layered — from a cognitive standpoint — at each new iteration.
  • 3D Touch continues to puzzle me as far as gestures are concerned. When Federighi showed how the redesigned Control Centre works, you could see that basically all the controls can be further explored, revealing additional settings, by using 3D Touch on them. On Twitter, I sarcastically remarked how iPhone SE owners are going to love this (the iPhone SE doesn’t feature 3D Touch — and neither do the iPhone 5s, iPhone 6, and 6 Plus, all devices that will be capable of running iOS 11). I soon learnt that, on devices that lack 3D Touch, the same can be achieved by tapping and holding. And that in turn made me wonder: why can’t the tap-and-hold gesture be used system-wide to act as a ‘poor man’s 3D Touch’ on iOS devices that lack the feature? Further: why not just use the tap-and-hold gesture instead of 3D Touch, everywhere? As it is, 3D Touch looks to me more like technological showing off, innovation for innovation’s sake, than a truly useful, irreplaceable gesture. It adds complexity, a new gesture to memorise, mainly for invoking contextual menus, options, and settings. It is exclusive to a subset of iOS devices (only recent iPhones), and if you have a 3D Touch-equipped iPhone and also own an iPad, I wonder how many times you’ve tried 3D Touching on the iPad as well, out of habit. If your objection is, But simply tapping and holding on apps to replace 3D Touch will interfere with the very old gesture for rearranging apps on the Springboard, I will say that ‘Rearrange apps’ could very well become an option in the invoked contextual menu. Again, simpler to implement than to describe.

(Provisional) conclusion

Not long ago, after a series of posts where I was particularly critical of Apple, I received some feedback privately that essentially amounted to: It seems that you’re always trying to find things to complain about Apple, meaning that I was simply turning into a contrarian when it came to Apple, their hardware and software, and so forth. I could quickly respond by saying that if I didn’t care deeply about Apple, I wouldn’t write articles of more than 2,400 words such as the one you’re reading now. As I warned at the beginning, I’m not belittling Apple’s efforts with regard to iOS 11. I really think it’s a feature- and technology-packed release, and it’s great that we finally have iPad-specific features that make certain tasks much easier and certain workflows smoother and more Mac-like.

In the process, I’m simply observing how the added functionality and Mac-likeness inevitably bring new, more complex gestures to memorise and master; something I predicted would happen some time ago. iOS devices have made computing much more accessible to many tech-averse people, and they managed to achieve that thanks to their simplicity. Touch and multi-touch interaction have played an important part in delivering such simplicity, immediacy, and intuitiveness, but we shouldn’t forget that the graphical user interface has played an equally essential part. iOS apps are more accessible than traditional computer apps, with more discoverable commands and options, and sometimes even a clearer, more direct way of presenting information and acting on it.

As features and capabilities are added, part of me fears that we’ll lose a bit of the initial simplicity and intuitiveness that attracted so many people to iOS in the first place. A lot of gestures and UI paradigms at this stage assume familiarity with Mac OS, or long-time familiarity with iOS. People, especially tech-savvy people, who started with the first iPhone ten years ago and have been iOS users ever since, have evolved in their use of iOS hand in hand with the progression of iOS itself. But today, every time I see people approach iOS for the first time, I often notice a mild bewilderment (“How do I do this?” — “Oh, I didn’t know you could do that” — etc.) that I didn’t see previously.

I’m not necessarily saying that Apple is doing something wrong in this regard. Complexity in software is inevitable for the iPad, if only to fully harness its impressive hardware and power. I only hope that down the road the interface won’t become unnecessarily complex and too similar to a traditional computer’s UI, with all the problems this involves — such as poor feature discoverability or overly-crowded, badly-designed interfaces. And if the gesture vocabulary is destined to grow, I hope it grows while remaining as consistent as possible across iOS releases, to avoid confusion, unnecessary cognitive load, and unnecessary re-training. And when that can’t be achieved, then obscurity has to be avoided at all costs: the interface should guide users to discover new gestures — especially if such gestures are simultaneously essential and complex — with visual cues and well-designed buttons and controls.

In the afore-linked Trajectories I argued that if we zoom out a bit and consider the big picture, the revolution in personal computing brought by iOS feels (to me) more like a reinvention of the wheel than a tangible progression. On the one hand, I find devices like the new iPad Pros really exciting, with incredible hardware and, with iOS 11, a more versatile operating system. On the other hand, I try to picture what’s next, and all I can see for now are devices and an operating system that, to be even more versatile, will have to implement features and paradigms we’ve already seen in traditional computers and systems. And this doesn’t really strike me as something revolutionary, conceptually speaking.

Category: Software

The price of upgrading a Mac

I will soon share my observations about WWDC 2017, but more pressing matters had me looking into various options for a Mac upgrade. (My good old 2009 MacBook Pro has lately been giving me troubling signs that it may not be long for this world.)

During the WWDC 2017 keynote, Apple announced widespread updates across the Mac line: the MacBook, MacBook Pro, and iMac product families all received better processors and hardware configurations (even the MacBook Air), along with a gentle retouch of certain prices, so that there is now a more affordable ‘entry-level’ 13-inch MacBook Pro (without Touch Bar) at $1,299, and a really interesting 21.5-inch iMac with Retina 4K display at the same price.

Given the retina display, the latest-generation 3 GHz quad-core Intel i5 CPU, the discrete Radeon Pro 555 GPU with 2 GB of video memory, and the generous port configuration (two Thunderbolt 3 ports, four USB 3 ports, an SDXC card slot, a headphone jack, and Gigabit Ethernet), this 21.5-inch 4K iMac has truly attracted my interest, and it is the most probable candidate for my next upgrade. The problem is that it’s less affordable than it seems.

First off, let’s look at the price in euros (I live in Spain), taxes included: the base configuration of this 21.5-inch 4K iMac costs €1,499. To be fair, the $1,299 starting price in the US is before taxes. I simulated a purchase as if I were a New York resident, and the final estimate on the Apple Store page was about $1,400. Still, $1,400 is not €1,500. A straight currency conversion at today’s rates puts $1,400 at about €1,249. Now, that would be fair.
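For what it’s worth, the comparison above can be spelled out in a few lines. The prices are the ones quoted in this post; the exchange rate is my assumption (roughly the mid-2017 rate of about 1.12 US dollars per euro):

```python
# Rough sketch of the price comparison. The exchange rate is an
# assumption (approximately the mid-2017 rate); prices as quoted above.
USD_PER_EUR = 1.12

us_price_taxed = 1400   # estimated New York price in USD, taxes included
es_price_taxed = 1499   # Spanish price in EUR, taxes included

fair_eur_price = us_price_taxed / USD_PER_EUR
premium = es_price_taxed - fair_eur_price

print(f"Converted US price: ~€{fair_eur_price:.0f}")  # ~€1250
print(f"Extra paid in Spain: ~€{premium:.0f}")        # ~€249
```

In other words, with that assumed rate the Spanish price carries roughly a €250 surcharge over the US price, taxes already accounted for on both sides.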

Then, let’s take a closer look at that base configuration: it has 8 GB of RAM and a 5400rpm 1 TB hard drive. Yes, a hard drive. If you’ve ever upgraded your Mac by switching from a hard drive to a solid-state drive, you know just what kind of a bottleneck a hard drive is with regard to the general responsiveness and speed of the Mac. So, let’s go to the customisation options for this iMac: a 1 TB Fusion Drive costs €120 more; a 256 GB SSD is €240 more; a 512 GB SSD is €480 more. If I settle for a 256 GB SSD, the price of the iMac already becomes €1,739.

Then there’s the RAM. Now, I’m not saying 8 GB is bad, but given that a Mac’s upgrade cycle is generally slow, with machines lasting more than a few years (for regular customers without specific ‘pro’ needs), 8 GB may not be enough down the road. A wiser, more forward-looking decision is to opt for 16 GB of RAM. And that is €240 more: the ‘affordable’ 21.5-inch 4K iMac has now reached €1,979, simply for choosing a couple of extras that, in mid-2017, should really be standard fare. If not the RAM, at least the SSD.
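To make the running total explicit, here is the build-to-order arithmetic from the last two paragraphs, using the euro prices quoted from Apple’s Spanish store:

```python
# Build-to-order arithmetic for the 21.5-inch 4K iMac,
# using the euro prices quoted in the text.
base_price = 1499   # base config: 8 GB RAM, 1 TB 5400rpm hard drive
ssd_256gb = 240     # 256 GB SSD instead of the hard drive
ram_16gb = 240      # 16 GB RAM instead of 8 GB

with_ssd = base_price + ssd_256gb
with_ssd_and_ram = with_ssd + ram_16gb

print(f"Base + SSD: €{with_ssd}")                # €1739
print(f"Base + SSD + RAM: €{with_ssd_and_ram}")  # €1979
```

Two sensible extras, and the ‘affordable’ iMac ends up costing nearly €500 more than its advertised base price.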

But hey, if one’s budget is limited, a solution could be to just purchase the iMac as is, and upgrade the RAM and drive later. The good news comes from the iFixit folks who, in their 21.5-inch 4K iMac teardown, discovered that the RAM is user-replaceable and not soldered to the motherboard. (The drive is replaceable too, but that was more obvious.) The bad news is that, to replace the drive and add more RAM, you basically have to dismantle the whole iMac, and I for one am not thrilled by the prospect of cutting away and separating the display to get to the machine’s innards.

When the time comes for me to get a new Mac — and given my MacBook Pro’s current condition, it’ll have to be soon — I will very likely choose this entry-level Retina 4K iMac, because overall it’s good value for the money compared to other solutions. After some quick calculations, however, it seems I’ll be able to afford only one configuration upgrade, and I hate being put in a position where I have to choose between more RAM and a better internal drive. I have been using Apple computers since 1989 and never questioned the premium one usually pays when choosing Apple, but these iMac base configurations and related built-to-order options feel like a bit of nickel-and-diming on Apple’s part.

Aside from my personal needs and upgrade scenarios, if we look at other solutions in the current Mac product lines, we encounter details such as these:

  • Even the top-of-the-line 27-inch Retina 5K iMac is offered with only 8 GB of RAM. It’s the current, most professional configuration for a desktop Mac; it wouldn’t hurt to have 16 GB already in its base configuration, and Apple could easily charge $200 more for it. It would look better and I don’t think anyone would complain. Well, at least the RAM in the 27-inch iMac is easily upgradable by the user.
  • The entry-level 12-inch Retina MacBook and the entry-level 13-inch MacBook Pro (without Touch Bar) both cost $1,299. The latter has a bigger, brighter, higher-resolution screen, a better processor, one more USB-C port, a better GPU, and even a better camera. Compared to the 12-inch MacBook, its only ‘drawbacks’ are that it’s slightly larger, weighs 450 grams more, and has a 128 GB SSD instead of a 256 GB SSD. I’m sure there are people whose priority is having the lightest, most compact portable Mac, and who, faced with this comparison, would undoubtedly choose the 12-inch MacBook. But in my view, offering these two models at the same price immediately puts the smaller MacBook at a disadvantage. The moment weight and dimensions cease to be crucial, anyone can see that the 13-inch MacBook Pro is the better deal here.
  • Pay attention when you’re customising the configuration of the Mac you intend to buy. As Adam Engst at TidBITS warns, you could end up with a worse configuration for the same price depending on how you start, or you might pay more for the same configuration. Read the article for more detailed information and advice.

Final considerations

While I understand that certain design constraints may impact the upgradability of a machine, I’m still rather baffled by the relative rigidity of Apple’s offerings. Having to decide how much RAM, what kind of storage, and how much of it at the time of purchase puts customers in a difficult position, as they have to make a decision right away that will very often affect their Mac for its entire lifecycle. No Mac is a throwaway machine, and while there are exceptions, a lot of people keep their Macs for many years. With the stingy base configurations of many Macs, Apple pushes people towards two main behaviours:

  1. Be content with the base configuration of a Mac model, and as soon as it’s not enough for your needs (or to handle whatever new technology will be thrown at you down the road), just get another Mac.
  2. Customise the Mac model you want at purchase time, and end up with a fairly future-proof machine that will certainly last you more years, but spend much more money on it in the process.

From Apple’s standpoint, this is a great strategy, of course. It makes sense. I’m a terribly budget-conscious customer, alas, but even if I weren’t, the thing that irritates me the most is how certain components of many Mac base configurations look purposefully unappealing to induce people to upgrade them right away, thus spending more money. I mean, a spinning 5400rpm hard drive in a retina iMac, in 2017? I had a 5400rpm hard drive when I purchased my 12-inch PowerBook G4 more than 13 years ago. Eight gigabytes of RAM in the high-end 27-inch Retina 5K iMac, aimed at customers whose needs very likely demand a bare minimum of 16 GB of RAM? Laptops with a non-upgradable 128 GB SSD? All this with base model configurations that aren’t exactly cheap from the start. It doesn’t strike me as treating your customers respectfully.

Every time I bring up this topic, some people feel the need to point out that my complaints are simply dictated by my limited budget, but my beef is less money-related than it seems. It has to do with something I already pointed out above — essentially, the way Apple controls and conditions most upgrade paths when it comes to purchasing a new Mac. You either stick with underwhelming base configurations that will remain unchanged for as long as you have that Mac, or you upgrade components up-front, at the time of purchase, and put up with the considerable premium Apple charges you for that[1]. And in those cases (like the 21.5-inch 4K Retina iMac I’m interested in) where you technically could upgrade the RAM and hard drive yourself at a later date, having to dismantle the machine completely to do so (with the far-from-remote possibility of damaging something in the process) makes the option unappealing, and very much a last resort at that.

 


  • 1. While they might not qualify as ‘considerable premiums’, don’t get me started on the customisation options for included accessories like keyboards, mice, and trackpads. If I want a Magic Keyboard with Numeric Keypad instead of the more compact one, Apple charges €30 more. When you’re buying a machine that already costs €1,500 — and may cost as much as €6,199 for a fully upgraded high-end 27-inch iMac — those additional 30 euros look downright offensive. ↩︎

 

Category: Tech Life

Let’s have both Reach Navigation and the navbar

I read with interest Brad Ellis’s article on Medium, All Thumbs, Why Reach Navigation Should Replace the Navbar in iOS Design, and I wanted to add a few observations on the matter.

He writes:

As devices change, our visual language changes with them. It’s time to move away from the navbar in favor of navigation within thumb-reach. For the purposes of this article, we’ll call that Reach Navigation.

This introduction is also the core of Ellis’s thesis. And I both agree and disagree. I agree that, instead of stretching a thumb or readjusting the device in one’s hand to tap out-of-reach UI elements, navigation within thumb reach would be ideal; it would be better from a usability standpoint. At the same time, I think that doing away with the navbar is a mistake, again from a usability standpoint.

The navbar is important for all the reasons Ellis himself lists, and I’d emphasise its importance in navigating apps with lots of hierarchically-arranged screens, such as iOS’s Settings app. The navbar helps users find where they are in a series of nested screens; it’s like the Path Bar in Mac OS’s Finder.

Of the solutions Ellis proposes, the best, I find, is suggesting that developers design their apps in a way that renders the navbar unnecessary. But here I want to stress another point: discoverability. Don’t make apps that are obscure to navigate. I like the navbar as a core UI element because it’s generally transparent in telling you where you are, and also because, as a personal preference, I’d rather tap on self-explanatory labels than start to ‘guess-swipe’ on different areas of the screen to see what happens. Will I uncover a hidden drawer/sheet/panel the UI design never hinted at, for example?

I’m not a fan of swiping as a navigational gesture, or rather, of swiping as the only navigational gesture. My main peeve with swiping is that in some apps it’s very easy to inadvertently tap a UI element on the current screen that triggers some other action, like dropping a pin on a map, highlighting a text field (which in turn summons the virtual keyboard), or opening another screen entirely. This is something I’ve noticed happening (to me, at least) more often the bigger the iOS device’s screen gets: on the Plus-sized iPhones and on iPads. Tapping on the navbar, in this regard, feels safer.

The navbar is also a better solution with regard to accessibility. Tapping may require more precision than swiping, but I think it also requires less effort.

And here’s a good piece of advice from Ellis:

Prioritize placing buttons at the bottom of the screen.

[…]

Move the most-used items to the bottom.

Which inspired my humble proposal: Just move the navbar to the bottom of the screen.

Now, I haven’t tested my idea with every app, but as far as iOS’s built-in apps that feature a navbar go, my proposal seems to work. Here are two examples in the Settings app. On the left, the screen as it is now in iOS 10.3.2. On the right, a quick mockup of my bottom navbar proposal:

navbar comparison 1

 

navbar comparison 2

In this second example, I find that moving the navbar to the bottom (and maintaining the screen’s Title at the top) also makes for a less cluttered and more legible top area, especially on the 4-inch display of the iPhone 5/5c/5s/SE. Long labels like Display & Brightness here can finally be centred on the screen, and feel less condensed.

Another example, with iOS’s Photos app:

navbar comparison 3

I chose this app because here we see that the navbar would be placed above a bottom row of buttons. In this instance, a possible criticism may be that placing the navbar there could create a bit of interference with the Photos, Shared, and Albums buttons. But I think that with the navbar within easy reach, accidentally hitting an element of the bottom row instead of one of the labels in the navbar is also less likely, unless you have really big fingers.

These are quick and rough mockups, I’ll admit. The navbar at the bottom could be made visually more prominent, for example. I also realise it might take a while to get accustomed to the new placement, but the added convenience of more reachable, tappable navigation labels could ease the transition. Again, this is just an idea I haven’t fully explored in all possible instances system-wide, but I think it could be a ‘best of both worlds’ solution: make navigation easier with more reachable controls, without losing the navbar and creating a potentially opaque navigation interface.

Category: Software

The disappearing computer and what it disappears

Walt Mossberg’s last column, The Disappearing Computer, is well worth reading and has certainly given me much to mull over these past few days. 

I think his analysis and predictions regarding the direction technology is going to take in the short-to-medium term are rather spot-on. Much of what’s going to happen feels inevitable. That doesn’t mean I have to like it or be okay with it. And for the most part I don’t and am not.

All of the major tech players, companies from other industries, and startups whose names we don’t know yet are working away on some or all of the new major building blocks of the future. They are: artificial intelligence / machine learning, augmented reality, virtual reality, robotics and drones, smart homes, self-driving cars, and digital health / wearables.

The first thing that struck me upon reading this list was how little I’m interested in those things (save, perhaps, for wearable health monitors). Personal preferences and interests apart, though, I can see how some of those fields can provide some degree of usefulness to people. What worries me, generally speaking, is the price we’ll have to pay in the process of bringing such technologies into the mainstream. Another thing that worries me is the state of our planet, and none of those technologies strikes me as an Earth-saving technology. There are days when I feel particularly bitter, and the world looks like the Titanic, and technology like the orchestra that keeps playing sweet music in our ears while we all sink.

Whenever I voice my concerns about where technology is driving us, I am mistaken for a Luddite, or for a technophobe. I’m not. I simply refuse to drink the Silicon Valley Kool-Aid. Artificial intelligence and machine learning are great concepts that can have a lot of useful implementations, but to generate meaningful output, to produce a response that mimics intelligence, a machine needs a huge amount of data. A machine isn’t intelligent; it’s just erudite. A lot of data is collected without enough transparency. A lot of data about us is collected without our explicit consent. A certain amount of personal data is tacitly given away by ourselves in exchange for some flavour of convenience. My biggest concern is that a lot of data is being collected by a few entities, a few ‘tech giants’, which are private corporations with little to no external oversight. And despite their public narratives, I seriously doubt their goal is to advance humanity and make our lives better in a disinterested fashion.

I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought.

Your whole home, office and car will be packed with these waiting computers and sensors. But they won’t be in your way, or perhaps even distinguishable as tech devices.

This is ambient computing, the transformation of the environment all around us with intelligence and capabilities that don’t seem to be there at all.

On the surface, this is all great and exciting. On the other hand, I don’t want technology to be too much out of the way. Or, in other words, while I think it’s cool that tech becomes more ‘invisible’, I don’t want it to also become more opaque. I don’t want devices I can’t configure. I don’t want impenetrable black boxes in my daily life, no matter how much convenience they promise in return.

Google has changed its entire corporate mission to be “AI first” and, with Google Home and Google Assistant, to perform tasks via voice commands and eventually hold real, unstructured conversations.

Not long ago, I humorously remarked on Twitter: “Remember, it’s Google’s Assistant. Not yours.” Well, I wasn’t really joking. I’m utterly astonished at the number of people who don’t mind giving Google a great deal of personal information, only for the convenience of having a device they can ask in natural language, e.g. How long is it going to take me to get to my office if I leave by car in 15 minutes? and receive a meaningful response. To receive precise answers to trivia questions, people are willing to put devices in their homes that essentially monitor them 24/7 and send data to a big, powerful private corporation.

The problem is that a lot of non-tech-savvy people don’t care and don’t react until they see or feel the damage. In conversations, I often hear the implicit equivalent of this position: “Yeah, I’ve been giving Google/Facebook/etc. all kinds of personal information over the years, but none of them harmed me in return; what’s the big deal?” The big deal is that someone else now owns and controls personal information about you, and you don’t know exactly how much data they have, or how they’re using it. Just because they’re not harming you directly, or in ways that immediately, visibly affect you, doesn’t make the whole process excusable. A company may very well collect a huge amount of personal information and just sit on it for years until they figure out what to do with it; one day the company gets hacked, all the data is exposed, your accounts and information are compromised, and oh, at this point people are finally angry and upset, and blame the hackers — when the hackers are just an effect, and not the cause.

I urge you to read Maciej Cegłowski’s transcript of Notes from an Emergency, a talk he recently gave at the re:publica conference in Berlin. It’s difficult to extract quotes from it, precisely because it is entirely quotable. If you want to understand my general position on Silicon Valley, just read how he talks about it. This passage in particular has stuck with me ever since I read it, because it perfectly expresses something I think as well, but with a clarity and brevity I couldn’t find:

But real problems are messy. Tech culture prefers to solve harder, more abstract problems that haven’t been sullied by contact with reality. So they worry about how to give Mars an earth-like climate, rather than how to give Earth an earth-like climate. They debate how to make a morally benevolent God-like AI, rather than figuring out how to put ethical guard rails around the more pedestrian AI they are introducing into every area of people’s lives.

Back to Mossberg:

Some of you who’ve gotten this far are already recoiling at the idea of ambient computing. You’re focused on the prospects for invasion of privacy, for monetizing even more of your life, for government snooping and for even worse hacking than exists today. If the FBI can threaten a huge company like Apple over an iPhone passcode, what are your odds of protecting your future tech-dependent environment from government intrusion? If British hospitals have to shut down due to a ransomware attack, can online crooks lock you out of your house, office or car?

Good questions.

My best answer is that, if we are really going to turn over our homes, our cars, our health and more to private tech companies, on a scale never imagined, we need much, much stronger standards for security and privacy than now exist. Especially in the U.S., it’s time to stop dancing around the privacy and security issues and pass real, binding laws.

From what I’ve seen so far, legal systems everywhere haven’t been able to keep up with the pace at which technology moves. It’s extremely unlikely that technology’s pace is going to slow down, and while I hope laws and regulations will be passed and enforced more swiftly, I’m not sure governments will be completely impartial about it, as knowing more about people seems to be an agenda that governments and tech giants share.

I still hope people themselves can fight back to regain control of their data, but this era of technological progress is also characterised by so much regress in other human behaviours: intolerance, racism, xenophobia, a rise in superstition and distrust of science, the tendency to believe whatever is on the Internet without a sliver of critical thought…

And in our tech lives, I witness more and more frequently just how blinded by convenience we’re becoming. Never before have I seen so much aversion towards friction in our daily lives. And convenience is the siren song tech giants are constantly singing in our ears: “Put everything in the cloud, give us your data, your documents, your photos, it all gets out of the way, it’s all synced conveniently to all your connected devices; it’s all so much easier, and so inexpensive!” I agree that certain friction is unnecessary, but there’s also a kind of friction that prevents our brains from working on auto-pilot all the time, that contributes to keeping our minds nimble, that keeps laziness and apathy at bay. Judging from the increasing number of people I see completely engrossed in their smartphones every time I’m out and about, the siren song of convenience is getting more and more intoxicating. Cegłowski is right: “The danger facing us is not Orwell, but Huxley.” The combo of data collection and machine learning is too good at catering to human nature, seducing us and appealing to our worst instincts. We have to put controls on it. The algorithms are amoral; making them behave morally will require active intervention.

Perhaps my cynicism and lack of starry-eyed Silicon Valley visions of progress stem from the current bleak picture painted by what’s happening in the world on a daily basis. What is technological evolution without a corresponding human evolution? The first thing that comes to mind is something Lenny tells David Haller in the pilot of the TV series Legion: “Don’t give a newbie a bazooka and be surprised when [they] blow shit up.”

Category: Tech Life