Better both worlds than the best of both worlds

Tech Life

When I sat at my Mac to start writing this article, I knew I had to link to a series of past observations, because it’s not the first time I’ve spoken my mind about the iOS vs Mac OS debate. Checking the archives, I realised I’ve already written a piece that sums up all those past observations, wraps them up, and builds on them: it’s Tap swipe hold scroll flick drag drop from June 2017. I urge you to read that before proceeding, to better understand my stance on the matter. This article, in fact, builds on that one.

There is a group of tech geeks who love iOS so much that they’ve gone to great lengths to reconfigure their hardware setups and software workflows to be iOS-first or iOS-only. More power to them, sincerely. But they’re also the people who, more or less directly, have contributed to spreading this nasty ‘iOS vs Mac OS’ mentality I’ll always refuse to espouse, since I think that the best of both worlds is using both worlds and taking advantage of what each does best.

Instead, proponents of the iOS-only way of life have moved through two main phases. The first was emphasising how iOS was simpler and more intuitive to use than Mac OS; how iOS devices are more portable, more convenient, more battery efficient and simultaneously just as powerful as Mac laptops. 

It’s true, the hardware advantage of the iOS platform is undeniable. But which is the better platform from a software and interface standpoint is certainly a more subjective question. It is also undeniable that for several iterations before iOS 11, iOS was less flexible than Mac OS, leading to a certain frustration among iOS-first users who found themselves with powerful hardware — the iPad Pro line — driven by an operating system that barely scratched the surface of that hardware’s potential.

The iPad-specific features introduced in iOS 11 have been a noteworthy step towards more flexibility and power for the platform. And so now we’re witnessing phase two: iOS-first power users who want even more pro features and solutions for iOS (and especially iOS-on-the-iPad); they want a dream device I’ve humorously called the ‘Apple Surface Pro’ — something with Mac-like hardware running an even more ‘pro’ flavour of iOS.

My position on this isn’t to sarcastically remark ‘Not gonna happen, folks’. I actually think Apple may produce some sort of 2‑in‑1 laptop/tablet hybrid if the company thinks it could appeal to a large-enough audience. It’s not the hardware that concerns me (and I believe it doesn’t concern Apple, either) — it’s the software.

The days of iOS behaving the same way on every device are over. iOS 11 in particular has made that abundantly clear. This is good: both iOS hardware and software have matured and reached a point where it’s simply ridiculous to treat the iPad as ‘just a big iPhone’ — or the iPhone as a small iPad for that matter. The price to pay for this emancipation of iOS on the iPad, as I wrote in Tap swipe hold scroll flick drag drop, has been an added layer of complexity to the OS: more features, and more gestures, not all as intuitive as the ‘primitives’ established in the first half of iOS’s history.

Now, I can see the coolness factor in having a new iOS-driven Apple Surface Pro device; a sort of super-tablet that can become anything you want — a pleasantly intuitive tool for drawing, sketching and painting with an Apple Pencil; an eBook reader; a media player on the go… Then you dock it to its keyboard component (which is more than just a keyboard and may even have USB‑C ports and an SD slot), and you can use it as an ultraportable notebook computer for a series of professional tasks requiring this particular form factor. 

But how to handle all this from a software perspective? So far, what we’ve seen on iPhones and iPads is that certain iOS features and/or UI gestures are available or not according to the specific hardware. iOS will behave slightly differently if you’re using an iPhone 8, or an iPhone X, or an iPhone SE, or an iPhone 5s. On the iPad, the fifth-generation ‘regular’ iPad is a bit more limited than the iPad Pro. Supporting a new iOS-driven 2‑in‑1 laptop/tablet hybrid would involve designing and introducing another layer of complexity in iOS.

Whether Apple evolved iOS monolithically to support these fabled new pro features, revealing them only on supported devices while keeping things ‘simple’ on simpler ones, or created a separate ‘iOS Pro’ system flavour specifically to drive this hypothetical new Apple Surface Pro device, the fragmentation and complication of the platform would be inevitable. It would also be quite resource-draining for Apple, which is already struggling to keep all its software platforms running with an acceptable degree of quality assurance. Pushing iOS to this kind of next level is not unfeasible, but I wonder how big the impact would be on other endeavours, Mac OS in particular. And on iOS itself.

Ryan Christoffel’s article, What I Wish the iPad Would Gain from the Mac — which I’ve seen frequently quoted by other sources in my feeds of late — definitely makes for intriguing reading. In his conclusion, Christoffel writes:

The iPad is already proving a formidable Mac-alternative for some users – what happens if it continues closing the gap by adopting the Mac strengths I’ve listed? If the iPad offered support for multiple instances of an app, was available in a more diverse array of hardware, allowed apps to get things done persistently in the background, was home to Xcode, Final Cut Pro, and Logic Pro equivalents, and became a proper shared device with multiple user accounts – why would people continue using the Mac?

What happens may be nirvana for iOS-only power users, but I also wonder whether it’s worth going to all this trouble to get to a point we have already reached today with the Mac. I closed the tweet that inspired this piece saying that, in my opinion, both platforms — iOS and Mac OS — may lose in the long run. What I meant is:

  1. That iOS, in all this effort to get more specialised, more ‘pro’, more Mac-like, ends up becoming less intuitive and approachable for regular people who don’t really care to use iOS and an iOS device in a power-user way, even if it’s the only device they have. The feature creep has certainly brought new capabilities to the platform, but also more complexity. Today I see a lot of people in Apple Stores who seek training to familiarise themselves with iOS devices, and my evidence may be anecdotal, but in the pre-iOS 7 days regular people seemed to pick up the gestures and the Multi-touch interface much more quickly and with less intervention from tech-savvy users or Apple Store staff.
  2. That Mac OS becomes more and more neglected, as both Apple and third-party developers devote more attention to iOS. That Mac OS is allowed to turn into a weaker platform because, as iOS cannibalises Mac OS features, Mac OS in return receives lukewarm apps that are mostly lazy iOS ports or little more than Web apps wrapped in a cumbersome, un-Mac-like UI. (And also loses features — see how the next Mac OS Server release will deprecate a number of services.)

If everything hypothesised above (in my article and in Christoffel’s piece) were to come true, we’re going to be living in ‘interesting’ times for sure. We might end up with a platform that, in trying to be more like its older sibling, a) loses its identity as a really simple, really intuitive and friendly environment, especially for tech-averse people; and b) fails to be as powerful and versatile as Mac OS already is now. And as for Mac OS, its development path might be hampered by the shift towards iOS and, more dangerously, by the spreading idea among iOS-only fans that it all has to be a zero-sum game, that for iOS to win, Mac OS has to lose. I’m worried that the efforts to achieve a product or platform that is supposedly the best of both worlds might lead to sacrificing what is already really great about each of those worlds.

iOS and Mac OS are different platforms, with different user interfaces, different input methods, different paradigms, different approaches. I think that working to make both platforms shine by doubling down on their respective merits and strengths ultimately makes for a richer scenario. You just can’t keep adding features to iOS indiscriminately in order to turn it into a desktop-level operating system. And you can’t keep fiddling with Mac OS in a way that, instead of making the operating system more robust and refined, makes it increasingly buggier[1].

It’s also important never to lose sight of the limitations imposed by the very nature of each platform. Pro apps can exist on iOS, but their versatility and power are dictated by the task one wants to carry out, and by the limitations of the platform’s paradigm. Multi-touch is a fun type of user interaction, but its input capabilities are more limited than those of a keyboard and a mouse. That’s why a hypothetical Logic or Final Cut for iOS can’t realistically be as powerful or versatile as the same applications on the Mac. iPads and iPhones certainly have capable processors, but raw CPU power isn’t everything in this equation. The very nature of the Mac’s user interface, its input methods, its paradigms: all these allow for complex yet usable UI layers and elements; for small yet precise controls, designed to be handled by a mouse; for a great number of contextual menus and commands, easily discoverable with a right-click; for an interface that’s 100% visible all the time, because your fingers or hands don’t get in the way while you’re tapping, scrolling, scrubbing or swiping.

Each platform can evolve by carefully considering its winning characteristics and figuring out new applications that leverage them. Augmented Reality (AR) could be a step in an interesting direction for mobile devices, once it gets (if it gets) past its current gimmicky state. The Mac, meanwhile, can really become — or rather, get back to being — the platform for professionals, with high-end machines and suitable pro-level applications that take advantage of the tried-and-true user interface of Mac OS, and the sheer power, connections, and expandability of an iMac Pro or Mac Pro.

Maintaining the focus so that cars can be the best cars, and trucks the best trucks[2], shouldn’t be regarded as rigidity but as a renewed clarity of vision for each platform. I may be wrong, but I’m afraid that going after a ‘best of both worlds’ trajectory might just bring more compromises that negatively impact both iOS and Mac OS down the road.

Stray observations

  • When it comes to debating these topics among tech enthusiasts, I really wish we could lose the ‘iOS versus Mac OS’ approach and mentality. I really wish people would stop framing these matters as Mac OS is old and should be retired, while iOS is fresh, it’s the future, and can do everything Mac OS does and better. The truth is that keeping both platforms around — and healthy — is really the best course of action to provide a satisfactory multi-device ecosystem. As someone well versed in both Mac OS and iOS, the user experience and level of productivity I derive from the combination of the two is much better and more complete than choosing just one platform in the misguided belief that it can be a complete substitute for the other.
  • In case I wasn’t clear earlier, I’m not arguing that iOS should remain ‘dumb’, while Mac OS remains the ‘smart’ sibling. I’m just saying that simplicity should always be top priority when it comes to iOS. Because simplicity is what has always made iOS stand out as an operating system — its ability to simplify a series of tasks and activities which regular people used to find a bit awkward to perform on a traditional computer or, worse, on a netbook. Pushing iOS to act more like a traditional computer’s OS kind of defeats the purpose. A lateral counterexample: in the evolution of watchOS there has been a course correction that has taken the limits of the smartwatch’s interface and user interaction into account more closely and more thoughtfully, greatly benefitting the user experience and the platform as a result.


    1. The awful release that is Mac OS High Sierra, combined with all the issues that have been reported about the hardware quality of the MacBook/MacBook Pro line, isn’t doing the Mac platform any favours. It’s unfortunate that Mac OS is losing trust and ground this way. ↩︎
    2. I’m referring to the famous analogy made by Steve Jobs a few years back. See here, for example. ↩︎


Shortsightedness

Tech Life

Somehow I had missed this Tim Cook interview in The Guardian, but fortunately I have Kirk McElhearn in my RSS feeds, and his recent article The Tech Industry’s Tunnel Vision about Coding and Language has brought that interview to my attention.

Irritatingly, the article doesn’t present the full text of Cook’s contribution, just a series of quotes. And, like Kirk, I was a bit let down by this one in particular:

I think if you had to make a choice, it’s more important to learn coding than a foreign language. I know people who disagree with me on that. But coding is a global language; it’s the way you can converse with 7 billion people.

At first I was reminded, by contrast, of this famous Steve Jobs quote:

It is in Apple’s DNA that technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.

I wanted to emphasise what I perceive to be a stark difference between Jobs’s and Cook’s mindsets, so yesterday I tweeted those two quotes together. The excellent Zac Cichy, in turn, reminded me of Steve Jobs’s specific position on learning to program, and posted this excerpt from the seminal Steve Jobs — The Lost Interview (1995) with Robert X. Cringely. Essentially, Jobs says:

It had nothing to do with using [programs] for practical things, it had more to do with using them as a mirror of your thought process. To actually learn how to think. I think everyone in this country should learn to program a computer. Everyone should learn a computer language because it teaches you how to think. […] I think of computer science as a liberal art.

From what I understand, Jobs’s standpoint is more articulate than Cook’s. Learning to program helps shape how you think, he says, and that’s important and should be taught in school in addition to all the other arts. Learning to program should be treated as another liberal art. In Cook’s view, coding is more important than learning a foreign language. This expresses a preference — A is better than B. Jobs’s point of view is more inclusive — we should have A and B. Cook’s is, at best, shortsighted.

An objection to my tweet points out that the two quotes “have different contexts.” That Jobs “talked about Apple’s main values, whilst Cook has explained — given the huge impact of technology nowadays — what is more important for education.” The contexts aren’t actually that different. One has to keep in mind that Apple’s main values were essentially Jobs’s values. What Jobs believed directly impacted Apple’s direction and actions. With Cook, things don’t look as clear-cut to me. His Apple has lately been pushing the importance of coding a lot (see for example the free Hour of Code sessions in all Apple Stores); but his personal views on technology don’t seem to fully embrace the same direction. Wanting more people to learn to code from a young age means wanting an even more technology-driven (and technology-obsessed) society. On the other hand, The Guardian’s article, paraphrasing Cook, opens with: The head of Apple, Tim Cook, believes there should be limits to the use of technology in schools and says he does not want his nephew to use a social network. And it quotes him directly: “I don’t believe in overuse [of technology]. I’m not a person that says we’ve achieved success if you’re using it all the time,” he said. “I don’t subscribe to that at all.” To me, this sounds a bit contradictory — or at least like the position of someone who wants to have his cake and eat it too.

As for the quote that started my reflection — It’s more important to learn coding than a foreign language — I fully agree with McElhearn: 

Learning a language leads to all sorts of cognitive benefits, and kids who learn languages generally do better in other subjects as well. I don’t know if Mr. Cook speaks a foreign language, but his attitude about language is typically American.

[…]

No, coding is not a global language, you can’t talk to people with if – then statements. It’s a tool, not a means of communication. This sort of attitude is dangerous; not only because it neglects the other elements needed in tech – art and design, empathy and understanding – but it dumbs down the world and attempts to turn kids into drones. You can converse with far more people through music and art than you will ever be able to by learning code. And it’s a shame that Mr. Cook ignores that.

Oh, and, by the way, Mr. Cook, those developers you hire from India, China, Germany, Brazil, and other countries? They can only work for you because they learned a foreign language: English.

I’m trilingual, and my educational background is rooted in the liberal arts, so McElhearn’s standpoint resonates a lot.

What I believe to be severely lacking in schools everywhere, however, isn’t coding classes. What we need more and more, as technological progress marches onward at breakneck speed, is learning how to use and handle technology properly; learning how to integrate it healthily into our lives. It’s learning that technology isn’t everything and that it doesn’t have to dictate every aspect of our everyday life. It’s learning that an excessive reliance on technology may make our lives easier on the surface, but also make us a bit dumber and more antisocial in the process (again, if we let things go unchecked).

We need to teach and encourage critical thinking towards technology and its global impact on how everything is shaped today. That this critical thinking is largely absent is apparent everywhere you look on the Internet and social media. We’ve come to a point where scientifically proven facts are ‘debated’ while a lot of people are ready to believe everything they read online if it gets repeated enough. We’ve come to a point where people eat evidently poisonous things just because they saw a funny video.

We have to bring back critical thinking and common sense. I think this is more important than coding.

Chrome has left the Dock

Briefly / Software

I’ve never been one of those people who only use one browser to interact with the Web. I use tabbed navigation heavily for work and leisure, and I also love to try different browsers to see what kind of features and approaches they present, what makes them unique, or in what ways each browser can be the best tool for the job. Over the years, my preferences have varied depending on a browser’s UI, its performance, its resource and energy consumption, its memory management. As browsers have evolved, they’ve got better at managing their impact on the CPU and RAM, but there have been periods of relapse even among the best of them.

Since version 3, I’ve been using Safari as my primary browser, and have been quite happy with it all this time. But my habits and workflows have always demanded at least a second browser, and very often a third. For a long time, my setup was pretty much like this:

  1. Safari
  2. Google Chrome
  3. Whatever other browser I was testing/exploring.

That third seat has been occupied by several different browsers. In no particular order: Stainless, OmniWeb, Firefox, Camino, Shiira, Opera, Sunrise, Sleipnir, and recently Vivaldi and Brave — both very interesting projects in my opinion. I talked about the built-in ad-blocking features of Brave back in May in this article.

In that article on the Brave browser, I wrote: 

My preferences for these secondary browsers have changed with time. […] When I decided to remove Flash from my system, the secondary browser would become Chrome because it incorporates a Flash plug-in, and I would resort to Chrome to access those websites requiring Flash to work. Then in recent years, when 99% of the sites I visit either don’t use Flash anymore, or serve HTML5 content, I’ve basically stopped using Chrome.

After a period of using Safari, Opera, and Firefox as my main browsers (I was very pleased by the recent improvements in Opera: much faster, with some ad-blocking features that don’t require extensions, etc.), and after a subsequent period using Safari, Vivaldi, and Brave, I briefly returned to Chrome after a friend suggested I explore the many extensions and resources available to turn it into a ‘power tool’ (his words). Well, I don’t know if my ageing MacBook Pro is suddenly not enough to handle Chrome, or if Chrome has become more resource-hungry, or if I had perhaps installed too many extensions, but the experience was pretty much terrible. Considering that about two years ago I was often choosing Chrome over Safari for the better overall performance, I was rather disappointed.

Then, around mid-November, Mozilla introduced Firefox Quantum. Intrigued by that blog post, I immediately updated Firefox, and put Mozilla’s claims regarding speed to the test. I was stunned. The jump in performance compared with the ‘old’ Firefox was very noticeable. Not only that: the new look and UI had got better, too. Suddenly, Firefox had become a browser I enjoyed using. If you haven’t tried it yet, download it, and see for yourself. Great job, Mozilla.

While I still use Opera and Vivaldi every now and then, for the past two months my browser setup has been steadily Safari, Firefox, and Brave. And Chrome has been uninstalled for good, as it’s no longer needed, at least on my Mac. Why go all the way? Because removing another piece of Google from my ecosystem doesn’t hurt. All that’s left now is a few Gmail accounts I mainly use for newsletters and mailing lists, and Google Maps, because, quite frankly, it’s irreplaceable.

People and resources added to my reading list in 2017

Tech Life

In 2015 and 2016, I noticed a trend in my RSS subscriptions and management: I was discovering fewer interesting people and resources worth following on a regular basis, while at the same time I was removing people and resources I had previously added, for a series of reasons I detailed in last year’s post. 2016 was a very active year for me, photography-wise, so the majority of discoveries were photography-oriented.

2017 was a terrible year. A sort of reverse sabbatical where, instead of taking a break to focus on specific interests and projects, I found myself taking a break to… to focus on really nothing specific. Lots of things were left floating in limbo. My creative writing and projects got sidetracked due to a hazy, unspecified inertia. My work with translations and app localisation was the only exception, thankfully.

The tech world in general, and podcasts

I have grown progressively weary of how most of the tech debate is conducted today. I like to read people with (pardon the photographic reference) a wide dynamic range; people who don’t lose sight of the bigger picture; people who don’t limit themselves to being serial gadget lovers (i.e. people who lose themselves in every new shiny tree they encounter, rarely taking account of the whole forest; and I promise I’ll stop with the metaphors). Well, perhaps I didn’t look hard enough last year, but I haven’t found many.

More and more, the podcast seems to be the preferred method of delivery, and well-written tech blogs look like an endangered species. I don’t want to start another tirade about podcasts, but let me reiterate one fundamental criticism: there is simply too much supply, and too little time. I can’t spend my day listening to podcasts, as I find it practically impossible to follow a podcast episode while doing other things. Music can be enjoyed even when it’s in the background. I can’t follow what people are talking about while reading stuff on the Web or working. It’s just interference. I have to make time for your podcast. And if I’m going to give you one hour of my time for an episode, you’d better deliver on the quality and content, otherwise it’s bye-bye.

As a listener, my podcast habits have changed slightly. I used to follow a really short list of podcasts, doing my very best to listen to all their episodes regularly. In 2017 I added a few more podcasts, with the express intention of just listening here and there, loosening my overall commitment. No offence to anyone, but I simply felt it was time to broaden my horizons without also sacrificing more of my time. Here’s the current list of podcasts I’m subscribed to:

  • Covered, by and with Harry C. Marks.
  • Release Notes, with Joe Cieplinski and Charles Perry.
  • John Gruber’s The Talk Show.
  • Citizen Lit, a literary podcast hosted by Jim Warner.
  • Too Embarrassed to Ask, hosted by Kara Swisher of Recode and Lauren Goode of The Verge.
  • The Americans Podcast, by Slate Magazine / Panoply [iTunes link] — I’m a huge fan of the series, and this podcast is a great listen after each episode of The Americans. It’s the only podcast I manage to follow regularly, because the output is limited and follows the series’ airings; and because each episode is rather short.
  • The Radiolab podcast, by WNYC Radio. Someone recommended an episode on Oliver Sacks aired in late October 2017, and I’ve kept the podcast in my subscriptions since. I was trying to find the words to describe what Radiolab is about. Thankfully the show’s description does a better job than anything I could possibly come up with: Radiolab is a show about curiosity. Where sound illuminates ideas, and the boundaries blur between science, philosophy, and human experience.
  • On Margins, a podcast about making books, hosted by Craig Mod.
  • Tomorrow with Joshua Topolsky [iTunes link]. From the description: “Tomorrow with Joshua Topolsky is a podcast about what’s happening right now — and next — in the world of culture, technology and the internet, music, movies, politics, and more.” I don’t often agree with Topolsky on technology, but it’s important to listen to people with different points of view, otherwise it’s all just an echo chamber.

I have listened to the occasional episode of other podcasts, but these are the ones I’m keeping in my subscription list for now. My podcast listening apps of choice are Pocket Casts on iOS (as always), and Podcast Lounge on Windows Phone.

YouTube channels

For the first time, in 2017 I also started paying attention to what YouTube has to offer. YouTube has got rather difficult to ignore altogether, and I dive in with strict moderation, knowing what an attention sucker it can become if you don’t keep your consumption in check. I follow some tech-oriented channels whose output is not too overwhelming.

  • For product reviews: Btekt and Mr Mobile (Michael Fisher). Both of these guys, in my opinion, produce great videos (well-edited, not too long and not too short, etc.) and provide rather balanced reviews. I enjoy Fisher’s style and humour, in particular.
  • For product unboxing & reviews with the added lulz: Unbox Therapy.
  • I’ve also added the Computer History Museum channel, because it’s just rich in good-quality technology content.
  • When I have my recurring bouts of retrocomputing nostalgia, I check The 8‑Bit Guy.

I’ve added these resources to what I follow. I’m sure there are many more out there, and some may be of better quality or value. I’m not into YouTube much, so if you feel I’m missing some fantastic channels, feel free to let me know via email or Twitter.

Tech blogs and sites

If I had written this piece back in early October, this would have been a completely blank space. Really. The few discoveries I made all happened in the last 2–3 months of 2017:

I’ve always read Peter Cohen’s articles when he wrote for Macworld, The Loop, and iMore. I added his personal site to my reading list back in October 2017. He hasn’t posted anything since, but I hope it’s only temporary. 

Since he has been quoted often in recent times by other people I follow, I decided to check his blog more regularly, and I ended up adding it to my feeds. I’m talking about Matt Birchler and his blog BirchTree. What made me decide to add him to my reading list was his recent 8‑part exploration of Android from the perspective of an iOS user. I appreciate when tech geeks take the time to explore other platforms, because that usually gives them a better perspective on technologies and products. I’ve done that too, and I’ve learnt a lot.

Speaking of exploring other platforms, after webOS and Android, I felt it was time to take a deeper, less prejudiced look at Windows Phone/Windows Mobile. After a surprisingly positive experience with a Nokia Lumia 925 and Windows Phone 8.1 that started in November, I’m currently examining Windows 10 Mobile on a Nokia Lumia 830. Two great resources that have helped me a lot over the past couple of months have been:

  • All About Windows Phone, with good reviews and tips; I’ve found a lot of nice Windows Phone apps by perusing the site.
  • Windows Central: it has a broader scope, and its focus is Windows in general, but it’s a good place to keep an eye on to stay informed on what happens in Windows land. 

If you’ve known me for long, you’ll surely find my renewed interest in Microsoft and Windows a bit strange. Many other long-time Mac users keep looking at Microsoft and Windows through 1990s-era glasses, and well, I think it’s time to be more open-minded. I’m not switching to Windows, mind you, and I’m not necessarily saying that it has become better than Mac OS. But the hardware is interesting, and Windows has certainly got better than when I was using it more regularly years and years ago. 

If I could afford it, I’d probably get a Surface laptop or tablet as a secondary device, to at least have a first-hand experience so that I can better understand the kind of efforts Microsoft has made to improve their hardware and software. I was able to do that with Windows-powered smartphones, and I was unexpectedly, positively surprised. Over the years I’ve come to realise that you can’t be interested in technology without ever examining what’s outside your preferred platform and ecosystem.

That’s it for tech blogs/sites. Again, if you believe I’m missing out on someone particularly smart, insightful, and worth reading, let me know!

People who don’t post as often as they used to, or have stopped altogether — and that’s a real pity

There are a few people whose contributions I used to enjoy, but it seems they’re now either writing very infrequently, or have taken a hiatus. I don’t know why. Perhaps they now have other priorities or are busy elsewhere. I still keep their sites in my RSS reader in the hope they can return someday. I felt like mentioning them here not because I want to single them out and line them up against some wall of shame, but because I believe they’re people worth reading and following. I’m mentioning them as a way to tell them, Hey, I miss your writing. It’s really a pity you’ve stopped posting regularly. I hope to read more from you in the future.

  • Michael Anderson used to have a blog called Building Twenty; the domain appears to have expired now, but this is one of the last snapshots available through the Internet Archive’s Wayback Machine.
  • Hey Cupertino, by Patrick Dean. Its last entry is from almost exactly one year ago. As I wrote last year: “I really like Patrick’s review style: each review is detailed, well written, and accompanied by meaningful screenshots. One immediately notices how Patrick decides to review an app only after having extensively used it on his device. This means his reviews are generally less superficial, and his recommendations are always worth checking.”
  • The Pickle Theory, by Shibel Mansour. It’s a pity Shibel hasn’t the time to write more on his site. He’s someone whose point of view I’d like to hear more often.
  • MbS-P‑B, by Mike Bates. I enjoyed his reviews and I enjoy his photography. His blog has been silent since late 2016. I hope the hiatus isn’t definitive.
  • No Octothorpe, by G. Keenan Schneider. He’s not on hiatus, but he’s one of the few tech writers with a genuinely creative approach to tech writing, and he posts too damn infrequently.
  • Mac Kung Fu, by Keir Thomas. Last year I wrote: “It’s mostly tips and tricks for Mac OS, iOS, Apple TV, Apple Watch, etc. Keir is a competent power user and writer. There’s always something to discover, even if you’re an experienced Mac or iOS user, and Keir often manages to surprise you.” Keir doesn’t update the site as often as he used to, and it’s a pity. Still, I recommend you add it to your list of resources; I’m sure you’ll still be able to find plenty of useful tips in the archives.

My RSS management

Not much has changed from last year. To recap: on my main system, Reeder is still my favourite app. On my PowerPC Macs I use older versions of NetNewsWire (version 3.2.15 under Mac OS X Leopard, and 3.1.7 under Mac OS X Tiger). On iOS, for me, there's no better RSS reader than Unread. On older iOS devices that can't be updated past iOS 5 or iOS 6, I use Reeder instead (the last compatible version on those systems). It offers a great reading experience. A special mention goes to Feed Hawk by John Brayton, a very useful iOS tool to quickly add a website's RSS feed to your reader of choice. My nano-review of Feed Hawk is here.

Since now I also use Windows Phone 8.1 / Windows 10 Mobile as my secondary mobile platform, I’ve searched for a good RSS reader on Windows as well. I’m currently enjoying FeedLab, but I’ve also been recommended the more feature-rich Nextgen Reader (for both mobile devices and PCs). 

And I think that’s all. This article may be updated in the following days, in case I realise I forgot something. 

Past articles

In reverse chronological order:

I hope you find this series useful. (Keep in mind that some links in these past articles may now be broken). Again, feel free to send tips and suggestions for more resources, either via email or Twitter. Thanks for reading!

The iPhone X still underwhelms me

Tech Life

There is something that just keeps not clicking about the iPhone X for me.

I could easily insert a joke here: “It’s the Home button”. But I wouldn’t be joking, actually. And it’s not (just) the Home button, or lack thereof.

I have read many reviews and impressions, and a lot of people seem to agree on one particular aspect of the iPhone X — it’s the first Apple product in ages that really excites them. It seems to exude that intangible Apple essence that just makes you love it. Something that Apple hadn’t seemed to manage to pull off since Steve Jobs’s passing. I’m a long-time Apple user, and I know exactly what they mean. I felt it with the introduction of various Apple products over the years: the first Mac, the first LaserWriter, the PowerBook 100, the PowerBook Duo system, the Newton, the iMac G3 and G4, the first iPod, the first iPhone, the iPhone 4, the colourful iBooks, the G4 Cube, the 12-inch PowerBook G4, the MacBook Air… You get the idea.

If you go through that list of Apple hardware, you’ll see that none of those computers or devices was perfect. Some of those were underpowered. Others, like the Cube or the MacBook Air, were limited by the very design choices that made them iconic. Yet, soon after being unveiled, I simply wanted to acquire them. Even with their flaws and limits, there was that je ne sais quoi that made them special. I realise this is how many iPhone X owners feel about their device. I can sympathise, of course, but I cannot feel it.

What I keep feeling when I handle an iPhone X is disappointment. I'll try to articulate this disappointment through a series of observations rather than a general analysis.

Reducing the bezel and going ‘all screen’

The only way I understand this recent trend of extreme bezel reduction in smartphones is that manufacturers are running out of design ideas to make their products stand out and entice people to purchase them. You could argue that a phone that is ‘all screen’ is not just form, but offers functional advantages as well. Like, more screen real estate without having to also increase the physical size of the device. But a bezel isn’t just wasted space. It helps you handle the device better, it gives more stability when you hold the device and interact with the user interface. When I’m reading something on my iPhone 5, my thumb rests comfortably on the lower bezel; that is, on the space between the Home button and the bottom right corner of the phone. 

In my extended handling of an iPhone X, since that space is missing and it’s all screen there, while reading long-form articles on the Web, my thumb either stayed raised and out of the screen’s way, or rested on the right side of the phone. Sure, I could hold the iPhone X nonetheless, but it didn’t feel as comfortable or secure. This was mitigated when holding an iPhone X with a leather case. The case added grip and, ironically, increased the size of the iPhone’s side bezels.

Gesticulating in the absence of a Home button

I hold the rather unpopular opinion that removing the Home button has been a mistake and a step back on Apple's part. In theory, I get this kind of design iteration move. I get the general plan (to reach a point where the interaction with a touchscreen is all touch), but not the execution — it feels poor, hastily thought out and hastily carried out.

In his recent piece on the iPhone X, John Gruber writes: 

In short, with the iPhone X Apple took a platform with two primary means of interacting with the apps — a touchscreen and a home button — removed one of them, and created a better, more integrated, more organic experience.

But let’s just step back a couple of paragraphs in his article, where he summarises the functions of the Home button versus the gestures and actions that have been put in place on the iPhone X after removing the Home button:

Over time, the home button’s responsibilities grew to encompass these essential roles:

  1. Single-click with display off: wakes the device.
  2. Single-click with display on: takes you to home screen.
  3. Double-click: takes you to multitasking switcher.
  4. Triple-click: configurable accessibility shortcut.
  5. Rest finger: authenticate with Touch ID.
  6. Double-tap (without clicking): invoke Reachability.
  7. Press-and-hold: invoke Siri.

I took the liberty of converting his bulleted list into a numbered list, for the sake of discussion. We can agree that, of all these seven main roles of the Home button, №4 and 6 aren't as universally or frequently used as the other five. Not everybody needs an accessibility shortcut (I wonder how many of you knew about this triple-clicking, by the way; I confess I didn't remember that shortcut), and not everybody has a Plus-sized iPhone.

Here are the gestures replacing the above-mentioned functions on the iPhone X:

In iOS 11 X, almost every role of the home button has been subsumed by the display, with the remainder reassigned to the side button:

  1. Wake the device: tap the display.
  2. Go to the home screen: short swipe up from the bottom of display.
  3. Go to the multitasking switcher: longer swipe up from the bottom.
  4. Even better way to multitask: just swipe sideways on the home indicator.
  5. Accessibility shortcut: triple-click the side button.
  6. Authenticate: just look at the display.
  7. Reachability: swipe down on the bottom edge of display.
  8. Siri: press-and-hold side button.

It’s a (literal) mixed bag. First of all, not all these gestures are touch-only. Two of them still involve a physical button (Apple Pay also involves the use of the side button, if I remember correctly). Face ID makes for truly useful hands-free authentication, but there are still instances where Touch ID can be a faster option. The rest of the gestures — especially returning to the Home screen and handling multitasking — I still find confusing and harder to execute with consistent results. The remapping of the gestures to invoke Notification Centre (swipe down from the top left) and Control Centre (swipe down from the top right) doesn’t help, either. Oh, and before we forget: force-quitting apps from the multitasking view is easier and faster on an iPhone with a Home button: double-click the Home button, then just swipe up the ‘card’ representing the app, just as it was on webOS. In what Gruber calls ‘iOS 11 X’, the procedure involves tapping and holding, then removing the ‘card’. Perhaps this was done on purpose, to discourage people from being trigger-happy when it comes to force-quitting apps. The gesture feels clunkier nonetheless.

(Oh, and as for №1, tapping the display to wake the phone: nothing new under the sun. I don’t know Android devices enough, but on Windows Phone devices the feature has been present for years.)

My impression here differs significantly from Gruber’s: this, to me, doesn’t feel like a “better, more integrated, more organic experience”. It feels like a renovation project that started with an idea — Let’s get rid of this wall [the Home button] — but didn’t fully take into account a series of consequences that quickly created a sort of snowball effect (if you remove that element, these other two need to be moved, another has to be replaced, and so on). I agree that some of the resulting gestures may make a lot of sense on paper, or even after a certain period of acclimatisation with the device; but the whole picture, from a usability standpoint, feels arbitrary and forced by a self-imposed design constraint.

There was nothing wrong with how the Home button worked. There is nothing strange or weird in the fact that a Multi-touch device also relies on a button outside the display to interact with the interface. In fact, it is done to get out of the UI’s way. The removal of the Home button isn’t a bold move like removing the floppy drive from the first iMac, or getting rid of a particular connection (SCSI) to make room for a better one (USB, FireWire). It’s more like the removal of the 3.5mm headphone jack. That it was done ‘for the better’ is debatable, and not as clear-cut as removing a specific technology to push another that is provably, unquestionably better. As with the removal of the headphone jack, Apple seems more interested in removing what they perceive as obstacles on their own design path, rather than solving particular problems users may face, or taking steps to demonstrably improve the user experience.

The progressive fragmentation of iOS

John Gruber:

Apple hasn’t called attention to this, but effectively there are two versions of iOS 11 — I’ll call them “iOS 11 X”, which runs only on iPhone X, and “iOS 11 Classic”, which runs on everything else.

I like this nomenclature, but it’s slightly more complicated than that:

  • There’s ‘iOS 11 X’, which runs on the iPhone X.
  • There’s ‘iOS 11 Classic’, which runs on the other iPhones, from the 5s to the 8 Plus.
  • And there’s ‘iOS 11 for iPad’, which runs on all supported iPads.

I make this distinction because there are a series of gestures that are unique to the iPad, that tie to specific iPad features not present on current iPhones.

I keep quoting Gruber’s piece because it is insightful. Like me, like others, Gruber too perceives this ‘fragmentation’, but if I understood his point of view, he thinks it’s essentially a temporary bump, a necessary transitional step in the constant, iterative evolution of iOS:

What we’re left with, though, is truly a unique situation. Apple is attempting to move away from iOS’s historical interface one device at a time. Just the iPhone X this year. Maybe a few iPhone models next year. iPad Pros soon, too? But next thing you know, all new iOS devices will be using this, and within a few years after that, most iPhones in active use will be using it — without ever once having a single dramatic (or if you prefer, traumatic) platform-wide change.

This is true, but my impression is that the picture Gruber is painting here is a bit too optimistic. Sure, if you upgrade to an iPhone X or to another future device running ‘iOS 11 X’, you’ll have to retrain and adapt to the new gestures and whatever this flavour of iOS brings and will bring with it. That won’t be too ‘traumatic’ because you will leave your old iPhone behind, along with its Home button and ‘iOS 11 Classic’ paradigms. But if you also have an iPad, with its Home button and ‘iOS 11 for iPad’ paradigms, the differences between the two flavours of iOS will remain apparent every time you go from one device to the other.

I’m sure the next step for Apple is to introduce Face ID in other iOS devices. Probably in the very next iPad. But the upgrade cycle for iPads is notoriously slow — if I’m still finding an old third-generation iPad useful today, imagine those people who just purchased a 10.5‑inch iPad Pro. It’s quite probable that there will be people using ‘iOS 11 X’ and ‘iOS 11 for iPad’ devices for a long while. 

The current fragmentation of the iOS platform, in my opinion, can’t be resolved in software. As Gruber himself pointed out: 

And some aspects of the iPhone X experience wouldn’t work on older devices. You could in theory swipe up from the bottom to go home on a non‑X iPhone, but you couldn’t swipe-up-from-the-bottom to unlock the lock screen, because that requires Face ID. Conversely, there is no room in the iPhone X experience for Touch ID. There is no “rest your finger here” in the experience. It wouldn’t matter if the fingerprint scanner were at the bottom of the display or on the back of the device — it would be incongruous.

Only by progressively introducing new hardware that works like the iPhone X can the process of reunification, of ‘defragmentation’ of the platform begin. But at the software level, things are bound to remain different, at least between iPads and iPhones.

For example, Apple may introduce a new iPad (Pro) X, with a thin bezel and without a Home button, accentuating the ‘all screen’ feel, looking like a big iPhone X. I don’t even want to start thinking of the handling issues of a bezel-less tablet; let’s just focus on the UI gestures. One quickly realises that not all iPhone X gestures can be scaled and ported ‘as is’ to this theoretical iPad X. Certainly Notification Centre and Control Centre won’t retain the same specific gestures we now find on the iPhone X (swiping down from the top left or top right on a device as big as an iPad, which is also often used in landscape orientation, makes no sense). But it will be interesting to see how the gestures tightly related to navigating Home and to multitasking will be implemented, considering the current gestures revolving around the Dock.

The transition will be over when all Home-button, Touch ID-based iOS devices are phased out, and this may not take long at Apple’s end; but again, when you look at how frequently most non-geek users upgrade their iPhones and iPads, this transition will effectively take years. 

Mind you, I’m not finding all of this problematic, strictly speaking. I realise a lot of issues here concern me mostly at a theoretical UI/UX level. But I’m finding a lot of this, well, largely unnecessary.

Back to the iPhone X

And so I return to the iPhone X. I understand how Face ID may be the next step, the ‘future’ of authentication. I understand how innovative it is compared to Touch ID. But what kind of problem does removing the Home button really solve? For now, what I see is just that its removal has triggered a series of new user interface and user interaction problems, popping up in a sort of ‘falling dominoes’ effect. The new gestures that had to be designed as a replacement feel like a hastily executed workaround. 

Gruber writes: The iPhone X, however, creates a schism, akin to a reboot of the franchise. And later, after asking, Why not bring more of what’s different on iPhone X to the other iPhones running iOS 11?, he concludes I think they didn’t because they wanted a clean break, a clear division between the old and the new, the familiar and the novel.

It’s my understanding as well. But I guess my fundamental question is, Why create this schism in the first place? I don’t believe it was necessary. This could have been a smoother transition if Apple hadn’t removed the Home button and had instead introduced a transitional iPhone with Face ID and the Home button. But it all stems from the need to introduce something ‘fresh’, from the ‘fear of missing out’ — since the competition’s new thing is reducing smartphone bezels as much as possible, let’s jump on the bezel-less bandwagon too. And if that implies making design compromises and remapping the UI in awkward ways, well, we’ll deal with it later.

I have resisted bringing up Jobs’s Apple until now, but my impression is that, under Jobs, Apple was more daring in its general attitude and less prone to peer pressure. There was more action and less reaction. Apple (Jobs) seemed more focused on doing its thing and the ‘fear of missing out’ was not really a concern. How others designed and manufactured their computers and devices was not really a concern. Apple’s concerns were to develop its own designs, aimed at providing customers with the best products possible. That encompassed the design of the hardware and the software. Apple, following Jobs’s philosophy, was clearly selective when it came to saying ‘yes’ or ‘no’ to a design solution, to an idea for a product or feature. That’s what Think Different was all about.

On the surface, today’s Apple hasn’t really changed. The principles are essentially the same, but I also notice a sort of ‘me too’ attitude which, if I were talking about a person, I would ascribe to insecurity and even performance anxiety. I’ve noticed a shift where, instead of focusing on a selected range of products and markets, Apple seems more interested in ‘being everywhere’ first, and ‘let’s figure out how’ later. Maybe I’m wrong, maybe it’s just how you stay on top today in this frenetic technological landscape, but again, this ‘fear of missing out’, pushing Apple to develop products like the HomePod, to invest lots of resources in car-related projects, to dabble in the production of original television content, etc. — this ‘fear of missing out’ is forcing Apple to create and maintain several software platforms, to neglect products for years (even relatively successful ones like the Mac mini), to spread their resources thin. 

“Say hello to the future” — I will when I see it

With the iPod, there was never a defining slogan, but when talking about it Steve Jobs was certain it would revolutionise the way people listen to music, and that’s exactly what happened. When I now read that the iPhone X is ‘the future of the smartphone’ or that ‘the future is here’, it just rings hollow. Why is it the future of the smartphone? The only feature that feels mildly futuristic is Face ID. As for the rest, what about it? It has very good specifications, very good cameras, a very good display… But I don’t understand what the big deal is, essentially. iPhone X users will probably say that the device is more than just the sum of its parts; that it’s the overall experience that ultimately makes the difference. But I still don’t see what makes the experience on this device truly stand out compared with, say, an iPhone 8. Face ID and ARKit allow for cool effects and implementations, I definitely agree. But the processor in the iPhone 8 and 8 Plus is the same, so you can enjoy ARKit-based applications on those iPhones, too. The iPhone X’s cameras are a bit better, but not fantastically better.

It must be how it looks, then. The immersive AMOLED screen; the shifting between active apps like with a deck of cards; the seamless, (almost) friction-less Face ID authentication; Animoji; the industrial design (which I wouldn’t call radical[ly] new, as Rene Ritchie does), and how you feel the iPhone X in the hand.

Is that it? Have I missed something? I don’t think it’s enough future to have in my pocket for 1,200–1,300 euros.

To me, the future of the smartphone is a device with unique applications enabling me to do things I’d never thought I’d be doing with a smartphone — or any other device, for that matter. I can’t give examples; it’s the classic case of I’ll know it when I see it, but I can recall having felt this way with my first iPhone, the 3G, back in 2008. The idea of using just one pocketable device to browse the Web decently while out and about; writing emails with ease on the fly; being able to find my way in places I’d never been before thanks to Google Maps; having an abundance of information, in real time, any place I was with cellular reception — that whole experience felt really revolutionary to me. Ten years ago I felt I was ‘living in the future’. Now I feel I’m living in pretty much the same future, only with retina displays, better cameras, and faster processors.

After handling the iPhone X for a while, I returned to my old iPhone 5 with iOS 10, and oddly I didn’t feel I was missing out that much, apart from the superficial differences. I didn’t feel the thrill of having the future of the smartphone in my hands slip away. The overall experience was like having been handed a cool, aimed-to-please, but overpriced product with a slightly awkward UI and an unapologetically compromised design.

I’m fully aware that my opinion reflects the fact that I don’t own an iPhone X, that I only had limited exposure to it compared to those who own it, use it every day, and enjoy the hell out of it, joie de vivre and everything. But that’s kind of the point — in the examples I gave at the beginning, all the Apple products I listed truly wowed me from afar. I didn’t have to ‘spend some time’ with or ‘get accustomed to’ a MacBook Air, an iMac G4, a Newton MessagePad, or an iPhone 4 to know I wanted one right away. That kind of magic, of thrill, has yet to return for me.