Microblogging and fragmentation

Tech Life

I get routinely annoyed at Twitter for something they do to, or don’t do for, their users. During my annoyance periods, I look around in search of ideas or alternatives. This is how I got into App.net in late 2012. This is how I ended up backing Manton Reece’s Micro.blog project on Kickstarter a few months ago.

The cool idea behind Micro.blog is that, while it looks like a Twitter clone on the surface, it actually goes deeper than that, as each account is an independent microblog. Each user is actually publishing very brief blog posts, from websites that are hosted either on the Micro.blog platform (you pay a small monthly fee to have your hosted [username].micro.blog space), or on spaces users already own (you can integrate your microblog with your current blog or website). 

This approach has some advantages. For example, you own your space in one way or another, and you’re not dependent on a centralised infrastructure or on the whims of companies like Twitter or Facebook. You can publish on your blog or microblog first, and then broadcast elsewhere. People can follow you on Micro.blog itself by joining the platform, but even if they don’t, they can still subscribe to your RSS feed.

This is an intriguing path I urge you to explore, but I must say that so far I’ve been struggling to make the most of it and optimise my experience. The main reason is fragmentation. Now, it’s not entirely the platform’s fault. My particular workflows and habits, as someone who publishes posts and follows other people who do so, make it difficult to achieve a seamless experience. But I also think that a new type of RSS reader or application could greatly help smooth things out.

Fragmentation, I was saying. Let’s have a look at my habits first. I’m currently most active on Twitter, but I’m also a member of two other social networks mainly populated by ex App.net folks: pnut.io and 10Centuries. These two other places have slower-moving timelines than Twitter, but still, after adding yet another timeline (Micro.blog’s), I tried monitoring everything by keeping different windows and client apps open on my Mac, only to find it was all a bit overwhelming. I had to do what I do on iOS: check things on each different network by keeping only one app open at a time, maybe two if I’m taking a break from work.

This is not a problem with Micro.blog itself, of course; I’m just pointing out that if you don’t want to give up other social spaces, adding yet another one can become cumbersome. Sure, you can crosspost to Twitter and Facebook from Micro.blog but, if I’ve understood correctly, you can’t handle Twitter or Facebook interactions on Micro.blog; you still have to go to Twitter or Facebook for that. In its current state, Micro.blog is an excellent, robust solution for those who are dissatisfied with Twitter, are leaving it or putting it on the back burner, and are at the same time considering the idea of a single space from which to blog, micro-blog, and be social. (Suggested reading: A Guide to Micro.blog For People Who Have A Love/Hate Relationship With Twitter by Jean MacDonald).

Fragmentation, for me, is an issue also when it comes to writing. At first I had thought about integrating Micro.blog into my main website (this one), but I quickly decided against it. Perhaps I’m in the minority here, but I actually like to keep things separated. I like to have a place (this one) where I publish long-form pieces, and another where I broadcast status updates, share photos, links, etc.; for now this other place is mainly Twitter, but even if I got tired, gave up everything and used Micro.blog exclusively, I would still maintain two separate spaces for brief ‘social material’ and for long articles.

Separation and fragmentation tend to form anyway, even if you just stick to blogging and microblogging. The Micro.blog client (or Web interface) is good, but I think it works best if you limit yourself to status updates and to the social aspect of Micro.blog. Whereas if you frequently write longer posts and long-form pieces on your main blog, you either:

  1. Keep blog and microblog separated using e.g. MarsEdit for your blog and the Micro.blog client for your tweet-sized updates and for reading other people’s updates.
  2. Keep blog and microblog integrated, and at this point it might be easier to just use MarsEdit to write everything; but to read other people’s updates and interact socially, you’d still need to switch to a different app (Micro.blog app or other client).

Earlier I mentioned that a great feature of microblogging is that you can follow someone’s activity simply by subscribing to their RSS feed. The problem is that current RSS readers have user interfaces mostly tailored to handle longer posts with titles. This is because such interfaces all follow the email client paradigm: you have a list of sources on the left, then a list of posts/articles you can browse once you’ve selected a source, and then a bigger pane on the right where you usually read the article directly in the RSS reader.

If you follow somebody who mixes short micro-posts and longer, traditional blog posts, you’ll easily end up with dozens of unread items, but of course 98% of these will be Twitter-like updates (typically untitled), while the meatier stuff will be the remaining 2%:

#alttext#

And this is for just one source. Imagine following even just 25–30 people. It would become impractical very soon, especially with people who micro-blog as frequently as they tweet. A traditional RSS reader would have an inadequate interface. On the other hand, a microblogging app would need a good overhaul to feature a full-blown set of tools to write long-form posts and not just micro updates.

I still haven’t refined this idea, but for me a more practical Mac RSS reader for following microbloggers would be an application capable of detecting micro-posts (status updates) and presenting them in a Twitter-timeline-like manner, while long-form posts and articles would be kept separate, available to browse and read in the traditional RSS reader view (similar to email clients). Or — even better — an application that looked like a Twitter client at first, but when you click on a post announcing a longer article, a pane would open on the side, where you could read the full article with ease. If you’re only interested in longer articles, a toggle could hide the ‘chaotic’ Twitter-like timeline and present everyone you follow as an RSS source, highlighting only the articles and omitting the status updates.

#alttext#

This is a roughly-sketched mockup of such an application. It’s a sort of Tweetdeck/NetNewsWire hybrid. And yes, the OS X interface looks older because I created this image on my 12-inch PowerBook G4 running Mac OS X 10.5 Leopard.
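The core of such a hybrid reader would be the classification step: deciding whether a feed item is a status update or a full article. Here’s a minimal sketch in Python of one possible heuristic — untitled and short means micro-post, everything else is an article. All the names and the 280-character threshold are my own assumptions, not part of any existing app or feed API:

```python
# Hypothetical heuristic for the hybrid reader sketched above:
# treat untitled, tweet-length feed items as status updates,
# everything else as long-form articles.

MICRO_POST_MAX_CHARS = 280  # arbitrary tweet-like cutoff


def classify_feed_item(title, body):
    """Return 'micro' for a status update, 'article' otherwise."""
    if not title and len(body) <= MICRO_POST_MAX_CHARS:
        return "micro"
    return "article"


def split_timeline(items):
    """Split feed items (dicts with 'title' and 'body') into a
    Twitter-like timeline and a traditional article list."""
    timeline = [i for i in items
                if classify_feed_item(i["title"], i["body"]) == "micro"]
    articles = [i for i in items
                if classify_feed_item(i["title"], i["body"]) == "article"]
    return timeline, articles
```

A real implementation would probably also look at feed-level hints (categories, per-item metadata), but even this crude split would be enough to drive the two views — timeline and article pane — of the mockup.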

I believe microblogging has a future, and Manton Reece is doing a great job with Micro.blog as a platform. There are, however, still different levels of friction involved that might make transitions less smooth; not only because the ideal would be for people to just embrace microblogging and leave proprietary, centralised social networks behind — and that’s not going to happen very soon — but also because microblogging as a platform needs specialised applications (readers, clients) that can handle timelines, short updates, and long-form pieces, ideally in one single place, with a homogeneous UI. And by handling I mean both reading and writing. I’m aware that the era of monolithic applications capable of managing different tasks is at its sunset, but the overall (micro)blogging experience would certainly feel less fragmented.

Apple needs polycarbonate again

Tech Life

The last polycarbonate Mac was the 13-inch unibody MacBook released in May 2010 and discontinued in July 2011. It cost $999 but was powerful enough to meet even prosumer needs. Its weaker points were perhaps the graphics card — an NVIDIA GeForce 320M sharing 256 MB with the main memory — and the lack of a FireWire port. For the rest, it was a capable machine, with a 2.4 GHz Core 2 Duo CPU and RAM that turned out to be expandable to 16 GB. The most notable weakness, though, was ironically the ‘durable polycarbonate unibody enclosure’, which was prone to ugly cracks in the most-stressed spots (such as the hinge area on the back). But that’s not the point.

Let me get to the point by first going back in time. In 1999, when Apple was back on track under Steve Jobs’s direction, all Macs were made of some kind of plastic/polycarbonate material. The separation between consumer and pro machines was indicated by the name scheme, essentially. If the Mac’s name started with an ‘i’, then it was a consumer/prosumer model (iMac, iBook). If it started with ‘Power’, then it was a professional model (Power Mac, PowerBook). This was, of course, reflected in the machine’s specs and prices. The entry-level Mac was the base iMac, with a G3/350 MHz CPU and a starting price of $999. The iBook was more expensive than you might remember: $1,599 for the base 300 MHz model. It was, however, very affordable compared with its professional counterpart: the base 333 MHz PowerBook G3 (“Lombard”) cost $2,499, and the 400 MHz model a jaw-dropping $3,499. Similar prices were found in the pro desktop line — the more high-end Power Mac G3 configurations exceeded $2,000, and the Power Mac G4 had a range of configurations that cost $2,499 to $3,499.

As time passed, and Apple researched new building materials for its computers, the separation between consumer and pro Macs was also conveyed by the materials employed and the enclosure designs. Around 2003–2004, the distinction between polycarbonate (indicating a consumer/prosumer machine) and aluminium (indicating a pro machine) had fully evolved. In this period, the entry-level Macs are the base 12-inch 800 MHz iBook G3 model (April–October 2003), priced at $999; then the base 12-inch 800 MHz iBook G4 model (October 2003 – April 2004), priced at $1,099; and the eMac (only $799 for the base G4/800 MHz model). This is probably the period where you can find the most affordable Macs in recent history that are also not ridiculously crippled when it comes to specifications.

In 2006, after the transition to Intel architecture, the situation isn’t much different: the white polycarbonate MacBook takes the slot that belonged to the iBook, and again the base model is sold at $1,099. The base 17-inch iMac isn’t much more expensive, by the way, and can be had for $1,199. But of course the most affordable Mac keeps being the Mac mini, introduced a year earlier, whose base model is only $599. Its chassis, however, is a mix of polycarbonate and aluminium, so it represents an exception to the “polycarbonate = consumer, aluminium = pro” rule.

But it doesn’t matter, because when I say that Apple needs polycarbonate again, I mean it in a sort of symbolic way — as in the era of the iBook and pre-retina MacBook, Apple should provide a true entry-level Mac made of a different material than aluminium. A Mac that isn’t too terribly crippled specs-wise, but affordable, a little more rugged and — why not? — a little more cheerful. Apple won’t lose its status as a ‘premium’ tech company for that. If anything, this new ‘low-end’ Mac would signal that the company can smile and not take itself too seriously, as sometimes happened under Steve Jobs; and it would also help sell even more Macs.

Yes, I still think that using $300–400 iPads as affordable consumer solutions isn’t always (and everywhere) the right strategy. Where I live, I keep seeing a lot of university students preferring cheap laptops over premium tablets as their portable solution. I know, Apple will never make a $400 MacBook, but a $799–899 model, in my opinion, could really be a hit in the low-end consumer and education slots. Apple’s recent trend in portable Macs has seen prices go up again, and I’ve been hearing old stereotypes come up again (e.g. Apple makes trendy products for rich people).

How to position it, though? The current laptop line would become even messier with a cheap MacBook, an expensive retina MacBook, the comparatively affordable MacBook Air, and the overpriced MacBook Pros. So here’s my slightly radical proposition.

  • Discontinue the MacBook Air.
  • Turn the current 12-inch retina MacBook into a 12-inch retina MacBook Pro.
  • Added bonus: make that 12-inch retina MacBook Pro a bit thicker, a bit more powerful, and give it an additional USB‑C port.
  • Make the MacBook the most affordable line again, even visually, with this hypothetical new ‘polycarbonate MacBook’; make it in different colours (but bright and vivid ones, like the iMac G3, the first iBooks, the iPod nanos, not the usual and boring space grey, silver and gold); make it reasonably thin (but thicker than the MacBook Pros) and give it at least a USB‑A port; cut costs by using a non-retina display, but perhaps something a bit better than the display in the current MacBook Air; and finally, give it great battery life, similar to — or even better than — the MacBook Air’s.

This hypothetical new MacBook could be a sort of spiritual successor to both the old Mid-2010 white unibody MacBook and the MacBook Air line. A convergence of materials, designs, and colours, for a product that would hark back to the 1999 era of Apple’s resurgence in the consumer market; a product that could turn out to be equally successful.

 

P.S. — Also, a product with a decent keyboard, please.

A few stray observations on voice assistants

Tech Life

To keep the title of this piece short, I have used the somewhat generic term ‘voice assistants’ instead of something like speech recognition and intelligent personal assistant applications. Now that I’ve hopefully made that clear, here are a few thoughts I’ve been having on the subject, in no particular order of importance.

  • Every time I interact with people who are excited by voice assistants and the underlying technology, they often like to include references to past science fiction series and films that ‘pioneered’ voice-based human-computer interaction. I’m more and more of the opinion that those series and films have been a bad influence on tech people, that they gave them the wrong idea about what the future of computing should be about. On several occasions, I’ve been baffled by how many of my tech enthusiast interlocutors failed to recognise that in series like Star Trek or Space: 1999, the voice-based interaction with computers is essentially a dramatic device, a quick and effective way to deliver information to the viewers. Instead of having boring close-ups of displays where you see queries typed by a Starfleet officer and the computer’s responses, it’s easier to conceive the computer as another character — a very erudite one — who can be queried on the spot and whose response is equally fast. Sometimes this is taken even further by the introduction of an android, a computer with a human shape.

    In other cases, presenting voice-based human-computer interaction is a sci-fi trope to convey a general idea of technological advancement, especially when combined with the absence (or very reduced footprint) of physical tech gadgets/devices. Voice assistants might be ‘the future’, but their current state is little past the mimicking stage of this fictional interaction. That is, we’re aping what we saw in those sci-fi shows, but we’re still at a stage where the form is nice, yet the substance is lacking. There’s little depth beyond the surface. Speech recognition is passable, but reliability is still poor, and the scope of actionable tasks is still limited. It’s like playing one of the early text adventure games, where the parser can’t interpret commands more sophisticated than GO EAST, TAKE LAMP, OPEN DOOR, etc. Sure, “we’ll get there” someday, but I can’t shake the feeling that it’s not worth the amount of energy Silicon Valley is pouring into this, and the amount of data we’re feeding to machines to improve Artificial Erudition (I’m still not ready to call it Artificial Intelligence, sorry).

  • I have a theory about the current limited usefulness of voice assistants, and their relatively slow rate of improvement. I tentatively call it ‘the Google Glass fallacy’. It has been pointed out how Google Glass turned out to be a failed attempt as a general-purpose device aimed at the general public, but a more successful one in limited, specialised applications and environments. I believe voice assistants started off on the wrong foot — as I wrote on Twitter yesterday, I think that if voice assistants had originally been designed with people with disabilities as the first and sole target audience (instead of lazy tech dudes), and then gradually been extended to everyone else, today they’d be a bit better.

    Joe Cieplinski’s 3‑tweet response to that really resonated with me, because it touches on one aspect that inspired my observation in the first place. Here’s his response (emphasis mine):

    I think you may be on to something there. Another problem that enthusiasts who think “voice will one day replace your screen” never consider is that those with hearing difficulties would be locked out entirely. I’ve always felt that voice will find its place, but never be the “only” way to interface with computers. Even the folks who wrote Star Trek knew that. It’s also worth noting that there are two different things at play here. Voice recognition, and then artificial intelligence. There’s no reason the two have to be permanently linked. We could just as easily type to Siri or Alexa. Or show it images to interpret.

    [Link to tweet 1 | Link to tweet 2 | Link to tweet 3]

    I think that in the creation and initial development of these voice assistants, not enough thought has been given to the ‘assistive’ part, because the design mainly referenced able-bodied people. Simplifying: there’s a big difference between developing a tool that makes your life-as-an-able-bodied-person easier (read: spoiled) and developing a tool that makes the life of a disabled person more tolerable. Your able-bodied person’s ‘friction’ is bullshit compared to the real friction of a person with any disability. A useful virtual assistant is one that, first and foremost, addresses a few crucial types of impairments. Design with that in mind, give precedence to solving the problems a person with impairments faces in the interaction, develop against those, test against those, then worry about perfectly healthy twenty-somethings who are too inconvenienced to manually select the music they want to play.

  • When my dad was still around, every now and then I used to tease him into discussing tech-related topics. He was an extraordinarily intellectually curious person, always willing to learn new things, and often approaching questions with great common sense. We only had the chance to talk about voice assistants once, briefly. I was explaining to him the technology and the current capabilities of Siri, Alexa, Cortana, Google Assistant and the like. So, what do you think? – I remember asking. 

    He fell silent for a bit, then he said: These things can be really useful to people who, for one reason or another, need assistance in their lives. I mean, real assistance: because they’re blind, or can’t move, or are simply too busy to use their hands. For someone like me they’re mostly useless. You know me, I’m quicker if I just use my hands.

    – I use Siri to set a timer when I cook. Sometimes for a reminder. Nothing else.

    – Simple things, he nodded. – I probably wouldn’t even use it while cooking. I’d just clean my hands, take the phone, set the timer myself.

    – Yeah, there are times when I still have to do that anyway. Siri doesn’t always understand what I say.

    He slowly shook his head, then said: – Reliability must be put first with these assistants. They ought to understand you at once, and if they don’t, they ought to allow you to correct them as quickly as possible. Otherwise they’re just like that subordinate at the office who is supposed to help you do the work, but he doesn’t understand or misunderstands what you want him to do, and you end up doing more work to fix the misunderstandings.

    – Yes, that’s Siri right there.

    We laughed, then he observed: Now, if this Siri misunderstands you, you are absolutely able to take matters into your hands, and you just do the thing. You just open the app for the weather forecast, or you set the timer yourself, or you type your internet search. You do that in no time. Now imagine those who truly need this kind of technology in their lives, they are already frustrated enough by their condition. When these assistants fail, it’s even worse. They need them to work. If tech companies want to help these people, they have to work hard at this stuff, or just drop it. Half good doesn’t work here.

    – Or, you know, at least recognise your limits and rethink the project. Make something that’s really good at one thing…

    – Yes, like something that’s really good at reading things for blind people. You develop one piece, then maybe another company develops something that’s really good at taking your dictation — but really good, something that gets you even if you stutter, or have a lisp… Then maybe one day these two companies collaborate, put the pieces together, and make something very good at more than one thing… and everybody wins.

    – Instead, everyone wants to compete, each company trying to offer a finished product that does many things out of the box, and they’re all more or less mediocre.

    I miss these chats with my dad.

  • I wrote this in Siri’s fuzziness and friction, back in October 2015. Nothing has improved on this front since, and this kind of criticism can be extended to other assistants:

    And indeed, Siri is the kind of interface where, when everything works, there’s a complete lack of friction. But when it does not work, the amount of friction involved rapidly increases: you have to repeat or rephrase the whole request (sometimes more than once), or take the device and correct the written transcription. Both actions are tedious — and defeat the purpose. It’s like having a flesh-and-bone assistant with hearing problems. Furthermore, whatever you do to correct Siri, you’re never quite sure whether your correcting action will have an impact on similar interactions in the future (it doesn’t seem to have one, from my experience). Then, there’s always what I usually consider the crux of the matter when interacting with Siri: the moment my voice request is misunderstood, it’s typically faster for me to carry out the action myself via the device’s Multi-touch interface, rather than repeat or rephrase the request and hope for the best.

    […]

    Siri’s scope is still rather limited. What is the reward for my continued use of this technology despite its immaturity? That sometime in the future it’ll be able to properly write a text message or a reminder? Time is too precious a resource for me to keep trying to have Siri understand simple requests. Not only does the friction in interacting with this particular fuzzy interface have to disappear, but the scope, applications and usefulness of Siri must expand as well — it has to offer enough flexibility and reliability to engage the user. It has to offer more, to provide an advantage over performing the same tasks manually. Otherwise, I think it’s difficult to expect users to invest time and energy in something that still feels non-essential. 

  • This other article — Siri, wake up — is five years old. Five years. It has aged rather well, which is not a compliment to Siri. In this MacRumors article from September 2017, among other things, there’s this:

    Siri is powered by deep learning and AI, technology that has much improved her speech recognition capabilities. According to Wired, Siri’s raw voice recognition capabilities are now able to correctly identify 95 percent of users’ speech, on par with rivals like Alexa and Cortana.

    Not my experience at all. I’ve tried Siri, Google Assistant, and Cortana, interacting in English (which is not my first language, though I believe my pronunciation to be fairly decent) and performing the same requests. Google Assistant and Cortana both performed better and more consistently on this front. Cortana (on Windows Phone 8.1, Windows 10 Mobile, and iOS) even understood me while I was whispering to it in the dead of night. There’s more: even Dragon Dictation under iOS 4.2.1 on my old iPhone 3G was able to correctly understand 99.8% of the text of a short email I was preparing to send.

    […] Joswiak says Apple’s aim from the beginning has been to make Siri a “get‑s**t‑done” machine. “We didn’t engineer this thing to be Trivial Pursuit!” he told Wired. Apple wants Siri to serve as an automated friend that can help people do more.

    Maybe it should have been engineered to be (also) Trivial Pursuit. At least it would be good at the Artificial Erudition part of this whole machine learning thing. At this stage, Siri is indeed an automated friend, but a quirky, unhelpful one. Apple is in a difficult position here, because they have decided to integrate this half-baked assistant in too many points of their ecosystem to just pull out now. And the pace of improvement of this automated friend is frustratingly slow.

  • Speaking of quirky, unhelpful automated friends: apparently, Alexa laughs at you, unprompted. Nick Heer comments:

    But why is this possible at all? Is there some sort of hidden maniacal laughter mode? Is that something people would ever want to trigger intentionally, let alone have the device invoke accidentally? Is this a prank? And could you trust Amazon’s virtual assistant to not do anything like this again?

    Today it’s an unprompted laugh; tomorrow it may be something else, equally unexpected, but perhaps not as innocuous. If I have to give up a big slice of privacy for these ever-listening devices (I’m not putting any of them in my home, by the way), is it too much to ask for some useful assistance in return?

  • This has just popped into my head as I was about to publish the article: how nice and useful it would be to have the ability to define ‘shortcuts’ with these assistants, so that common, repetitive tasks can be carried out with shortened queries. Stupid example: you’re often making pizza, or heating a pre-packaged meal in the oven, and you always set a timer for the same amount of time. You could ask the assistant to Define task. The assistant would respond: Name of the task?, and you’d say Pizza time; then the assistant would ask: What do you want me to do for ‘Pizza time’?, and you’d reply: Set a timer for 25 minutes. Once confirmed, every time you say Hey [Assistant], let’s do ‘Pizza time’, the assistant carries out the pre-recorded task.

    Yes, yes, the idea needs refinement (I just thought of it, bear with me), and it would also need a step further towards sophistication: a more context-aware assistant, the ability to perform a longer exchange than a simple challenge/response, the ability to store tasks in a database on the device and/or the cloud, and a better parser, so that when a particular phrase is invoked, it triggers the assistant to expect a task shortcut. But we’ll have to make them stop laughing at us, first.
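In the meantime, the ‘Pizza time’ idea itself is simple enough to sketch: a store that maps user-defined names to pre-recorded requests, so that a short phrase replays a longer command. A minimal, hypothetical sketch in Python — the class name, the example phrases, and the case-insensitive matching are all my own assumptions, not any real assistant’s API:

```python
# A minimal sketch of the task-shortcut idea: user-defined names
# map to pre-recorded requests, which the assistant would then
# hand off to its normal command parser.

class ShortcutStore:
    def __init__(self):
        self._tasks = {}

    def define(self, name, request):
        """Record a task under a spoken name (matched case-insensitively)."""
        self._tasks[name.lower()] = request

    def invoke(self, name):
        """Expand a shortcut back into its full request, or None if unknown."""
        return self._tasks.get(name.lower())


store = ShortcutStore()
store.define("Pizza time", "Set a timer for 25 minutes")
store.invoke("pizza time")  # → "Set a timer for 25 minutes"
```

The real work, of course, lies outside this sketch: recognising the Define task exchange in speech, and storing the tasks on the device and/or in the cloud, as noted above.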

How's that Windows Phone experiment going?

Tech Life

#alttext#
Left: Nokia Lumia 925 with Windows Phone 8.1. Right: Nokia Lumia 830 with Windows 10 Mobile

One of my readers remembered my lengthy piece from November — A few days with Windows Phone 8.1 and a Nokia Lumia 925 — and noticed that I’ve recently doubled down by also getting a Nokia Lumia 830, updating it to Windows 10 Mobile to try that more recent version of Windows for mobile devices. So I received a brief email the other day and, among other things, I was asked: So, how’s that Windows Phone experiment going? Thinking about embracing the dark side for good?

Pretty well, I’d say. And no, I’m not switching to Windows Phone full time and leaving iOS behind. But — after using the Lumia 925 for more than three months, and the Lumia 830 for a month and a half — my experience with the hardware and the software has been a genuinely positive surprise. I’m at a point where what started as a mere experiment, driven by curiosity about the user interface of Windows Phone, isn’t an experiment anymore. I always carry one of those two phones above with me, together with my primary iPhone. They are valid secondary devices, and in a rare instance where the iPhone ran out of battery, they handled the role of primary device with little effort.

When I first got the Lumia 925 back in early November and started finding my way around Windows Phone 8.1, looking for apps and trying out many of them, I was impressed by how the system kept up with anything I tested, and by how the phone maintained responsiveness. But you know, I thought, these are just a few days, the system has been freshly restored and all… Let’s see if things start degrading after a longer period of time. They have not.

Over three months of daily usage (usually light to moderate), the Lumia 925 with Windows Phone 8.1 has been the most stable system I’ve used that is not iOS. The phone has never shown unexpected behaviour, never frozen, never stuttered. On a couple of occasions, an app had to be uninstalled and reinstalled to restore functionality, but I ascribe that to a bug in an app which clearly lacks refinement, and which I later deleted for good. As I wrote in my original piece, I really enjoyed, and continue to enjoy, Windows Phone 8.1’s UI. It’s consistent, predictable, thoughtful, visually vibrant, fun to interact with and, especially in my initial exploratory phase, it was a breath of fresh air compared with iOS’s ‘business as usual’.

When it comes to the Nokia Lumia 830 and Windows 10 Mobile combination, my general impression is roughly the same, although I must say I enjoy Windows 10 Mobile a bit less than 8.1, and that the Lumia 830 has impressed me more than the software. Specifically, I’m surprised at how well this phone handles the more resource-hungry Windows 10. The experience has had a few more bumps so far than Windows Phone 8.1 on the other Lumia: sometimes apps have quit on launch, and one time the default camera app became unexpectedly unresponsive, but nothing more (even under iOS I experienced similar issues in the past). I know things are smoother on devices such as the Nokia Lumia 930, 1520, and 950/950XL, all phones with faster CPUs and more RAM.

I’m still looking for a 930; I made do with an 830 because I found one in good condition at a bargain price, but it’s a nicer phone than I expected. For someone who finds 4.7‑inch iPhones to be the maximum comfortable size, the 5‑inch Lumia 830 handles quite well in my hands, and I have very few problems reaching the farthest areas of the UI. The fact that it has a removable battery and expandable storage through microSD cards is certainly a bonus and helps extend the phone’s life and usefulness.

Windows 10 Mobile vs. Windows Phone 8.1

Earlier I said I enjoy Windows 10 Mobile a bit less than Windows Phone 8.1, and I wanted to elaborate a little. While the general interface of Win10M retains an indubitable degree of familiarity for those coming from WP8.1 — the customisable Start screen, Live Tiles, lock screen, status bar, the All Apps list view, etc. — there has been a refresh across the whole UI to make it look perhaps more ‘professional’, more homogeneous across various Windows devices (tablets, convertibles, PCs), more subdued, with familiar UI elements taken from other platforms (e.g. the so-called hamburger menus); and the end result, while still visually pleasing, is also blander and more unimaginative. And even a bit inconsistent in places.

Perhaps my very first impression is still the best summary of Win10M’s general feel. As I said on Twitter in January: It’s as if WP8.1 got married, had kids, and stopped being a rebel artist.

A few quick comparisons (WP8.1 on the left, Win10M on the right):

#alttext#
Calendar, week view — The Win10M version may sport a sleeker interface, but by mimicking the week view of a paper organiser, the WP8.1 version delivers more information at a glance (including weather).

 

#alttext#
Mail — The reason the backgrounds are so different is that on the Lumia 925 (WP8.1) I prefer the Light theme, while Outlook Mail in Win10M on the Lumia 830 is set to match the phone’s ‘Windows mode’. As for the UI, it’s a matter of taste. I prefer WP8.1’s bigger text and navigation by headlines: I can easily switch from All, to Unread, to Urgent by tapping those big targets, while under Win10M a similar effect is achieved by tapping the small ‘All’ drop-down menu at the top right, which triggers an equally hard-to-tap list of options.

 

#alttext#
MSN Weather — At first glance, the UIs aren’t that different, but on Win10M the information doesn’t feel as efficiently organised. On WP8.1 there is a clear hierarchy, designed to guide your eyes towards what matters most. Another thing WP8.1 got right in general (more on this later) is horizontal navigation: you usually get pages of information that you access by swiping horizontally. In MSN Weather on WP8.1 there is therefore more breathing room, because the Daily and Hourly forecasts get separate pages. On Win10M, all the information is compressed into a single page and you just scroll down to see it all. There is little differentiation among the app’s visual elements, and the overall impression is of a bunch of information thrown at you.

 

#alttext#

#alttext#
MSN News — Here the difference in how the information is displayed and accessed is more evident: while the Win10M version has the typical ‘big photo + headline’ presentation of many other news publications on the web, the WP8.1 version displays more content at a glance, feels more hierarchically organised, and requires less scrolling.

 

#alttext#
Blue Skies — This is a third-party weather app, but I chose it because it again exemplifies the difference between the two core paradigms for content presentation and navigation in WP8.1 and Win10M. In WP8.1 we have distinct screens or pages, where you don’t need to scroll because all the information is neatly laid out on each page; then you usually move to the next page or section horizontally. In Blue Skies for WP8.1, the three blue dots at the bottom clearly indicate that the information is spread across three pages: the landing page (current weather), a Today page with a summary of the day’s weather, and a 5 Day page with the forecast for the next five days. In Blue Skies for Win10M, all the information is presented on a single page, and you have to scroll, scroll, scroll to see everything. Swiping horizontally through content thoughtfully presented in a paginated view is, in my opinion, better UI design, and it’s less tiring when using the phone one-handed.

 

#alttext#
Settings — In this case, Win10M definitely has the better UI. While I love the WP8.1 aesthetics, in Win10M the different system settings are more clearly divided into subcategories, which makes them easier to discover and memorise.

Stray observations

The Windows Phone interface has nice touches and features here and there that add to an overall smooth and pleasant experience. When you adjust the volume with the phone hardware buttons, an overlay appears at the top of the screen, showing the volume level; this overlay can be expanded and you can easily see and adjust the volume levels for Ringer + Notifications, and Media + Apps. This way you can, for example, keep notifications at a volume you can hear, and mute any sound produced by media content and apps, so that if you stumble on an autoplaying video while checking a website, you can make sure you (and those around you) won’t be startled by a sudden blast of music or sound effects.

The Battery Saver settings in Win10M have a clever feature I highlighted in this tweet: you can specify exactly when the battery saver should turn on. On iOS, the equivalent Low Power Mode can either be activated when the battery drops to 20%, or manually, whenever you need it. But you have to remember to activate it if you want it to kick in before the iPhone’s battery reaches the 20% threshold. In Win10M, things are a bit more flexible: I can set any value beforehand, start the morning with the phone’s battery at 100%, and when it reaches, say, 55%, the battery saver activates.
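The flexibility described above boils down to whether the activation threshold is a fixed constant or a user-settable parameter. Here’s a minimal sketch in Python of that logic (purely illustrative; the function name and parameters are my own invention, not any real OS API):

```python
# Illustrative sketch of threshold-based battery saver activation.
# Not a real OS API; it just models the difference described above.

def battery_saver_active(level_percent: int, threshold_percent: int = 20) -> bool:
    """Return True once the battery level falls to or below the threshold."""
    return level_percent <= threshold_percent

# iOS-style Low Power Mode: the threshold is effectively fixed at 20%,
# unless you remember to toggle the mode on manually before that.
assert not battery_saver_active(55)  # still off at 55%

# Win10M-style: choose the threshold in advance (say, 55%), and the saver
# kicks in automatically as soon as the battery drains to that level.
assert battery_saver_active(55, threshold_percent=55)
```

The point isn’t the trivial comparison, of course, but where the `threshold_percent` value comes from: a constant baked into the system versus a setting the user controls.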

What we know as Notification Centre and Control Centre in iOS can be found merged into the same pane in both WP8.1 and Win10M. You swipe from the top of the screen, the Notifications pane slides into view, and at its top you can access the Quick Actions (the Windows Phone equivalent of iOS’s Control Centre). In WP8.1 these are just four (customisable). In Win10M you still see four, but they can be expanded to show all available Quick Actions. There is no Today View or other fancy things we have in iOS, but in daily usage I’ve found this more utilitarian approach to be refreshing and effective in its simplicity. It’s all there; you don’t need to memorise where to swipe from to get what.

It truly is a pity that Microsoft has decided to not develop Windows 10 further for mobile phones, and it’s sad that the ‘lack of apps’ mantra has contributed to the sinking of the whole platform. It’s true, there is less choice than on iOS or Android, but — on Win10M more than WP8.1 — there are enough decent apps to cover almost all the essential services and provide enough functionality to make a Windows Phone handset still useful today.

I’ll close by reiterating a paragraph from the conclusion of my previous piece on Windows Phone: Now that I’ve used (and own) iOS, Android, webOS, and finally Windows Phone devices, I think it’s really sad that today it’s basically just iOS vs Android. The real pity is that, UI-wise, the ‘loser’ platforms are, in many aspects, more innovative, creative, daring, and in most cases more consistent than the two giants. And after more than three months with these Windows Phone smartphones, I can also add that Windows Phone (especially 8.1) has proven to be more stable and reliable than Android, if not iOS.

Snow Leopardise to not compromise

Software

Commenting on a recent Bloomberg article by Mark Gurman, How Apple Plans to Root Out Bugs, Revamp iPhone Software, Michael Tsai references an otherwise insightful tweetstorm by Steven Sinofsky (a former President of the Windows Division at Microsoft), and shares a few critical observations:

A lot of people are pointing to Steven Sinofsky’s comments. He makes some good points about the “broader context,” but I think he’s completely wrong about Apple’s software quality:

In any absolute sense the quality of Mac/iOS + h/w are at quality levels our industry has just not seen before. […] On any absolute scale number of bugs—non-working, data losing, hanging mistakes—in iOS/Mac is far far less today than ever before.

I don’t see how that can be taken seriously. He doesn’t have access to Apple’s bug database, so how would he know? I really doubt that the number of open bugs is lower than in the past, and even if it were there’s no reason to assume that Radar is representative of the actual number of bugs. He later says that the list of bugs is “infinitely long,” so this whole line of argument seems nonsensical. In what way is today’s Mac/iOS quality better in “any absolute sense” than in, say, 2010? He doesn’t say, except that more people are using it: […]

Well, we can look at how many problems an individual user runs into. Is it higher or lower than before? This measure is independent of Apple’s scale. So is the circle of people I hear complaining. Apple’s customer base has doubled many times over, but the number of family members, friends, and customers that I communicate with has not. Now you could argue that maybe we have become exceptionally unlucky and are running into more than our share of issues, but I don’t find that very convincing.

He wants to discount the actual experiences of “many super smart/skilled people” because “the more a product is used the more hyper-sensitive people get to how it works.” What does that even mean? The number of hours in a day hasn’t increased; I don’t think my Mac/iPhone usage has increased much, if at all. Hardly anyone complains to me about the “slightest changes”; I hear about things that flat out don’t work. That’s not being hyper-sensitive.

I fully agree with Michael here. In a piece I wrote in 2015 — The perceived decline in Apple’s software quality — I argued that “this perceived decline in the quality of Apple’s software products (OS X included) is more related to the nature of the flaws/bugs/annoyances, than the sheer number of those. In other words, it’s not that Apple’s software is quantitatively more buggy today than, say, in the Mac OS 8–9 era, but the issues are (or feel) more critical, and that in turn affects the general level of satisfaction of working with the Mac.” 

At the same time, like Michael and unlike Steven, I can’t say I find today’s Apple software to be far, far less buggy or problematic than before. Again, I don’t have access to Apple’s bug database either, so my observations are necessarily empirical, based on intensive daily experience with different Macs and different OS X versions. Elaborating on my previous remark (that the perceived decline in Apple’s software quality has more to do with the nature and prominence of the bugs than with their number), I can say that the latest versions of iOS and Mac OS present a series of annoyances (visual glitches, functional issues, things that work intermittently, etc.) which, when they manifest, have enough prominence and impact to give the whole OS an aura of unpredictability and unreliability; so much so that, even when everything appears to work just fine, I often wonder what kind of issue awaits me round the corner.

And you know what’s ironic? My experience with macOS High Sierra and iOS 11 has so far been limited to borrowed devices and hardware: devices and Macs I haven’t used as extensively and intensely as my main, older hardware (an iPhone 5 on iOS 10, a MacBook Pro on Mac OS X El Capitan). And despite the limited usage, I’ve had plenty of occasions to notice buggy behaviours. So this is not hyper-sensitivity towards these issues, because my familiarity with the latest Apple software is only superficial, not developed through increased usage of a Mac or iPad.

To further corroborate my agreement with Tsai against Sinofsky’s “the more a product is used the more hyper-sensitive people get to how it works” argument, I’ll give a different example, from another angle.

I’m still using a fair number of vintage PowerPC Macs and older iOS devices on a daily basis. I’m writing this on a 17-inch PowerBook G4 from 2003/2004, running Mac OS X 10.5.8 Leopard. I also use other Macs running Tiger (10.4) and even Panther (10.3). I’ve been using these Macs and these versions of Mac OS X constantly for years — and in the case of an iBook G3 and the 12-inch PowerBook G4, since those OS versions were introduced (April 2005 for Tiger, October 2007 for Leopard). While I did encounter a few annoying bugs when Tiger and Leopard were in active development, I remember how the most egregious ones usually disappeared after a minor OS X release (I even remember resolving an issue on one of my Macs by downloading a Combo Update and reinstalling).

Whether small or a bit more serious, the bugs felt like something transient passing through an otherwise rock-solid environment. In my 10+ years of using these PowerPC Macs running Tiger and Leopard, I’ve never encountered new issues or noticed things I hadn’t noticed before, and I’ve had plenty of time to become ‘hyper-sensitive’ to how they work. Sure, the PowerPC platform is no longer in active development, and I’m speaking of machines and systems that are basically crystallised in their most mature state. But still, in all these years of use, with all the first-party and third-party software I’ve thrown at them, I should have been able to encounter bugs I’d previously missed, or to trigger unexpected behaviours.

While I’m certain there are still underlying issues left unsolved in both Tiger and Leopard, in day-to-day general use nothing prominent shows up on my radar. I turn on this PowerBook, it boots into Mac OS X 10.5.8, I open whatever apps I need for the session, and I feel I’m working in a stable, predictable environment. The only unfortunate thing I notice is that in places the hardware shows its age, or that certain newer features and services no longer support this platform; but neither this particular vintage Mac nor its version of Mac OS X is at fault. And it’s pretty amazing that I’m still being productive on a 14-year-old machine.

I use these PowerBooks, iBooks, and Power Macs, and Mail doesn’t quit unexpectedly or corrupt its message archive; the Finder doesn’t hang randomly, making the machine almost completely unresponsive; after leaving these Macs idle for a while, I don’t find their fans spinning at maximum speed because a couple of rogue processes are each using 134% of CPU resources(!); their Wi-Fi and/or Bluetooth connections don’t drop for no apparent reason at random intervals (and a couple of those Macs even use third-party Bluetooth dongles!). These and other issues I have instead experienced on more modern Intel Macs running Mac OS X 10.9 and later; and they are prominent enough to impact the user experience and make people distrust the operating system and the machine.

I’m just an outside observer, with perhaps the vantage point of having been using Apple hardware for almost 30 years. I can’t say with certainty that today both Mac OS and iOS have more bugs and issues than before. I’m also not saying that everything was 100% perfect before and now it’s all rubbish, because it’s not true. But from having extensively used (almost) each version of Mac OS and iOS, what I do notice is that behind the scenes there was a different approach to their development before a certain point in Mac OS X’s timeline, and that something changed (for the worse) after that point. I roughly place that point between Mac OS X 10.6 Snow Leopard and 10.7 Lion. With iOS things are less clear-cut, because I feel it has always had a lot of attention inside Apple, but the ousting of Scott Forstall clearly was a definite turning point, again not for the better[1].

Back to Gurman’s article, which originated the whole discussion: on the one hand, I really hope that whatever internal rethinking of software development Apple plans to carry out is geared towards recovering part of the old approach to development and quality control I mentioned above; on the other hand, I’m not holding my breath. Not for lack of trust, but because such changes take time, and a certain resilience to internal and external pressures.

 


  • 1. Another figure I sorely miss is Bertrand Serlet. ↩︎