This post is at least two weeks late. What happened right after I published my two articles on Snow Leopard (link to the first, link to the Addendum) was 95% amazing and 5% frustrating.
Amazing because the response was rather unexpected and overwhelmingly positive. Frustrating because I simultaneously started receiving a lot of feedback via email, while my day job kept me busier than usual. The result is that I accumulated a serious email backlog almost immediately, and my response times kept getting longer.
To give you an idea, the second article was published on 22 February, and between 23 February and 16 March, I received 154 emails. A good 30% of these were short messages of appreciation, to which a “Thank you for reading” reply was sufficient. Then there was the usual 10% of messages from people who clearly missed the point of my articles, but at least didn’t write me anything offensive.
The remaining 60% of the feedback comprises messages from people who asked me to elaborate on certain aspects of Snow Leopard’s UI; messages from people who raised further questions about Apple’s interface design in general; and messages from people who were curious about my background and wanted to know where I was coming from when discussing user interface topics.
In this article I want to share some of the most interesting bits that have come out of this sudden but satisfying onslaught of feedback. I won’t use full names for privacy reasons, just initials. Think of this as a sort of interview, if you like.
After agreeing with me that Snow Leopard’s UI is more usable than Big Sur’s, C.H. writes:
There’s also one thing that drives me nuts when nerds discuss software and OS design. They would dismiss older releases saying that they look ‘dated’. Today anything that doesn’t look flat, bland and minimal is ‘dated’. I don’t think that it’s the right way of talking about GUI design. Emphasis should be put on function more than looks.
I tend to agree. How a graphical user interface looks is important, but it’s when visuals and workings are strongly tied together that good design happens. The look of a piece of software or operating system reflects the underlying ideas and interface decisions on how to present information to the user and how to allow the user to manipulate it. How a user interface looks shouldn’t be treated as a separate, standalone aspect of its design. The way some nerds talk about UIs that look ‘dated’ makes me think that they, as users, tend to separate the visuals and workings of an interface.
The only way I can reasonably call an interface ‘dated’ is when considering hardware limitations. Windows 3.1, to me, looks dated like Pac-Man looks dated to a modern gamer. But Pac-Man is as good and as addictive as some current games; the way it looks is strongly influenced by the limitations of the hardware it was meant to run on and the technology of its time. Same for Windows 3.1. In both these cases, how something looks doesn’t necessarily make it ‘good’ or ‘bad’. It’s how it works.
Another thing that makes the ‘dated’ objection rather meaningless is the fact that there are old user interfaces which fit the description of what some would call a ‘modern look’. Macintosh System 6, the operating system Macs used between 1988 and 1991, has a flat and minimal UI. In my opinion, using ‘dated’ as a criterion to judge whether a user interface is good or bad is misguided.
S.L. asks:
Do you think it’s possible to replicate the overall Big Sur æsthetic going straight from the Snow Leopard interface? That is, can the Big Sur æsthetic be done right?
This is a particularly juicy question.
On a purely theoretical level, I would say it is possible to ‘fix’ Big Sur’s interface by integrating what was ‘right’ in Snow Leopard. Again, on paper, the recipe would be rather simple: go take a deep look at Snow Leopard’s interface, study the Human Interface Guidelines it was based upon, examine the way Snow Leopard’s UI consistency worked, and replicate that in Big Sur while giving it a ‘fresher look’ (if that’s so important). This is more or less how iterating on the UI worked in the past, from Mac OS X 10.0 to 10.6 at the very least.
On a practical level, my instincts tell me that that ship has sailed by now. We are at a point where such a course correction at the UI level would require so much time and effort that Apple would have to noticeably slow down Mac OS development. Steve Jobs’s Apple would have done it, because Jobs didn’t really care about shareholders the way the current leadership does. And in fact, in a way, that’s what Snow Leopard was all about back then — the first Mac OS version to focus on fixing what hadn’t worked in 10.5 Leopard rather than touting new features.
But Tim Cook’s Apple? Forget about it. For that to happen, people would have to constantly complain about Big Sur’s UI the way they did with the infamous butterfly mechanism in the MacBook’s keyboard. For that to happen, people would have to stop buying Macs because of Big Sur alone. And that’s very unlikely, considering how many people are just fine with Big Sur’s UI.
Apple’s stupid insistence on heavily borrowing iOS and iPadOS’s visual language for the Mac’s interface is starting to look like an irreversible trend. As someone who’s been a Mac user for more than 30 years, I find this user interface degradation painful to witness. I fear that, inside Apple, either there’s no one left of the design team’s old guard, or the new guard is simply ignoring their input.
It’s just speculation on my part, of course, but what I see when I use Big Sur is the work of people who only seem to know about iOS’s interface and paradigms. Designers who haven’t really studied how the old Human Interface Guidelines worked, or don’t care, or who have studied them but think they’re doing a better job (…and they’re not).
What I’ll never tire of pointing out is that the mere fact of altering Mac OS’s interface to make it more similar to iOS and iPadOS’s works against its very usability. If the idea behind this insistence on homogenising these interfaces is to bring new users to the Mac — that is, people who only know and use Apple’s mobile devices — and welcome them with a familiar interface, then Apple is not really doing them a favour.
A Mac OS release (Big Sur) whose interface superficially resembles iOS’s and sometimes behaves in a similar way is less user-friendly than it seems. Because when behaviours do differ — a traditional computer, with an interface that revolves around the desktop metaphor and mouse+keyboard as input devices, is different from a phone or tablet with a Multi-touch interface — you actually add back some of the cognitive load you originally wanted to remove by making the two UIs (of Mac OS and iOS) more uniform. If it looks like a duck and walks like a duck, but then it barks, things may get a bit confusing.
With this premise, it’s easy to think that making Mac OS also behave more like iOS is the necessary next step. This is likely what Apple has in mind for the future of the (Apple Silicon) Macs. But if you think about it, a design method that starts from the visuals and then lets the visuals shape the workings of a system works backwards with respect to what’s typically considered good design. The interfaces of the Mac, the iPhone, and the iPad should each be focused on being the best for their specific device.
A.M. writes:
I appreciate the criticism in your Snow Leopard vs Big Sur articles. I also realize it’s somewhat easier to point at what’s wrong than make a list of what is right or what should be done right with a UI. In your opinion, what are the fundamentals of a good UI?
Apple itself wrote a good answer to this question back in 1982, with the publication of the Apple //e Design Guidelines, a short manual in two parts, the first written by Joe Meyers, the second by Bruce Tognazzini. In Part 2, Tognazzini outlines the goals for good human interfaces: simplicity, consistency, efficiency, self-teaching, speediness, minimum strain on the user’s memory, and honesty. Here are some excerpts:
Simplicity
User interaction should be simple and easy to remember. Spend the necessary time to design a user interface that presents the best trade-off between alternate design issues. Once the user has become basically familiar with the human interface, if she guesses at an unknown response, she should be correct 95% of the time.
Consistency
All programs written for a given computer should have as great a commonality as is practical. […] All programs produced by a given software house should perform the same function in the same way. The same key sequence must not do the opposite thing in different products (E=edit, E=eradicate). […]
All software should be self-consistent: menu formats should be identical. […] If the LEFT-ARROW key deletes characters in one part of the program, it should delete characters in all parts of the program. If you are working on a large project, be sure to spend enough time in team meetings being sure that everyone is on the same track — all too often the three or four sections of a program end up with an entirely different ‘feel’.
Efficiency
The user should be able to perform the desired task in as little (perceived) time as possible, with the minimum (perceived) complexity. […]
Self-teaching
Often there is a trade-off between ease of learning and ease of use. Carefully balance your decisions: if the program is too difficult to learn, salespeople will not learn it and, thus, not sell it. If endless instructions and voluminous menus make it slow and cumbersome to use, people will get frustrated and tell their friends not to buy it. […]
Both syntactic and content help should be available at the point at which it is needed; designers are successfully doing that without encumbering the experienced user. See: Help and Menu. Many designers have successfully created a multi-tiered interface. See: Novice/expert modes.
Speediness
Actual speed of operations is important, but perceived speed is even more important. It may seem important to conserve keystrokes, but it is more important to conserve “brain strokes” and design the interface so that there is a natural flow. A more important goal is to reduce the amount of unproductive time, which is time spent deciding how to perform the desired task rather than time spent performing the task. This concern should permeate the entire design process.
React to user’s input immediately. A user will interpret any delay of more than a few tenths of a second after he has pressed RETURN to mean that either the program or the user has made an error.
Minimum strain on the User’s memory
Programs that are not used literally every single day will be forgotten. Users will not remember command words, the names of their files, nor the fact that you are accepting data not with RETURN, but with CTRL‑V. […]
Computers are notoriously good at remembering the above type of information. Share it with your user: make sure the information needed is available where and when needed.
Honesty
Do not lie to your users. Do not say, “File loaded” when the file is not loaded, only the name of the file has been “loaded,” whatever that means. [In other words, don’t create deceptive user interfaces.]
These goals may have been written 40 years ago, but they’re far from ‘dated’, and I find they’re still great software design goals/guidelines today. It would be interesting to closely examine all major operating systems available today and test them against these goals. Not even Apple’s own operating systems would pass. Mac OS, for one, has increasingly become less consistent from OS X 10.7 Lion onwards (some would argue that things started going downhill after 10.4 Tiger, even).
J.W. has a provocative question:
Don’t you think that this dumbing-down of the Mac UI (and other platforms) is because regular people today are dumber when it comes to computing than we were back in the Eighties?
I like the direction this question is going, but I believe it’s more complicated than that. First, I’d say no: today, regular people tend to be more computer-literate than those of us who were first exposed to computers back in the 1970s and 1980s. Back then, approaching this new thing, the computer, was more cumbersome. I was given my first home computer when I was 10 years old, and tried to learn everything I could in the only way possible before the Web existed — via books and magazines. But at that time many regular people first encountered computers in their twenties or thirties, and in the workplace. And they had never experienced anything like it before in their lives, except maybe pocket calculators.
Today we see smartphones and tablets already in the hands of small kids. Some of them are familiar with the basic interactions of these devices at an age when I was still perfecting my cursive script, so to speak.
Nevertheless, this superior base computing literacy of today’s regular folks compared with the regular folks of the Eighties has interesting side-effects when we return to user interface design. Chris Espinosa already pointed this out back in 1997, at the end of a lecture he gave with Larry Tesler at the Computer History Museum in Mountain View, California:
…I don’t think acculturation had no effect [on human interface design]. I think what acculturation has done [is that] it’s made us soft. We don’t address complexity anymore the way [we used to do] — we sweated over complexity in 1981; we were deathly afraid of complexity. We take it for granted now.
And that is where the problem lies today. A lot of the usability and discoverability issues of current operating systems’ UIs stem from that ‘taking acculturation for granted’ Espinosa was talking about. Today, a lot of people arrive at traditional computers via mobile devices; mobile devices are the first computing experience they know. And iOS and Android are simpler (some geeks would say ‘dumber’) operating systems than Windows, Mac OS, or Linux.
So, on the one hand, you have regular people who find using traditional computers harder than the smartphone or tablet they’re familiar with, because they assume computers will behave similarly — they take them for granted — and are puzzled when that doesn’t happen. On the other hand, you have operating systems on traditional computers with worse user interface design, because the OS designers take for granted that people today are more familiar with this stuff than they were decades ago.
So the OS designers take shortcuts because they think, It’s not the 1980s anymore, we don’t have to explain these interactions starting from Adam and Eve. This leads to ambiguity in the UI; it leads to assumptions like, We can make this discoverable only on mouseover, the user will figure it out, or, It doesn’t matter if this control doesn’t look like a button. On the iPhone it’s the same thing and people will know that. Well, judging from the amount of feedback on Big Sur’s UI I received from regular people, the answer is no: many users won’t figure that out as instantly as you think.
Conclusion, for now
This has been just a small selection of the subjects I wanted to address publicly. As noted at the beginning of this piece, the amount of positive and thought-provoking feedback I’ve received since publishing my articles on Snow Leopard’s UI has been staggering. I’ve done my best to respond to every single person who wrote me, and I’ve tried to single out the very best questions I was asked. Of course, this is far from exhaustive; user interface design is the kind of subject I easily get carried away with, but I’m also aware that people’s attention is very limited and I didn’t want to abuse it with an extremely long article.
Once again, a heartfelt thank you to everyone who got in touch, and thanks for the amazing feedback. And as always, thanks for reading!