I was perusing some past issues of ACM Interactions magazine, and I stumbled on an interview with Don Norman, a figure I’ve always admired and one of the main forces of inspiration for me to delve deeper into matters of usability, design, and human-machine interaction.
The interview, titled A conversation with Don Norman, appeared in Volume 2, Issue 2 of the magazine, published in April 1995. And of course it’s a very interesting conversation between Don Norman and John Rheinfrank, the magazine’s editor at the time. There’s really very little to add to the insights I’ve chosen to extract. While discovering them, my two main reactions were either How things have changed in 30 years (especially when Norman talks about his work and experience at Apple), or 30 years have passed, yet this is still true today. I’ll keep my observations to a minimum, because I want you to focus on Norman’s words more than mine.
1. Forces in design
Don Norman: […] John, you deserve much of the credit for making me try to understand that there are many forces that come to bear in designing. Now that I’ve been at Apple, I’ve changed my mind even more. There are no ‘dumb decisions.’ Everybody has a problem to solve. What makes for bad design is trying to solve problems in isolation, so that one particular force, like time to market or compatibility or usability, dominates. The Xerox Star is a good example of a product that was optimized based on intelligent usability principles but was a failure for lots of reasons, one of which was that it was so slow as to be barely functional.
John Rheinfrank: Then your experience at Apple is giving you a chance to play out the full spectrum of actions needed to make something both good and successful?
DN: […] At Apple Computer the merging of industrial design considerations with behavior design considerations is a very positive trend. In general, these two disciplines still tend to be somewhat separate and they talk different languages. When I was at the university, I assumed that design was essentially the behavioral analysis of tasks that people do and that was all that was required. Now that I’ve been at Apple, I’ve begun to realize how wrong that approach was. Design, even just the usability, let alone the aesthetics, requires a team of people with extremely different talents. You need somebody, for example, with good visual design abilities and skills and someone who understands behavior. You need somebody who’s a good prototyper and someone who knows how to test and observe behavior. All of these skills turn out to be very different and it’s a very rare individual who has more than one or two of them. I’ve really come to appreciate the need for this kind of interdisciplinary design team. And the design team has to work closely with the marketing and engineering teams. An important factor for all the teams is the increasing need for a new product to work across international boundaries. So the number of people that have to be involved in a design is amazing.
Observation: This was 1995, before Steve Jobs returned to Apple. But Jobs’s Apple did seem to approach design as this mixture of forces, and the results often showed the power of these synergies at play behind the scenes. Today’s Apple perhaps still works that way within the walls of Apple Park, but the results often don’t seem to reflect synergetic forces between teams or within a single design team. It feels more as if there were conflicts along the way and an executive decision prevailed. (No, not like with Jobs, because he understood design and engineering better than current Apple executives do.)
2. Design can only improve with industry restructuring
JR: You just said that there may be some things about the computer industry, or any industry, that make it difficult to do good design. You said that design could only improve with industry restructuring. Can you say more?
DN: Let’s look at the personal computer, which had gotten itself into a most amazing state, one of increasing and seemingly never-ending complexity. There’s no way of getting out. Today’s personal computer has an operating system that is more complex than any of the big mainframes of a few years ago. It is so complex that the companies making the operating systems are no longer capable of really understanding them themselves. I won’t single out any one company; I believe this is true of Hewlett-Packard, Silicon Graphics, Digital Equipment Corporation, IBM, Apple, Microsoft, name your company — these operating systems are so complex they defy convention and they defy description or understanding. The machines themselves fill your desk and occupy more and more territory in your office. The displays are ever bigger, the software is ever more complex.
In addition, business has been pulled into the software upgrade model. The way you make money in software is by getting people to buy the upgrade. You make more money in the upgrade than in the original item. Well, how do you sell somebody an upgrade? First, you have to convince them that it’s better than what they had before and better means it must do everything they had before plus more. That guarantees that it has to be more complicated, has to have more commands, have more instructions, be a bigger program, be more expensive, take up more memory — and probably be slower and less efficient.
3. Why changing is hard in the tech industry
DN: […] Now, how on earth do you move the software industry from here to there? The surety of the installed base really defeats us. For instance, Apple has 15,000,000 computers out there. We cannot bring out a product that would bring harm to those 15,000,000 customers. In addition, if we brought out a revolutionary new product, there’s the danger that people would say the old one is not being supported, so they’ll stop buying it. But they don’t trust this new one yet. “Apple might be right but meanwhile we better switch to a competitor.” This story is played out throughout the computer industry. It’s not just true of Apple. Look at Microsoft, which has an even worse problem, with a much larger installed base. It’s been a problem for many companies. I think the reason why a lot of companies don’t make the transition into new technologies is that they can’t get out of their installed base.
Mind you, the installed base insists upon the current technology. There’s a wonderful Harvard Business Review article on just this: Why don’t companies see the new technology coming? The answer is, they do. The best companies often are developing new technology. But look at the 8‑inch disk drive which has replaced the 14-inch Winchester drives. It was developed and checked with the most forward-looking customers, who said, “That will never work for us.” So the 8‑inch drive wasn’t pushed. Despite everything being done to analyze the market, in retrospect, the wrong decision was made. At the time, by the way, it was thought to be the correct decision.
It’s really hard to understand how you take a mature industry and change it. The model that seems to work is that young upstart companies do it. Change almost always seems to come from outside the circle of major players in the industry and not within. There are exceptions, of course, of which IBM is an interesting one. IBM was once the dominant force in mechanical calculating machines and young Thomas Watson, Jr., the upstart, thought that digital computers were the coming thing. Thomas Watson, Sr. thought this was an idiotic decision. But actually Junior managed to get the company to create the transformation. It’s one of the better examples of change in technological direction, and it also was successful.
About Norman’s last remarks, see Wikipedia: “Watson became president of IBM in 1952 and was named as the company’s CEO shortly before the death of his father, Watson Sr., in 1956. Up to this time IBM was dedicated to electromechanical punched card systems for its commercial products. Watson Sr. had repeatedly rejected electronic computers as overpriced and unreliable, except for one-of-a-kind projects such as the IBM SSEC. Tom Jr. took the company in a new direction, hiring electrical engineers by the hundreds and putting them to work designing mainframe computers. Many of IBM’s technical experts also did not think computer products were practical since there were only about a dozen computers in the entire world at the time.”
4. “Personal computers”
JR: So it looks as though we have another transition to manage. It’s very strange that they call these devices ‘personal computers.’
DN: Yes. First of all they’re not personal and second, we don’t use them for computing. We’re using these things to get information, to build documents, to exchange ideas with other people. The cellular phone is actually a pretty powerful computer that is used for communication and collaboration.
Observation: This brief remark by Norman about mobile phones is rather amazing, considering that it was made back in 1995, when smartphones didn’t exist yet — the functions of what we now consider a smartphone were still split between mobile phones and Personal Digital Assistants (PDAs). Also, the point that these devices (personal computers) are not really personal still sounds especially relevant today, for different reasons. See for example this recent piece by Benj Edwards: The PC is Dead: It’s Time to Make Computing Personal Again.
5. Interface design, interaction, and building a personality into a device
JR: So in what direction do you think computer-interface design should go? Many companies are making moves to simplify entry and interaction (Packard Bell’s Navigator and Microsoft’s BOB). In the short term, how does this fit your vision?
DN: The question really is, in what direction do I see our future computers moving? Microsoft has introduced BOB as a social interface, which they think is an important new direction. Let me respond to the direction and I’ll comment later on BOB. As I’ve said before, I believe our machines have just become too complex. When one machine does everything, it in some sense does nothing especially well, although its complexity increases. My Swiss Army knife is an example: It is very valuable because it does so many things, but it does none of the single things as well as a specialized knife or a screwdriver or a scissors. My Swiss Army knife also has so many tools I don’t think I ever open the correct one first. Whenever I try to get the knife, I always get the nail file and whenever I try to get the scissors, I get the awl, etc. It’s not a big deal but it’s only about six parts. Imagine a computer with hundreds or thousands of ‘parts.’ I think the correct solution is to create devices that fit the needs of people better, so that the device ‘looks like’ the task. By this I just mean that, if we become expert in the task, then the device just feels natural to us. So my goal is to minimize the need for instruction and assistance and guidance.
Microsoft had another problem. Their applications are indeed very complex and their model is based on the need to have multiple applications running to do, say, a person’s correspondence, communication, checkbook, finances. How did they deal with the complexity with which they were faced? There has been some very interesting social-science research done at Stanford University by Byron Reeves and Clifford Nass, which argues that people essentially treat anthropomorphically the objects with which they interact, that is, they treat them as things with personalities. We kick our automobile and call it names. Responding to computers in fact has a tendency to go further because computers actually enter into dialogues with people, not very sociable dialogues, but dialogues nevertheless. So from their research, Reeves and Nass did some interesting analysis (somewhat controversial, by the way, in the social-science community) about the social interactions between people and inanimate objects. That’s all very fine, and you can take that research and draw interesting conclusions from it. It’s a very big step, however, to take that research and say that, because people impart devices with personalities, you should therefore build a personality into a device. That was not supported by the research. There was no research, in fact, about how you should use these results in actual device construction.
Observation: The bit I emphasised in Norman’s response made me wonder. And it made me think that maybe this is one of the reasons why most automated ‘AI’ assistants — Alexa, Siri, etc. — remain such ineffectual forms of human-machine interaction to this day. Perhaps it’s because we fundamentally want to always be the ones in charge in this kind of relationship, and do not like devices (or even abstract entities such as ‘AI’ chatbots) to radiate perceived personality traits that weren’t imparted by us. By the way, I hope we’ll keep holding on to that feeling, because, among other things, it’s at the root of a healthy distrust towards this overhyped ‘artificial intelligence’.
In his response, Norman continues with another interesting remark (emphasis mine, again). Despite referring to a product we now know did not succeed, Microsoft BOB, I think he manages to succinctly nail the problem with digital assistants and offer a possible, radical workaround; though I seriously doubt tech companies today would want to engage in this level of rethinking, preferring to keep shoving ‘AI’ and digital assistants down our throats.
DN: It’s very difficult to decide what is the very best way of building something which has not been studied very well. I think where Microsoft went wrong was that, first of all, they had this hard problem and they tried to solve it by what I consider a patch, that is, adding an intelligent assistant to the problem. I think the proper way would have been to make the problem less complex in the first place so the assistance wouldn’t be needed. I also think they may have misread some of the research and tried to create a character with an extra cute personality.
6. Making devices that fit the task
JR: It seems as if substantial changes in design will take a long time to develop. Will we have something good enough for the ten-year-old with ‘Nintendo thumb’ before he or she grows up?
DN: I think for a while things aren’t going to look very different. The personal computer paradigm could be with us another decade. Maybe in a decade it will be over with. I’d like to hope it will be. But as long as it’s with us, there aren’t too many alternatives. We really haven’t thought of any better ways of getting stuff in or out besides pushing buttons, sound, voice, and video. Certainly we could do more with recognition of simple gestures; that’s been done for a very long time, but we don’t use gestures yet in front of our machines. I mean gestures like lifting my hand up in the air. We could, of course, have pen-based gestures as well and we could have a pen and a mouse and a joystick and touch-sensitive screens. Then there is speech input, which will be a long time in coming. Simple command recognition can be done today but to understand, that’s a long time away.
So in my opinion the real advance is going to be in making devices that fit the task. For instance, I really believe within five years most dictionaries will be electronic, within ten years even the pulp novel, the stuff you buy in the airport to read on the airplane, will have a reader. What you’ll do is go to the dispenser and instead of the 25 best-selling books, it will have 1,000 or 2,000 books for browsing. When you find a book that you like, you’ll put in your credit card and the book will download to your book reader. The reader will be roughly the size of a paperback book today and look more like a book than a computer. The screen will be just as readable as a real book. Then look at any professional, say a design professional. You couldn’t really do your design without a pencil. Look how many pencils good artists will use. They may have 50 or 70 or 100 different kinds of drawing implements. We have to have at least that kind of fine-detail variation in the input style in the world of computers. I don’t think we’ll have the power that we have today with manual instruments until we reach that level. I think the only way to get that power, though, is to have task-specific devices. That’s the direction in which I see us moving.
Observation: There was, indeed, a time when tech seemed to move in the direction envisaged by Norman, with devices designed for specific tasks. When Steve Jobs illustrated the ‘digital hub’ in the first half of the 2000s, the Mac was the central hub where we would process and work with materials coming from different, specialised devices: the digital camera, the camcorder, the MP3 player, the audio CD, the DVD, the sound-recording equipment. At the time, all these devices were the best at their designated tasks.
But then the iPhone came (and all the competing smartphones based on its model), and it turned this ‘digital hub’ inside out. Now you had a single device taking up the tasks of all those separate devices. Convenient, but also a return to the Swiss Army knife metaphor Don Norman mentioned earlier, in section №5: “My Swiss Army knife […] is very valuable because it does so many things, but it does none of the single things as well as a specialized knife or a screwdriver or scissors.”
If you think about it, the Swiss Army knife is also a good metaphor to explain a big part of the iPad’s identity crisis. A big smartphone, a small laptop, a smarter and more versatile graphic tablet, among other things; and yet, it tends to do better at the task it ‘looks more like’: a tablet you use with a stylus to make digital artworks.
After years of fatigue with smartphones (and similar ‘everything bucket’ devices), it seems that we may be moving again towards task-specific devices, with people rediscovering digicam photography, or listening to music via specialised tools like old iPods and even portable CD and MiniDisc players. The e‑ink device market seems to be in good health, especially when it comes to e‑ink tablets for note-taking and drawing: products like the Supernote by Ratta or the BOOX line by Onyx, or the one that likely started the trend, the ReMarkable. I have recently purchased one of these tablets, the BOOX Go 10.3, and it’s way, way better than an iPad for taking notes, drawing, and of course reading books and documents for long stretches of time.
I hope we’ll keep moving in this direction, honestly, because this obsession with convenience, the insistence on eliminating any kind of friction and any little cognitive load, and the desire for single devices that ‘do everything’ are making interfaces more and more complex, and pushing tech companies to come up with debatable solutions to make such interfaces less complex. See for instance how Apple’s operating systems have been simplified at the surface level to appear cleaner, but in doing so have lost a lot of UI affordances and discoverability, burying instead of solving all the complexity that these systems have inexorably accumulated over time.
Or see for example how digital assistants have entered the picture in exactly the same way Microsoft came up with the idea of BOB in the 1990s. As Norman says, an intelligent assistant was added to the problem, becoming part of the problem instead of solving it. So we have complex user interfaces, but instead of working on how to make these interfaces more accessible, less convoluted, more discoverable, intuitive, and user-friendly, tech companies have come up with the idea of the digital assistant as a shortcut. Too bad digital assistants have introduced yet another interface layer riddled with the usability and human-machine interaction issues we all know and experience on a daily basis. Imagine if we could remove this layer of awkwardness from our devices and instead had better-designed user interfaces that completely removed the need for a digital assistant.
[The full magazine article is available here.]