The Machine That Changed The World — Transcription of the interview with Larry Tesler


Introduction

The recent passing of Larry Tesler hit me harder than I expected. Even though I never got to know the man, I felt as if I had lost a friend. And knowing the magnitude of his contribution to computer science, I was rather annoyed that basically all news outlets, in reporting the sad news, just labelled him as “the inventor of Copy and Paste”. At least John Markoff’s article in The New York Times gave a more encompassing profile of Tesler, and I do recommend you read at least that to get a better idea of who Tesler was and what he did.

Another thing worth reading is this Twitter thread by Chris Espinosa.

I wanted to put together something as a personal homage to Larry Tesler, but I didn’t know how. A summary of his accomplishments and inventions felt too much like a school assignment. Writing a deeper, more meaningful profile would have required knowing the man more closely, or having access to, and direct communication with, people who knew him better. But I chose not to bother anyone, out of tact and respect.

So I decided to let Tesler speak for himself, in a way. As I noticed when transcribing the lecture he gave with Chris Espinosa in 1997 on the origins of the Apple human interface, Tesler possessed this admirable mixture of clarity of thought and simplicity of discourse. That’s why I chose to carry out another transcription, this time of an interview he did for a five-part documentary series, The Machine that Changed the World, that aired in 1992.

For a comprehensive look into the series, I recommend checking out the excellent work by Andy Baio in 2008 on his waxy.org website.

About the interview

The video of the interview (at least, the video I found) can be watched here. [Update, March 2022 — I’ve been notified that the original YouTube link doesn’t work anymore. You can watch the interview here instead. Note also that the video is accompanied by a transcript on the WGBH website, but it doesn’t look accurate in certain places; maybe it is an automated transcription?]

According to the description, this is the full-length interview, and only portions of it were featured in the documentary series. Its total length is 1 hour and 45 minutes, and it’s divided into two parts. The first part lasts a little more than one hour and is the proper interview/conversation with Larry Tesler. In the second part, Tesler is at the computer, giving demonstrations of software applications and explaining the Macintosh’s user interface a bit. This second part is somewhat more chaotic and, really, it’s way better to just watch it than to read a transcription. That’s why I have only transcribed the first part of the conversation with Tesler.

There was very little to edit in the process of transferring the conversation to the written word, thanks to Tesler’s extraordinarily lucid and articulate responses. I actually had a more difficult time understanding the interviewer — he didn’t sound as if he was miked, and his questions or remarks could be a bit meandering at times.

Topics of the conversation include bits of computer history and Tesler’s beginnings, what it meant to be a programmer in the era of big mainframes, several insights on how computers and their roles changed through the years, Xerox PARC, programming languages, and the future of computing. What struck me most about this last topic is how accurate Tesler’s predictions are, despite him saying, “So all the predictions that I’ve made today are very unlikely to be correct based on the track record of our industry”. (Remember, this interview took place in late 1991 — early 1992).

Quite the contrary, Larry. Thank you for everything you contributed to the history of computing.

Disclaimer: I have done this transcription work and chosen to publish it here in good faith, for educational purposes. I don’t make any money from my website, as it is completely ad-free. In any case, should any copyright holder contact me requesting the removal of the following material, I will certainly comply.

Enjoy the conversation.


 

The interview

 

Interviewer: Later when you met computers at college and stuff, they were large mainframe computers. Can you describe what the experience of computing would have been like for most people in the late 1950s-early 1960s?

Larry Tesler: The first time I actually ran into a real computer was about 1960 when I was a senior in high school, and it was an IBM 650 in the basement of a lab at Columbia University. And you walked into this medium-sized room that was very noisy (so the doors were closed) and air-conditioned, and there was something about the size of eight refrigerators all glommed together that was the computer, and then around the room there were all kinds of other devices, for dealing with punched cards, and printing output, and so on. The term mainframe hadn’t been invented yet — this was really a single-user computer; I wouldn’t call it a personal computer because of the physical size and the cost, but only one person at a time could actually use this machine. It didn’t remember anything from one usage to the next: there were no tapes, no disk drives… you put your cards in, and use the machine, and then all the cards came out again, and the next person came in, and put their cards in… so it was really more like a laundromat for data, you know… One user at a time in a machine.

Int: I believe that around that time the very idea of software, the word, hadn’t been invented… So what did it mean to be a programmer in those days, then?

LT: Well, in those days programming meant working out a sequence of instructions to do some data processing, usually. Most programmers were doing either military types of applications, calculating various trajectories for rockets or missiles or that kind of thing; or they were doing business data processing: adding up bank balances, and accounting, and so on. So programming was a matter of writing instructions that were very customised towards a particular application for a particular bank, or a particular study within an aircraft company, that kind of thing. And so this term, software, had not yet been invented, but it was around that time that programming became more of a profession, and something that people would actually study in school as something they would do as a profession. As opposed to just something that a mathematician or a data processing expert would learn how to do as part of their job. It became a separate occupation.

Int: And of course this machine, this computer, was a remarkably versatile machine… I mean, people had barely begun to sort of probe what it was capable of with software. When did it dawn on you that really this machine could be programmed to do all kinds of things, from making music to painting?

LT: Oh well, it didn’t take long to realise that these machines could do anything. That was really realised by people in the field from the very beginning, even in the 1940s. And in the 1950s there were people dreaming of the kinds of systems we have today, where there were ways of pointing at things on the screens, and there were graphical user interfaces, and that kind of thing. So that was imagined very early, but it wasn’t possible to do it because the technology hadn’t advanced far enough yet. Within a year after I had learned to program, I was already immersed, at Stanford, among people who were talking about computers and music, computers and art, and we were even doing computer graphics programming at that time, back in 1961.

So it was pretty clear that computers could be used to simulate anything in the world; in fact, a lot of the work going on in computer science, dating even back before the first computers were running, was in developing theories of computability: what things, even though they were theoretically possible, would take so long to compute that the universe isn’t going to last long enough to compute them? With computing, the issue isn’t only whether something can be done — it’s a question of how long it will take to do it. And some things are just impractical to compute.

Int: What things regarding the hardware had to change for some of these dreams to be realised? Did [computers] have to become smaller, interactive? What do you think are the main features, what had to change as you saw it?

LT: Well, it became very clear to us back in the 1960s — as computers even at that time were getting more memory, higher speed, smaller physical size — that as that trend continued over the years (using integrated circuits, which were invented in the late 1950s, and using advanced memories, and so on), by sometime in the late 1970s or 1980s computers would be very small. Some people were even predicting that they would be the size of a book by 1980. As it turned out, it took until about 1990 for that to happen, so people were off by a few years. But it was really clear that it was just simply a matter of time before the size came down. It was also clear that the price was going to come down: all the trends in memory costs and computing costs were in a very rapidly declining direction. So people even back in the 1960s were predicting that there would be consumer computers.

I remember walking around the street in the 1960s looking at a hardware store thinking, well, someday next to that hardware store there will be a software store, because everybody would have a computer and everybody would be going out and buying software. And 15–20 years later that’s what happened, and it was completely predictable.

Int: Now of course when you started there wasn’t really much of a commercial software industry, was there?

LT: The commercial software industry back in the 1960s was primarily custom software. You could go to a company and say “I need a program written to do something”, and they would write a program for you. Now, in fact, I went into that business myself, and it became very clear to us in that business that you could write a program for one person and put it on the shelf; someone else would come in and ask for something fairly similar, and you could customise it. Soon people got the idea of writing a generic program that would serve the needs of many people, and that could be customised even by the user. For example, there were statistical packages that you could give to any statistician, and the statistician — without having to do any programming — could calculate their own statistics for their own data and customise it fairly well. Or people that needed to do simulations.

There were other people that got the idea of having programs that would let you edit text. Most of the text to be edited in those days was computer programs, but gradually people started editing text that had to do with prose, and that led, over a long period of time, to what we call word processing today. So these things kind of evolve gradually, and at the same time the notion of commercial software, packaged software, grew; but it really didn’t take off until the late 1970s, when there suddenly were hundreds of thousands of personal computers out there.

Int: Yes, but I guess this must have been a problem in the 1960s… A machine like the IBM 360, its very success seemed to be a step back in terms of realising this vision.

LT: In what way, what do you mean?

Int: In that it was a large mainframe computer which still had the idea of group usage as opposed to individual usage. How did you think— I remember you told me a story that, when you were at Stanford…

LT: Yeah. Okay, in the middle of the 1960s there were several forces diverging at one time, and in some ways the current array of computers we have today — supercomputers, mainframe computers, minicomputers, personal computers — had their origins back in the 1960s. There were companies like IBM, Burroughs, and Honeywell that were building these very large mainframe computers (as we call them today) that were to be used by large organisations to do massive data processing, and the main question with those machines was how much data you could push through the machine in one night so that the accounts would all be balanced by the morning.

Then there were companies that were appealing more to the scientific research market and the telemetry market, and so on, like Digital Equipment [Corporation], that were focusing on smaller computers that were a lot less expensive and that there would be a lot more of, intended to be used by a single person or a small group; and that led mostly to today’s minicomputers, although also those machines had a lot to do with the advent of personal computers. What it took to do personal computers was the invention of the microprocessor in the 1970s, and that made it possible to have very very small computers that were very very affordable.

The fourth type of computer was the supercomputer, and that also originated in the 1960s, primarily at Control Data with machines that were even faster than mainframes. Their job wasn’t just to push masses of data through during the night, but to do very complex scientific calculations. And all those types of machines survive today, but the one that was kind of the least important in the 1960s — the personal machine — which was something that very few people had, today is of course the majority of computing. So that’s been a real switch.

Int: In the 1960s, before Xerox PARC was set up, do you remember some of the visionaries that actually had working systems which showed the shape of things to come? I’m thinking of people like Ivan Sutherland, Doug Engelbart. What role did they have in actually demonstrating what could be possible very early?

LT: Many of the things that we see today in personal computers started a long time ago. If you go back to the 1960s, for example, there was a computer designed by an engineer named Wes Clark called the LINC that was really a personal machine. It cost something like 10–15,000 dollars in those days. It had very small tape drives, it had a small screen, it was used by one person at a time, and it was affordable — not by everybody like personal computers could be today, but affordable by scientists at least. And that illustrated back in the early 1960s what it could be like to have a personal computer. That machine affected the thinking of a lot of the people who later developed personal computers, like Alan Kay and myself who both had access to that kind of machine.

Then there was also a lot of software work going on. For example, Ivan Sutherland in 1963 developed Sketchpad, which was an interactive graphics program where you could actually have an engineer draw on the screen with a pen, say a shape, and then have the computer straighten out the edges and allow the engineer to scale the shape, to connect various shapes together and realise that there were connections between them, and make them so they could turn together, and so on. So he got the idea of demonstrating that, unlike a paper sketchpad — where once you drew [a shape] the best you could do was erase it and draw it again — it was possible [by] using a computer to do a lot of very powerful manipulations on graphics that a person would input. And that system led, through a series of steps, to the graphics systems that we have today, particularly in Computer Aided Design for engineering.

Another pioneer was Doug Engelbart who in the early 1950s conceived the idea of computers that would help people to create and to collaborate, and so on. It wasn’t until the 1960s that he was actually able to get enough technology, together with the right people and the right software, to actually start creating glimpses of the vision. And at that time what we would call a personal computer today was something like a $100,000 system put together by Engelbart and his people. So it was really just a research project and a demonstration of what some day could be. But during this project they invented the mouse, they invented what we call hypertext today, and collaboration among many people working together on a network; and a lot of today’s fundamental concepts of interactive computing and of easy-to-use computing originated in Engelbart’s project.

Int: How was it that you think then… They had these visionary ideas, they had one or two demonstrations of the thing, it was clear the way hardware was going to go… What was the best way to bring it about, given that such projects require continuous funding over a long period of time and in the 1960s the Defense Department [with DARPA] had done that. You became associated in the early 1970s with a big project, Xerox PARC. Tell me a bit about that.

LT: Right around 1970 the chairman of Xerox, Peter McColough, was reading forecasts (that were even being made at that time) of what was called the ‘paperless society’ and the ‘paperless office’, where one day people would shuttle information through computer networks onto screens, and push buttons, and very little paper would ever get generated or produced or moved from one place to another. Well, as the chairman of a company that had built its entire fortune on copying large numbers of pieces of paper onto even larger numbers of pieces of paper, he was understandably concerned about the ‘paperless society’ and the ‘paperless office’, and what this would mean to his business.

So he decided that the best strategy for Xerox would be to get ahead of the problem instead of behind it, and help create the paperless office and be the company that started and benefited from it. So to do that he decided to set up a research centre, and it was a very visionary thing to do, but as you can see he had a high motivation to do that because of these warnings he’d been hearing that paper was going away. So he created a lab and put in some outstanding recruiters and managers, and they were very lucky to recruit a number of people who had a fairly similar vision as to where computing was going in terms of distributing processing over a lot of smaller computers, using centralised services to be able to access data and high-speed printers, making things easy to interact with. And they were able to recruit people from Engelbart’s group, from a lot of DARPA projects, from a lot of universities, and a little bit from other companies that had this single shared vision and had a lot of time.

The project was billed as something that could take ten years to result in any kind of product. And so these people had time to think about the way things ought to be, instead of having to meet the usual industrial deadline of having to have a product out in two years or three years or four years. They had a lot more time to think about it, and because of that this group was able to create a lot of the metaphors and approaches to computing that are standardised today. The Ethernet…

Int: You were saying they got a bright group of people together who had common interests. How would you describe what you were after? On the one hand, what was Xerox after? And on the [other] hand, what were most of the people who were there after?

LT: What Xerox was after was to create the paperless office instead of falling victim to it. And they didn’t know exactly what that meant, but the term that the management came up with was the architecture of information. And so we were looking at all of that. Now, one of the great ironies was that the very first technological success at Xerox was something called the laser printer, and the laser printer of course generates even more paper than many copiers do. So Xerox kind of inadvertently ended up protecting their business by creating a wonderful new paper generator. But at the same time the rest of the architecture of information, the ‘paperless’ part of it, was being created.

What the scientists at PARC realised was that, in order to get information from one place to another, we needed to have wires going from one place to another, and that when people were working together in an office building, we could have very high-speed communications within the building, a lot higher-speed than we could have, say, across the country; and that most people communicated with other people in their own building, with printers in the same building, and so on. And so the first thing we created after the laser printer was what we called the Local Area Network, and that was called Ethernet, which was one of the early inventions of PARC.

Another thing we had to address that was very important was how people could deal with different media, with text and graphics, and so on. And so we developed what’s called a bitmapped display, which is the typical graphics display that you see on the more modern personal computers today. And there needed to be a way for a person to point at things on the screen, and again we used a mouse, derived from the Engelbart idea. And in addition to that there needed to be a way of interacting with the computer, and that turned out to be one of the more challenging problems; and the term user interface was one that we used a lot to talk about how the user would interface or interact with the computer, to make it very easy to use, and natural to use, and attract people to it so that they would be able and willing to do their work through a computer. Remember, a computer at that time was thought of as something that was very forbidding, difficult, highly technological, that you had to be a real expert with a doctorate to understand, you know… that was kind of the public image. And we somehow had to humanise computers and make them a common object that anyone could use. So that was another challenge.

Int: So how did you go about that challenge? I mean, some of the ideas derived from Engelbart’s work… But the basic idea was to use software to create — you used the term user illusion — to create an impression for the users. I mean, I’m trying to understand this, that the real machine is probably too difficult for users to understand; so what are you doing, creating a sort of a virtual machine? Something in between the user and the machine? What’s going on with the user interface?

LT: Well, what a computer can do is simulate… What a computer does is [to] run through a series of program steps that create something that is like something that goes on in the real world, or in a mathematical model, or whatever, so computers simulate. And what we realised was that we could create what some people called a ‘user illusion’, something that appeared to be a world on a screen. One way to think about it is, if you play a video game there’s an illusion of spaceships, or of roads and cars etc. depending on the kind of game; and the user who gets engrossed in the game starts operating as if they’re really working in the real world, when in fact they’re only working in this imaginary, simulated world created by the sequence of steps in the computer program.

So what we realised was that we could create an illusion of an office, for example, with folders and documents and file cabinets in the office. And that, instead of having the user learn complicated and unfamiliar technological terms like streams and resets and various other things that people had to deal with before using computers, we could use the metaphor of the office, for example, and talk about ‘opening files’ and ‘closing files’ and ‘editing documents’, and other terms that were much more familiar to people, and actually support that terminology by using graphics to depict these types of objects. That was kind of the tack that we took.
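To make that concrete, here is a minimal sketch, in Python, of what ‘supporting the metaphor’ means in practice: the friendly office verbs are just names wrapped around less friendly machine-level steps. All the names here (Document, _read_blocks_from_disk) are hypothetical, purely for illustration; they don’t come from any Xerox system.

```python
# A toy sketch of the office metaphor: the user sees familiar verbs
# ("open", "close"); the program hides the technical steps behind them.
# All names are hypothetical, purely illustrative.

class Document:
    def __init__(self, filename):
        self.filename = filename
        self.contents = None

    def open(self):
        # The friendly verb "open" stands for the unfriendly work:
        # find the data on disk, read it in, decode it into text.
        raw = self._read_blocks_from_disk()
        self.contents = raw.decode("utf-8")

    def close(self):
        # "Closing" the document dissolves the illusion again.
        self.contents = None

    def _read_blocks_from_disk(self):
        with open(self.filename, "rb") as f:  # the real, low-level access
            return f.read()
```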

Int: You tested these things on people, didn’t you? I remember you telling me you were initially skeptical of the mouse…

LT: Yes. Well, the Engelbart work in the 1960s was pioneering in the functionality that it provided, but all the people using this system were people connected with the project, and so they learned to use the system as they built it, they taught their coworkers who came to work with them, and so on. And what they discovered was that it would take sometimes months to become really expert with the system. And they considered this inevitable but worth the trouble. And when the Xerox salesmen started looking at these systems in the early 1970s, they said “I don’t think we could sell these to people, they’ll take too long to learn”. And so I and some others decided that an important goal would be to find a way to reduce the learning time tremendously.

And I thought one of the problems was the mouse device: it was another device to learn, and if we could eliminate that device, that would make it easier for people to learn. So I set up what we call user tests, and I brought in people to try using the mouse and also to try using the keyboard, with a very, very simple text editor. And what I was attempting to prove was that devices like the mouse were an encumbrance and an impediment. And what I proved instead was that the mouse was actually a great benefit: people were not only able to work more quickly using the mouse, they were able to learn more quickly, and they preferred using the mouse to using keys on the keyboard. This was a great surprise to me and also taught me a lesson: when you design a system for general users, other than its designers, the only way to find out what’s really easy is to actually test the system with users, and involve users in the design of the system. And we used that methodology throughout the development of our user interfaces at PARC, and we used it also at Apple. And it really does work.

Int: Now the person who put together this team of people was Robert Taylor… was it a pretty remarkable group in the history of computing… do you think of that as a sort of ‘golden period’ or one of the golden periods for people to gather together?

LT: Hmm… Well, actually it wasn’t Bob Taylor that put together the whole group but —

Int: He was the main recruiter, right?

LT: Ah, well, Xerox PARC had several laboratories, and in the Computer Science Laboratory, where they were studying distributed computing primarily, Bob Taylor recruited the staff. And this was a group of really eminent computer scientists who were working on very difficult problems, like how a number of people working together could share a collection of information and access it simultaneously, without one person’s access to it tripping over somebody else’s. There was another group, the System Science Laboratory, that was run by various different managers, who did the recruiting; and there we were more concerned with issues like the user interface, and how to make things extremely personal and easy for anybody to utilise. Alan Kay really was, I think, the primary figure in that group throughout the decade of the 1970s; he didn’t recruit all the talent, but kind of assembled it and drove a lot of the advanced work in the area. He as well as a number of others.

Int: Both of those labs you mentioned collaborated to produce this prototype, the Alto, right?

LT: Yes.

Int: And the Alto was produced in 1973, I think, and then other models came out. Now, how had the Alto sort of achieved or cracked many of the problems… Compare the Alto with the familiar Macintosh displays we have now, had they solved many of the problems by that time?

LT: Okay yeah, the Alto was developed around 1973 by a collaboration of the labs. And the important things about the Alto were that it was relatively inexpensive — maybe $15,000 or $20,000 — too expensive to be a commercial personal computer, but inexpensive enough that we could convince the management that everybody at Xerox PARC should have one. And in fact, over the years, thousands of Altos were built and put around the company. Another important thing about the Alto was that it had a high-resolution bitmapped display. It would be what we’d call today a portrait display, a full-page size, and because of the high resolution we were able to do very impressive graphics on it, as well as have text that, instead of being all capital letters of a certain size, was proportionally spaced like you’d see in a book. And we could essentially create a very good image of a book page, with graphics and text, in whatever typefaces you liked, on the Alto screen.

The other thing about the Alto was that it was fast enough that we could devote a lot of the power of the computer to supporting an easy interaction with the user. And given the other devices like the mouse that were on it, it created a very good test bed for the ideas that we see today in what are called Graphical User Interface machines that a lot of people are familiar with, using machines like the Macintosh.

Int: When you developed this technology, were you and some of your colleagues anxious to get it out into the world? Did you try quite hard to convince Xerox to use it?

LT: Oh right. The reason that most of us went to work there was that we felt that this would be an opportunity to bring computing to everyone, and we were there with missionary zeal to get this thing done. And we were very impressed that a company as big and powerful as Xerox had the vision to do this, and we were confident that they would take this out into the market with their enormous sales force and sell it. As it turned out, right around the time that the technology was ripe — about 1980 — Xerox ran into issues in their mainstream copier business, where they were getting a tremendous amount of competition from Japanese copier makers [in particular]. And they decided they had to focus all their energy on protecting their mainstream business, and they really couldn’t invest in bringing this technology to market — which was unfortunate, because at that time the technology was ripe. And so what happened was all the little startup personal computer companies brought the technology to market instead, because Xerox really couldn’t afford to distract themselves from their main problem at the time.

Int: Let’s talk about those startup companies from about 1975… These hobbyist machines started appearing: what was the reaction of PARC to these machines?

LT: I remember when the first well-known hobby machine appeared. That was the Altair, which got a lot of publicity, and there was a van that drove around the country showing it off. Some of us went down to the local hotel to get a look, and I found it fascinating, because here was something that we said would become a phenomenon in the 1980s, and it was only 1975 and there was already a little bit of it happening. But when it came down to it, we ended up ridiculing it, because the user had to assemble the machine himself, and we thought, “Well, how many people will sit there and put the machine together by [themselves]?” And it was… not a very powerful computer. Compared to the Alto, it didn’t have a bitmapped display, it had a very tiny amount of memory relative to the Alto, it was not all that inexpensive, and there was really no software for it. So we thought it was kind of an interesting curiosity, but it did give us a hint that maybe we weren’t the only people who understood that personal computing was going to happen.

And sure enough, over the next few years the machines got sophisticated very, very quickly. They developed bitmapped displays, they got easier to use, they got more memory, and soon they were being sold as fully assembled machines [with] software built in, software you could buy in stores. And so by 1980, when we were thinking we would have brought to the market these very powerful personal computers, we couldn’t do it yet, and already there was this other wave that came from hobby computing that had actually succeeded in bringing personal computing to the market.

Int: Were you surprised at the demand… that so many people would want to buy one of these machines?

LT: I wasn’t personally surprised. I think a lot of other people were surprised that there was interest in these machines that were so flimsy, but I had myself started out on that IBM 650, which had about the same amount of memory as these little personal computers, and I knew from my old experience that there was quite a lot you could do with a very, very small computer. In fact, when I had that 650 I was only allowed to use it half an hour a week. And as a 16-year-old I always dreamed that if I could have one of these at home and could use it any time I wanted to, it would be quite an amazing thing. And it became clear that there were thousands of other people that had that same feeling, and a lot of personal computers got sold in those days.

Int: So Xerox found itself in a bind, really. They supported this great work, but then this hobbyist market started. Can you tell me about the way they were planning to hedge their bets, the connection with Apple, and how it came that Steve Jobs visited Xerox PARC?

LT: Xerox started an investment group to invest in small growing companies that had businesses that could somehow complement or feed into Xerox. And one of the first companies that they chose to invest in was Apple Computer. And the attraction of Apple was that Apple was able to manufacture in large quantities these very inexpensive machines, relative to what Xerox was used to. Xerox really had been used to manufacturing a medium volume of very expensive and large electromechanical devices — copiers — and the idea of manufacturing these very small, high-volume, fully electronic machines was something they had no experience in. And here was this little startup company that was pouring them out by the hundreds of thousands. So Xerox got very interested in that and made an investment. They then got the idea that maybe Apple could be the company that could manufacture the new computers we were developing at PARC at a lot less cost than Xerox could do it. And so they invited Steve Jobs and some of his people to come take a look and see whether that was possible. And that’s how that contact started.

Int: You were one of the people who showed Steve Jobs around?

LT: Yeah, we had a number of people that demonstrated various technologies. Some of the people there were somewhat reluctant to do it for various reasons. For one thing, this seemed to be a group of not well-educated kind of hobbyists who didn’t really get personal computing, and they were pushing these little plastic boxes with hardly any memory out there, and couldn’t possibly appreciate what we were doing. I was very excited because I already had bought a personal computer, I had friends who were working at Apple, and I understood that there was really something interesting going on here. So I was quite enthused about doing the demonstration.

Int: What was Steve Jobs’ reaction? Did he get it straight away?

LT: Oh yeah. When Steve saw what we were doing he just got very excited, and he kept saying, “Why hasn’t Xerox commercialised this?” And I think what that revealed was that Xerox, by putting all this technology into PARC and not really incorporating it into the mainstream of the company, had not come through with the follow-through on the swing. They had done the great technology, but they hadn’t figured out any way to bring it to the market, and there didn’t even seem to be a great will to do that. While Steve Jobs — who was the entrepreneur, the commercialiser — immediately saw that these kinds of ideas could be very powerful, and could bring personal computing to a much wider audience than he’d been able to reach so far with his Apple II.

Int: Shortly after that visit a bit of an exodus began: you left, [and] fairly shortly afterwards Alan Kay [did too], is that right? This period — we’re in 1979 — sort of marked the beginning of the end of the Xerox PARC golden age, when people drifted away. Is that right?

LT: During the 1970s Xerox PARC had very low turnover; a lot of people were hired in the mid-1970s, and it had a reputation as a place that you just went to and stayed, because it was an idyllic kind of place for a scientist. But around 1980 the industry was changing tremendously. There was a startup fever in the Valley, and personal computers were becoming very attractive and very possible, and a lot of people at PARC started realising that Xerox wasn’t going to be able to pay attention to getting these products to market in an adequate way. And those of us who were interested in the commercialisation of it began to leave. So in the middle of 1980 a number of us left, and in the following couple of years there was quite an exodus. But PARC replaced us all and still continues to thrive as a centre of excellence in computer science. It’s still quite a remarkable place.

Int: Several machines came out in the early 1980s: Xerox produced the Star, Apple produced the Lisa, but it’s really with the Macintosh that there’s a sort of — that the ideas really get public acceptance. Why is that? Is that a question of cost, simplicity, what?

LT: Well, there are many different factors. First of all, these were all basically the same ideas, generated by Engelbart and by Xerox PARC and so on. And it’s how you take the ideas and express them in a particular style that can make a lot of difference. So for example, the idea of having a camera with lenses on the front is a pretty simple one, but there are lots of different ways to put that together into a camera, and there are lots of different stylings you can use, lots of different price points. So it’s the same for computers. And the Macintosh project at Apple just happened to hit upon a very good combination of affordable cost, convenient size, a very friendly user interface that had an æsthetic appeal; and also [Apple] was able to recruit a lot of software developers outside Apple to develop a lot of exciting applications for their machine. So it took all those things together to make it happen. And neither the Lisa nor the Xerox Star, nor any of the other attempts at doing things of that type in the early 1980s, were successful. The Macintosh was the first one that found the right combination of features.

Int: The computer is a fantastically different-looking product from the one you started with; there are children [who] can use this thing, and so forth. Thinking about what the computer is and how it’s changed, I’ve got a few general questions about the role that programming languages play and what’s going on. At the most basic level, if you had to explain to somebody what a computer is, how would you answer that?

LT: A computer is a machine that follows instructions precisely, is what I would say.

Int: Now the first instructions that were given to the machine were to do with arithmetic, and they were very closely related to how the machine was built. Is that correct?

LT: Uh-huh. Originally, the tedious and time-consuming work that computers were designed to replace was calculating. Calculating was very error-prone, and there was a lot of calculation required in the mid-20th Century, especially in military applications. So computers were invented to do that, and what people focused on was making the calculating as fast as possible; and the computers[’ memories] were so small that there were very few instructions you could put in, so people would put in one instruction at a time, and they would be very (what we call) low-level, primitive instructions, like: ‘add this number to that number’, ‘store that number in this memory location’, and so on. We call this machine language: a very, very low-level and, for most people, very out-of-reach, complicated kind of language, because it takes so many instructions to get anything useful done.

Int: And what broke that was that in the 1950s these higher-level languages [as they’re called] started to emerge… What kind of languages?

LT: The idea of a higher-level language was to have a language that was written more like what people were used to in their own field. So, for example, for scientists a language was developed called FORTRAN or FORmula TRANslator, where they could write mathematical formulas that looked a little bit like what they would write on a piece of paper to express a mathematical relationship. For business data processing programmers a language was invented called COBOL, Common Business Oriented Language, that was more English-like and had a vocabulary that was similar to what people used when they talked about data processing. And so on. For different types of work, different programming languages were invented, so that the programmer didn’t have to know anything about the individual machine instructions. It created kind of a layer between the programmer and the machine where they could express their program in a more familiar notation.
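As a rough illustration of the layer Tesler describes, here is a sketch. The high-level line is written in Python rather than FORTRAN, just for readability, and the ‘machine language’ in the comments is generic pseudo-assembly for an imaginary machine, not any real instruction set:

```python
# High level: one formula, written roughly the way a scientist thinks.
def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

# The same computation at (pseudo-)machine level would be a sequence
# of primitive steps, something like:
#
#   LOAD  R1, v       ; fetch v from memory into a register
#   MUL   R1, v       ; v * v
#   MUL   R1, m       ; m * v * v
#   MUL   R1, HALF    ; 0.5 * m * v * v  (HALF is a stored constant)
#   STORE R1, result  ; put the answer back into memory
#
# The high-level language is the layer in between: the programmer
# writes the formula, and a translator emits the machine steps.

print(kinetic_energy(2.0, 3.0))  # 9.0
```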

Int: And that process has been continuing ever since… But what’s sort of going on is, with a programming language, you’re changing the way we think about the machine?

LT: That’s right. What a programming language does, and what any user interface does for an interactive application, is a very similar thing. It creates what we call a virtual machine: it creates an illusion that the machine you’re working on is much simpler, and much more oriented toward your problem, than the real, honest machine — the actual physical machine — which is very difficult to access; we replace that level of access with one that’s much simpler.

Int: The act of doing that often results in a lot of computation. If we take the Macintosh desktop display, and we take what is for a user a very simple action, something like opening a file, how complex might that be for the computer itself in terms of numbers of operations?

LT: Okay. If there’s a file on the screen and the user says “Open that file, I want to see inside it”, that maybe takes half a second, and the computer can do several million instructions in a second. So there’s something like hundreds of thousands of machine-level instructions that are run just in order to open that file. Some operations take longer than that, so most things a user does you can think of as taking tens of thousands or hundreds of thousands, or even millions of instructions. And yet the user only has to do a very simple thing, just click a couple of times with the mouse button, and that causes all these instructions to be executed.
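Tesler’s figures are easy to sanity-check with a little arithmetic. In the sketch below the numbers are just his rough early-1990s estimates, not measurements:

```python
# Rough check of the quoted figures: a machine executing a few million
# instructions per second, busy for about half a second on one click.
instructions_per_second = 2_000_000  # "several million instructions in a second"
seconds_per_operation = 0.5          # "that maybe takes half a second"

total = instructions_per_second * seconds_per_operation
print(f"{total:,.0f}")  # 1,000,000 -> on the order of hundreds of
                        # thousands to millions of machine instructions
```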

Int: And so the speed at which these things swish is absolutely critical. We wouldn’t be here having this conversation if computers couldn’t be made fast, presumably?

LT: That’s right. Speed is one of the things that continues to advance in computing and has to in order to make them easier and easier to use.

Int: Software programs, to produce this easy-to-use computer, get longer and longer, and one of the things which I think is quite interesting is: how is it possible for programmers to keep track of programs that get hundreds of thousands or millions of lines long?

LT: That’s really true. Some of the more interesting programs in the world have millions of instructions in them, each of which was written by a programmer; and you have to remember that these are high-level instructions, each of them sometimes generating five or ten or twenty machine-language instructions. So these programs are enormous, and high-level languages help the programmer keep track of them, but there are still millions of instructions. So we have what are called structuring or abstraction mechanisms. And what all that means is that there are ways of grouping things and giving them names, so that a whole sequence of instructions, for example, can be given a name and talked about as a whole. Very much the same way that, for example, if I can offer you something called a ‘Caesar salad’, that’s a lot better than me saying, “How would you like a salad that’s romaine lettuce with anchovies and olives and so on…” and listing all the ingredients. I can give it a single name. So programmers use these kinds of nicknames and other mechanisms to help them manage the complexity.
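The ‘Caesar salad’ point is exactly what a named procedure gives a programmer in any high-level language. A trivial sketch (the ingredient list is illustrative):

```python
# Without abstraction: spell out all the "ingredients" every time.
lunch = ["romaine lettuce", "anchovies", "olives", "parmesan", "croutons"]

# With abstraction: the whole sequence gets a single name, and from
# then on it can be asked for, and reasoned about, as one thing.
def caesar_salad():
    return ["romaine lettuce", "anchovies", "olives", "parmesan", "croutons"]

dinner = caesar_salad()  # one name standing in for the whole recipe
```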

Int: What’s it like running a big software project — I mean, is the metaphor like architecture? In other areas of life where you’re working in physical media, there are insights; but if you’re trying to organise a project with lots of people, are there difficulties in dividing it up between people? Are there special things about building large software programs that the world has never had to [deal with] before?

LT: In many ways, when programs get very large one person can’t do them: you have to have many people working on them; three or five for most types of things, but sometimes hundreds of people working on a program. And the coordination among all the people becomes an impossible task. So a lot of procedures get set up about how people work together, and how work is delegated. But some of the more difficult problems involve detecting errors. If there’s a bug or an error in one part of the program, figuring out whose part of the program that was in, and how it may have interacted with some other person’s part, can become a very, very complicated task. People who have managed both software projects and other types of projects that involve a lot of people tend to agree that software has its own special problems, and is one of the most complex types of projects to manage, to schedule, and so on. It’s really quite an endeavour. A lot of research has gone into how to deal with that complexity and make it easier, but there’s still a lot more work that needs to be done.

Int: And one of the things that some people worry about is the question of software reliability when you get into such millions of lines of code, especially as we think we’re going into a networked sort of world. What strategies have people got for coping with this?

LT: When programs get very, very large, an old strategy that was developed was something called modules, modularity. You break up the computer program into separate units called modules, each of which has a specification that’s very well-defined; how it interacts with other modules, with the user, and so on, is all very well-defined. So now the system designer can step back and think about an easier problem, which is: what modules are needed, and how will they interact? Once that’s set, then an individual group can go off and work on the separate modules. One advance that got invented in the 1960s but has only gotten really popular in the last few years is called Object-Oriented Programming. And the idea here is that these modules contain not only pieces of program, but also the data the program operates on. And this gives the system designer the ability to not only partition the instructions of the program into independent modules, but also to partition the data and keep the data close to the program. It makes it easier to design the system, and it also makes it easier for the implementer of the modules to implement their part without being so concerned about how it interacts with other objects or modules in the system.

Int: Give me an example of what an object might be — on the screen, in a simulation…?

LT: Well, an object can be kind of anything that you can give a name to, that has information about it, and things that it can do or things that can be done to it. So for example, on an interactive computer screen the windows are objects. If there’s a table of numbers, the table’s an object. Each entry in the table is also an object. If there is a menu that you can choose from, the menu’s an object and each item in the menu is an object. If the computer program is about bank accounts, then each bank account is an object, each customer is an object, each bank that your bank exchanges money with is an object, and so on. So just about anything that’s a noun that you can think of, any thing that you can interact with, is an object.

Int: And [what do] you think is a very good way of thinking about these problems, so that if something goes wrong, we can diagnose it in terms of the object?

LT: Right. Now, so for example if something was wrong with all the bank balances of checking accounts then the programmer will know that the problem is probably in the checking account object, and there’s nothing wrong with the savings accounts. He won’t look in the savings account object. He won’t look in the customer object or in the bank object. So it helps the programmer to focus their attention on solving the problem by looking in the object that obviously is exhibiting the problem.
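A minimal sketch of what Tesler is describing, in today’s terms (the class names are illustrative, not from any real banking system): each kind of account is an object that bundles its data, the balance, with the only code allowed to touch it, so a fault in checking balances points straight at the checking-account code.

```python
# Objects bundle data with the code that operates on it. If every
# checking balance comes out wrong, look in CheckingAccount; the
# savings and customer code can be ruled out immediately.

class Account:
    def __init__(self, balance=0.0):
        self.balance = balance  # the data lives inside the object

    def deposit(self, amount):
        self.balance += amount


class SavingsAccount(Account):
    def add_monthly_interest(self, annual_rate):
        self.balance += self.balance * annual_rate / 12


class CheckingAccount(Account):
    def withdraw(self, amount):
        # A bug here would corrupt every checking balance, and only
        # checking balances -- which is what localises the fault.
        self.balance -= amount


account = CheckingAccount(100.0)
account.withdraw(30.0)
print(account.balance)  # 70.0
```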

Int: And Larry, you’ve seen tremendous change in the use of computers and the friendliness and so forth. What do you still think remains to be done, and what can we look forward to in the next decade?

LT: Well, there are people who think that the industry has matured, that the directions of computing have been set, and that the types of machines and user interfaces we see today are it. Kind of like televisions, which have looked pretty much the same for 50 years or so and have just gotten cheaper and more colourful. And cars have looked pretty much the same now for 70 years. But I and many other people think that computing is really in its infancy, and what we think of as a computer today is nothing like what a computer will be like in the future. So in the future, rather than computers being objects that tend to sit on a desk, or that can be lugged around in your car when you go from one appointment to another, computers will just continue to keep getting smaller and smaller. There will also be big ones, of course, but the most common computers will be ones that are very small, that you can take around with you. And instead of interacting with them with keyboards and mice, you might write on them with a pen, you might talk to them and have them understand what you’re saying.

There may even be other ways people come up with for interacting with a computer. Also, the computer will become more aware of the environment. Right now your computer only knows what you tell it. Computers today are beginning to get sensors in them, like robots, where they can be aware of temperature, they can have vision and see what’s around them in the room. So for example your computer could see you coming and tell you, without you even asking for it, that there’s an important message waiting for you. On the other hand, it could see a thief coming, recognise that it’s being picked up by someone who’s not its owner, and sound alarms. So I think that our general notion of a computer today, as a box on a desk that you tell to do things, is going to continue to evolve, and computers in the future will really be very little like they are today.

Int: Now there are two reasons why this technology seems to be special. One is the fact… because, as you say, other technologies mature and seem to sort of stabilise, right? But this is different. On the one hand, its size can get smaller, which makes it possible; but on the other hand it’s more malleable than anything else. There’s nothing that we’ve ever invented before which has this property.

LT: Probably the closest things to computing, just to get an idea of how malleable it is, would be paper and clay. Clay can be moulded into all sorts of shapes. And not just the bowls that you can make out of clay, but all the different materials in society that are made out of ceramics — just an incredible variety, with different physical properties, and different sizes and shapes and uses. I think it’s a good metaphor because it shows how you can mould it. But ceramics don’t have much to do with information. So the other analogy I like to use is paper, where paper can be used for anything from building screens that give you privacy when you’re changing your clothes, to filters for coffee, to places to draw and sketch with charcoal, paint, pencil, pen, as well as to write, print, and so on. So this very simple idea of a sheet of thin fibre has got an ever-expanding number of uses and applications and forms —

Int: So it’s a medium?

LT: Yes. And so computers have become really another medium.

Int: …I mean, are you saying the computer… [that] the best way to think of it is as a medium, a dynamic medium…?

LT: Yeah, the computer is a medium, but it’s one that can change. Unlike something like film or tape or paper, where once the information is on it the value of it is that it’s fixed, a computer has the ability to transform the information and change it, either by instructions in a program or by the user interacting with it. So I don’t see how that can have any end. And the other thing is, the important thing about a computer isn’t the physical device that is the computer: it’s really the software that’s in the computer that makes it alive, essentially. And the computer itself can take on many forms, from room-filling devices to things that are so small that they’ll fit in a watch, or eventually things that could be implanted in your ear, or something like that. So it isn’t the physical device that’s really as important as the software in it.

Int: But yet human beings tend to… So far in history, they’ve taken their image from the physical device: in the past they thought of computers as big room-sized devices, and now we’re rather wedded to the desktop thing. What do you think about the networking implications of the work you did at Xerox PARC, the networking side? While it’s taken off a bit in business, it’s still got a long way to go. Do you see that as being really big in the next decade?

LT: Yes. What’s going on right now in the communications industry, in the telecommunications industry, is that fibre-optic networks and satellites are being installed everywhere and making it possible for huge amounts of data to be transmitted from one place to another. What this means to people using computers is that they can instantly access information somewhere else. We’re seeing a little bit of that phenomenon just with the popularity of fax machines in business, where people realise that it’s quicker and cheaper to get a letter from one place to another by facsimile than by putting it in an envelope and physically transporting it.

So the same is true of data in a computer. And already banking transactions worldwide are done electronically: money doesn’t move across borders anymore in trucks and airplanes, it’s just information sent through wires, adding to an account here and subtracting from an account there. Eventually this same power will be commonly used by individuals. Instead of going to a library you’ll be able to access the information you would have gotten in a library right from your study at home through your computer. Or, in fact, once computers are mobile and portable, and communicate without wires, you’ll be able to get that library information from anywhere, wherever you happen to be walking.

Int: And that’s the big change that’s going to come: that the image of a computer, when we look forward 10–20 years, will be the mobile, much smaller device that will have this networked quality about it?

LT: Right. The computer will be something that you take with you instead of leaving behind at your desk, and it’ll be something with which you can communicate with other computers, with other people, and access information anywhere that it is. I think that’s one of the things that are exciting about where computers will be going. But it’s only one: there are so many exciting directions that computing is growing in, it’s hard to pick out one as being the most important.

Int: Are there any others…?

LT: I think the fact that computers will be able to sense their environment, and recognise objects in their environment, and that computers will be able to act as the agent and assistant to the user and do things without being told to do them, but just knowing (in a sense ‘knowing’, not the way people know, but in a more limited way that computers can know) — knowing what the person wants and needs, and being able to do things on the person’s behalf. I think those are also extremely important directions that computing is going.

Int: Last question, Larry. In your experience in computers, what surprised you the most about the way things have developed, if anything has?

LT: Well I don’t know if I’m much surprised by anything.

Int: [laughing]

LT: Okay. Ahh, let’s see… I think the most surprising thing to me about how computers have evolved isn’t about the computers, but about people’s expectations. At any given time there are people who are predicting different rates of speed of evolution of the technology, and usually most people are wrong. It’s very difficult to predict based on the past and the present what will be happening in the future. So all the predictions that I’ve made today are very unlikely to be correct based on the track record of our industry. And I think that’s the most surprising [thing], it’s the unpredictability of where it’s going to go.
