How I’d live this quarantine if it were 1990

Tech Life

Joshua Topolsky, from Thank god for the Internet:

But thank god for the internet. What the hell would we do right now without the internet? How would so many of us work, stay connected, stay informed, stay entertained? For all of its failings and flops, all of its breaches and blunders, the internet has become the digital town square that we always believed it could and should be. At a time when politicians and many corporations have exhibited the worst instincts, we’re seeing some of the best of what humanity has to offer — and we’re seeing it because the internet exists.
Now, I’m not letting Mark Zuckerberg or Jeff Bezos off the hook, but we also can’t deny that there is still good, still utility, still humanity present here — and it’s saving us in huge ways and little ones, too. In the shadow of the coronavirus, the sum of the “good” internet has dwarfed its bad parts. The din of a connected humanity that needs the internet has all but drowned out its worst parts. Oh, they’re still there, but it’s clear they aren’t what the internet is; they’re merely the runoff, the waste product. 

John Gruber, quoting this passage above, remarks:

So true. Feeling isolated? Cooped up? Me too. But imagine what this would’ve been like 30 years ago. This sort of crisis is what the internet was designed for, and it’s working. 

These two posts have inspired me to conduct a sort of thought experiment. Considering I started self-isolating a bit earlier than when the quarantine became mandatory in my country (Spain), I’m now on Day 33 of my stay-at-home life. So I’ve started wondering, What would I do if this were 1990 instead of 2020? What would my ‘quarantine lifestyle’ be like?

It’s easy and natural to say, Oh, it would be horrible without the Internet, but that’s because in our time-travelling musings we approach 1990 with a 2020 mindset. It’s the same as when we try to imagine/remember what life was like before mobile phones. And I wrote a piece about this very subject back in 2011. Here’s a relevant quote, with emphasis added:

People who have never really lived in a world without mobile phones […] might think that daily life at that time was unnecessarily complicated and ‘harder’. Organising meetings, finding people, finding places around you, having to use paper maps instead of having a portable device with GPS functionalities built in, not being able to look things up in Google or Wikipedia at any time. The truth is, people knew how to organise themselves with the tools they had available. Daily life had a completely different pace and style, built around the tools available at that time. It really isn’t a matter of ‘worse’ or ‘better’ — life was just different. People were equally able to organise their meetings, to communicate with one another, to go to places never before visited by using a map or tourist guide, to search for information at public libraries, and so on and so forth. 

In 1990 I was still living in Italy, finishing high school. I already had a room full of books (I developed a passion for reading at a very early age, mostly thanks to my grandfather, an erudite man with a vast personal library). I was already fascinated by computers and technology, and kept up with the news by devouring several computer magazines (my dad had subscribed me to one of the best at the time — it was called Bit[1]).

In 1990 my home computer was a Commodore 64; by that time it was a pretty souped-up setup, with its dedicated monitor, tape drive, disk drive, and printer, and I even expanded the C64’s RAM. I mostly used it for gaming, but also more ‘serious’ stuff thanks to GEOS — an operating system with a graphical user interface which turned the humble C64 into a ‘poor man’s Macintosh’, at least visually. 

But my dad had also started bringing home discarded IBM and IBM-compatible PCs from the company where he worked, so I was using old word processors like WordStar to write school assignments and also my first short stories, which I would print using a loud IBM dot-matrix printer my dad had also procured.

If this were 1990, and a pandemic had struck and forced people to stay at home, the only thing I would miss would be going to the studio I was apprenticed to, where I was learning desktop publishing on a Macintosh SE + LaserWriter workstation. But for the rest, I’d say I would have a lot to keep myself entertained:

  • I’d have lots of books to read (or finish reading) at my disposal.
  • I’d have tools and materials I could use to write my fiction, from an Olivetti electric typewriter to an old IBM PC AT connected to a printer.
  • I could keep in touch with friends and relatives via landline telephone.
  • I could get the news and a bit of entertainment from TV, radio, and papers.
  • I could listen to vinyl records, CDs, and cassettes on the home Hi-Fi stereo, or in my room with my old Walkman. My parents owned a fair number of records; there was always music in our home.
  • In 1990 I was still living with my parents, so if we wanted to spend time playing together, we would take out our boxes of board games and cards.

These are just a few things off the top of my head. The Internet has brought a lot of good and bad things into our lives, but it’s not that the world was hopelessly boring and grim before the Internet and social media as we know them today even existed. As for the distractions, thirty years ago they felt — how can I put it? — less wasteful. 

At least in my case, there’s sometimes a certain depressing aftertaste today after, say, spending a couple of hours deep down some YouTube rabbit hole. The enjoyment I may have felt during those two hours, or whatever I may have learnt during them, quickly evaporates afterwards, and I’m left with the sinking feeling of having wasted two hours of my life, of having burnt a precious resource — time — that I’ll never get back.

In 1990, an hour spent on the phone talking with my best friend felt enriching. An hour spent playing games on the Commodore 64 felt good because it was usually followed by several attempts at understanding how the game’s BASIC program worked. Music was less of a background presence, and it used to inspire a lot of my writing; often it immersed me in the perfect mood to jot down ideas for a story. Same with films. Perhaps the fact that in 1990 I didn’t have access to the staggering amount of information and choices I have today made me more focused on what was available to me. And while more limited in scope, my knowledge was deeper in those selected areas/subjects. In contrast, today I’m faced with such an amount of information and choices that I often feel a lot of my time is spent ‘just browsing’, gathering crumbs of information rather than forming deeper nodes of knowledge, if you know what I mean.

In case this is starting to feel like a nostalgia trip to you, the gist of this, in the end, is what I wrote in that 2011 piece I quoted above: It really isn’t a matter of ‘worse’ or ‘better’ — life was just different. If this were 1990, I don’t think we’d feel much more isolated than we’re feeling now. It’s all in what I call the ‘time-travelling bias’ — if we were transported back to 1990 with all our current baggage of habits and conveniences, then yes, it probably would be a dreadful experience for many. But if you, like me, were already alive in 1990 and remember what life — and your life — was like back then, then you probably realise we would have found plenty of ways to spend our time in isolation.

  • 1. You can see some of the issues on the Internet Archive; it was first inspired by BYTE Magazine, and was published between 1978 and 1997. ↩︎


Yes to everything — Addendum

Tech Life

My previous article about the iPad, Yes to everything, was difficult to write because, as I was drafting it, one observation led to another couple of thoughts, which in turn begot other thoughts… It was getting hard to maintain a cohesive discourse. What I did was gather as many thoughts as possible into a coherent whole, and leave additional stray observations as a coda. But that ultimately resulted in a very long piece. So, before hitting the Publish button, I decided to leave those stray observations out and write a standalone ‘addendum’ piece — the one you’re reading now.

Meanwhile, I received the most varied feedback and remarks about Yes to everything, so in this piece I’ve also added my responses to a few remarks worthy of consideration.

1. The feedback

It went pretty much as expected: 1) The article was largely ignored by the higher circles of Apple and tech punditry. 2) I received positive feedback and praise from some people, a few of them catching exactly what I meant to say. 3) I received a considerable amount of private communications, the vast majority from incensed iPad fans, many completely misunderstanding every word I wrote. (How do you manage to do that, by the way?)

When I write something, I want to express my thoughts and observations as clearly as I possibly can. For me, it’s never a matter of ‘being right at all costs’. If I get some facts wrong, I have no problem admitting my mistake. Constructive feedback is always very welcome. What I do not tolerate are personal attacks, people who put words in my mouth that I haven’t said, and people who write me emails nerdsplaining things as if I haven’t used all kinds of computing devices since the early 1980s or seen a goddamn user interface for the past 35 years or so.

What I do not tolerate is the utter toxicity some people display the second your views start differing from theirs even in the slightest. What’s even more tragicomic is the underlying misunderstanding and point-missing: writing me a message in all caps ‘shouting’, You don’t get it! You don’t know what you’re talking about! The iPad is awesome and I use it to do all the things I need — when in my piece I literally wrote:

Others mistake my criticism for the iPad at the conceptual level for criticism aimed at the device itself. Nothing could be further from the truth. I do think the iPad is an impressive device. I don’t deny it’s an engineering feat. I absolutely think you can do all kinds of serious work on it. And I’m happy for all those who are able to make the most of it.

So hey, keyboard warriors, how about you re-read my articles more slowly before jumping at my throat with your nonsense?

2. So, what kind of innovation would I like?

A few people asked me more or less the same question: Then what kind of technological innovation are you craving? Well, apparently one I haven’t seen in a while: ideas, projects, designs, plans. One of the things that struck me most about the computer scientists of decades past — one thing I was reminded of as I transcribed the interviews I’ve recently published here, especially the one with Alan Kay — is that their approach seemed to be something like ‘ideas first, technology later’.

Some of them had complete visions of what they wanted computing to become, and then they started working on them and did everything they could to make those visions come true. Sometimes there were no detailed plans, but intuitions, insights, that were enough to point towards a direction. When a technological advancement was achieved, such as the microprocessor, it made previously theorised designs and applications happen for real.

What I’m seeing today is more like the opposite approach: a laser focus on technological advancements, in the hope of extracting some good ideas and use cases from them. Where there are some ideas, or sparks, they seem hopelessly limited in scope or unimaginatively iterative, anchored to the previous incarnation or design — like, How can we make this better, sleeker, more polished? Meanwhile there seems to be a dearth of questions like, What’s next? Where do we go from here? How can we circumvent these interface limitations? How can we meaningfully change the way X is done? and so forth. General questions, larger in scope, not tied to a single product. Heck, not even tied to the previous iteration of a product.

Today, both manufacturers and users have this fascination with the product, the gadget, the tool. People want the faster horse, tech companies give them faster horses and focus almost exclusively on how to make the next horses even faster. Perhaps I’m being hopelessly idealistic here, but I would like to see more fascination with the purpose, with the exploration of different ways to do things and achieve goals, with the end more than the mere means to an end. When Microsoft’s Courier project first emerged back in 2009, I remember the company being harshly criticised for making concept videos instead of releasing an actual product. I still think that a ‘concept video’, when thoughtful, may have some value, in that it presents an idea, or even a complete design, of a possible alternative path or solution. And while it may still be unfeasible in the here and now (for lack of essential technologies, or simply because it breaks a lot of conventions), it remains an inspiration, a concept that is now out there and may turn out to be the decisive spark towards something truly innovative.

3. User interface comparisons: ‘But the Mac UI isn’t great either…’

A lot of people keep turning this matter into a Mac vs iPad shoot-out. It is not. Think about it this way: if one criticised the user interface of an MP3 player, would you respond by saying, “But the user interface of a Hi-Fi stereo system isn’t great either”? Well, you could, but that would be missing the point, because the MP3 player’s whole reason for being is that it’s supposed to be an easier-to-use device than the stationary Hi-Fi stereo system you have at home. The way it is designed, and the use cases it’s been designed for, demand more immediacy, simplicity, and friendliness.

Similarly, if one of the iPad’s core reasons for being is to be a more immediate, friendlier, easier-to-use device than a traditional computer, its user interface must take all this into account. User interface complexities are expected and somewhat more forgivable on traditional computers — because of the very nature of the computer and the ways you use it. Despite some superficial interface similarities, these are different systems with different approaches and expectations.

If in becoming the perfect laptop replacement, the iPad becomes effectively a laptop, with a user interface that is just as complex, then what’s the point? Having a touch interface as a differentiator? Other laptops have it. Having pen input? Other laptops have it.

So, I’m not criticising the user interface of the iPad by saying it’s ‘worse’ than the Mac’s. I’m criticising it because I think a device like the iPad ought to have a better one, period. A device that aims to be the better alternative to a traditional computer ought to have the best user interface it can have for such a role.

From what I’m seeing, though, iteration after iteration, iOS on the iPad is incorporating an increasing number of features and concepts that come straight from traditional computing. Not a bad thing per se, but it adds weight to an otherwise sleek user interface. And since familiarity is the easiest shortcut, these features are added in a way that makes the iPad’s user interface more similar to a traditional computer’s. The conceptual challenge would be: how do you incorporate the functionality without adopting the same look and paradigm? How do you incorporate the feature without adding weight and friction?

Soon you’ll be able to use Photoshop (or a similar app) on the iPad in laptop configuration, moving around the app’s interface with a mouse or trackpad, using it to draw and select stuff. And the experience will be strikingly similar to the one you had in the 1990s, using Photoshop on a PowerBook with System 7 or Mac OS 8. That’s why I keep wondering if maybe there’s another way to do things, one that’s not so riddled with old paradigms and déjà vu.

4. The ‘obsession’ with Jobs

Of course my mentioning Steve Jobs yet again caused controversy and triggered exasperated responses. I’m not ‘obsessed’ with Steve Jobs, as some of you wrote me. I know he’s not around anymore. I know that wondering what he would do if he were still around has no practical value. I was simply speculating on ‘the road not taken’. It may be useless at the pragmatic level, but I find it to be an important thought experiment. A way to zoom out of the specificity of the device and make more general observations. To use a well-worn metaphor, we’re so mesmerised by the trees today that we can’t seem to wonder about the forest anymore.

Despite what you may think, I don’t idolise Jobs. He had his flaws and his blind spots, but he was a better thinker than many others in this industry. On the one hand, he had that ‘ideas first, technology later’ approach I was talking about in point №2 above; on the other, he had the ability to turn his vision into something commercially feasible (and often successful). Wondering what Jobs would do, ultimately, means wondering what someone with his perspective and mindset would do — not necessarily Steve the man himself. I miss figures who can think like him today. Today we have a lot of business strategists in tech, but so very few ‘practical visionaries’.

(By the way, I also wonder what Nikola Tesla would do if he were still alive. The use of clean energy would probably be more widespread. I want another Nikola Tesla, more than someone producing vehicles in his name.)

5. The nerve I struck

It seems that, in all this, the nerve I managed to hit can be summarised as follows: while I consider the iPad a great device, I don’t think of it as being ‘special’. Or as special as a lot of iPad users seem to consider it.

I respect the fact that the iPad may be a revolutionary device for them personally. I very much agree that the iPad has been important in making a lot of people less averse to technology by being less intimidating than a computer. But as iPadOS develops, I wonder for how long the iPad can be held up as the best example of an intuitive, non-intimidating device.

I was in one of the stores in Spain the day the iPad first became available in 2010. I saw all kinds of people — small children, older folks, non-technical people — immediately knowing their way around it. In 2020, I’ve seen more first-timers struggle with it. Maybe they just tap on an app and fiddle with it. Or they swipe around the interface a bit, but there’s more hesitation. Some, once they’re in an app, find it difficult to get out of it. I know, for us geeks it’s hard to put ourselves in such inexperienced shoes. But not everyone shares our interest or involvement in these matters.

6. Nothing personal

Understand this: I’m not against the iPad, nor am I against iPad users. I’ve been told that in my criticism of the iPad I have also sounded judgemental and dismissive of those who have chosen it as their preferred solution. I’m not. But I certainly am judgemental and dismissive of the more toxic iPad users whose attitude comes across as very smug, as if to say, We’re the enlightened, we’re living the future, and you don’t get it.

I certainly don’t think less of a person who has chosen the iPad as their main or sole environment. That would be quite immature on my part. But I’ve received some unpublishable feedback from people who made very clear they do think less of me because I have not chosen the iPad way. This is what happens when people get religious about their preferences.

And finally, speaking of preferences, if I’m staying on the Mac there’s nothing religious about it. In recent times I have been quite critical of Mac OS as well, in case you’ve missed it. It’s simply the environment in which I work best. I am waiting for the next best thing, and at the moment the iPad is just not it for me.

Yes to everything

Tech Life

Every time I gather observations and thoughts for a piece on the iPad, I feel I keep returning to the same old insights I’ve had for years. I knew Apple would complicate the iPad’s user interface this way. That many people are happy with it doesn’t mean it’s inherently a good idea. 

Anyway. The other day, Apple introduced new iPad Pros, and an updated MacBook Air line-up. Most notably on the iPad hardware front, along with improving whatever feature was improvable, Apple has presented a new accessory — the Magic Keyboard. It has a trackpad. And on the software front, the upcoming iPadOS 13.4 will offer full mouse and trackpad support. 

Trackpad support was of course well received by iPad fans and all the people using the iPad as a main (or sole) computing device for work and leisure. Some praised the innovation of the new cursor, which Apple’s marketing describes as The biggest thing to happen to the cursor since point and click. (Let me pause and eyeroll for a moment here.) It’s an interesting take and a good execution. It’s also the least Apple could do on such a device — devising a cursor that is more context-aware and responsive than the one you find on a traditional computer is frankly more consequential than innovative.

Equally consequential is the fact that the iPad now supports mouse/trackpad input. Some of the comments I saw floating around mentioned how Apple has finally given in to the pressing requests of the iPad community, of people who wanted a more ‘Surface-like’ approach for the iPad, so as to make it a more suitable device for productivity.

While that may also be true, what I think is that Apple has actually given in to adding mouse/trackpad support to the iPad because they were essentially out of options. And because for them it is a convenient problem solver. It’s Mr Wolf in Pulp Fiction: the one you call when you need a professional to clean up your mess.

And the iPad’s user interface still looks a bit messy. You may be accustomed to it. You may be so proficient at moving inside of it that you even love it. I’m not here to criticise your preferences or the iPad as a device. You wanted a ‘faster horse’ — enjoy your faster horse[1]. I’m simply speaking from a conceptual standpoint. And from that standpoint, what I see is that the iPad’s user interface is a patchwork. Features, gestures, combinations of gestures, user interface layers, all stitched together over the years. 

Steve Jobs was quoted as saying: “People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying ‘no’ to 1,000 things.”

By contrast, it appears the iPad is increasingly saying yes to everything.

Those who have no problems with the poor discoverability of several gestures or features still see the iPad as a flexible device that adapts to the needs of its users. They say, “If you feel that the multitasking interface is opaque, it’s okay. You’re not accustomed to it, and you probably don’t need it. The iPad keeps being intuitive for those who only use it at a basic level.”

From a visual standpoint, there might be very little difference between a feature that is not visible and a feature that is out of the way. Conceptually, however, the difference is a big deal. A feature that is not visible, and that you can only find by reading about it somewhere or watching a video tutorial, is undiscoverable and poorly executed. A feature that is out of the way, but whose existence the system hints at, indicates at least a modicum of design-oriented thinking behind it. If the iPad’s user interface were truly well thought out, the so-called ‘pro’ features would be more discoverable. I wouldn’t get feedback messages from regular folks telling me, I didn’t know I could do this on my iPad, with some even adding that they discovered a gesture or feature while erroneously performing a known one.

The more layers of interaction you give to the device, the trickier things get. If the solution to a previously undiscoverable feature is to make that feature (more) discoverable through a different input source, you may have found a way out of the dead end you got stuck in, but it’s not good design, strictly speaking. (I remember an exchange between a woman and an electronics shop employee: after buying a Windows laptop she returned to the shop to complain about the poor trackpad performance, and the employee told her to “just use a mouse”. Why not make a better trackpad instead?)

The comparison with Microsoft’s Surface

The iPad getting proper mouse input support, and the new Magic Keyboard for the iPad featuring a regular trackpad, have naturally invited people and reviewers to draw comparisons between the iPad and the Surface. But I don’t see it as Apple ‘catching up’ with Microsoft. I see it more as Apple bringing their racing car to a different kind of championship.

Microsoft’s Surface may have its flaws. Its user interface may have its inconsistencies and limitations, but it doesn’t bear the signs of the iPad’s long-standing identity crisis. The Surface and the iPad have different origin stories, and those are reflected in the way you approach and use these devices.

The Surface wasn’t really born as a pure tablet with a tailored mobile operating system on it. The concept Microsoft wanted to contribute was that of an ultracompact laptop first, with tablet functionalities added on as a convenient alternative for performing quick tasks as needed, without burdening the user with a device fixed in its laptop configuration and behaving like a laptop all the time.

Still, all the devices in the different Surface product lines are essentially laptops (of different weights and capabilities) that can work as, or transform into, tablets when the need arises. Even the first generation of Surface devices back in 2012–2013 were hardly ever seen in the wild without their keyboard, despite it being ‘optional’. They’re very much touchscreen computers with a tablet mode, with productivity as their main purpose. Technically, their Apple counterpart would be something more akin to a ModBook than an iPad.

Their operating system, in one way or another, has always been some version of Windows with additional touch- and tablet-friendly features enabled, to make the Surface a more versatile device.

The Surface knows what it is. And Surface users know what to expect from it, in terms of functionality and interface. The user interface could be improved here and there, but it’s not ambiguous. The levels of interaction comfort aren’t either. There is a distinctive best/good/okay comfort range as you go from operating a Surface like a Windows laptop, to using it as a tablet with pen input, to using it with touch input with just your fingers. But that feels fine because that’s the experience the Surface is supposed to provide. 

What Microsoft has striven to do over the past eight or so years is improve the Surface experience within that model, within that paradigm, and I’d say they’ve been rather successful at it. The next step is represented by devices like the Neo and the Duo, which introduce the new dual-screen idea in form and function. The aim is, again, to improve productivity by creating a literal dual space to multitask in and to facilitate interoperation between apps and tasks, if and when needed.

The iPad, on the other hand, has had a more varied history, and has been more of a chameleon — with regard to both purpose and interface. It was born as a separate device with unique characteristics to fill the perceived void between a laptop and a smartphone. In 2010, when introducing the iPad, Steve Jobs said, In order to really create a new category of devices, those devices are going to have to be far better at doing some key tasks. They’re gonna have to be far better at doing some really important things: better than the laptop, better than the smartphone.

And in its first iterations, the iPad was exactly that; its identity was pretty clear — ‘a big iPhone’ that could be just as easy to use as an iPhone, but better at doing certain things thanks to its bigger display. And better than a laptop because certain basic tasks and operations were simply more intuitive to carry out thanks to the multi-touch interface. That effectively killed the remaining netbooks still in use at the time, and many non-tech-savvy people were happy to use a small laptop-sized device that was much less intimidating than a traditional computer. All thanks to its user interface and its very operating system, which was not Mac OS X slapped on a touch-based device, but something that felt much more integrated and suitable for such a device. The learning curve was also low because people already knew iOS thanks to the iPhone’s success.

Then, unfortunately, Steve Jobs passed away.

I can see your eyes rolling from here, but bear with me. Although I’ve never denied my utter preference for Jobs’s leadership over Cook’s, I’m not trying to argue that the iPad would necessarily have had a better development and trajectory under Jobs, but it’s undeniable that the iPad is perhaps the device that has suffered the most from Jobs’s absence. Under his tenure, Apple released the first-generation iPad and the iPad 2. The iPad 2 was the first real improvement over the original iPad: it was thinner, more powerful, and it had cameras. The iPads that came out afterwards, between 2012 and 2015, were essentially the same thing as the iPad 2, with obvious improvements in the hardware and some improvements in the software. Conceptually, very little moved forward. The iPad Air 2, produced between 2014 and 2016, was for all intents and purposes just like the first iPad, only faster, better, and with more capable apps.

As for its conceptual evolution, however — as for changing the computing experience altogether — the iPad felt like a device stuck in stagnant waters. It still felt pretty much like a device that didn’t know what it wanted to become. It was created as a consumption device first, with the ability to serve as an artistic tool for creation and to handle the occasional productivity task if you tried really hard, with the right apps, and by jumping through the right hoops. Styluses and external keyboards have always been usable with it, but the iPad has always been a ‘touch-first’ device, meant to be used like a tablet, not like an ultraportable laptop. I can’t speak for Jobs here, but I’m pretty sure he would have said something like, If you need to use the iPad as a laptop replacement, maybe it’s better if you just use a real laptop.

But then an increasing number of people, especially tech nerds, started to demand from Apple something more akin to Microsoft’s Surface in features and functionality. And Apple, from 2015–2016 onwards, started to oblige, little by little. So they have been repurposing the iPad as they go along, without really jettisoning anything. The process has been utterly additive. Employing Jobs’s famous analogy of trucks and cars, I’d say that from its origins as a sports car, the iPad has progressively become a sports car that can be retrofitted with a trailer, off-road tyres, a 4WD transmission, and so forth.

Some look at the latest iPad Pro, at the full support for mouse input in iPadOS 13.4, at the new Magic Keyboard with trackpad, and see a winning combination of tools that makes the iPad a truly versatile device. And maybe it is so from a practical standpoint. Again, conceptually, I look at ten years of the iPad and I see its trajectory as going from ‘jack of some trades, master of some’ to ‘jack of all trades, still master of some, but not all’.

The story and evolution of Microsoft’s Surface are perhaps simpler and less ambitious, but over the years they have proceeded with much clearer processes, iterations, and intentions. Apple now probably aims for the iPad to be a sort of blank-slate device, so technically capable that it can do anything you want it to do. But all this retrofitting to make it also behave like a compact laptop has been — still is — a painful process to behold. I keep feeling the iPad could have been so much more, in so many different, countercurrent ways, and all it has done in ten years is become something more conventional.

Where the iPad is truly at the forefront today is hardware (industrial design + manufacturing + tech specs). But idea, concept, purpose? Not anymore. Others are trying to match the iPad in hardware, while Apple is borrowing ideas and purposes from others. If there’s combined progress in all this, it’s inertial.

Again, I can’t be sure, since I don’t have the ability to see alternate timelines, but I truly wonder what Jobs’s ultimate idea for the iPad was; what direction he wanted to point it in. I’m not saying that things would have been better if Steve Jobs were still among us. But I’m sure we would have felt a stronger sense of direction for the iPad. A clearer vision, even if perhaps a more polarising one.

What I felt back in 2010–2011 was that Jobs’s plan could have been to gradually evolve the iPad into a unique computing device, using the tablet format and the multi-touch interface to effectively revolutionise what it meant to be productive with something that is not a traditional computer; to end up with a device that could go beyond the old, established paradigms and metaphors of traditional desktop computing. If he had wanted the iPad to progressively become a Surface-like device, he would probably have sherlocked the aforementioned ModBook and created a touch MacBook with Mac OS X.

Maybe this is the root of my general disappointment in the iPad: that Apple didn’t make enough of an effort to come up with a transformative UI, one that could revolutionise how people can be productive on a tablet without having to resort to traditional paradigms and input devices; without reinventing the computing wheel for so many tasks just so they can be easily carried out on an iPad, even when it would make much more sense to just use a laptop.

Yes, maybe my expectations have always been high on this front. But not unreasonably so. Is it really too much to ask of a tablet today, after seeing how innovative certain parts of the Apple Newton’s user interface were more than 20 years ago?

For some, having the iPad acquire more Surface-like capabilities may be a success, a much-awaited move that will solve so many things. For me, this move, which brings the iPad even closer to a Mac laptop in functionality, makes the iPad even less compelling.

The big picture

Judging by the feedback I received after publishing previous articles on the iPad and ranting about my disappointment, a lot of people think I’m still clinging to the past, to the Mac and traditional computers; that I’m averse to change; that I’m ‘old’ and not flexible enough to adapt to this bright future of computing spearheaded by that incredibly awesome and innovative device, the iPad.

Others mistake my criticism of the iPad at the conceptual level for criticism aimed at the device itself. Nothing could be further from the truth. I do think the iPad is an impressive device. I don’t deny it’s an engineering feat. I absolutely think you can do all kinds of serious work on it. And I’m happy for all those who are able to make the most of it. (Yes, whenever the iPad vs Mac debate rages on Twitter, I have indeed indulged in some sarcasm. But come on, who doesn’t on Twitter?)

However, as someone who for several years has cultivated a deep interest in the history of computing and the user interface, I simply can’t look at the iPad (or the Surface, for that matter) and see real progress. Again, I’m not talking about computing power and features. The iPad Pro today is so much more powerful than a supercomputer from the 1970s. I’m talking conceptually. The ideas that drove the computer scientists at the RAND Corporation to create the RAND tablet in the mid-1960s were more advanced in scope than the ideas behind any tablet available today. And in certain respects more daring, as that tablet was meant to be operated without any keyboard whatsoever: it had amazing handwriting recognition for the time, and all input came via its stylus. And some of the capabilities of Sketchpad, the groundbreaking program written by Ivan Sutherland in 1963, are still hard to beat in intuitiveness and execution, almost sixty years later.

So when I see a tablet device in 2020 become more usable thanks to finally supporting mouse input, of all things, and not because of some other advancement in touch technology, input method, user interaction, or user interface design, forgive me if I feel underwhelmed and a bit disheartened. What we do with our devices today is something people like Alan Kay envisaged in the 1960s and 1970s. So no, I’m not clinging to the past or averse to change. I see where we are today and I’m baffled we haven’t advanced further. Or rather, the hardware has. But the concepts, paradigms, and metaphors are still the ones that have been circulating for more than sixty years. Today I see future-looking hardware marred by backward-looking software, interfaces, and interactions. In a sense, everyone’s clinging to the past, in one way or another.

Then why do I still choose the Mac over the iPad? Until I see real progress on the fronts I mentioned above, why should I waste time, money, and energy to do on an iPad the same things I can already do with ease, experience, and efficiency on a Mac? I would gladly undergo the re-learning process if it meant mastering a new device or interface concept that brought significant benefits over ‘the old ways’ in terms of interaction, productivity, fulfilment, and so forth — or even something new in a meaningful way, something that was not possible before. But for now I keep seeing ‘the old ways’ re-emerge here and there behind the iPad’s external layer of coolness. I can’t be averse to change when I don’t even really perceive change in the first place.

 


  • 1. A reference to the famous quote attributed to Henry Ford (used by Steve Jobs as well): “If I had asked people what they wanted, they would have said faster horses.” ↩︎

 

The Machine That Changed The World — Transcription of the interview with Alan Kay (Part 2)

Tech Life


Introduction

This was an interview conducted for the same The Machine That Changed The World documentary series featuring the interviews with Larry Tesler and Steve Jobs I recently transcribed and published here. This interview took place over two days in July 1990. Only portions of it were featured in the documentary series.

I’ve always been an admirer of Alan Kay and his work. As I was watching this interview I realised there were so, so many things worth taking note of, and worth sharing, that I decided to carry out a full transcription of it. As you can imagine, it was an energy-draining, time-consuming task, but I’m happy to have done it. There’s so much food for thought here that it’s a veritable banquet.

For a comprehensive look into the series, I recommend checking out the excellent work by Andy Baio in 2008 on his waxy.org website.

About the interview

The video of the interview can be watched here. [Update, March 2022 — The original YouTube link doesn’t work anymore. You can watch the interview here instead. Note also that the video is accompanied by a transcript on the WGBH website, but it doesn’t look accurate in certain places; maybe it is an automated transcription?]

This is the full transcript of the interview. As mentioned above, the interview was recorded over two days, so it’s quite long (about 2 hours and 45 minutes in total). I therefore thought it was best to split it into two parts, one for each day. This is Part 2.

I’ve applied gentle editing in some places to make the context of certain questions, and the meaning of some convoluted passages, a bit more understandable. Understanding how the interviewer formulated his questions was sometimes hard, due to the low volume (he didn’t sound as if he was miked) and to the fact that his remarks could be somewhat meandering.

Topics include the evolution of the computer and its role in society (past, present, and future), user interfaces, Doug Engelbart’s famous demonstration, the FLEX machine, the computer as a medium (including parallels with, and excursions on, the evolution of the book), the experience at Xerox PARC, Steve Jobs, Apple, the Macintosh UI in relation to what was pioneered at PARC, the Alto, the Dynabook concept, working with children, the stages of development and the ways to learn about the world, interfaces and application software, the universal machine, the computer interface as user illusion and the concept of virtual machine, the future of computing, virtual reality, and general considerations on the evolution and the revolution of the computer. Summarising these is a bit difficult, because all topics keep surfacing and returning throughout the whole flow of the conversation.

Disclaimer: I have done this transcription work and chosen to publish it here in good faith, for educational purposes. I don’t make any money from my website, as it is completely ad-free. In any case, should any copyright holder contact me requesting the removal of the following material, I will certainly comply.

Enjoy the conversation.


The Machine That Changed The World — Transcription of the interview with Alan Kay (Part 1)

Tech Life


Introduction

This was an interview conducted for the same The Machine That Changed The World documentary series featuring the interviews with Larry Tesler and Steve Jobs I recently transcribed and published here. This interview took place over two days in July 1990. Only portions of it were featured in the documentary series.

I’ve always been an admirer of Alan Kay and his work. As I was watching this interview I realised there were so, so many things worth taking note of, and worth sharing, that I decided to carry out a full transcription of it. As you can imagine, it was an energy-draining, time-consuming task, but I’m happy to have done it. There’s so much food for thought here that it’s a veritable banquet.

For a comprehensive look into the series, I recommend checking out the excellent work by Andy Baio in 2008 on his waxy.org website.

About the interview

The video of the interview can be watched here. [Update, March 2022 — The original YouTube link doesn’t work anymore. You can watch the interview here instead. Note also that the video is accompanied by a transcript on the WGBH website, but it doesn’t look accurate in certain places; maybe it is an automated transcription?]

This is the full transcript of the interview. As mentioned above, the interview was recorded over two days, so it’s quite long (about 2 hours and 45 minutes in total). I therefore thought it was best to split it into two parts, one for each day. This is Part 1.

I’ve applied gentle editing in some places to make the context of certain questions, and the meaning of some convoluted passages, a bit more understandable. Understanding how the interviewer formulated his questions was sometimes hard, due to the low volume (he didn’t sound as if he was miked) and to the fact that his remarks could be somewhat meandering.

Topics include the evolution of the computer and its role in society (past, present, and future), user interfaces, Doug Engelbart’s famous demonstration, the FLEX machine, the computer as a medium (including parallels with, and excursions on, the evolution of the book), the experience at Xerox PARC, Steve Jobs, Apple, the Macintosh UI in relation to what was pioneered at PARC, the Alto, the Dynabook concept, working with children, the stages of development and the ways to learn about the world, interfaces and application software, the universal machine, the computer interface as user illusion and the concept of virtual machine, the future of computing, virtual reality, and general considerations on the evolution and the revolution of the computer. Summarising these is a bit difficult, because all topics keep surfacing and returning throughout the whole flow of the conversation.

Disclaimer: I have done this transcription work and chosen to publish it here in good faith, for educational purposes. I don’t make any money from my website, as it is completely ad-free. In any case, should any copyright holder contact me requesting the removal of the following material, I will certainly comply.

Enjoy the conversation.
