It’s utterly amazing how many people have all the iPad’s shortcomings figured out already. And just by watching an hour-and-a-half keynote!
After the iPad’s introduction, it feels like 2007 all over again. A lot of people have already mentioned how the iPad seems to suffer from the same myopic criticism as the first iPhone three years ago, but there’s a specific shade of that criticism which baffles me. Both those who have dismissed the iPad and those who seem frightened by how it may affect the future of personal computing share one common trait: they treat the iPad as if it were an immutable device. As if it could never change or improve with time and future iterations.
The iPhone, too, lacked this feature and that commodity at first. Then they came. 3G came, MMS came, copy & paste came, tethering came (not for everyone, apparently, but still), third-party development came. The device evolved. The platform evolved. From the iPhone/iPod touch user’s standpoint, things got better and better. And from what I saw and what I read from people who actually tried the iPad, I have the feeling that the whole touch platform will soon be taken to the next level.
To some programmers and tinkerers the iPad looks scary because, apparently, this iPhone’s big stupid brother could be the harbinger of a simpler computing experience, one too simple and limited for their tastes. I’ll reply first by quoting this excerpt from John Gruber’s piece titled The Tablet (written before the iPad came out):
The iPhone […] was conceived and has flourished as a general-purpose handheld computing platform. It was not introduced as such publicly, and is not pitched as such in Apple’s marketing, but clearly that’s what it is. The iPhone was described by Jobs in his on-stage introduction as three devices in one: a widescreen iPod with touch controls, a revolutionary mobile phone, a breakthrough Internet communicator. Thus, it was clear what people would want to do with it: watch videos, listen to music, make phone calls, surf the web, do email.
The way Apple made one device that did a credible job of all these widely-varying features was by making it a general-purpose computer with minimal specificity in the hardware and maximal specificity in the software. And, now, through the App Store and third-party developers, it does much more: serving as everything from a game player to a medical device.
(The emphasis is mine.) With this in mind, I think the iPad is going to be an even more sophisticated device than the iPhone/iPod touch, software-wise. Such a wide range of possibilities should spark enough curiosity to invite tinkering from all those inclined to do so. The key, in my opinion, is not to limit your view to the iPad you saw on January 27. If, as Steven Frank so brilliantly pointed out, we reach a scenario where you can develop an iPad application on the iPad itself, Old World computing may have its days numbered. Tinkering? Not so much.
This leads to the second point I’d like to make: I’m not so sure that simple, closed devices represent a threat to tinkering. In my experience, the opposite has often proven to be the case. Mark Pilgrim pines for machines like his old Apple ][e. Yes, it was certainly the opposite of an iPad or iPhone in terms of openness, customisation and that “I can do whatever I want” feeling. But it was also simple and friendly, at least by the standards of personal computing in those days. To me, simplicity in an object, in any device, has always been an essential feature: it drives my curiosity, captures my attention and interest, makes me want to know more. When that spark is ignited, nothing can stop a true tinkerer.
When I was young, I was not attracted by complex-looking objects; my grandfather had a professional hi-fi stereo system, and all those levers, gauges, knobs and sliders intimidated me. They were surely a haven of customisation for the expert audiophile, but they didn’t make me want to know more about that stereo system. If I now know a lot about stereos, record players, radios and amplifiers, I really have to thank simple-looking devices, sometimes even toy-like devices. Their simplicity, their sparseness of controls, the fact that they could perform complex tasks while looking so simple and easy to operate boggled my mind: I had to open them, to discover their secrets.
Similarly, I wouldn’t know what I know now about photography and cameras if I had started out handling a professional SLR in my teens. My dad had a Canon A-1, which looked impossibly abstruse to me. He occasionally let me handle it, when he wanted me to take a photo of him or my mother, and I remember asking him many times what I had to do, whether everything was already set, because I didn’t want to screw things up. My first camera was an Agfa Silette LK Sensor — a sturdy, simple rangefinder with four shutter speeds (plus a Bulb setting), an aperture ring, a focusing ring and nothing else. In the viewfinder there was a needle indicating the light measurement taken by the front selenium cell. When my dad gave it to me he said: choose a combination of shutter speed and aperture so that the needle stays in the middle. If you are taking landscape shots on a sunny day, focus to infinity, choose a shutter speed of 1/125 or 1/300 (the maximum on that camera) and you’re set.
To me it was much simpler than his Canon A-1, and I happily snapped dozens of landscape photos. (I was also given disposable cameras, and things were even easier.) But I never felt that that camera was ‘enough’. I never thought that photography was ‘just that’. I had to explore, to know more. I started experimenting with low light, a tripod, the Bulb setting; I wasted rolls and rolls trying to obtain decent photographs of the moon and night scenes. I used self-made, improvised filters to create colour shifts and effects in otherwise ordinary shots. I tinkered as much as I could, because that camera did not intimidate me, and little by little I was learning its ‘secrets’.
This is why I don’t really understand some fears related to the iPad and its possible impact on tomorrow’s personal computing. I think Steven Frank has a point when he writes:
We worry that these New World devices are stifling the next generation of programmers. But can anyone point to evidence that that’s really happening? I don’t know about you, but I see more people carrying handheld computers than at any point in history. If even a small percentage of them are interested in “what makes this thing tick?” then we’ve got quite a few new programmers in the pipeline.
Any object, device or appliance, from what I’ve seen so far, is more likely to get people interested in ‘what makes it tick’ if it is user-friendly, if it looks simple and not intimidating, if it gives the user the impression that it can be easily and wholly grasped. I’ll say it again: when the tinkering spark is ignited, nothing can stop a true tinkerer.