That notch on the new MacBook Pros, and thoughts on hardware design

Tech Life

Oh boy, where to begin?

As usual, when faced with new designs and solutions — and with pretty much anything else, really — we have an emotional response followed by a more rational assessment. Sometimes, things that initially don’t seem to make sense to our emotional part, or that rub us up the wrong way, are later rationalised and we begin to understand, even accept, why they’re there.

When I was following the Unleashed Apple event on 18 October, and Apple revealed the design for the new 14 and 16-inch MacBook Pro models, I was initially surprised to see certain details that hark back to the Titanium and Aluminium PowerBooks — details that admittedly struck the right nostalgia chord in me. But when I saw that their displays featured an iPhone-like notch right there at the top, I went into a fit of rage and punched my side desk so hard that my G4 Cube woke up from sleep.

For the 10–15 minutes after that moment, I tuned out everything that was said in the MacBook Pro introduction. I was in a state that could be described as a sort of shell shock. I know it sounds dramatic, but that’s how I was feeling. Then I came back and started processing everything, waiting for my rational side to kick in and help me analyse and understand this new design choice on Apple’s part that, on the surface, makes absolutely no sense to me. It’s better if I cool down and write about this in a few days, I said to myself.

Well, here we are. It’s Rick’s rational side speaking, and this notch on the Mac makes absolutely no sense to me either. It’s a stupid, unnecessary detail that doesn’t really solve any problem, but creates a few. And while I understand that a notch is a compromise on the iPhone because the front camera array is more sophisticated as it has to take care of FaceID authentication, on the Mac this was completely avoidable. The front camera is just a regular webcam, though at least it’s HD.

The most common reactions I’ve heard from people who don’t oppose the notch are:

  1. It’s not a big deal: After a while you won’t even notice it. / It doesn’t really get in your way anyway.
  2. It’s actually a good thing because you gain more screen real estate. This added real estate is basically the area that should have belonged to the bezel at the sides of the webcam and that is now recessed and part of the display. See this tweet from David Pogue to visualise it.

Objection to №1, After a while you won’t even notice it. / It doesn’t really get in your way anyway.

I don’t think this is going to work like with the iPhone. On the iPhone, the interaction with the notch area is minimal. Your eyes start filtering out the notch because when you use the phone they’re often focused elsewhere on the screen. On the iPhone, the notch may become noticeable again whenever some activity happening on the screen makes it stand out, e.g. when playing a fullscreen video in landscape mode. 

On the Mac it’s a different story, in my opinion. On the Mac, the notch visually splits the menu bar, a UI element you interact with all the time. The notch occupies a part of the menu bar that could be devoted to displaying menu items and menu extras. This isn’t a real problem when you have apps with just a few menus. But with more sophisticated and professional apps, whose many menus reach or even surpass the middle point of the menu bar, then yes, the notch is definitely in your way and you can’t tell me you’re not going to notice it. When you launch an app with lots of menus on one of the new MacBook Pros, all the ‘excess menus’ will get moved to the right, and the notch will of course be a sort of gap between them. So, according to Linda Dong (Apple Design Evangelist), developers now need to take the notch into account when designing their apps (more unnecessary work for them, but who cares, right, Apple?). She says:

Either way it’s still a great idea to keep menu bar titles short and consolidate menus when you can for usability’s sake! Hunting through a million menus is never fun even on pro software. 

And I say here what I said on Twitter: for usability’s sake there shouldn’t be a notch in the first place. Hunting through a million menus may not be fun, but it’s certainly better and clearer than deciphering tiny icons and controls in an app toolbar or panel. If you stop and think about it, it’s utterly ludicrous that a developer should alter their app design to accommodate an element which was arbitrarily put in place by Apple and that is so intrusive it can’t possibly help developers make their app better, UI-wise or usability-wise.

But the problems in the menu bar also come from the right: the increasing number of menu extras (icons). If my 13-inch retina MacBook Pro had a notch, it would already be problematic and I would be forced to resort to third-party solutions like Bartender to hide most of the menu extras. Don’t get me wrong, Bartender is a great tool, but I want to see those menu extras all the time, because some of them indicate a state, and don’t simply function as a clickable element to access application options.

Again, the notch is an unnecessary hindrance, because even in the best case scenario, it makes you reconsider the way you interact with menu bar elements.

Objection to №2, It’s actually a good thing because you gain more screen real estate.

I thought about this, and my answer is, You gain very little, and it’s not worth the hassle.

The added strip of pixels at the sides of the notch serves to accommodate the menu bar, so in normal use, and compared with a MacBook with a regular top bezel, what you gain vertically is just that, a bunch of pixels corresponding to the height of the menu bar. If you use an app in fullscreen mode, it won’t make use of the extra space on top. The app’s interface will be displayed in the ‘safe area’ below the notch. In other words, when fullscreen, you’ll have the same available space as on a Mac with a regular bezel.

In other words, you gain very little. This is the same misguided principle that drove the redesign in Safari 15, at least initially, when, according to the genius designers at Apple, having the address bar and the row of browser tabs on the same line was great because you would gain more vertical space to display a website. We are not living in the late 1990s anymore. We’re not dealing with screen resolutions of 640×480 or 800×600 pixels, where every trick to gain vertical space was more than welcome. These are dense retina displays with 3024×1964 and 3456×2234 pixels for the 14 and 16-inch MacBook Pros, respectively. The vertical ‘gained space’ amounts to what, 30 pixels? Come on.
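If you want to put a rough number on that gained space, here’s the back-of-the-envelope maths. I’m assuming the strip of pixels flanking the notch is about 74 native pixels tall (roughly 37 points at 2×), a figure reported in early hands-on coverage, so treat it as an approximation rather than an official spec:

```python
# Back-of-the-envelope: how much vertical space does the notch strip
# actually add on the 14-inch MacBook Pro?
# Assumed figures: native resolution 3024x1964; the strip of pixels
# flanking the notch is ~74 rows tall (approximate, not an Apple spec).
native_height = 1964
strip_height = 74

gained_fraction = strip_height / native_height
print(f"Extra vertical rows: {strip_height}")
print(f"As a share of the display height: {gained_fraction:.1%}")
```

About 3.8 per cent of the display’s height, in exchange for a permanently split menu bar.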

A few thoughts about Mac hardware design in recent years

By ‘hardware design’ here I’m not referring to the internals, but to the outer industrial design. A few days ago someone on Twitter said — or maybe referenced an article saying — that Mac hardware design has actually improved since Jonathan Ive’s departure. Someone else suggested that, since designing hardware is a time-consuming process that doesn’t happen over a few weeks, it was possible that the design process for these latest MacBook Pros started when Ive was still at Apple. I have no idea. I may not have liked every design decision made by Ive, and while he brought the notch to the iPhone, I seriously doubt he would have approved the same solution on the Mac.

Certain details and solutions of Ive’s designs may have been opinionated, but at least they reflected a strong personality with actual opinions that shaped the design. The hardware design of recent Macs, instead, feels like the work of a committee… of design students. The M1 24-inch iMac looks like a design exercise where the assignment is Make the thinnest possible desktop Mac. Don’t question why it has to be the thinnest, just do it.

MacBook design is now at its most iterative and regurgitative. The current M1 MacBook Air perpetuates the same wedge-like profile as the late-2010 model, and the display assembly design is essentially the same as the 2015 12-inch retina MacBook. MacBook Pros have retained the same design since they went unibody in 2008. Over the years they’ve become thinner, their trackpads bigger and wider (too big and wide, if you ask me), and some models acquired a Touch Bar at the top of the keyboard.

While the design of the newest MacBook Pros finally breaks this decade-long iterative path, it can also be seen as a remix of previously-executed design cues. The truly distinctive details are the visibly protruding feet and the notch on the display. I am obviously not a fan of either, but I understand that those taller feet are part of the thermal design of the MacBook Pro, and will help keep the computer cooler when under load. The notch is the truly gratuitous, unnecessary novelty that sometimes I think was put there by Apple’s design team as retribution for having to remove the Touch Bar.

Seriously now, and circling back to the notch: it was completely avoidable. You can justify it however you want, but it has the same fundamental characteristic as its iPhone counterpart — it’s just plain ugly. It is indeed a design compromise on the iPhone because on such a portable device on the one hand there’s the need to maximise screen real estate, and on the other there’s the simple fact that you have to provide a sophisticated front-facing camera with the necessary technology to enable FaceID. So you design a display with a screen that reaches the top where possible, i.e. the area surrounding the notch. You provide as many pixels as possible given the circumstances.

And yes, putting that notch on the MacBook Pros might have originated from the same impulse — maximising screen real estate. But while on the iPhone this was a need, on the Mac it’s just a want. Again, with displays as big and pixel-dense as those in the new 14 and 16-inch MacBook Pro models, there’s no need to maximise screen real estate. You don’t need to carve out a space up top into which to shoehorn the menu bar, as if it were an annoying, restricting UI element, splitting it up in the process. To me, this makes no sense from a design-is-how-it-works standpoint. It looks like an urge to make a design statement for design statement’s sake — as if Apple products needed some signature design quirk to be recognisable. This, among other things, makes me wonder whether there’s still a strong industrial design leader within Apple. Someone who looks at the final display design drafts, sees the notch, and utters, What the fuck is this?

As an outside observer and long-time Mac user, I feel a certain lack of direction and, dare I say, resolve in many areas of Apple’s hardware and software design. Look at the progression of desktop & laptop computer designs and port selection under Jobs’s tenure. How many times did Jobs’s Apple make a hardware design decision that later had to be overturned because it went nowhere or wasn’t well received? The only oddity that comes to mind (and it’s a rather mild one) is the late 2008 aluminium unibody MacBook (non Pro). When this MacBook was introduced, many thought Apple would bring aluminium and a premium finish even to the consumer-oriented MacBook line, after years of polycarbonate iBooks and MacBooks. But then, in 2009, this 13-inch MacBook became the 13-inch MacBook Pro, joining the 15 and 17-inch models, and the humble MacBook went back to being made in durable white polycarbonate for two more iterations.

Now we see ports that were previously ‘courageously’ removed making their return, triumphantly announced as if they were a magnanimous concession on Apple’s part because “Apple has listened to the feedback from their pro users”. If you need to be told that removing MagSafe, the HDMI port, and the SD card slot is a bad idea; if you need to be told — and shown, many many times — that the butterfly mechanism in MacBook keyboards is a bad implementation, then you’re not doing a good job at designing hardware. You just make edgy design choices to ‘try new angles’ and hope that you’ll be validated by your reputation.

The Touch Bar is another odd case: I think the idea had potential, but it has felt like an unfinished project. It could have been iterated and improved upon in so many ways, but it’s like Apple gave up on it. Oh, you don’t like it much. Yeah, okay, we’re getting rid of it, whatever. Why not implement the Touch Bar as an additional strip placed at a slight angle above a full keyboard, instead of using it to replace the top row of keys? Heck, why not place the Touch Bar in the bezel area below the screen, making its customisable controls way more glanceable and operable?

I’ve said it too many times now: part of Apple’s software and hardware design today feels more random, haphazard and trial-and-error than before. I know well that trial and error is an important part of the design process, but with today’s Apple it feels as if this part of the process isn’t happening internally enough, if you know what I mean. It feels as if we as users (or developers) are subtly getting involved in it. It feels like a public beta. Some actually like this — those who later write articles talking about how great it is that Apple listens to its users. I would like to see from Apple a more internally pondered design process that leads to more thoughtful design decisions, executed with the confidence that this is the path to follow and build upon. The notch is a quirk that goes nowhere.

Assorted musings on social media

Tech Life

While reading Mike Rockwell’s very good blog Initial Charge, I bookmarked a couple of link-posts he recently wrote, both about social media.

The first is from 9 September. The title is Reconsidering Your Relationship to Social Media ➝, and the post links to Scott Banwart’s The Inevitable Decline of Social Media. Mike quotes Scott’s introduction:

I have become disillusioned with the state of social media. At one time it was a fun way to connect with people I would otherwise not have a chance to meet and talk about topics of mutual interest. Now it is largely a breeding ground for tribalism, intolerance, and general meanness. This is making me question why I would want to continue participating in this ecosystem.

And at the end of his commentary, Mike writes:

Mastodon feels like the early days of Twitter to me — it feels new, fresh, and exciting. There’s no algorithmic timeline, boneheaded features designed to increase engagement, or “influencers” that are willing to say literally anything to get attention. It’s nice.

I’m not exactly a Twitter early adopter — I joined in March 2008 — but I’d say those were early-enough days that I know what Mike means. Those were the times when Twitter felt like leisure, not work. Like a public space where everybody hanging around was being personal and informal in a casual, fun way. It was ’social-good’. You followed people because you wanted to know what they were up to, what project they were working on, how their day was going. You didn’t want them to be a surrogate of the daily news, or to remind you how shitty this world can be, retweet after retweet.

Over the years, Twitter expanded dramatically, and went from a peaceful town where most people knew one another and exchanged understanding nods, to the urban equivalent of a chaotic, cynical, divided, post-industrial megalopolis. I’ve always been good at filtering out the most unpleasant aspects of Twitter, but I nevertheless felt a bit overwhelmed and saturated just when App.net (also called ADN, for ‘App Dot Net’) came around in late 2012. I knew people who saw ADN’s great potential and jumped ship, leaving Twitter behind altogether. I took a more moderate approach, and for as long as ADN lasted (until March 2017), I gave both Twitter and ADN the same priority. But ADN felt better, and in my experience stayed better until the very end. The social environment didn’t really deteriorate over time.

When ADN shut down, many of its hardcore users were naturally upset, and this diaspora gave birth (or renewed impulse) to other smaller social networks/microblogging sites which, as far as I know, have successfully maintained ADN’s positive social atmosphere and environment. Wanting to stay in touch with as many ‘ADN expats’ as possible, I opened accounts on all of them — pnut, 10Centuries, then Mastodon and Micro.blog — but it soon became apparent that keeping up with Twitter and all these other networks was not feasible. Today, Twitter is still my main social place online. I check on pnut fairly regularly, and occasionally post on Mastodon.

But why has Twitter remained my №1, when it’s possibly the worst among the social networks I mentioned above? The most succinct and perhaps catchy answer I can think of is, Because while Twitter has changed over the years, I have remained the same. Meaning that I have essentially been using Twitter in more or less the same way as I was using it back in 2008. 

To continue with the urban metaphor, as far as I’m concerned, the small town Twitter was at the beginning has become my reference neighbourhood within the chaotic and often toxic mega-city Twitter is today. 

In a more recent link-post, Some thoughts on social media ➝, Mike links to Chris Hannah’s post with the same title. Chris writes:

We can all see the distinction between what happens in real life and what appears on social media.

I think that is where Micro.blog has felt different to platforms like Twitter for me. In a sense, it feels slower, but at the same time, it feels like you are connecting with real people. Whereas when I use Twitter, most of the time it feels like I’m interacting with an online account rather than the person behind it.

I’ve definitely fallen into the trap before, where I’ve used Twitter as a place to share perfect photos, links to my blog posts, and anything else that can bring external validation. But I think I’m going to try and just use it like a normal person for a while, and see how it goes. 

And Mike comments:

This matches my experiences perfectly and is part of the reason I mostly left Twitter. Everyone’s vying for attention and thinking too much about metrics rather than having genuine interactions with real people. That’s why everyone has the same opinion — if you don’t agree, you’re not part of the club, and therefore will lose followers. […]

Although I fall into the trap of sharing almost exclusively the best photos on Instagram and Pixelfed, I try to be a bit more real on Mastodon. That’s the place where I can just share my thoughts — whether it’s complaining about software updates, posting links to music I’m listening to, or anything in between. 

Over the years, I’ve heard and read similar arguments from people who were ‘fed up with Twitter’ and wanted to either take a break from it or leave it for good. Note that I’m putting ‘fed up with Twitter’ in quotes not because I’m belittling the sentiment — I’m simply reporting the words they’ve used over and over again. Of all the people I know who wanted to leave Twitter for good, only two have truly acted on their words and intentions. Twitter’s gravitational pull is strong, especially for those who joined many years ago and have formed a subnetwork of meaningful bonds with like-minded people and friends.

It’s what you make of it

I ultimately think that social media, social networks, and Twitter in particular, are really what you make of them. And what I want to make of my Twitter experience is for it to be something that is constantly positive, where I can share my views and have exchanges with followers and mutual acquaintances that remain non-escalating even when we disagree about something. I want my Twitter experience to be a place where I can share the occasional rant or bad joke, and know that my followers are listening to my rant or eyerolling at my joke. And I make sure to reciprocate, listen to them when they rant, help them if they’re stuck and voice their issue, and so on.

This, of course, takes some work on my part. My Twitter experience isn’t something I’m exclusively, passively exposed to. It’s something I actively contribute to. This is something I fortunately understood at the beginning, after a few false steps where I just ‘didn’t get’ Twitter and thought about leaving myself.

This attitude of mine has been rather transparent from my early days on Twitter in 2008 onward. And I have without doubt reaped what I have sown, because I evidently attracted a lot of like-minded people and kindred spirits. And that’s why I don’t share Chris Hannah’s feelings when he writes that …when I use Twitter, most of the time it feels like I’m interacting with an online account rather than the person behind it.

The unspoken contract I’ve developed with anyone who interacts with me on Twitter is that what you see of me on Twitter is as real as if you met me in person. I’m honest, truthful, respectful of other people, and I ask for the same treatment. And a lot of people I’ve interacted with over the years seem to get this immediately, and our exchanges and social relationship have stayed healthy over time. And when a misunderstanding has arisen, I’ve always tried to clarify things without letting a relationship go south or sour.

It takes work if you care about your experience

After thirteen or so years using social media and Twitter, I’ll reiterate: I feel you need to be willing to do some work if you want Twitter (or your social network of choice) to be a pleasant, beneficial experience. You can’t expect the network to enjoy and entertain you without giving something back. I’ve often heard people complain about their timeline being toxic, but apart from sponsored tweets, Twitter doesn’t really push anything extraneous on you that you don’t want. If your timeline is toxic, it’s because you follow people who either post toxic content or are serial retweeters who routinely disseminate unbelievable amounts of crap. Or maybe your timeline is toxic because toxic people start following you for some reason and tweet abusive things at you all the time. Or maybe your whole experience is toxic because you spend literal hours doomscrolling and pay attention to every single stupid tweet you see.

Twitter can deploy some tools to mitigate toxicity and, for example, reduce exposure to misinformation and fake news, but filtering toxicity is hard because the whole matter can be incredibly subjective and fine-grained. You are the best filter. Stop following people who flood your timeline with crap. Block people who tend to be abusive and gratuitous towards you. But also try to develop a way to approach and use Twitter that can prevent you from ending up having a miserable experience. 

I’m sharing this advice and observations thinking in normal terms for the average Twitter user. I am sadly aware of many cases of abuse and bullying and doxxing where the targeted person is simply too overwhelmed to do anything except maybe leave the platform, which is the goal of the harassers. These are extreme cases and no amount of personal work or personal filtering is enough to stop the hæmorrhage.

But back to more normal situations: I keep hearing people complain about their timeline as if it were some kind of demonic TV set that cannot be turned off and forces them to watch its programmes. Once again, my personal experience is that on Twitter, maybe more than anywhere else, you reap what you sow. Note that I’m not advising you to keep up appearances or behave in ways that may make you likeable, or to always be politically correct to avoid debates or conflicts.

I’m advising you to be yourself, to be genuine, but also to behave wisely. Be personal if you want, but don’t put yourself in situations that make you vulnerable. You can definitely participate, even generate a heated debate, if you trust your followers and interlocutors to engage in something constructive. Don’t pick fights with people you barely know just because they said something you don’t like. There is often the urge to ‘right the wrong’ on Twitter, but even when you’re objectively right (because facts back you up) and the other person is clearly wrong or believes in horrible things or spreads misguided notions, act wisely. Think before typing. Pick that fight, if you like, but prepare for any consequence and ask yourself if the fleeting pleasure of calling out a moron for what they are is worth the potential subsequent grief.

If you want to virtue-signal at all costs, I prefer the subtler, more intelligent approach of “show, don’t tell”. If your followers are well-adjusted, thoughtful, perceptive people, they’ll know that black lives matter to you even if you never use #BLM in your tweets. They’ll know if you are pro-LGBT and pro-Trans rights even if you don’t put rainbow flags in every tweet. What you post, what you retweet, what you reply and react to, all these things in the end define you socially online. 

You obviously can’t fully control your Twitter experience, and you may end up disappointed or dissatisfied with it no matter how hard you try to make it better, and so you seek out alternatives that may be more suitable for you. This is good and understandable, and it’s the position Mike, Scott, and Chris seem to have found themselves in. In fact, I’m not criticising them (I used their quotes here as a starting point for my reflections, not to teach them a lesson). I am more critical of those who complain about how bad Twitter is, how dreadful their experience is, while just standing with their arms crossed and a sense of entitlement, as if to say, Someone needs to fix this for me; Twitter has to do something, anything. As if they had no part in how things shape up socially online. In these cases, leaving the platform is just an empty, theatrical rage-quit. You’re going to have the same problem in whatever social network you dive into next.

Magnitude is relative

And speaking of alternatives, it’s always fascinating to me how the ‘best’ experience often seems tied to the social network’s actual or perceived scale. Twitter is huge, millions and millions of users, therefore its scale must be one of the causes of its degradation. It can be, of course, but I also feel that the true magnitude (and impact) of Twitter is as big as your actual network of contacts/followers/people you follow within Twitter. After 13 years on Twitter, I still follow a reasonable, manageable number of people: I don’t feel overwhelmed and I don’t feel as if things are getting out of hand. My Twitter still feels like the small town of the early days. That’s also because my focus and priority is still the personal interaction, not the “I’m a channel broadcasting my stuff and I seek constant growth” attitude other people have on Twitter and social media in general. It really boils down to what you want from social. If all your needs are egotistical in nature (you want to attract attention, ‘grow your audience’, be an ‘influencer’, etc.), then you’ll be loud, superficial, and the resulting experience will be chaotic. Maybe in a way that pleases you, maybe in a way that pisses you off, but in either case you asked for it.

I prioritise people. Dialogue. Exchanges. Sharing interesting stuff, facts, links, observations, photos, music suggestions, and so forth. I’m naturally curious, I celebrate differences, I also do my best to listen to what people tell me. I don’t care about metrics, I don’t crave attention, or want to ‘grow my audience’. I’m not a cult leader. It’s the same as with my books or writings: while I would certainly be flattered if my fiction sold well, for the time being I’m more interested in a meaningful diffusion, in knowing that maybe this month I only sold 10 copies of one of my books, but then through feedback I learn that those 10 readers, or 5 readers, appreciated my work. On social media I very much prefer having 1,100 followers to 100,000 fans. I hope I’m making sense here.

Instagram, Glass, barriers to entry

By the way, during the years I was active on Instagram (the pre-Facebook era), I was doing exactly the same there. But Facebook did poison the well, and weaponised and commercialised something that was fun, laid back, and casual. It has transformed a quiet place into something that flashes and autoplays and screams and shoves extraneous content down my throat every time I open the app. I still use Instagram to like and comment on other people’s photos and posts, but the experience of finding my contacts and exchanging comments with them feels like trying to find a friend of yours at a huge rave party.

Instagram is pretty much unsalvageable unless someone else acquires it and does a gigantic, radical reboot. In the meantime there’s Glass, a photo community which is doing a lot of core things right, in my book. So far, I’m enjoying the relaxed atmosphere there, and I’m happy there are no ‘likes’ or metrics. Comments are the only way to tell someone you like their photos. And they may be scarce, but (at least in my experience) they feel genuine and articulate. This ‘going against the grain’ in Glass’s philosophy is admirable, and it’s evident that it comes from people who care to create a product that is successful in a quality-over-quantity way.

But one aspect worth mentioning is the barrier to entry, which in my opinion is fundamental in setting the tone from the start when you launch a social product. I’m generalising and there are always exceptions, but typically a free product, a free social space, will inevitably attract terrible people, chaos and toxicity. Spraying graffiti over a building is fun. When the building is yours, even in a very small part, you’re more hesitant to deface it. At launch, ADN wasn’t entirely free to access. If I remember correctly, it was invitation-based, and the person you invited got a free trial period, but the backbone was made of paid accounts. I remember I kept paying for my account monthly ($5) instead of yearly, even if a yearly subscription was less expensive, because I wanted to support the platform as long as possible. Barriers to entry are a great first filter: they keep the cheapskates away, they keep advertising away, and generally ensure that all participants (or at least the majority) are invested enough in the place to make it pleasant for themselves and everyone else. People who argue, for example, that Glass will never be as successful as Instagram because it lacks this and that, are missing the point. There are many ways to measure success. Glass and Instagram are like apples and oranges.

There is no conclusion

There is no conclusion or moral of the story. These are notes, not a narrative. But since I have to end the article one way or another, I’ll share a note I jotted down in Notational Velocity a few years back when I wanted to talk about social networks: Your social presence is your own radio show, but make sure you take your listeners’ calls while on the air.

Share critically.

Beyond camera technology upgrades

Tech Life

The California Streaming Apple event that took place on 14 September was — unlike the famous California Dreamin’ song — utterly unmemorable. The only two things that piqued my interest were the new 6th-generation iPad mini and what has been upgraded in the iPhone line. There is a third item, actually, which is what happened to the strongly rumoured Apple Watch Series 7 redesign, but maybe that’s a story for another piece.

The event felt unexciting. As I tweeted afterwards, these pre-packaged events are starting to feel repetitive and uninspired. The structure remains unchanged, somewhat predictable, and most presenters seem more concerned with delivering their script than trying to really make you feel their enthusiasm for what they’re showing you.

Even when it comes to one of the most crucial moments — talking about the innovations in the iPhone 13’s camera technology — the presentation was adequately put together, but failed to captivate me. It failed to make me go like Man, I can’t wait to check out these new iPhones once they’re available at the Apple Store! On YouTube, people like Dave Lee, Marques Brownlee, and Peter McKinnon, all did a much better job at communicating why these camera improvements and new features are kind of a big deal.

For me, however, this is going to be another year without upgrading my iPhone. It’s not that I don’t deem the iPhone 13 worthy of an upgrade, far from it, but I’m sticking to my anti-notch design stance. When the iPhone X came out, I purchased the traditional-looking iPhone 8 and said that my next iPhone upgrade would happen when Apple manages to remove that ugly black thing from the top of the display. According to several rumours, this will apparently happen next year with the iPhone 14, so I’m hopeful.

Anyway, in the meantime I’ve been reading a fair amount of iPhone 13 reviews and watching video reviews. The consensus is that it’s an incremental upgrade compared with the iPhone 12, and that the two major improvements concern camera technology and battery life. Both of which are great things… provided they are a priority for how you use your phone.

Agreed, battery life matters pretty much to everyone, but cameras are a different story. You’re probably thinking, Come on, Rick, you know that everyone cares about having great cameras in their phones. For a lot of people, smartphones are the only cameras they own.

But hear me out. Let’s put aside people like me, camera enthusiasts who prefer shooting with traditional cameras and don’t really care about their phone’s camera capabilities. There are a lot of regular folks who, granted, have no other cameras apart from their smartphones, and use them as the handy point-and-shoot camera that’s always with them. They aren’t professionals, they probably know very little about photography, and they just want to have a tool ready to capture moments when needed.

For people like these, the camera technology in older iPhones like the first-generation SE or the iPhone X is good enough to meet their needs. If they upgrade, it’s often because their iPhone has reached other limits, like storage or battery life. Yes, shockingly there are people who buy 32GB iPhones, fill them with photos, videos, and documents, know nothing about backups, and when their iPhone is full, well, time to get another one. I have rarely, if ever, heard a non-tech person talk about wanting to get a new iPhone because it has a bigger camera sensor, because now you can take real macro shots, or shoot more cinematic videos, or because now Night Mode is even better, and other assorted photo-video nerdery. The attitude is more like, Now my current iPhone is getting old, it’s time to buy a new one; I heard it takes better photos and the battery lasts longer, so hey, that’s a bonus.

The point I’m trying to make here is not to belittle the camera improvements Apple keeps delivering year after year. I’m perfectly aware of their magnitude and usefulness. Instead, my question is: Is camera technology becoming the only defining characteristic of smartphones in general, and the iPhone in particular?

Because I’m starting to feel that, apart from camera technology, there’s very little going on with smartphones in the innovation department. I’m not counting foldable display technology here not because I don’t think it’s innovative per se, but because for now it doesn’t really advance the smartphone category when it comes to new applications (in the sense of ‘uses’, not ‘apps’).

If you perform the thought experiment of removing camera technology upgrades from current phones, where are the practical advancements? That’s why those people who are not into photography are perfectly fine using older phones and don’t really feel pressured to upgrade, not even when their phone stops receiving system software updates. If you remove the camera aspect from an iPhone, there’s little a 2016 iPhone SE can’t do compared with a current model.

It seems, however, that enough people are interested in having good cameras in their smartphones, otherwise Apple wouldn’t be so hell-bent on pushing camera technology in the iPhone, year after year. It matters so much to Apple that it has become more important than the overall industrial design of the device itself. Because let’s be honest, the design of the latest three or four generations of iPhones may be ‘iconic’, but that camera array on the back of the device is an eyesore, the very image of an extra part bolted onto the machine, design be damned. One of the rare instances where Apple prioritises function over æsthetics.

And, for now, Apple’s approach is rather typical of Cook’s administration: find what appears to be the gold vein, and extract all the gold you can until there’s nothing left but debris. It certainly makes sense from a mere business and financial standpoint, but to me it’s disappointing: is this the grand plan for the iPhone? Make it the best camera you have with you at all times, and that’s pretty much it?

I can’t help but think that Jobs would have recognised this kind of stagnation and worked towards creating something to stir things up instead of iterating, iterating, iterating, and offering ‘faster horses’ after ‘faster horses’, if you know what I mean. He probably would have posed the problem of what we can do next with these phones, and the answer Much better photos and videos than last year would probably have left him wanting more. Okay, maybe I’m projecting a little here: it certainly leaves me wanting more.

But wait, wasn’t I the one against change for change’s sake? I was, and still am. Here, however, I’m talking about progress, reflecting on it somewhat theoretically, if you like. This is a broad discussion, but to avoid wandering off topic too much, I simply think that wanting to make smartphones become excellent pocketable cameras, while a respectable goal, at the same time feels a bit like a waste of the potential of what is already a supercomputer in your pocket.

Yes, yes, I know, computational photography! Apple is leading here, they’re ahead of the competition, and so on and so forth. I’m simplifying here, but essentially computational photography is something created to take advantage of processing power and software to circumvent the hardware limitations of having small camera sensors, small lenses, and little physical space to operate within the chassis of a smartphone. And from what I’ve seen so far, the goal of having such advanced computational photography is to make your iPhone take photos as close to reality as possible, especially when it comes to low-light photography.

I’m not disputing its usefulness or Apple’s innovative efforts on this front, at all. The philosophical problem I have with it is that most photography is not about reproducing reality with 100% fidelity. Every time I look at the photo samples Apple shows while touting the iPhone’s ever-improved camera system, the neutral, high-definition, surgically precise nature of such samples doesn’t appeal to, inspire, or move me at all.

I want to see something happening in this field that pushes regular people beyond just using their smartphones to take snaps, chat, play Candy-Crush-Saga-like games, check maps, scroll Instagram feeds, watch YouTube and TikTok videos, and little else. Are smartphones destined to become just great cameras that can also be used to make phone calls, and that’s the end of the line, or is there maybe some new territory to explore beyond camera technology upgrades?

August short №2: Glass

Briefly

A few days ago I was made aware of Glass, a new photo sharing app and community with a design and intent that positively remind me of early-days Instagram.

At the time of writing, the app is still iOS-only, and to sign up you either need to receive an invite from someone who’s already in, or sign up within the app and get on the waiting list. When your turn comes, you receive an invite from Glass itself.

Notable characteristics:

  • No ‘likes’. You like someone’s photo? Write them a comment.
  • No gimmicky photo filters.
  • No statistics or other analytics.
  • No ads or algorithms. Glass is supported by subscriptions: $4.99/month or $49.99/year ($29.99 at launch). There is a 14-day free trial period. Everything is handled via the App Store.
  • Because there are no algorithms, you can enjoy a simple, chronological feed.
  • No data tracking.
  • The ability to download your data anytime you want.

Design-wise, one could say that the app is quite minimalistic, almost bare-bones in places. You have a tab for your feed; you have a sort of ‘discovery’ tab where you can look at other photographers’ profiles and photos, and follow them if you like; you have a tab with your profile; and finally a notification tab. On the bottom right there’s a separate (+) button to upload your photo. That’s it. Notifications, too, are pleasantly restrained: you can get a notification when someone follows you and when someone leaves a comment on your photos. There is no following/followers count. You can see a list of people you follow and people who follow you by tapping on Following and Followers on your profile page. All this absence of numbers, metrics, and quantification is truly refreshing in this day and age.

I love everything about Glass, and I signed up for a yearly subscription right away, even though I’m famously averse to app subscriptions. But Glass looks and feels perfectly tailored to my photo sharing needs and expectations. For me it’s even better than pre-Facebook Instagram in the sense that it pushes me to select and share what I think are good photos (just as happens with Flickr), rather than making me obsess over getting ‘the Instagram shot’ at all costs every day or multiple times a day. It doesn’t cheapen photography like Instagram has done for years.

That’s why I hope Glass’s founders/developers will resist feature creep. Resist user objections like: I don’t think Glass is offering that much for the subscription price they’re asking. There are a lot of people who will gladly pay for a cleaner, simpler, more focused experience.

And that’s pretty much it for now, I think. If you want to find me there, my handle is @morrick, just like on Twitter.

August short №1: Constraints

Briefly

I’m technically on holiday, away from home, and my only way to access the Internet is by using mobile data on my iPhone, and my data plan is somewhat limited. As luck would have it, this place I’m writing from also has poor cellular coverage, so I get two signal bars on a good day. As a consequence, going online feels like a luxury, and I’m constantly aware that I’m consuming data for every little thing I do.

Still, I wanted to keep updating my blog when I can, though it’s unlikely I’ll have time and concentration for long-form articles. Hence, the idea of these August shorts.

The theme of this first short is Constraints and stems from my current situation. When I have to travel, and I know I won’t be able to leave my work behind, the first issue is to decide what to pack, especially if I’ll be travelling by plane. If you’re not an iPad-first or iPad-only user, and you have to rely on a Mac like I do, I believe there’s no better machine than the 11-inch MacBook Air. Unless, of course, what you do for work requires a more powerful computer. 

While having to work on a non-retina, 11-inch display is not a problem for me, such reduced screen real estate can certainly feel like a constraint when it’s the only option to work with for several days. For someone like me, with a well-organised Mac-centric workflow, even working on an 11-inch machine is better than having to use just an iPad. A couple of years ago I made the mistake of bringing only the iPad with an external keyboard, and I ended up feeling positively stranded.

This time I’m travelling by car, and so I indulged in a little bit of ‘tech overpacking’. The 11-inch Air alone wasn’t going to be enough anyway, so I brought the retina 13-inch MacBook Pro as well. The second constraint has been my working location inside the house — the only place where I can comfortably work is the kitchen, and there are practically no available wall sockets. Fortunately, both laptops still have decent batteries, so I alternate between the two, and when I’m using one, the other is upstairs, recharging.

The third constraint is relying on the iPhone’s personal hotspot for the Internet connection, and sharing the connection this way for extended periods of time drains the iPhone’s battery pretty quickly. So I keep it connected to a battery pack. This in turn means that I had to bring the battery pack, the cable to connect the iPhone to it, a different USB cable to charge the battery pack, and another charger.

The fourth constraint is what I mentioned at the beginning: my cellular data plan only allows me a few gigabytes at high speeds per month, so I have to keep a careful eye on what I’m doing once online. Work has precedence, naturally, but suddenly every other leisure activity feels wasteful. YouTube videos that are longer than 5–7 minutes become ‘too long; didn’t watch’ (and you start noticing just how many creators seemingly can’t produce videos shorter than 10–15 minutes). Sharing photos becomes an exercise in thoughtful selection. And so on. 

As I mentioned on Twitter a few days ago, in a way this whole routine (ending up connecting to the Internet a couple of times a day, being extra aware of the amount of data I consume in every session, etc.) feels like 1998 again for me, when I was on a dial-up connection, and you went online rather than being online all the time. Back then the constraints were connection speed and the cost of the service, but the ‘online’ dimension still felt like something separate from ‘offline’ life — a place to go, explore, and return from. Not a 24/7 overlay that stays with you wherever you go.

All these constraints have an interesting upside, though: focus. Since I have to make the most of when and for how long I can connect, I can’t afford to be unfocused and unproductive. It becomes a sort of mindful self-discipline that works really well, making me more efficient and essentially productive on demand. It’s something I wish I could replicate once back at my headquarters, but it’s hard to follow this kind of ‘diet’ once you’re re-injected into the Matrix.