It’s increasingly hard to be critical in tech

Tech Life

A couple of weeks ago I published two articles about Apple Vision Pro, the AR/VR headset Apple presented at WWDC23 at the beginning of June. In those articles I expressed and explained my general scepticism towards the product, but mostly towards the vision behind it, which I find — at least currently — lacking and unconvincing.

That’s not the first time I’ve criticised Apple, far from it, and therefore it’s not the first time I’ve had to deal with the subsequent backlash via email and private messages. I can deal with disagreements. I don’t expect every one of my readers (usual or new) to agree with me all the time. When the person who took the time to write me expresses their disagreement in a cogent, articulate manner, I’m very eager to listen. I’m not infallible, and I might have missed some huge things in my analyses. It happens, and I can change my mind and opinions on something. But if you write me to insult me or to say dumb things, you’re wasting your time, you’re showing your true colours, and the impact of what you say is less than zero.

But the negative emails and messages I’ve received after speaking my mind about Vision Pro are worth mentioning. Not because they’re particularly intelligent or articulate (most aren’t, I’m sorry to report), but because they’re emblematic of the way certain tech discourse is degrading nowadays.

I have used Apple products since the late 1980s. Back then, Apple wasn’t a giant, but an underdog, and I’ve experienced some of the worst moments in Apple’s history, when the company genuinely seemed doomed. Being a Mac user back then, when the platform was truly niche in a world dominated by Windows and IBM PC-compatible hardware, was an interesting experience for sure. It created a strong community culture, because every time there was debate, we were always on the defensive. It was often frustrating, because back then the Mac was a demonstrably better platform, but convincing people to adopt it over the path of least resistance (Windows and the PC) was hard.

This, I think, created the basis of a ‘defence culture’ when it comes to Apple. The ‘other side’ called us zealots, drew religious parallels, called us a ‘cult’, and so forth. And sure, there were Mac users who really displayed a nasty, prejudiced, and even combative attitude towards the ‘PC Windows guys’, but for the most part (at least in my experience) Mac User Groups were occasions for like-minded people to meet and help one another, sharing tips and experiences, pointing people to certain software applications that might fit their needs and that they were unaware of. And the banter with Windows users was generally non-toxic (again, in my experience). And while I myself was a so-called ‘Apple evangelist’ for a few years in the 1990s, my approach in trying to make the Mac platform more known and appreciated wasn’t blunt or confrontational. I always tried to demonstrate how certain tasks could be carried out more efficiently with a Mac, and how so many myths about incompatibilities between the Mac and the PC were indeed myths. But if someone was not convinced or simply could not afford to switch their entire business to the Mac (especially in the 1990s, when there was great uncertainty about the future of Apple as a company), I didn’t think less of them; I didn’t look down on them; and I certainly didn’t storm out of their offices insulting them for using Windows and PCs.

But that defence culture I mentioned before — it persisted over the years and grew stronger, layer after layer. And today we can see it at work every time there’s any kind of criticism or scepticism towards Apple or any of their products. A lot of die-hard Apple fans today display a level of close-mindedness and zealotry that sometimes is downright concerning. I’ve had interactions with some fans who literally represent the dictionary definition of fanatic (“a person filled with excessive and single-minded zeal, especially for an extreme religious or political cause”). People who will defend Apple no matter what, even when certain Apple practices can be consumer-hostile; even when certain design decisions (in hardware and software) are demonstrably misguided. People who consider whatever Apple makes to be the best product, the right product. People who essentially consider Apple a sort of infallible entity even when faced with obvious Apple screw-ups like bending iPhones or atrocious laptop keyboard design. They act like those religious fundamentalists who justify the evil in this world by telling you that their God operates in mysterious ways we mere mortals cannot comprehend, and that it’s all part of the plan.

This fanaticism and the toxicity it brings, this impoverishment of intelligent discourse in tech (in general, but especially where Apple is concerned), is extremely tiring and unproductive. Going back to the feedback I received about my articles on Vision Pro, let’s look at a couple of examples.

The first trend in some of the responses is people who are offended because they think that, in criticising Vision Pro, I want to put myself in a holier-than-thou position. One wrote me: It’s like telling me I’m a moron for loving Vision Pro and for thinking AR is the future. In this case, I wrote back: If this is your sole takeaway from my articles, then yes, you’re kind of a moron.

Now, wisecracking remarks apart, it’s fascinating to me how these people take my criticism towards a product and transform it into a criticism of their personal choices and of them as people. It’s as if they’re worried that, by criticising a product they love, you (and others who criticise it) directly hurt the enjoyment they get out of it, or even contribute to its future failure or disappearance. I hope you realise how strongly this kind of reaction resembles children’s behaviour.

I always tend to be specific and explicit in my analyses. If I had wanted to criticise or mock those who unconditionally love Vision Pro and the idea behind it, I would have clearly done so. My doubts about Vision Pro are mine and mine only. The fact that this thing, and the ‘vision’ behind it, has yet to convince me is something entirely subjective. But at least I have tried to analyse and express why I find it lacking and unconvincing. Instead, all the negative reactions to my criticism have been simplistic, dogmatic, aggressive, black-or-white stances.

And we come to the second trend in such responses, exemplified by what another guy wrote me: How can you be so sure Vision Pro’s gonna be a flop? Note, I never wrote or implied that Vision Pro is going to be a flop. But I appreciate the doubting attitude and the search for an honest exchange of views. The problem is his next sentence: Vision Pro will definitely be a success like the iPhone. You see what the problem is here, right? I am not allowed to be ‘so sure’ about something (I’m not, by the way), but this guy, oh he is certain Vision Pro will be a success. It feels indeed like arguing with a member of a cult. There is no further elaboration past the dogmatic stance. You’re interacting with someone who’s covering their ears and going la-la-la while you’re trying to have a discussion.

Tech discourse today is progressively going down the drain, and for many reasons. Here are a few I have noticed, in no particular order of importance:

  1. Many tech pundits aren’t candid, or candid enough, in their observations because they don’t want to lose access to big tech companies. They tread carefully. While I understand this to an extent, it’s not helpful or conducive to a healthy debate. Prominent tech pundits are read and followed by many people, and whether they like it or not, they’re influencers. And if a company — especially Apple — introduces questionable changes in its hardware or software, such issues have to be surfaced and criticised. Instead, it’s not infrequent that I read opinion pieces where the pundit of the day basically makes excuses for the company. When some aspects of a product aren’t particularly strong, the pundit will often observe that the company knows what they’re doing, and that they’ll straighten things out eventually.
  2. Some tech pundits also tend to avoid making certain critiques that sound too stark and contrarian because they don’t want to look like fools when they later find themselves on the wrong side of history. So, instead of openly calling bullshit on certain things, they prefer a more concessive approach. “I’m not much of a fan of this new feature, change, etc., but it’s no big deal and I can adapt”, “We have to remember that this is just beta software / a first-generation product, and surely it’ll get better with time”, and so forth. So, when design atrocities like the notch on the iPhone or on MacBooks become non-issues because the public largely doesn’t care (and even if customers cared, the only option for them would be to not purchase the product — and many people just cave when faced with this all-or-nothing proposition), they can say I told you it wasn’t a big deal. Hey, good job, pundit, here’s the medal you wanted so badly. I’ll get back to this point later.
  3. As Josh Calvetti quite aptly put it in a Mastodon reply, people assume opinions are inherently an attack on their preferences, and thus them. This reflects an even bigger problem — the inability to engage in critical thinking, which starts by taking the time to read and understand what’s in front of you before broadcasting your knee-jerk reaction. I’m not a sociologist, and I don’t know if this problem is connected with the fact that the way people consume content today and the way their attention is constantly fragmented lead them to favour shorter and simpler stuff that is easy to digest and therefore easy to react to in a similarly superficial way, but I’ve been noticing an increasing avoidance of deeper discussions or deeper conversations. Long-form pieces are a bore — hence the infamous TL;DR (Too Long; Didn’t Read) acronym — so people seem to always want the CliffsNotes version. There can’t be meaningful debate when one party doesn’t even want to actually listen to the other. Put simply, it’s tribalism.

Back to point 2 above, and back to Vision Pro specifically, another type of feedback I received about my criticism of the headset is from people who sort of want to defuse the whole discussion by saying essentially that any criticism towards Vision Pro is moot. Why? They cite past Apple products that were initially criticised for this or that reason, and say that such products became huge successes anyway, so the pattern is bound to repeat once again for Vision Pro. Remember the reactions and the criticism when the first Mac was introduced? Remember what journalists and the competition said about the first iPod? Remember those fools who criticised the iPhone for not having a physical keyboard? — they say — Haha, where are those people now?

This is a shallow and childish stance. It’s like starting to watch a superhero movie, then quickly skipping to the end and declaring See? The good guys won anyway, eventually. Yeah, they did. But what about the characters’ development? What about the choices they made? What about their flaws? A hero can win in the end, but their character’s flaws remain. A product can be a huge success eventually, or even relatively quickly, but that doesn’t mean it’s flawless. 

Again, I’ve owned Apple products since the late 1980s, and I had used Apple products even before that. I read negative articles about the first Mac, the first portable Mac, the first RISC Mac, the iMac G3 (which was the first Mac after Steve Jobs returned to Apple in 1997), the first iPod, the first iPhone, the first iPad, the first Apple Watch… Some criticism was indeed superficial, uninformed, misguided and even downright trollish. But some critics also made valid points. The fact that those Apple products became successful later doesn’t make such points less valid. 

Criticism isn’t a zero-sum game. It’s not a matter of winning and losing. A successful product may be successful despite having some design flaws. Its success may make some of those flaws less relevant, but it doesn’t make them disappear. Pointing out those flaws doesn’t make someone ‘wrong’, and it doesn’t mean someone ‘doesn’t get technology’.

People also often react to criticism as if the critic were just posturing and taking a contrarian stance simply for the sake of sounding different from the mainstream chorus of opinions. And while it’s true that there are quite a few contrarians out there who share their hot takes betting on the chance that a product might actually fail, to then gloat and bask in their I‑told-you-so attitude, there are also people — like yours truly — who prefer to share their doubts and criticism towards what they have before their eyes right now, and aren’t even concerned whether the product will be a success or not.

Example 1: When the iPhone 6 and 6 Plus were announced, I criticised them for being too big. I thought their size would make them more difficult to handle, and the interface more awkward for one-hand use. Those iPhones were a huge success commercially, and initiated the unstoppable trend of big iPhones that continues to this day. And big iPhones are still a success, but that doesn’t invalidate my initial criticism directed at the iPhone 6 and especially at the 6 Plus. The iPhone 14 Pro and Pro Max are still difficult to handle, and their interface remains awkward for one-hand use. You can barely take a photo using just one hand with these beasts. 

Example 2: The notch, both on iPhones and especially MacBooks, is a terrible design element and a terrible design decision (as I pointed out here and here). No one denies the great success both notched phones and laptops have had, but that doesn’t automatically make their notch a good design element or decision. The Dynamic Island is an ingenious workaround for sure, but I’d vastly prefer to see and interact with a display devoid of interfering elements.

And another thing: criticism — as far as I’m concerned, and especially when writing about Apple stuff — is never intended to be an attack against what you like, or your preferences, or you as a person. Usually the subject of my criticism is specified right there in the article I’m writing, without subterfuge or intellectual dishonesty. When I wrote those aforementioned pieces criticising the notch in MacBooks, I remember getting some feedback like this: Your piece sort of makes me feel judged by deciding to purchase a MacBook with a notch, almost as if I were told that I have bad taste when it comes to design. I can understand that someone might feel like this, but in cases like this, if you stop and think about it, it’s clear that the sole target of my criticism is Apple. It’s their design decision. It’s they who force their design choices on customers in a take-it-or-leave-it fashion. 

In a recent conversation with a friend, he asked me tongue-in-cheek, Aren’t you tired of being a tech critic? And I jokingly replied that it’s a dirty job, but someone has to do it. On a more serious note, it’s not that I always go looking for something to criticise; I still enjoy technology and tech gadgets. I’m very happy with my new M2 Pro Mac mini, and just the other day I finally upgraded my Sony WH-1000XM3 noise-cancelling headphones by getting the WH-1000XM5, and I’m really satisfied with them: they’re a noticeable improvement over the XM3 with regard to noise-cancelling technology and sound quality.

However, what I’m noticing more and more frequently nowadays is just how uncritically accepting so many people are when it comes to technology and tech products/services. I personally feel it’s a dangerous attitude that leads to technology and big tech companies controlling our lives, when the opposite should be true (that’s why I’m generally in favour of legislation regulating what tech companies are allowed to do). And before we get to yet another misunderstanding: no, I’m not judging you and your love for all kinds of tech gadgets. But if your position is to tell me I should just ‘enjoy life’ and approach these things in the same uncritical way as you do, then I’m afraid we’ll have to agree to disagree.

First impressions of my new Mac setup

Tech Life

On 21 June I finally updated my main Mac workstation. That ‘finally’ is mostly work-related. My Intel 2017 21.5‑inch iMac still running Mac OS X 10.13 High Sierra remains a very capable workhorse, a Mac I still enjoy using, and a Mac that — up to a couple of months ago — still allowed me to do 100% of the things I needed to do. Now that percentage is more like 95%, but that 5% is important. In recent times, in order to carry out certain translation/localisation work, I needed to run Mac apps requiring Mac OS Ventura, and none of my Macs was supported by Ventura (apart from the iMac, which I didn’t want to update, to preserve compatibility with other apps and games).

So here we are.

The setup


The new Mac is a Mac mini with an M2 Pro chip, in the standard configuration Apple provides on their site, i.e. with a 10-core CPU, a 16-core GPU, 16 GB of RAM, and a 512 GB SSD. Unlike other Macs, whose base configuration always feels a bit lacking, this one was actually perfectly adequate for my needs. I briefly considered a built-to-order option with either 32 GB of RAM or 1 TB of storage, but for such modest upgrades Apple wants too much money. With the €230 I saved by not choosing a 1 TB internal SSD, I can easily buy a good 2 TB external NVMe SSD.

Choosing a stock configuration also saved me time. I purchased the Mac mini in the early afternoon, and shortly after it was available for pickup at the local Apple Store.

The display is an LG 28-inch DualUp Monitor with Ergo Stand and USB Type‑C. As you can see, it’s a portrait display with an aspect ratio of 16:18. If you want to know more, The Verge published a good review last year. I’ll add a few remarks later.

The keyboard is a Razer Blackwidow V3 Mini Hyperspeed, with Razer’s yellow switches, which are linear and silent. I’ve had a remarkable experience with the Razer Blackwidow Elite (a full-size, wired model featuring Razer’s green switches, which are clicky and similar in feel to the classic Cherry MX Blue switches), and when my wife needed a more compact, wireless keyboard, I found the Blackwidow V3 Mini Hyperspeed for her. As soon as she let me try it, I knew I wanted one for myself.

The mouse is a Razer Basilisk V3 X Hyperspeed. When I was looking for a mouse for my Legion 7i gaming laptop, I found this at a local department store at a good discounted price. I very much enjoyed its ergonomics and the overall experience, so I got another one for my Mac mini setup.

Assorted remarks

1.

One feature I really like in both Razer products is their multiple connectivity options. Both mouse and keyboard support Bluetooth and a 2.4 GHz wireless connection. Both come with a USB‑A dongle, but you can use just one dongle to connect both devices to the computer wirelessly(*). The keyboard also comes with a USB‑C cable to connect it to the computer when you need to charge the internal battery.

(*) After checking the Razer website, I don’t think this is going to be possible if you’re using a Mac. The software that enables this functionality appears to be Windows-only.

2.

Since I’m not writing a review for a tech website or magazine, I haven’t conducted any meaningful tests to assess the Mac mini’s performance. But in normal use, you can instantly feel it’s a quiet beast. Everything is instant, everything is effortless. The Mac mini remains cool no matter what I throw at it. I was already accustomed to fast boot times ever since I updated all my Macs to solid-state drives, but the Mac mini managed to surprise me all the same. It cold boots in probably about 15 seconds, and restarts are even faster. Restarting is so fast I basically don’t even see the Apple logo. In the time my iMac performs a complete logout, I could probably restart the mini twice. When you upgrade often, these performance leaps are less noticeable, but coming from a quad-core i5 Intel Mac, the leap to a 10-core Apple Silicon M2 Pro is exhilarating. Apple hardware is as impressive as Apple software is disappointing.

3.

What about Mac OS Ventura? I haven’t dug deep so far, but on the surface it’s… tolerable. I am especially glad Stage Manager is off by default. System Settings is a cause of continual frustration, however, and every time I open it, it’s like visiting your favourite supermarket or shopping mall and finding out they have rearranged everything, and not very logically either. In the previous System Preferences app, I may have used the Search function two or three times in fifteen years. In System Settings it’s a constant trip to the Search field. When I initially complained about this unnecessary reshuffling of preference panes that is System Settings, so many people wrote me saying they were glad Apple reorganised it because they “never found anything at a glance” in the old System Preferences app, something I frankly find hard to believe. System Preferences was not perfect, but many panes were grouped together more logically. I know Apple insists on this homogenisation between the UIs of iOS, iPadOS, and Mac OS (which, again, isn’t really necessary, because people today aren’t tech illiterate like they were in the 1980s), but the fundamental problem is that, well, Mac OS is not iOS and a Mac is not a phone or a tablet.

4.

This new Mac mini will mostly be used for work, but I installed Steam anyway just to see how dire the situation was for games, compatibility-wise. I have a total of 84 games in my library. 44 have the ? symbol next to them, meaning they won’t work (they still require a 32-bit compatible machine). Of the remaining 40, 26 are Windows-only titles. I’m left with 14 games that *should* work fine under Apple Silicon. And that’s why I got a gaming laptop a few months ago…
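
Incidentally, the arithmetic above checks out; here is a trivial sketch of the breakdown (the counts are simply the ones from my library):

```python
# Sanity check of the Steam library breakdown above.
total_games = 84
still_32_bit = 44      # flagged with '?' in Steam: they need a 32-bit machine
windows_only = 26      # of the remaining 64-bit titles

remaining = total_games - still_32_bit       # 64-bit titles left
mac_playable = remaining - windows_only      # what *should* run on Apple Silicon

print(remaining, mac_playable)  # 40 14
```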

5.

Back to the display. The reason I chose it over more predictable candidates of the 4K/5K widescreen variety is that I wanted something more in line with my work, and since I work a lot with text and documents, a portrait display was the obvious choice. With the LG DualUp, it’s like having two 21.5‑inch displays stacked on top of each other. Which means that when I visit a website or open a PDF, I can now see twice the content I see on my iMac.

There are other features I like about the LG DualUp. First, it comes with a generous number of ports. Second, it has a built-in KVM switch, meaning you can connect two computers to the display and control them both with one mouse and keyboard. Quoting the aforementioned Verge review:

The DualUp has two HDMI 2.0 ports, one DisplayPort v1.4 port, a USB‑C port with video and 90W of passthrough power, a headphone jack (to use in place of its passable but not fantastic built-in speakers), and two USB‑A 3.0 downstream ports for accessories. Additionally, the DualUp has a built-in KVM switch, allowing one keyboard and mouse to control two computers connected to the monitor via USB‑C and DisplayPort (with the included USB upstream cable tethered to the computer connected via DisplayPort). After installing the Dual Controller software and configuring my work MacBook Pro and a Dell laptop to connect via IP address, going between the two inputs in picture-by-picture mode was essentially seamless. Mousing over to the dividing line switches the computer that I was controlling. There’s also a keyboard shortcut that can swap the source that you’re controlling. You can transfer up to 10 files (no greater than 2GB) between sources at one time in this mode as well.

I would have preferred trying out the display in person before purchasing it, but no local shop had it available, so I had to trust a few reviews on the Web and YouTube. One minor concern I had was the resolution. Coming from a smaller but retina 4K display that provides amazing text sharpness and legibility, I wondered how the LG — with its default resolution of 2560×2880 — would fare. It turns out that it’s quite fine anyway. The display is bright and, sure, if I get very close to it, I can see the pixels and what’s displayed doesn’t have the same sharpness as my iMac’s retina display. But I managed to adjust the display to just the right spot where reading/writing is very pleasant.

And I even had to scale the resolution down a notch. At its native resolution, UI elements like the menu bar, and icons and text within Finder windows, were just too small to be comfortable. So I switched to 2048×2304 and I also went to System Settings > Accessibility > Display and selected Menu bar size: Large, so that the end result size-wise was more or less similar to what I was seeing on my iMac.
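
For the curious, both dimensions shrink by the same factor in that scaled mode, which is why nothing looks stretched; a quick check using the resolutions mentioned above:

```python
# Both dimensions of the scaled resolution shrink by the same factor,
# so the panel's 16:18 aspect ratio is preserved.
native = (2560, 2880)   # the DualUp's default resolution
scaled = (2048, 2304)   # the scaled resolution I settled on

factors = (scaled[0] / native[0], scaled[1] / native[1])
print(factors)  # (0.8, 0.8)
```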

Yet another feature of this display worth mentioning is its Ergo stand. It’s easy to install, it’s very robust, and it’s impressively flexible. Quoting again the Verge review:

  • It can be pulled forward or pushed back a total of 210mm.
  • It can be swiveled nearly 360 degrees to the left or right.
  • It can be lowered by 35mm to bring it closer to your desk.
  • It allows for 90 degrees of counterclockwise rotation.
  • It can be tilted up or down by 25 degrees.

The monitor arm’s flexibility allows for more adjustments than many aftermarket monitor arms. So, having it included with the DualUp helps to justify its high sticker price.

Speaking of price, I got the display for €599, which I believe is about €100 less than its original price. I think it’s good value for what it offers.

6.

Back to the keyboard. To anticipate possible enquiries, yes, Razer products aren’t particularly Mac-friendly in general. The keyboard layout is for Windows PCs, and so is 99% of Razer software. How’s the compatibility with a Mac? I’d say it’s 97–98% compatible.

  • You can’t install the latest version of Razer’s Synapse software to have fine-grained control over the RGB lighting effects, but there’s an open source application for the Mac, called Razer macOS, which is a good-enough alternative. And the keyboard has some built-in shortcuts to quickly switch through various lighting effects and colours.
  • Despite having some modifier keys in different locations compared to a native Mac keyboard, they are correctly recognised by the OS. So, while on a Mac keyboard you have the sequence Fn — Control — Alt/Option — Command keys to the left of the Space bar, and on this keyboard you have Control — Windows — Alt keys, by pressing them you get exactly their corresponding function (obviously the Windows key acts as Command key). I have no real issues going from these keyboards to Mac keyboards and back. My muscle memory is not as rudimentary as I thought, heh.
  • The only issue I had, layout-wise, was that pressing the ‘<’ key to the right of the left Shift returned a completely different character (‘º’). This was the only mismatch between the keyboard and Mac OS’s Spanish ISO layout. Since I use ‘<’ and ‘>’ very often, and ‘º’ and ‘ª’ almost never, I immediately went on the hunt for an app to remap that key. I remembered Karabiner, but it turned out to be too complicated to achieve what I wanted, and the whole package felt a bit overkill. I found a much simpler, more elegant solution: Ukelele. The app is not super-intuitive (thankfully, it comes with a very useful manual), but after learning the basics I was able to simply create a copy of the Spanish keyboard layout, drag and drop the ‘<’ and ‘>’ symbols on the key that wasn’t correctly recognised, and save the modified keyboard layout in a .bundle file. Double-clicking on the file opened a System utility called Keyboard Installer, which installed the layout in (user)/Library/Keyboard Layouts. I then restarted the Mac, went to System Settings > Keyboard > Input Sources > Edit, pressed [+] on the bottom left to add a new input source, and found the new layout under the Others category at the bottom of the languages list.
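
Incidentally, since Ukelele saves layouts as plain XML (.keylayout files inside the .bundle), the same one-key fix could in principle be scripted. A rough sketch follows; the XML fragment and the key codes in it are made up for illustration and don’t come from a real layout file:

```python
# Hypothetical fragment of a .keylayout file; real files are much larger
# and the key codes differ. We simply swap the output of the mis-mapped key.
import xml.etree.ElementTree as ET

sample = '''<keyMap index="0">
  <key code="10" output="º"/>
  <key code="11" output="a"/>
</keyMap>'''

keymap = ET.fromstring(sample)
for key in keymap.iter("key"):
    if key.get("output") == "º":   # the character the key wrongly produces
        key.set("output", "<")     # the character I actually want

print(ET.tostring(keymap, encoding="unicode"))
```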

As I said, these are really non-issues for me, and are vastly outweighed by the main upside: Razer keyboards are good-quality mechanical keyboards. And they represent a good ready-to-use solution for those who, like me, are into mechanical keyboards but not to a nerdy extreme (meaning you don’t really want to build custom keyboards by sourcing every single component needed for the job). And this particular key mismatch problem seems to be limited to this keyboard (or maybe they changed something in how Mac OS Ventura recognises third-party keyboards, I don’t know). The older Blackwidow Elite connected to my iMac is fully recognised by High Sierra, including the dedicated media keys and the volume wheel.

7.

Overall, after a week, I’m very satisfied with this new setup. It didn’t cost me a fortune (less than a similarly-specced 14-inch MacBook Pro) and I feel I’ve got good bang for the buck, so to speak. This setup is also rather compact and saves space on my otherwise cramped desk. And this M2 Pro Mac mini is probably one of the most balanced Macs Apple has produced in years, when it comes to capabilities and features. It is a very good middle ground between a consumer and a pro computer; it has a useful array of ports; and it’s powerful enough for my needs to last me a good while. Certainly until Apple decides to remove that goddamn notch from all of their laptops.

Apple Vision Pro — Further considerations

Tech Life

This serves as an addendum to my previous piece. It takes into account some feedback I received, includes things I forgot to mention previously, and other odds and ends.

The ‘First-generation’ excuse is starting to seriously get on my nerves

In my previous article I wrote about how Apple’s constant mantra when they introduce something — We can’t wait to see what you’ll do with it! — annoys me because it actually feels like a cop-out on Apple’s part. It signals lack of ideas, and lack of a truly thought-out plan for how to take advantage of the product’s potential. It also shows… how can I put it? Lack of proactivity? Show me a wider range of use cases, but most importantly tell me why this product should matter to me — what seems to be the problem you have identified and how this product was created to address it. 

But even more annoying is the response from many tech enthusiasts, that this is a first-generation product, that you have to imagine it three iterations later, five iterations later… This is an awful excuse that further normalises this idiotic status quo in tech, where everything is in a constant ‘beta state’. When you’re at your next job interview, try telling the interviewer (clearly not impressed by your résumé) that they shouldn’t look at your qualifications today, that this is just the 1.0 version of you, that they should imagine what you’ll become in the company three years from now, five years from now. Good luck with that.

I understand how iterations work in hardware, just as I perfectly understand that “Apple has already the next two iterations of the Vision Pro on their project table internally”. But that’s not really the point. Apple excels at hardware manufacturing, but ever since Jobs passed away, Apple’s record at also delivering a vision, a plan, a clear purpose for their products hasn’t been so great. So, I can accept that a product may not be perfect in its first-generation state. But I’m not equally tolerant when it comes to its fundamental idea and purpose. When there’s an excellent idea behind a product, when you can feel the eureka moment during its first presentation, you tend to be more forgiving if it’s a bit rough around the edges hardware-wise, because that kind of refinement is easier to execute than having to find new ideas and additional purposes down the road. And Vision Pro is astounding technology with a meh fundamental concept and plan behind it. As Jon Prosser aptly observed in his video, It is Apple’s responsibility to tell us why and how this matters. On this front, Vision Pro is as unconvincing as an Apple TV and as unconvincing as an iPad as the perfect substitute for a traditional computer.

A missed opportunity

Speaking of purposes for the Vision Pro — and this is something I had in my notes for my previous article but eventually forgot to add — I was surprised Apple didn’t mention one obvious use case and a great opportunity to demonstrate Vision Pro’s potential utility: computing for people with disabilities. Vision Pro could have tremendous assistive capabilities for people with physical impairments. Eye-tracking and minimal hand gestures are the perfect interface and interaction model for those with reduced mobility and coordination who usually struggle with traditional devices like computers, tablets, and phones. Adding this aspect to the keynote presentation would have had a stronger impact, and would have made Vision Pro feel more human than the dystopian appendage I see every time I browse Apple’s marketing materials.

It’s a device for pros — is it really, though?

Among the responses I’ve received after publishing my previous piece on Vision Pro, a few people reached out to ‘reassure’ me regarding my doubts on how Vision Pro fits in the daily routine. Their feedback can be summarised as follows: Don’t worry about people using this device for hours on end and getting lost in the Matrix — This is a device for pros, aimed at specific uses for limited time periods. The starting price, for one, should be a dead giveaway.

Yeah, no. I’m not convinced. In many promotional videos and images, you don’t see Vision Pro in use by professionals doing critical work. You see it in use by regular people either doing lightweight work-related stuff, or just for personal entertainment. Everything is made to look and feel very casual. The purported use cases seem to put Vision Pro very much in a consumer space… but the price is premium. This is ‘Pro’ like an iPad Pro, if you know what I mean.

From Apple’s general message and the examples in their marketing, users seem to be encouraged to spend extended periods of time inside Vision Pro. How can this be ‘the future of computing’ if you spend just an hour or two each day in it, right?

Incidentally, that’s another missed opportunity: Apple could have presented Vision Pro as a truly pro device, unconditionally embraced the niche segment of AR/VR headsets, and showcased a series of specific, technical, professional use cases where Vision Pro could be employed and become the better alternative to, say, Microsoft’s HoloLens. Clear examples that demonstrate a clear vision — that you’re working on making something specific way better than it currently is; way better than all the solutions provided by your competitors. Instead we have a generic, vague proposition, where the main takeaway seems to be, Vision Pro is yet another environment where you can do the same stuff you’ve been doing on your computer, tablet, phone; but it’s an even cooler environment this time!

“This other AR/VR headset can do basically the same things and costs a fraction of the Vision Pro” is not the point

Then we have the usual Apple fans, the starry-eyed “Only Apple can do things like this” crowd, who get annoyed at those who say, I have this other AR/VR headset, and it can do essentially the same things Vision Pro does. Yes, maybe a bit worse, but it also costs 15% of the price of Apple’s headset. And they reply something like, That’s not the point! Look at the Mac, look at the iPod, look at the iPhone, look at the iPad, at the Watch… All products that did most of the same stuff other products in their respective categories already did, but Apple’s innovation was in making a better experience.

I can agree to an extent, but in the case of AR and VR, Apple had a unique opportunity to present an innovative fundamental concept rather than a somewhat fresh-looking approach to what has already been tried. The AR/VR space is interesting and peculiar because on the one hand there are decades of literature about it, with so many concepts, ideas, and prototypes to study and understand what worked and what didn’t. On the other hand, if we look at other headsets currently on the market and the actual use cases that have had some success among their users, what we see are comparatively limited scopes and applications. My educated guess is that, as Quinn Nelson pointed out in his video essay, AR/VR devices require intentionality on the user’s part. They really aren’t ‘casual’ devices like tablets, smartphones, smart home appliances, etc. You can’t use them to quickly check on stuff. You can’t use them to compose an urgent email response. And even in the case of a FaceTime call, especially if it’s unplanned and just a spur-of-the-moment thing, you don’t scramble to take out your headset, calibrate it, and put it on just for that call. You grab your phone. Or you’re already in front of your laptop. (This is also why I don’t buy the argument that with Vision Pro you can definitely get rid of all your external displays.)

And all this could be a valid starting point to assess how to implement a more refined core idea. Apple’s message could have been, We have studied the idea of how to move inside a mixed-reality space for years, and we think that what has been tried so far has failed for these and those reasons. We think we can offer a much better, more useful perspective on the matter.

What I saw at the WWDC23 keynote was the above, but only from a mere technological, hardware-design angle. And once again with this Apple, the result is a truly groundbreaking engineering feat, but not a truly groundbreaking concept. Maybe I’m wrong, and maybe everyone will soon want to lose themselves in, and literally be surrounded by, the same operating system windows and apps they’ve been losing themselves in so far on their computers, tablets, and phones, and ‘spatial computing’ will be a thing. I don’t know. For me, that is the least appealing aspect of an AR experience. I want to be immersed in fun activities, not in work. We are all busy and immersed in work today already, even without AR/VR headsets. Do we really want more of that?

A few thoughts on Apple Vision Pro

Tech Life

Apple’s WWDC23 keynote presentation was going so well. I wasn’t loving everything, mind you. The new 15-inch MacBook Air didn’t impress me and I still believe it’s an unnecessary addition that further crowds the MacBook line. And the new Mac Pro feels like a product Apple had to release more than a product Apple wanted to release. But I’m happy there was a refresh of the Mac Studio, and I’m okay with the new features in the upcoming iOS 17, iPadOS 17, and Mac OS Sonoma. Nothing in these platforms excites me anymore, but at least they’re not getting worse (than expected).

Then came the One More Thing. Apple Vision Pro. My reaction was viscerally negative. Before my eyes, rather than windows floating in a mixed-reality space, were flashing bits of dystopian novels and films. As the presentation went on, when the presenter spoke about human connection, I thought isolation; when they spoke about immersion, I thought intrusion. When they showed examples of a Vision Pro wearer interacting with friends and family as if it was the most normal thing to do, the words that came to my mind were “weird” and “creepy”.

In the online debate that followed the keynote, we Vision Pro sceptics, already worried about the personal and societal impact of this device, have been chastised by the technologists and fanboys for being the usual buzzkills and party poopers. And while I passed no judgment whatsoever on those who felt excited and energised by the new headset, some were quick to send me private messages calling me an idiot for not liking the Vision Pro. Those, like me, who were instantly worried about this device bringing more isolation, more self-centredness, and people burying themselves even deeper into their artificial bubbles, were told that we can’t possibly know this is what’s going to happen, that this is just a 1.0 device, that these are early days, that the Vision Pro is clearly not a device you’re going to wear 24 hours a day, and so forth.

Perhaps. But our worries aren’t completely unfounded or unwarranted. When the iPhone was a 1.0 device, it offered the cleanest smartphone interface and experience at the time, and while it was the coolest smartphone, it was essentially used in the same ways as the competition’s smartphones. But in just a matter of a few years its presence and usage transformed completely, and while I won’t deny its usefulness as a tool, when you go out, look around, and see 95% of the people in the street buried in their smartphones, it’s not a pretty sight. If Vision Pro turns out to be even half as successful as the iPhone, it’s hard for me to imagine that things are going to get better from a social standpoint.

Let’s focus on more immediate matters

All of the above stems from my initial, visceral reaction. And even though it can be viewed as wild speculation surrounding a product that won’t even be released before 2024, I think it’s worth discussing nonetheless. 

But as the Vision Pro presentation progressed, and I finally managed to control the impulsive cringing, I started wondering about more technical, practical, and user-experience aspects of the headset.

User interface and interaction

If I had to use just one phrase to sum up my impressions, it would probably be, Sophisticated and limited at the same time. There’s visual elegance and polish, that’s undeniable. All those who have actually tried the headset unanimously praise the eye-tracking technology, saying that it essentially has no latency. Good, because any visual lag in such an interface would break it immediately. Eye-tracking is the first of five ways to interact with objects in visionOS: you highlight an object or UI element by looking at it. Then there’s pinching with your thumb and index finger to select the object. Then pinching and flicking to scroll through content. Then dictation. And then there’s Siri to rely on when you want to perform certain actions (good luck with that, by the way). That’s it.

First concern: Since Apple is trying to position Vision Pro as a productivity device, more than just another VR-oriented headset aimed at pure entertainment, I struggle to see how productive one can really be with such a rudimentary interaction model. It’s simultaneously fun and alarming to watch what Apple considers productive activities in their glossy marketing material: some light web browsing, some quick emailing, lots of videoconferencing, reading a PDF, maybe jotting down a note, little else. On social media, I quipped that this looks more like ‘productivity for CEOs’. You look, you read, you check, you select. You don’t really make or create. It feels like a company executive’s wet dream: sitting in their minimalistic office, using nothing more than their goggles. Effortless supervision.

Second concern: Feedback, or the lack thereof. It’s merely visual, from what I can tell; maybe in part auditory as well. But it’s worse than multi-touch. At least with multi-touch, even if we are not exactly touching the object we’re manipulating, we’re touching something — the glass pane of an iPhone or iPad, or a laptop screen. At least there’s a haptic engine that can give a pretty good tactile illusion. In the abstract world of Vision Pro, you move projections, ethereal objects you can’t even feel you’re touching. There’s even a projected keyboard you’re supposed to type on. Even if you’ve never tried the headset, you can do this quick exercise: imagine a keyboard in front of you, and just type on it. Your fingers move in the air, without touching anything. How does it feel? Could you even type like this for 10 minutes straight? Even if you see the projected keyboard as a touchable object that visually reacts to your air-typing (by highlighting the air-pressed air-key), it can’t be a relaxing experience for your hands. And typing is a large part of so many people’s productivity.

Sure, it seems you can use a Bluetooth keyboard, mouse, or gamepad as input methods, but now things get awkward, as you constantly move between a real object and a projected window/interface. Of all the written pieces and video essays on Vision Pro I’ve checked, Quinn Nelson’s has been the most interesting to me and the one I found myself most in agreement with, because he expresses concerns similar to mine when it comes to user interface and use cases for the headset. On this matter of using traditional input devices such as a keyboard, mouse, or gamepad, Quinn rightly wonders:

How does a mouse/cursor work in 3D space? Does it jump from window pane to window pane? Can you move the cursor outside of your field of view? If you move your head, does it re-snap where your vision is centered? 

I’ll be quoting Quinn more in my article, as he has some interesting insights. 

Third concern: Pure and simple fatigue. ‘Spatial computing’ is a nice-sounding expression. And as cool and immersive as browsing stuff and fiddling with 2D and 3D objects in an AR environment is, I wonder how long it takes before it becomes overwhelming, distracting, sensory-overloading, fatiguing. Having to scan a page or an AR window with intent, because your eyes are now the pointer, is, I imagine, more tiring than doing the same with a mouse or a similar input device in a traditional, non-AR/VR environment.

The misguided idea of simplifying by subtracting

A few days ago I wrote on Mastodon:

The trend with UI in every Apple platform, including especially visionOS, is to simplify the OS environment instead of the process (the human process, i.e. activity, workflow). On the contrary, this fixation on simplifying the interface actually hinders the process, because you constantly hunt for UI affordances that used to be there and now are hard to discover or memorise. 

I admit, maintaining a good balance between how a user interface looks and how it works isn’t easy. ‘Cluttered and complex’ is just as bad as ‘terse and basic’. But it can be done. The proof is in many past versions of Mac OS, and even in the first iOS versions before iOS 7. How you handle intuition is key. In the past I had the opportunity to help an acquaintance conduct some UI and usability tests with regular, non-tech people. I still remember one of the answers to the question “What makes an interface intuitive for you?” — the answer was, When, after looking at it, I instantly have a pretty good idea of what to do with it. Which means:

  • Buttons that look like buttons;
  • Icons that are self-explanatory;
  • Visual clues that help you understand how an element can be manipulated (this window can be resized by dragging here; if I click/tap this button, a drop-down menu will appear; this menu command is dimmer than the others, so it won’t do anything in this context; etc.);
  • Feedback that informs you about the result of your action (an alert sound, a dialog box with a warning, an icon that bounces back to its original position, etc.);
  • Consistency, which is essential because it begets predictability. It’s the basis for the user to understand patterns and behaviours in the OS environment, to then build on them to create his/her ‘process’, his/her workflow.

Another intriguing answer from that test was about tutorials. One of the participants wrote that, in their opinion, a tutorial was a “double-edged sword”: On the one hand, it’s great because it walks you through an unfamiliar application. On the other, when the tutorial gets too long-winded, I start questioning the whole application design and think they could have done a better job when creating it.

This little excursion serves to illustrate a point: Apple’s obsession with providing clean, sleek, good-looking user interfaces has brought a worrying amount of subtraction to its user interface design. By subtraction I don’t necessarily mean the removal of a feature (though that has happened as well), but rather the visual disappearance of elements and affordances that helped make the interface more intuitive and usable. So we have:

  • Buttons that sometimes don’t look like buttons;
  • UI elements that appear only when hovered over;
  • (Similar to the previous point) Information that remains hidden until some kind of interaction happens;
  • Icons and UI elements that aren’t immediately decipherable and understandable;
  • Inconsistent feedback, and general inconsistency in the OS environment: you do the same action within System App 1 and System App 2, and the results are different. Unpredictability brings confusion, users make more mistakes and their flow is constantly interrupted because the environment gets in the way.

Going from Mac OS to iOS/iPadOS to visionOS, the OS environment has become progressively more ‘subtractive’ and abstract. The ways the user can interact with the system have become simpler and simpler, and yet somehow Apple thinks people can fully utilise visionOS and Vision Pro as productively as a Mac. Imagine for a moment trying out Vision Pro for the first time without having paid much attention to the marketing materials and explanatory pages on Apple’s website. Is the OS environment intuitive? Do you, after looking at it, have a pretty good idea of what to do with it? My impression is that it’s going to feel like a first swimming lesson: you’re thrown into the water and you start moving your limbs in panic and gasping for air. Immersion and intuition can go hand in hand, but from what I’ve seen, that doesn’t seem to be the case with Vision Pro. But it’s a new platform, of course you need a tutorial!, I can hear you protest. I saw regular people trying the iPhone when it was first publicly available. I saw regular people trying the iPad when it was first publicly available. I saw regular people trying the Apple Watch when it was first publicly available. They didn’t need a guided tour. Maybe a little guidance for the less evident features, but not for the basics or for finding their way around the system.

What for? Why should I use this?

Back to Quinn Nelson’s video, at a certain point he starts wondering about the Vision Pro’s big picture, much in the same way I’ve been wondering about it myself:

The problem is that, with no new experiences beyond “Can you imagine??”, Apple is leaving the use cases for this headset to developers to figure out.

Look, you might say, “Hold on! Watching 3D video in a virtual movie theatre is cool! Using the device as an external display for your Mac is great! Browsing the Web with the flick of a finger is neat! And meditating through the included Mindfulness app is serene”. If these things sound awesome, you’re right. And congratulations, you’re a nerd like me, and you could have been enjoying using VR for, like, the last five years doing these same things but just a little bit worse.

There wasn’t a single application shown throughout the entirety of the keynote — not one — that hasn’t been demoed and released in one iteration or another on previous AR/VR headsets.

VR isn’t dying because hand tracking isn’t quite good enough. No. The problem with these devices is that they require intentionality and there’s no viable use case for them. It’s not like the iPhone, that you can just pick up for seconds or minutes at a time.

Maybe the SDKs and frameworks that Apple is providing to developers will enable them to create an app store so compelling that their work sells the device for Apple, much like the App Store did for the iPhone. But hardware has not been the problem with VR. It hasn’t been for years. It has been the software.

I expected to see a Black Swan, a suite of apps and games that made me think, “Duh! Why has nobody thought of this before!? This is what AR needs”. But there really wasn’t much of anything other than AR apps that I already have on my iPhone and my Mac, and that I can use without strapping on a headset to my face making myself look like a dick and spending $3,500 in the process! I hope this is the next iPhone, but right now I’m not as sure as I thought I’d be. 

Apologies for the long quote, but I couldn’t have driven the point home any better than this. As Quinn was saying this, it really felt like we had worked together on his script. The only detail on which I disagree with Quinn is that I hope Vision Pro won’t be the next iPhone. A lot of people seem to buy into the idea that AR is the future of computing. I’m still very sceptical about it. In my previous piece, in the section about AR, I wrote:

I am indeed curious to see how Apple is going to introduce their AR goggles and what kind of picture they’re going to paint to pique people’s interest. I’m very sceptical overall. While I don’t entirely exclude the possibility of purchasing an AR or VR set in the future, I know it’s going to be for very delimited, specific applications. VR gaming is making decent progress finally, and that’s something I’m interested in exploring. But what Facebook/Meta and Apple (judging from the rumours, at least) seem interested in is to promote use cases that are more embedded in the day to day. 

However effortlessly Apple went to great lengths to depict it, I still see a great deal of friction and awkwardness in using this headset as part of a day-to-day routine. And I don’t mean the ‘looking like a dork’ aspect; I mean from a mere utility standpoint. To be the future of computing, this ‘spatial computing’ has to be better than traditional computing. And if you remove the ‘shock & awe’ and ‘immersion’ factors, I don’t see any great advantages in using Apple’s headset versus a Mac or an iPad. It doesn’t feel faster, it doesn’t feel lighter, it doesn’t feel more practical or more productive. It looks cool. It looks pretty. It makes you go ‘wow’. It’s shallow, and exactly in tune with the general direction of Apple’s UI and software these days.

Another surprisingly refreshing take on the Vision Pro came from Jon Prosser. In his YouTube video, This is NOT the future — Apple Vision Pro, Jon speaks of his disappointment towards the headset, and makes some thought-provoking points in the process. Here are some relevant quotes (emphasis mine):

First impressions really matter, especially for an entirely new product category. It is Apple’s responsibility to tell us why and how this matters. Tech demos, cool things, shiny new thing aside, that is their actual job. Apple isn’t technology. Apple is marketing. And that’s what separates them from the other guys. When we take the leap into not only an entirely new product category but a foreign product category at that, it’s Apple’s responsibility to make the first impression positive for regular people.

VR and AR is already such a small niche little market. Comparing AR/VR products against Apple’s Vision Pro is nearly pointless because the market is so small that they might as well be first to their user base. It’s not about comparing Vision Pro versus something like the Meta Quest, because if you compare them of course it’s not even close. Apple dumped an obscene amount of resources into this project for so many years and are willing to put a price tag on it that Zuckerberg wouldn’t dare try. Apple needed to go on stage and not just introduce people to a mixed reality product. Apple needed to go on stage and introduce those people to mixed reality, period.

I think for once in a very long time — especially with a product announcement or announcement at all — Apple came across as… confused; wildly disconnected and disassociated from their users. People. The way Apple announced Vision Pro, the way they announce any product, is by showing us how they see us using it. And what did they show us for this? We saw people alone in their rooms watching movies, alone in their rooms working. It’s almost like they were, like, Hey, you know that stuff you do every day? Yeah, you still get to do that but we’re gonna add a step to it. You’re welcome! Oh but it’s big now! It looks so big! If this was any other product at any other price tag from any other company, sure, those are cool gimmicks, I’ll take them. Apple doing them? I’m sure they’re gonna be with an incredible quality. Wow, amazing. But… is that really life-changing? 

I want to make this clear: I do not doubt, even a little bit, that Vision Pro is revolutionary. It’s looking to be objectively the best, highest-fidelity, AR and VR experience available on the entire planet. This is completely over-engineered to hell. It is technologically one of the most impressive things I have ever seen. But are we really at the point where we’re just gonna reward [Apple] for just… making the thing? […] It doesn’t matter how hard you work on a thing. That is not enough if it doesn’t fit into other people’s lives. Apple has always been about the marriage of taking technology and making it more human, letting boundaries fade away, and connecting people to the experience of using those devices, bridging gaps between people by using technology. And with Vision Pro… it feels like Apple made something that is entirely Tech first, Human last.

It’s not the idea that matters. It’s the implementation. The idea will only ever be as good as the implementation. […] If this mixed reality vision is truly Apple’s end goal, and the things they showed us on stage are the things that they want us to focus on — if those things are all that this first-gen product was mainly ever meant to do, then they put this in way too big of a package. 

If this was more of a larger focus on VR and gaming and putting you someplace else, like the Quest products, then yeah, I’m fine with wearing this massive thing on my face. But they demoed a concept that works way better with a much smaller wearable, like glasses maybe. First-gen product, again, yeah I know. But also, again, first impressions matter. They introduced the future of Apple, the company after the iPhone, with this dystopian, foreign, disconnected product. […] They expect you — according to all this — to live in this thing. Countless videos of people just… actually living in it. […] This is a technological masterpiece, but this isn’t our iPhone moment. This isn’t our Apple Watch moment. 

Another interesting aspect Prosser emphasises — a detail I noticed too during the keynote, but didn’t think much of at the time — is that you don’t see any Apple executive wearing the headset. Again, this could be just a coincidence, but also a bit of a Freudian slip — a little subliminal hint revealing that they actually want to distance themselves from this product. Almost like the Apple silicon Mac Pro, Vision Pro feels like a product Apple had to release more than a product Apple wanted to release. Make of this detail what you want, but let me tell you: if Vision Pro had been Steve Jobs’s idea and pet project, you can bet your arse he himself would have demoed it on stage.

Again, apologies for the massive quoting above, but I couldn’t refrain from sharing Quinn Nelson and Jon Prosser’s insights because they’re so much on the same page as my whole stance on this device, it hurts.

I’ll add that, product direction-wise, I see a lot of similarities between Vision Pro and the iPad: something Apple produces without a clear plan, a clear… vision. In both cases, Apple introduces a device propelled by an initial idea, a sort of answer to the question, What if we made a device that did this and this?, but then the whole thing loses momentum because the burden of figuring out what to do with such a device, how to fit it into daily life, is immediately shifted to developers and end users. One of the favourite phrases of Cook’s Apple is, We can’t wait to see what you’ll do with it! It sounds like a benevolent encouragement, like you’re being invited into the project of making this thing great. But what I see behind those trite words is a more banal lack of ideas and inspiration on Apple’s part. And it’s painful to see just how many techies keep cutting Apple so much slack about this. It’s painful to see so many techies stop at how technologically impressive the new headset is, while very few seem interested in discussing whether the idea, the vision behind it, is equally impressive. People in the tech world are so constantly hungry for anything resembling ‘progress’ and ‘the future’ that they’ll eat whatever well-presented plate they’re given.

“AR is the future” — but why?

I see AR and VR as interesting developments for specific activities and forms of entertainment. Places you go for a limited amount of time, for leisure. From a user interface standpoint, I can’t see why a person would want to engage in hours-long working sessions in a mixed-reality environment. The interaction model is rudimentary, and the interface looks pretty, but pretty is not enough if there’s less intuitiveness and more fatigue than using a Mac or an iPad or an iPhone. Everything Apple has shown you can do with Apple Vision Pro, every use case they proposed, is something I can do faster and more efficiently on any other device. I don’t think that replicating the interface of iOS, iPadOS, and Mac OS by projecting it onto a virtual 3D space is the best implementation for an AR/VR device. It makes for a cool demo. It makes you look like you finally made real something we used to see in sci-fi shows and films. But in sustained, day-to-day use, is it actually a viable, practical solution? Or is it more of a gimmick?

I see the potential in AR/VR for specific things you can really enjoy being fully immersed in, like gaming and entertainment, and even for some kinds of creative 3D project making. But why should ‘being inside an operating system’ be the future of computing? What’s appealing about it? Perhaps my perspective is biased by the fact that I’m from a generation that knew life before the Web, but I’ve always considered technological devices as tools you use, and the online sphere as a place you go when you so choose. So I fail to see the appeal of virtually being inside a tool, inside an ecosystem, of being constantly online and plugged into the Matrix. An operating system in AR, like visionOS, still feels like the next unnecessary reinvention of the wheel. You’re still doing the same things you’ve been doing for years on your computer, tablet, and smartphone — not as quickly, not as efficiently. But hey, it looks cool. It makes you feel like you’re really there. It’s so immersive.

And that’s it. What a future awaits us.

Where are we going? — Notes gathered over a 2-month tech detox period

Tech Life

Where have I been?

This is probably one of the longest hiatuses I’ve taken from updating this blog. Over the years the frequency of my articles has indeed been decreasing, but I typically managed to write at least a couple of pieces per month. I’m surely stating the obvious, but for an article to appear here, three main conditions have to be fulfilled:

  1. I have something to talk about, something to say. Ever since I started writing online, this has been a guiding principle for me. I don’t like filler content. I don’t like updating for updating’s sake. If I have to link to some other content and throw a one-line comment, I’ll just use social media.
  2. I feel I have something useful to add to the conversation. Having a subject or an idea for an article isn’t enough for me. I also need to feel that my opinion or perspective on a certain topic is worth sharing. Half-hidden by the huge amount of chaff in the tech world, one can find some brilliant tech writers and commenters out there. I read them before thinking about adding my contribution. I often agree with them, and on many occasions I think they’ve already said what I wanted to say more effectively and succinctly than I possibly could. When that happens, I usually refrain from posting.
  3. I have the time and the will to commit to writing and publishing a piece. I’ll briefly remind you that English isn’t my first language, and while I’m very fluent, and while I ‘think in English’ when I’m writing, the time I spend writing and editing a 2,000-word article is likely to be longer than what it would take a native English-speaking tech writer to accomplish the same task.

During this hiatus, which lasted all of March, all of April, and half of May, none of these conditions was fulfilled. An unexpected surge in my workload, combined with days of illness (nothing too serious, just a prolonged flu-like cold and cough), took up all my time and energy. I also had to take care of some personal business that involved a quick yet exhausting trip abroad, so there was that as well.

But there was also another important factor in the mix — a general sense of ‘tech fatigue’ and lack of enthusiasm towards tech-oriented topics. For the first time in years, I also stopped reading tech stuff, letting my feed reader accumulate dozens of unread posts.

This period of tech detox wasn’t planned or sought after, at all. It just happened — and frankly, I’m glad it did.

Really, nothing more than notes

Sometimes, when choosing a title for an article, I’ll use the term ‘note’ as a synonym for observation, opinion, or remark, implying that there’s something organic and organised tying all these notes and observations together. But in this instance, what follows is nothing more than quick thoughts hastily recorded during spare moments. They’re impressions. Fragments. Feelings I wanted to share, not the observations of a tech expert assembling a careful, well-documented essay. Keep this in mind as you read along.

Lack of real forward movement

My lack of enthusiasm for technology lately seems to be connected to the feeling that general progress — true progress, not what headlines scream at you — has slowed to a crawl. I’m Gen X, so I have lived through the transition between the pre-Web world and what we have today. The 15 years between 1993 and 2008 were wild compared with the 15 years between 2008 and 2023. I know you can point at many awesome things that have appeared in the last fifteen years, but so much of what happened between 1993 and 2008 was, or felt like, a huge breakthrough, while a lot of what came between 2008 and 2023, as great as it is, feels mostly iterative.

I don’t expect leaps and bounds everywhere all the time, of course. I actually believe that tech today needs more periods of lull, so that existing hardware and software can (ideally!) be perfected and improved upon. But what bores me to no end as of late is all this buzz around certain trends that are advertised as ‘progress’ and ‘the future’ — augmented reality and artificial intelligence, to name just two — which I think are way overblown. Little substance, lots of fanfare.

Digital toys

A tweet from back in March — So much tech today feels focused more on the creation of ‘digital toys’ than on innovation that can actually, unequivocally help and advance humankind. And [I feel] that a lot of resources are being wasted on things whose real usefulness is debatable, e.g. self-driving cars.

A lot of the unease I’ve been feeling in recent times boils down to what I perceive as a widening disconnect between the tech sphere and the world at large — the real world that is going to shit and down the drain day after day.

The tech sphere looks more and more like a sandbox for escapism. Don’t get me wrong, some escapism is always good and healthy as a coping mechanism, because otherwise we would be in a constant state of depression. But — and I may be wrong here — the kind of escapism I feel coming from the tech world is the ‘bury your head in the sand’, ‘stay entertained and don’t worry about anything else’ sort that wants people to remain hooked to gadgets and digital toys in ways that at times feel almost sedative.

Frictionless at all costs

Recently I wrote on Mastodon — We are so hell-bent on eliminating friction in everything that anything with any trace of friction is considered ‘difficult’, ‘complex’, ‘unintuitive’. An acquaintance recently told me that they tried to open an account on Mastodon and found the process ‘daunting’. I’m all for removing friction when it comes to repetitive, mindless tasks or unnecessarily straining labour. But some friction that stimulates your brain, your thinking process and acuity should always be welcome.

I’ve often seen the smartphone described as an extension of our brain because it gives us instant access to all kinds of information. Just don’t confuse ‘extension’ with ‘expansion’. Don’t get me wrong, smartphones and their multitude of apps are undeniably useful for retrieving information on the spot: you’re watching a TV series and you recognise one of the actors, but can’t remember their name or which film or series you previously saw them in. You open the IMDb app and quickly look that up. You can also search Wikipedia; you can access several different dictionaries and thesauri for terms you encounter and don’t know; you can use translation apps and services to get a quick-and-dirty translation when you encounter something in a foreign language you need or want to understand; and so on and so forth, you get the idea. Maps and turn-by-turn directions are something I use heavily myself, and they have been a godsend whenever I visit new places.

But all this isn’t really an expansion of our brain. We may indeed retain some of the notions we’ve searched for, but otherwise it’s mostly a flow. We’ll forget that actor’s name again and look it up on IMDb again. Our sense of direction won’t really improve, and we’ll check Google Maps or Apple Maps again for places we’ve already been. It’s an accumulation of trivia, not knowledge. Smartphones and this kind of ever-ready access are like eating out every day: extremely convenient, but you won’t learn to cook.

Self-driving cars: tons of spaghetti thrown at the wall, and nothing sticks

I have this perspective on the idea of self-driving cars, and nothing so far has made me change my mind — they’re emblematic of everything that is misguided about tech today. It’s the mentality of wanting to ‘solve’ a problem that didn’t really need a solution (or didn’t need a high-tech one) by throwing an outlandish amount of technology at it, solving very little in the process, while every step forward introduces a whole new set of problems that need to be addressed. How? Why, by throwing even more technology at them, of course. Self-driving car advocates will tell you that the noble goal is to reduce car accidents and make people safer on the road. That’s nice and all, but I think a more pragmatic (and cost-effective) solution would be to educate drivers better.

Getting a driver’s licence should be a stricter process instead of what amounts to a quick tutorial on the basics of driving and traffic rules. And people should really get rid of nasty driving habits, like checking their smartphones all the time. Speaking of which, I can’t shake the idea that a lot of tech bros just want self-driving cars to entirely eliminate the friction of having to drive themselves, so they can go places while fiddling with their smartphones, tablets, laptops, what have you. Just call an Uber, dude.

As for making people safer on the road, for now, just open a browser and search for “Tesla autopilot”…

AI and drinking the Kool-AI

There is nothing magic about AI, ChatGPT, and all this stuff that’s popping up everywhere like mushrooms. Computers were invented to process data faster. With time, computers have been getting faster and faster, and we have fed them more and more and more and more and more and more and more data. The result is that anything would seem ‘intelligent’ after such treatment. Once again, there may be truly good and useful use cases for AI, but so far I see a lot of people who seem happy to have a tool they can use to think less. Another shortcut that eliminates friction in ways that don’t look healthy to me. I’m not averse to technology or the many conveniences it affords today, but again, I firmly believe we shouldn’t remove that particular kind of friction that stimulates us to use our heads and think for ourselves. Do we really need an AI assistant to search the Web, when we can basically find anything by simply using natural language in a query? Are we becoming this lazy and apathetic? One of the worst dystopian illustrations I’ve seen in recent years is the humans in WALL•E (watch the film if you still haven’t; it’s both really entertaining and edifying).

Augmented Reality: mind-goggling!

Do you see AR goggles or glasses in your future? Not really, as far as I’m concerned. I am indeed curious to see how Apple is going to introduce their AR goggles and what kind of picture they’re going to paint to pique people’s interest, but I’m very sceptical overall. While I don’t entirely exclude the possibility of purchasing an AR or VR set in the future, I know it would be for very specific, limited applications. VR gaming is finally making decent progress, and that’s something I’m interested in exploring. But what Facebook/Meta and Apple (judging from the rumours, at least) seem interested in is promoting use cases that are more embedded in day-to-day life.

Everything I’ve read so far points to ridiculous stuff, however. This idea of people wearing AR goggles to engage in videoconferences set in virtual shared spaces with hyper-realistic avatars of themselves is, again, one example of the needless tech nerdification of something that can already be done without throwing additional technology at it: regular videoconferences where people see one another as they really are while they talk! It can’t get more realistic than that, and no one needs to buy an expensive appendage to accomplish the same task! Seriously, I can’t wait to see what kind of use cases Apple will promote to make their AR goggles a compelling product. I still think the Google Glass fiasco was an excellent example of the line people draw when it comes to wearable technology in everyday settings. Ten years have passed since then, and I don’t think anything has really changed in this regard from a social standpoint.

Coda: How have I been?

Apart from a period of illness, I’ve been fine. Like I said, this hiatus and tech detox interval wasn’t planned at all, and while I hated not having the time to write and publish anything here, I enjoyed being busy elsewhere and ignoring tech news and the Latest Hyped-up Thing for a while. There was work to do, books to continue reading, music to listen to, a novel to continue writing, and a chapter in my life to finally close after many bittersweet and some painful memories. Many thanks to all who reached out to ask me how I was, and apologies if my silence here made you worry. I’m back, and as sceptical as before, if not more.