First impressions of my new Mac setup

Tech Life

On 21 June I finally updated my main Mac workstation. That ‘finally’ is mostly work-related. My Intel 2017 21.5‑inch iMac still running Mac OS X 10.13 High Sierra remains a very capable workhorse, a Mac I still enjoy using, and a Mac that — up to a couple of months ago — still allowed me to do 100% of the things I needed to do. Now that percentage is more like 95%, but that 5% is important. In recent times, in order to carry out certain translation/localisation work, I needed to run Mac apps requiring Mac OS Ventura, and none of my Macs was supported by Ventura (apart from the iMac, which I didn’t want to update, to preserve compatibility with other apps and games).

So here we are.

The setup

 

The new Mac is a Mac mini with an M2 Pro chip, in the standard configuration Apple provides on their site, i.e. with a 10-core CPU, a 16-core GPU, 16 GB of RAM, and a 512 GB SSD. Unlike other Macs, whose base configuration always feels a bit lacking, this was actually perfectly adequate for my needs. I briefly considered a built-to-order option with either 32 GB of RAM or 1 TB of storage, but for such modest upgrades Apple wants too much money. With the €230 I saved by not choosing a 1 TB internal SSD, I can easily buy a good 2 TB external NVMe SSD.

Choosing a stock configuration also saved me time: I purchased the Mac mini in the early afternoon, and shortly afterwards it was available for pickup at the local Apple Store.

The display is an LG 28-inch DualUp Monitor with Ergo Stand and USB Type‑C. As you can see, it’s a portrait display with an aspect ratio of 16:18. If you want to know more, The Verge published a good review last year. I’ll add a few remarks later.

The keyboard is a Razer Blackwidow V3 Mini Hyperspeed, with Razer’s yellow switches, which are linear and silent. I’ve had a remarkable experience with the Razer Blackwidow Elite (a full-size, wired model featuring Razer’s green switches, which are clicky and similar in feel to the classic Cherry MX Blue switches), and when my wife needed a more compact, wireless keyboard, I found the Blackwidow V3 Mini Hyperspeed for her. As soon as she let me try it, I knew I wanted one for myself.

The mouse is a Razer Basilisk V3 X Hyperspeed. When I was looking for a mouse for my Legion 7i gaming laptop, I found this at a local department store at a good discounted price. I very much enjoyed its ergonomics and the overall experience, so I got another one for my Mac mini setup.

Assorted remarks

1.

One feature I really like in both Razer products is that they offer multiple connectivity options. Both mouse and keyboard have Bluetooth and a 2.4 GHz wireless connection. Both come with a USB‑A dongle, but you can use just one dongle to connect both devices to the computer wirelessly(*). The keyboard also comes with a USB‑C cable to connect it to the computer when you need to charge the internal battery.

(*) After checking the Razer website, I don’t think this is going to be possible if you’re using a Mac. The software that enables this functionality appears to be Windows-only.

2.

Since I’m not writing a review for a tech website or magazine, I haven’t conducted any meaningful tests to assess the Mac mini’s performance. But in normal use, you can instantly feel it’s a quiet beast. Everything is instant, everything is effortless. The Mac mini remains cool no matter what I throw at it. I was already accustomed to fast boot times ever since I updated all my Macs to solid-state drives, but the Mac mini managed to surprise me all the same. It cold boots in probably about 15 seconds, and restarts are even faster. Restarting is so fast I basically don’t even see the Apple logo. In the time my iMac performs a complete logout, I could probably restart the mini twice. When you upgrade often, these performance leaps are less noticeable, but coming from a quad-core i5 Intel Mac, the leap to a 10-core Apple Silicon M2 Pro is exhilarating. Apple hardware is as impressive as Apple software is disappointing.

3.

What about Mac OS Ventura? I haven’t dug deep so far, but on the surface it’s… tolerable. I am especially glad Stage Manager is off by default. System Settings is a continual source of frustration, however, and every time I open it, it’s like visiting your favourite supermarket or shopping mall and finding out they have rearranged everything, and not very logically either. In the previous System Preferences app, I may have used the Search function two or three times in fifteen years. In System Settings it’s a constant trip to the Search field. When I initially complained about this unnecessary reshuffling of preference panes that is System Settings, so many people wrote to me saying they were glad Apple reorganised it because they “never found anything at a glance” in the old System Preferences app, something I frankly find hard to believe. System Preferences was not perfect, but many panes were grouped together more logically. I know Apple insists on this homogenisation of the iOS, iPadOS, and Mac OS UIs (which, again, isn’t really necessary because people today aren’t tech illiterate like they were in the 1980s), but the fundamental problem with this is that, well, Mac OS is not iOS and a Mac is not a phone or a tablet.

4.

This new Mac mini will mostly be used for work, but I installed Steam anyway just to see how dire the situation was for games, compatibility-wise. I have a total of 84 games in my library. 44 have the 🚫 symbol next to them, meaning they won’t work (they still require a 32-bit compatible machine). Of the remaining 40, 26 are Windows-only titles. I’m left with 14 games that should work fine under Apple Silicon. And that’s why I got a gaming laptop a few months ago…

5.

Back to the display. The reason I chose it over more predictable candidates of the 4K/5K widescreen variety is that I wanted something more in line with my work, and since I work a lot with text and documents, a portrait display was the obvious choice. With the LG DualUp, it’s like having two 21.5‑inch displays stacked on top of each other. Which means that when I visit a website or open a PDF, I can now see twice the content I see on my iMac.

Other features I like about the LG DualUp. First, it comes with a generous number of ports. Second, it has a built-in KVM switch, meaning you can connect two computers to the display and control them both with one mouse and keyboard. Quoting the aforementioned Verge review:

The DualUp has two HDMI 2.0 ports, one DisplayPort v1.4 port, a USB‑C port with video and 90W of passthrough power, a headphone jack (to use in place of its passable but not fantastic built-in speakers), and two USB‑A 3.0 downstream ports for accessories. Additionally, the DualUp has a built-in KVM switch, allowing one keyboard and mouse to control two computers connected to the monitor via USB‑C and DisplayPort (with the included USB upstream cable tethered to the computer connected via DisplayPort). After installing the Dual Controller software and configuring my work MacBook Pro and a Dell laptop to connect via IP address, going between the two inputs in picture-by-picture mode was essentially seamless. Mousing over to the dividing line switches the computer that I was controlling. There’s also a keyboard shortcut that can swap the source that you’re controlling. You can transfer up to 10 files (no greater than 2GB) between sources at one time in this mode as well.

I would have preferred trying out the display in person before purchasing it, but no local shop had it available, so I had to trust a few reviews on the Web and YouTube. One minor concern I had was the resolution. Coming from a smaller but retina 4K display that provides amazing text sharpness and legibility, I wondered how the LG — with its native resolution of 2560×2880 — would fare. It turns out that it’s quite fine anyway. The display is bright and, sure, if I get very close to it, I can see the pixels and what’s displayed doesn’t have the same sharpness as my iMac’s retina display. But I managed to adjust the display to just the right spot where reading/writing is very pleasant.

And I even had to scale the resolution down a notch. At its native resolution, UI elements like the menu bar, and icons and text within Finder windows, were just too small to be comfortable. So I switched to 2048×2304 and I also went to System Settings > Accessibility > Display and selected Menu bar size: Large, so that the end result size-wise was more or less similar to what I was seeing on my iMac.
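
For the terminally curious, the list of modes the system offers for a display can also be inspected programmatically. Below is a minimal sketch, a tiny Swift command-line tool using public CoreGraphics calls and nothing specific to the DualUp, that prints the modes reported for the main display; it’s a quick way to see which scaled resolutions are on the menu before settling on one.

    import CoreGraphics

    // Minimal sketch: print the display modes (including the duplicate,
    // lower-resolution scaled ones) that Mac OS reports for the main display.
    // Purely illustrative; the actual switch to 2048×2304 was done from
    // System Settings > Displays.
    let options = [kCGDisplayShowDuplicateLowResolutionModes as String: true] as CFDictionary
    if let modes = CGDisplayCopyAllDisplayModes(CGMainDisplayID(), options) as? [CGDisplayMode] {
        for mode in modes {
            // width/height are in points; pixelWidth/pixelHeight are the backing pixels.
            print("\(mode.width)×\(mode.height) (backing \(mode.pixelWidth)×\(mode.pixelHeight))")
        }
    }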

Yet another feature of this display worth mentioning is its Ergo stand. It’s easy to install, it’s very robust, and it’s impressively flexible. Quoting the Verge review again:

  • It can be pulled forward or pushed back a total of 210mm.
  • It can be swiveled nearly 360 degrees to the left or right.
  • It can be lowered by 35mm to bring it closer to your desk.
  • It allows for 90 degrees of counterclockwise rotation.
  • It can be tilted up or down by 25 degrees.

The monitor arm’s flexibility allows for more adjustments than many aftermarket monitor arms. So, having it included with the DualUp helps to justify its high sticker price.

Speaking of price, I got the display for €599, which I believe is about €100 less than its original price. I think it’s good value for what it offers.

6.

Back to the keyboard. To anticipate possible enquiries, yes, Razer products aren’t particularly Mac-friendly in general. The keyboard layout is for Windows PCs, and so is 99% of Razer software. How’s the compatibility with a Mac? I’d say it’s 97–98% compatible.

  • You can’t install the latest version of Razer’s Synapse software to have fine-grained control over the RGB lighting effects, but there’s an open-source application for the Mac called Razer macOS that is a good-enough alternative. And the keyboard has some built-in shortcuts to quickly switch through various lighting effects and colours.
  • Although some modifier keys are in different locations compared to a native Mac keyboard, they are correctly recognised by the OS. So, while a Mac keyboard has the Fn — Control — Alt/Option — Command sequence to the left of the Space bar and this keyboard has Control — Windows — Alt, pressing each key gives you exactly its corresponding function (obviously, the Windows key acts as the Command key). I have no real issues going from these keyboards to Mac keyboards and back. My muscle memory is not as rudimentary as I thought, heh.
  • The only issue I had, layout-wise, was that pressing the ‘<’ key to the right of the left Shift returned a completely different character (‘º’). This was the only mismatch between the keyboard and Mac OS’s Spanish ISO layout. Since I use ‘<’ and ‘>’ very often, and ‘º’ and ‘ª’ almost never, I immediately went on the hunt for an app to remap that key. I remembered Karabiner, but it turned out to be too complicated to achieve what I wanted, and the whole package felt a bit overkill. I found a much simpler, more elegant solution: Ukelele. The app is not super-intuitive (thankfully, it comes with a very useful manual), but after learning the basics I was able to simply create a copy of the Spanish keyboard layout, drag and drop the ‘<’ and ‘>’ symbols on the key that wasn’t correctly recognised, and save the modified keyboard layout in a .bundle file. Double-clicking on the file opened a system utility called Keyboard Installer, which installed the layout in (user)/Library/Keyboard Layouts. I then restarted the Mac, went to System Settings > Keyboard > Input Sources > Edit, and in the pane that appears, after pressing [+] on the bottom left to add a new input source, the new layout was available under the Others category at the bottom of the languages list. (If you prefer a command-line route, see the sketch right after this list.)
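
As an aside, if the mismatch on your keyboard turns out to be the classic ISO swap, where the key next to the left Shift and the key below Esc trade places, Mac OS’s built-in hidutil tool can exchange the two HID usages directly; Apple lists the relevant key codes in Technical Note TN2450. The following is only a minimal sketch of that route, not what I actually did (I went with Ukelele), and it assumes the two usages involved are 0x64 and 0x35. Note also that a hidutil remap is cleared on reboot unless you re-apply it, for instance from a login item.

    import Foundation

    // Minimal sketch: swap HID usage 0x64 (the ISO key next to the left Shift,
    // ‘<’ / ‘>’ on Spanish ISO) with 0x35 (the key below Esc, ‘º’ / ‘ª’),
    // by calling the hidutil tool that ships with macOS. Key codes per
    // Apple Technical Note TN2450. The remap is system-wide and is lost
    // on reboot unless re-applied.
    let mapping = """
    {"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000064,"HIDKeyboardModifierMappingDst":0x700000035},{"HIDKeyboardModifierMappingSrc":0x700000035,"HIDKeyboardModifierMappingDst":0x700000064}]}
    """

    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/bin/hidutil")
    task.arguments = ["property", "--set", mapping]

    do {
        try task.run()
        task.waitUntilExit()
    } catch {
        print("Could not run hidutil: \(error)")
    }

One practical difference worth keeping in mind: the hidutil remap applies to every keyboard attached to the Mac (unless you narrow it down to a specific device), whereas a custom Ukelele layout only takes effect when that input source is selected.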

As I said, these are really non-issues for me, and are vastly outweighed by the main upside: Razer keyboards are good-quality mechanical keyboards. And they represent a good ready-to-use solution for those who, like me, are into mechanical keyboards but not to a nerdy extreme (meaning you don’t really want to build custom keyboards by sourcing every single component needed for the job). And this particular key mismatch problem seems to be limited to this keyboard (or maybe they changed something in how Mac OS Ventura recognises third-party keyboards, I don’t know). The older Blackwidow Elite connected to my iMac is fully recognised by High Sierra, including the dedicated media keys and the volume wheel.

7.

Overall, after a week, I’m very satisfied with this new setup. It didn’t cost me a fortune (less than a similarly-specced 14-inch MacBook Pro) and I feel I’ve got a good bang for the buck, so to speak. This setup is also rather compact and saves space on my otherwise cramped desk. And this M2 Pro Mac mini is probably one of the most balanced Macs Apple has produced in years, when it comes to capabilities and features. It is a very good middle ground between a consumer and pro computer; it has a useful array of ports; and it’s powerful enough for my needs to last me a good while. Certainly until Apple decides to remove that goddamn notch from all of their laptops.

Apple Vision Pro — Further considerations

Tech Life

This serves as an addendum to my previous piece. It takes into account some feedback I received, includes things I forgot to mention previously, and other odds and ends.

The ‘First-generation’ excuse is starting to seriously get on my nerves

In my previous article I wrote about how Apple’s constant mantra when they introduce something — We can’t wait to see what you’ll do with it! — annoys me because it actually feels like a cop-out on Apple’s part. It signals a lack of ideas, and a lack of a truly thought-out plan for how to take advantage of the product’s potential. It also shows… how can I put it? Lack of proactivity? Show me a wider range of use cases, but most importantly tell me why this product should matter to me — what problem you have identified and how this product was created to address it.

But even more annoying is the response from many tech enthusiasts, that this is a first-generation product, that you have to imagine it three iterations later, five iterations later… This is an awful excuse that further normalises this idiotic status quo in tech, where everything is in a constant ‘beta state’. When you’re at your next job interview, try telling the interviewer (clearly not impressed by your résumé) that they shouldn’t look at your qualifications today, that this is just the 1.0 version of you, that they should imagine what you’ll become in the company three years from now, five years from now. Good luck with that.

I understand how iterations work in hardware. I also perfectly understand that “Apple already has the next two iterations of the Vision Pro on their project table internally”. But that’s not really the point. Apple excels at hardware manufacturing, but ever since Jobs passed away, Apple hasn’t been nearly as good at also delivering a vision, a plan, a clear purpose for their products. So, I can accept that a product may not be perfect in its first-generation state. But I’m not equally tolerant when it comes to its fundamental idea and purpose. When there’s an excellent idea behind a product, when you can feel the eureka moment during its first presentation, you tend to be more forgiving if it’s a bit rough around the edges hardware-wise, because that kind of refinement is a bit easier to execute than having to find new ideas and additional purposes down the road. And Vision Pro is astounding technology with a meh fundamental concept and plan behind it. As Jon Prosser aptly observed in his video, It is Apple’s responsibility to tell us why and how this matters. On this front, Vision Pro is as unconvincing as an Apple TV and as unconvincing as an iPad as the perfect substitute for a traditional computer.

A missed opportunity

Speaking of purposes for the Vision Pro — and this is something I had in my notes for my previous article but eventually forgot to add — I was surprised Apple didn’t mention one obvious use case and a great opportunity to demonstrate Vision Pro’s potential utility: computing for people with disabilities. Vision Pro could have tremendous assistive capabilities for people with physical impairments. Eye-tracking and minimal hand gestures are the perfect interface/interaction for those with reduced mobility and coordination who usually struggle with traditional devices like computers, tablets, and phones. Adding this aspect to the keynote presentation would have had a stronger impact and would have made the Vision Pro feel more human than this dystopian appendage I see every time I browse Apple’s marketing materials.

It’s a device for pros — is it really, though?

Among the responses I’ve received after publishing my previous piece on Vision Pro, a few people reached out to ‘reassure’ me regarding my doubts on how Vision Pro fits in the daily routine. Their feedback can be summarised as follows: Don’t worry about people using this device for hours on end and getting lost in the Matrix — This is a device for pros, aimed at specific uses for limited time periods. The starting price, for one, should be a dead giveaway.

Yeah, no. I’m not convinced. In many promotional videos and images, you don’t see Vision Pro in use by professionals doing critical work. You see it in use by regular people either doing lightweight work-related stuff, or just for personal entertainment. Everything is made to look and feel very casual. The purported use cases seem to put Vision Pro very much in a consumer space… but the price is premium. This is ‘Pro’ like an iPad Pro, if you know what I mean.

From Apple’s general message and the examples in their marketing, users seem to be encouraged to spend extended periods of time inside Vision Pro. How can this be ‘the future of computing’ if you spend just an hour or two each day in it, right?

Incidentally, that’s another missed opportunity: Apple could have presented Vision Pro as a truly pro device, unconditionally embracing the niche segment of AR/VR headsets, and showcasing a series of specific, technical, professional use cases where Vision Pro could be employed and become the better alternative to, say, Microsoft’s HoloLens. Clear examples that demonstrate a clear vision — that you’re working on making something specific way better than it currently is; way better than all the solutions provided by your competitors. Instead we have a generic, vague proposition, where the main takeaway seems to be, Vision Pro is yet another environment where you can do the same stuff you’ve been doing on your computer, tablet, phone; but it’s an even cooler environment this time!

“This other AR/VR headset can do basically the same things and costs a fraction of the Vision Pro” is not the point

Then we have the usual Apple fans, the starry-eyed “Only Apple can do things like this” crowd. Who get annoyed at those who say, I have this other AR/VR headset, and it can do essentially the same things Vision Pro does. Yes, maybe a bit worse, but it also costs 15% of the price of Apple’s headset. And they reply something like, That’s not the point! Look at the Mac, look at the iPod, look at the iPhone, look at the iPad, at the Watch… All products that did most of the same stuff other products in their respective categories already did, but Apple’s innovation was in making a better experience.

I can agree to an extent, but in the case of AR and VR, Apple had a unique opportunity to present an innovative fundamental concept rather than a somewhat fresh-looking approach to what has already been tried. The AR/VR space is interesting and peculiar because on the one hand there’s a decades-long literature about it, with so many concepts, ideas, prototypes to study and understand what worked and what didn’t work. On the other hand, if we look at other headsets currently on the market and the actual use cases that have had some success among their users, what we see are comparatively limited scopes and applications. My educated guess is that, as Quinn Nelson pointed out in his video essay, AR/VR devices require intentionality on the user’s part. They really aren’t ‘casual’ devices like tablets, smartphones, smart home appliances, etc. You can’t use them to quickly check on stuff. You can’t use them to compose an urgent email response. And even in the case of a FaceTime call, especially if it’s not planned and it’s just a spur-of-the-moment thing, you don’t scramble to take your headset out, calibrate and wear it just for that call. You grab your phone. Or you’re already in front of your laptop. (This is also why I don’t buy the argument that with Vision Pro you can definitely get rid of all your external displays.) 

And all this could be a valid starting point to assess how to implement a more refined core idea. Apple’s message could have been, We have studied the idea of how to move inside a mixed-reality space for years, and we think that what has been tried so far has failed for these and those reasons. We think we can offer a much better, more useful perspective on the matter.

What I saw at the WWDC23 keynote was the above but only from a mere technological, hardware design angle. And once again with this Apple, the result is a truly groundbreaking engineering feat, but not a truly groundbreaking concept. Maybe I’m wrong, and maybe everyone will soon want to lose themselves and literally be surrounded by the same operating system windows and apps they’ve been losing themselves in so far on their computers, tablets, and phones, and ‘spatial computing’ will be a thing. I don’t know. For me, that is the least appealing aspect of an AR experience. I want to be immersed in fun activities, not in work. We are all busy and immersed in work today already, even without AR/VR headsets. Do we really want more of that?

A few thoughts on Apple Vision Pro

Tech Life

Apple’s WWDC23 keynote presentation was going so well. I wasn’t loving everything, mind you. The new 15-inch MacBook Air didn’t impress me and I still believe it’s an unnecessary addition that further crowds the MacBook line. And the new Mac Pro feels like a product Apple had to release more than a product Apple wanted to release. But I’m happy there was a refresh of the Mac Studio, and I’m okay with the new features in the upcoming iOS 17, iPadOS 17, and Mac OS Sonoma. Nothing in these platforms excites me anymore, but at least they’re not getting worse (than expected).

Then came the One More Thing. Apple Vision Pro. My reaction was viscerally negative. Before my eyes, rather than windows floating in a mixed reality space, were flashing bits of dystopian novels and films. As the presentation went on, where the presenter spoke about human connection, I thought isolation; where they spoke about immersion, I thought intrusion. When they showed examples of a Vision Pro wearer interacting with friends and family as if it was the most normal thing to do, the words that came to my mind were “weird” and “creepy”.

In the online debate that followed the keynote, it appears that we Vision Pro sceptics, already worried about the personal and societal impact of this device, have been chastised by the technologists and fanboys for being the usual buzzkills and party poopers. And while I passed no judgment whatsoever on those who felt excited and energised by the new headset, some were quick to send me private messages calling me an idiot for not liking the Vision Pro. Those, like me, who were instantly worried about this device bringing more isolation, self-centeredness, and people burying themselves even more into their artificial bubbles, were told that we can’t possibly know this is what’s going to happen, that this is just a 1.0 device, that these are early days, that the Vision Pro is clearly not a device you’re going to wear 24 hours a day, and so forth.

Perhaps. But our worries aren’t completely unfounded or unwarranted. When the iPhone was a 1.0 device, it offered the cleanest smartphone interface and experience at the time, and while it was the coolest smartphone, it was essentially used in the same ways as the competition’s smartphones. But in just a matter of a few years its presence and usage have transformed completely, and while I won’t deny its usefulness as a tool, when you go out and look around you, and see 95% of the people in the streets buried in their smartphones, it’s not a pretty sight. If Vision Pro turns out to be even half as successful as the iPhone, somehow it’s hard for me to imagine that things are going to get better from a social standpoint.

Let’s focus on more immediate matters

All of the above stems from my initial, visceral reaction. And even though it can be viewed as wild speculation surrounding a product that won’t even be released before 2024, I think it’s worth discussing nonetheless. 

But as the Vision Pro presentation progressed, and I had finally managed to control the impulsive cringing, I started wondering about more technical, practical, and user-experience aspects of the headset. 

User interface and interaction

If I had to use just one phrase to sum up my impressions, it probably would be, Sophisticated and limited at the same time. There’s visual elegance and polish, that’s undeniable. All those who have actually tried the headset unanimously praise the eye-tracking technology, saying that it essentially has no latency. Good, because any visual lag in such an interface would break it immediately. Eye-tracking is the first of five ways to interact with objects in visionOS. You highlight an object or UI element by looking at it. Then you have the pinching with your thumb and index finger to select the object. Then you have pinching then flicking to scroll through content. Then you have dictation. Then you have Siri to rely on when you want to perform certain actions (good luck with that, by the way). That’s it.

First concern: Since Apple is trying to position Vision Pro as a productivity device, more than just another VR-oriented headset aimed at pure entertainment, I struggle to see how really productive one can be with such a rudimentary interaction model. It’s simultaneously fun and alarming to watch what Apple considers productive activities in their glossy marketing material. Some light web browsing, some quick emailing, lots of videoconferencing, reading a PDF, maybe jotting down a note, little else. On social media, I quipped that this looks more like ‘productivity for CEOs’. You look, you read, you check, you select. You don’t really make/create. It feels like a company executive’s wet dream: sitting in their minimalistic office, using nothing more than their goggles. Effortless supervision.

Second concern: Feedback. Or lack thereof. It’s merely visual, from what I can tell. Maybe in part auditory as well. But it’s worse than multi-touch. At least with multi-touch, even if we are not exactly touching the object we’re manipulating, we’re touching something — the glass pane of an iPhone or iPad, or a laptop screen. At least there’s a haptic engine that can give a pretty good tactile illusion. In the abstract world of Vision Pro, you move projections, ethereal objects you can’t even feel you’re really touching. There is even a projected keyboard you’re supposed to type on. Even if you’ve never tried the headset, you can do this quick exercise: imagine a keyboard in front of you, and just type on it. Your fingers move in the air, without touching anything. How does it feel? Could you even type like this for 10 minutes straight? Even if you see the projected keyboard as a touchable object that visually reacts to your air-typing (by highlighting the air-pressed air-key), it can’t be a relaxing experience for your hands. And typing is a large part of so many people’s productivity.

Sure, it seems you can use a Bluetooth keyboard/mouse/gamepad as input methods, but now things get awkward, as you constantly move between a real object and a projected window/interface. Of all the written pieces and video essays on Vision Pro I’ve checked, Quinn Nelson’s has been the most interesting to me and the one I felt most in agreement with, because he expresses concerns similar to mine when it comes to user interface and use cases for the headset. On this matter of using traditional input devices such as keyboard, mouse, gamepad, Quinn rightly wonders:

How does a mouse/cursor work in 3D space? Does it jump from window pane to window pane? Can you move the cursor outside of your field of view? If you move your head, does it re-snap where your vision is centered? 

I’ll be quoting Quinn more in my article, as he has some interesting insights. 

Third concern: Pure and simple fatigue. “Spatial computing” is a nice-sounding expression. And as cool and immersive as browsing stuff and fiddling with 2D and 3D objects in an AR environment is, I wonder how long it takes before it becomes overwhelming, distracting, sensory-overloading, fatiguing. Having to scan a page or an AR window with your eyes, with intent, because your eyes are now the pointer is, I imagine, more tiring than doing the same with a mouse or similar input devices in traditional, non-AR/VR environments.

The misguided idea of simplifying by subtracting

A few days ago I wrote on Mastodon:

The trend with UI in every Apple platform, including especially visionOS, is to simplify the OS environment instead of the process (the human process, i.e. activity, workflow). On the contrary, this fixation on simplifying the interface actually hinders the process, because you constantly hunt for UI affordances that used to be there and now are hard to discover or memorise. 

I admit, maintaining a good balance between how a user interface looks and how it works isn’t easy. Cluttered and complex is just as bad as Terse and basic. But it can be done. The proof lies in many past versions of Mac OS, and even in the first iOS versions before iOS 7. How you handle intuition is key. In the past I had the opportunity to help an acquaintance conduct some UI and usability tests with regular, non-tech people. I still remember one of the answers to the question “What makes an interface intuitive for you?” — the answer was, When, after looking at it, I instantly have a pretty good idea of what to do with it. Which means:

  • Buttons that look like buttons;
  • Icons that are self-explanatory;
  • Visual clues that help you understand how an element can be manipulated (this window can be resized by dragging here; if I click/tap this button, a drop-down menu will appear; this menu command is dimmer than the others, so it won’t do anything in this context; etc.);
  • Feedback that informs you about the result of your action (an alert sound, a dialog box with a warning, an icon that bounces back to its original position, etc.);
  • Consistency, which is essential because it begets predictability. It’s the basis for the user to understand patterns and behaviours in the OS environment, to then build on them to create his/her ‘process’, his/her workflow.

Another intriguing answer from that test was about tutorials. One of the participants wrote that, in their opinion, a tutorial was a “double-edged sword”: On the one hand, it’s great because it walks you through an unfamiliar application. On the other, when the tutorial gets too long-winded, I start questioning the whole application design and think they could have done a better job when creating it.

This little excursion serves to illustrate a point: Apple’s obsession with providing clean, sleek, good-looking user interfaces has brought a worrying amount of subtraction into user interface design. By subtraction I don’t necessarily mean the removal of a feature (though that has happened as well), but rather the visual disappearance of elements and affordances that helped make the interface more intuitive and usable. So we have:

  • Buttons that sometimes don’t look like buttons;
  • UI elements that appear only when hovered over;
  • (Similar to the previous point) Information that remains hidden until some kind of interaction happens;
  • Icons and UI elements that aren’t immediately decipherable and understandable;
  • Inconsistent feedback, and general inconsistency in the OS environment: you do the same action within System App 1 and System App 2, and the results are different. Unpredictability brings confusion: users make more mistakes and their flow is constantly interrupted because the environment gets in the way.

Going from Mac OS to iOS/iPadOS to visionOS, the OS environment has become progressively more ‘subtractive’ and abstracted. The ways the user has to interact with the system have become simpler and simpler, and yet somehow Apple thinks people can fully utilise visionOS and Vision Pro as productively as a Mac. Imagine for a moment trying out Vision Pro for the first time without having paid much attention to the marketing materials and explanatory pages on Apple’s website. Is the OS environment intuitive? Do you, after looking at it, have a pretty good idea of what to do with it? My impression is that it’s going to feel like the first swimming lesson: you’re thrown into the water and you start moving your limbs in panic and gasping for air. Immersion and intuition can go hand in hand, but from what I’ve seen, that doesn’t seem to be the case with Vision Pro. But it’s a new platform, of course you need a tutorial!, I can hear you protest. I saw regular people trying the iPhone when it was first publicly available. I saw regular people trying the iPad when it was first publicly available. I saw regular people trying the Apple Watch when it was first publicly available. They didn’t need a guided tour. Maybe a little guidance for the less evident features, but not for the basics or for finding their way around the system.

What for? Why should I use this?

Back to Quinn Nelson’s video, at a certain point he starts wondering about the Vision Pro’s big picture, much in the same way I’ve been wondering about it myself:

The problem is that, with no new experiences beyond “Can you imagine??”, Apple is leaving the use cases for this headset to developers to figure out.

Look, you might say, “Hold on! Watching 3D video in a virtual movie theatre is cool! Using the device as an external display for your Mac is great! Browsing the Web with the flick of a finger is neat! And meditating through the included Mindfulness app is serene”. If these things sound awesome, you’re right. And congratulations, you’re a nerd like me, and you could have been enjoying using VR for, like, the last five years doing these same things but just a little bit worse.

There wasn’t a single application shown throughout the entirety of the keynote — not one — that hasn’t been demoed and released in one iteration or another on previous AR/VR headsets.

VR isn’t dying because hand tracking isn’t quite good enough. No. The problem with these devices is that they require intentionality and there’s no viable use case for them. It’s not like the iPhone, that you can just pick up for seconds or minutes at a time.

Maybe the SDKs and frameworks that Apple is providing to developers will enable them to create an app store so compelling that their work sells the device for Apple, much like the App Store did for the iPhone. But hardware has not been the problem with VR. It hasn’t been for years. It has been the software.

I expected to see a Black Swan, a suite of apps and games that made me think, “Duh! Why has nobody thought of this before!? This is what AR needs”. But there really wasn’t much of anything other than AR apps that I already have on my iPhone and my Mac, and that I can use without strapping on a headset to my face making myself look like a dick and spending $3,500 in the process! I hope this is the next iPhone, but right now I’m not as sure as I thought I’d be. 

Apologies for the long quote, but I couldn’t have driven the point home any better than this. As Quinn was saying this, it felt like we had worked together on his script, really. The only detail on which I’m not in agreement with Quinn is that I hope Vision Pro won’t be the next iPhone. A lot of people seem to buy into the idea that AR is the future of computing. I’m still very sceptical about it. In my previous piece, in the section about AR, I wrote:

I am indeed curious to see how Apple is going to introduce their AR goggles and what kind of picture they’re going to paint to pique people’s interest. I’m very sceptical overall. While I don’t entirely exclude the possibility of purchasing an AR or VR set in the future, I know it’s going to be for very delimited, specific applications. VR gaming is making decent progress finally, and that’s something I’m interested in exploring. But what Facebook/Meta and Apple (judging from the rumours, at least) seem interested in is to promote use cases that are more embedded in the day to day. 

However effortless Apple has gone to great lengths to make it look, I still see a great deal of friction and awkwardness in using this headset as part of the day-to-day routine. And I don’t mean the looking-like-a-dork aspect. I mean from a mere utility standpoint. To be the future of computing, this ‘spatial computing’ has to be better than traditional computing. And if you remove the ‘shock & awe’ and ‘immersion’ factors, I don’t see such great advantages in using Apple’s headset versus a Mac or an iPad. It doesn’t feel faster, it doesn’t feel lighter, it doesn’t feel more practical, or more productive. It looks cool. It looks pretty. It makes you go ‘wow’. It’s shallow, and exactly in tune with the general direction of Apple’s UI and software these days.

Another surprisingly refreshing take on the Vision Pro came from Jon Prosser. In his YouTube video, This is NOT the future — Apple Vision Pro, Jon speaks of his disappointment with the headset, and makes some thought-provoking points in the process. Here are some relevant quotes (emphasis mine):

First impressions really matter, especially for an entirely new product category. It is Apple’s responsibility to tell us why and how this matters. Tech demos, cool things, shiny new thing aside, that is their actual job. Apple isn’t technology. Apple is marketing. And that’s what separates them from the other guys. When we take the leap into not only an entirely new product category but a foreign product category at that, it’s Apple’s responsibility to make the first impression positive for regular people.

VR and AR is already such a small niche little market. Comparing AR/VR products against Apple’s Vision Pro is nearly pointless because the market is so small that they might as well be first to their user base. It’s not about comparing Vision Pro versus something like the Meta Quest, because if you compare them of course it’s not even close. Apple dumped an obscene amount of resources into this project for so many years and are willing to put a price tag on it that Zuckerberg wouldn’t dare try. Apple needed to go on stage and not just introduce people to a mixed reality product. Apple needed to go on stage and introduce those people to mixed reality, period.

I think for once in a very long time — especially with a product announcement or announcement at all — Apple came across as… confused; wildly disconnected and disassociated from their users. People. The way Apple announced Vision Pro, the way they announce any product, is by showing us how they see us using it. And what did they show us for this? We saw people alone in their rooms watching movies, alone in their rooms working. It’s almost like they were, like, Hey, you know that stuff you do every day? Yeah, you still get to do that but we’re gonna add a step to it. You’re welcome! Oh but it’s big now! It looks so big! If this was any other product at any other price tag from any other company, sure, those are cool gimmicks, I’ll take them. Apple doing them? I’m sure they’re gonna be with an incredible quality. Wow, amazing. But… is that really life-changing? 

I want to make this clear: I do not doubt, even a little bit, that Vision Pro is revolutionary. It’s looking to be objectively the best, highest-fidelity, AR and VR experience available on the entire planet. This is completely over-engineered to hell. It is technologically one of the most impressive things I have ever seen. But are we really at the point where we’re just gonna reward [Apple] for just… making the thing? […] It doesn’t matter how hard you work on a thing. That is not enough if it doesn’t fit into other people’s lives. Apple has always been about the marriage of taking technology and making it more human, letting boundaries fade away, and connecting people to the experience of using those devices, bridging gaps between people by using technology. And with Vision Pro… it feels like Apple made something that is entirely Tech first, Human last.

It’s not the idea that matters. It’s the implementation. The idea will only ever be as good as the implementation. […] If this mixed reality vision is truly Apple’s end goal, and the things they showed us on stage are the things that they want us to focus on — if those things are all that this first-gen product was mainly ever meant to do, then they put this in way too big of a package. 

If this was more of a larger focus on VR and gaming and putting you someplace else, like the Quest products, then yeah, I’m fine with wearing this massive thing on my face. But they demoed a concept that works way better with a much smaller wearable, like glasses maybe. First-gen product, again, yeah I know. But also, again, first impressions matter. They introduced the future of Apple, the company after the iPhone, with this dystopian, foreign, disconnected product. […] They expect you — according to all this — to live in this thing. Countless videos of people just… actually living in it. […] This is a technological masterpiece, but this isn’t our iPhone moment. This isn’t our Apple Watch moment. 

Another interesting aspect Prosser emphasises — a detail I too noticed during the keynote but didn’t think much of at the time — is that you don’t see any Apple executive wearing the headset. Again, this could be just a coincidence, but also a bit of a Freudian slip — a little subliminal hint that reveals they want to actually distance themselves from this product. Almost like with the Apple silicon Mac Pro, Vision Pro feels like a product Apple had to release more than a product Apple wanted to release. Make of this detail what you want, but let me tell you: if Vision Pro had been Steve Jobs’s idea and pet project, you can bet your arse he himself would have demoed it on stage.

Again, apologies for the massive quoting above, but I couldn’t refrain from sharing Quinn Nelson and Jon Prosser’s insights because they’re so much on the same page as my whole stance on this device, it hurts.

I’ll add that, product direction-wise, I see a lot of similarities between Vision Pro and the iPad. Something Apple produces without a clear plan, a clear… vision. In both cases, Apple introduces some device propelled by an initial idea, a sort of answer to the question, What if we made a device that did this and this?, but then the whole thing loses momentum because the burden of figuring out what to do with such a device, how to fit it into daily life, is immediately shifted to developers and end users. One of the favourite phrases of Cook’s Apple is, We can’t wait to see what you’ll do with it! It sounds like a benevolent encouragement, like you’re being invited into the project of making this thing great. But what I see behind those trite words is a more banal lack of ideas and inspiration on Apple’s part. And it’s painful to see just how many techies keep cutting Apple so much slack about this. It’s painful to see so many techies stop at how technologically impressive the new headset is, while very few seem interested in discussing whether the idea, the vision behind it, is equally impressive. People in the tech world are so constantly hungry for anything resembling ‘progress’ and ‘future’ that they’ll eat whatever well-presented plate they’re given.

“AR is the future” — but why?

I see AR and VR as interesting developments for specific activities and forms of entertainment. Places you go for a limited amount of time for leisure. From a user interface standpoint, I can’t see how a person would want to engage in hours-long working sessions in a mixed-reality environment. The interaction model is rudimentary, and the interface looks pretty, but pretty is not enough if there’s less intuitiveness and more fatigue than using a Mac or an iPad or an iPhone. Everything that Apple has shown you can do with Apple Vision Pro, every use case they proposed, is something I can do faster and more efficiently on any other device. I don’t think that replicating the interface of iOS, iPadOS, and Mac OS by projecting it onto a virtual 3D space is the best implementation for an AR/VR device. It makes for a cool demo. It makes you look like you finally made real something we used to see in sci-fi shows and films. But in day-to-day sustained use, is it actually a viable, practical solution? Or is it more like a gimmick?

I see the potential in AR/VR for specific things you can really enjoy being fully immersed in, like gaming and entertainment, and even for some kind of creative 3D project making. But why should ‘being inside an operating system’ be the future of computing? What’s appealing about it? Perhaps my perspective is biased due to the fact that I’m from a generation that knows life before the Web, but I always considered technological devices as tools you use, and the online sphere a place you go to when you so choose. So I fail to see the appeal of virtually being inside a tool, inside an ecosystem, or being constantly online and connected into the Matrix. An operating system in AR, like visionOS, still feels like the next unnecessary reinvention of the wheel. You’re still doing the same things you’ve been doing for years on your computer, tablet, smartphone — not as quickly, not as efficiently. But hey, it looks cool. It makes you feel you’re really there. It’s so immersive. 

And that’s it. What a future awaits us.

Where are we going? — Notes gathered over a 2-month tech detox period

Tech Life

Where have I been?

This is probably one of the longest hiatuses I’ve taken from updating this blog. Over the years the frequency of my articles has indeed been decreasing, but I typically managed to write at least a couple of pieces per month. I’m surely stating the obvious, but for an article to appear here, three main conditions have to be fulfilled:

  1. I have something to talk about, something to say. Ever since I started writing online, this has been a guiding principle for me. I don’t like filler content. I don’t like updating for updating’s sake. If I have to link to some other content and throw a one-line comment, I’ll just use social media.
  2. I feel I have something useful to add to the conversation. Having a subject or an idea for an article isn’t enough for me. I also need to feel that my opinion or perspective on a certain topic is worth sharing. Half-hidden by the huge amount of chaff in the tech world, one can find some brilliant tech writers and commenters out there. I read them before thinking about adding my contribution. I often agree with them, and on many occasions I think they’ve already said what I wanted to say more effectively and succinctly than I could possibly convey. When that happens, I usually refrain from posting.
  3. I have time and will to commit to writing and publishing a piece. I’ll briefly remind you that English isn’t my first language, and while I’m very fluent and while I ‘think in English’ when I’m writing, the time I’ll spend writing and editing a 2,000-word article is likely to be longer than what it would take an English-speaking tech writer to accomplish the same task.

During this hiatus, which lasted all of March, all of April, and half of May, none of these conditions was fulfilled. An unexpected surge in my workload, combined with days of illness (nothing too serious, just a prolonged flu-like cold and cough), took all my time and energies. I also had to take care of some personal business that involved a quick yet exhausting trip abroad, so there was that as well.

But there was also another important factor in the mix — a general sense of ‘tech fatigue’ and lack of enthusiasm towards tech-oriented topics. For the first time in years, I also stopped reading tech stuff, letting my feed reader accumulate dozens of unread posts.

This period of tech detox wasn’t planned or sought after, at all. It just happened — and frankly, I’m glad it did.

Really, nothing more than notes

Sometimes, when choosing a title for an article, I’ll use the term ‘note’ as a synonym for observation, opinion, remark, implying that there’s something organic and organised tying all these notes and observations together. But in this instance, what follows is nothing more than quick thoughts hastily recorded during spare moments. They’re impressions. Fragments. Feelings I wanted to share, not observations of a tech expert assembling a careful, well-documented essay. Keep this in mind as you read along.

Lack of real forward movement

My lack of enthusiasm for technology lately seems to be connected to the feeling that general progress — true progress, not what headlines scream at you — has slowed down to a crawl. I’m Gen X, so I have lived through the transition between the pre-Web world and what we have today. The 15 years between 1993 and 2008 were wild compared with the 15 years between 2008 and 2023. I know you can point at many awesome things that have appeared in the last fifteen years, but so many things happened between 1993 and 2008 that were or felt like huge breakthroughs, while a lot of stuff between 2008 and 2023, as great as it is, feels mostly iterative.

I don’t expect leaps and bounds everywhere all the time, of course. I actually believe that tech today needs more periods of lull, so that existing hardware and software can (ideally!) be perfected and improved upon. But what bores me to no end as of late is all this buzz around certain trends that are advertised as ‘progress’ and ‘the future’ — augmented reality and artificial intelligence, to name just two — which I think are way overblown. Little substance, lots of fanfare.

Digital toys

A tweet from back in March — So much tech today feels more focused on the creation of ‘digital toys’ than on innovation that can actually, unequivocally positively help and advance humankind. And [I feel] that a lot of resources are being wasted on things whose real usefulness is debatable, e.g. self-driving cars.

A lot of unease I’ve been feeling in recent times boils down to what I perceive to be a widening disconnect between the tech sphere and the world at large, the real world that is going to shit and down the drain day after day. 

The tech sphere looks more and more like a sandbox for escapism. Don’t get me wrong, some escapism is always good and healthy as a coping mechanism, because otherwise we would be in a constant state of depression. But — and I may be wrong here — the kind of escapism I feel coming from the tech world is the sort of ‘bury your head in the sand’, ‘stay entertained and don’t worry about anything else’ escapism that wants people to remain hooked on gadgets and digital toys in ways that at times feel almost sedative.

Frictionless at all costs

Recently I wrote on Mastodon — We are so hell-bent on eliminating friction in everything that anything with any trace of friction is considered ‘difficult’, ‘complex’, ‘unintuitive’. An acquaintance recently told me that they tried to open an account on Mastodon and found the process ‘daunting’. I’m all for removing friction when it comes to repetitive, mindless tasks or unnecessarily straining labour. But some friction that stimulates your brain, your thinking process and acuity should always be welcome.

I’ve often seen the smartphone described as an extension of our brain because it gives us instant access to all kinds of information. Just don’t confuse ‘extension’ with ‘expansion’. Don’t get me wrong, smartphones and their multitude of apps are undeniably useful for retrieving information on the spot: you’re watching a TV series and you recognise one of the actors, but can’t remember their name or which film or series you saw them in previously. You open the IMDb app and quickly look that up. You can also search Wikipedia; you can access several different dictionaries and thesauri for terms you encounter and don’t know; you can use translation apps and services to have a quick and dirty translation when you encounter something in a foreign language you need or want to understand; and so on and so forth, you get the idea. Maps and turn-by-turn directions are something I myself use heavily and frequently, and they have been a godsend whenever visiting new places.

But all this isn’t really an expansion of our brain. We may indeed retain some of the notions we’ve searched for, but otherwise it’s mostly a flow. We’ll forget about that actor again and we’ll look them up on IMDb again. Our sense of direction won’t really be improved and we’ll check Google Maps or Apple Maps again for places we’ve already been through. It’s an accumulation of trivia, not knowledge. Smartphones and this kind of ever-ready access are like eating out every day: extremely convenient, but you won’t learn to cook.

Self-driving cars: tons of spaghetti thrown at the wall, and nothing sticks

I have this perspective on the idea of self-driving cars, and nothing so far has made me change my mind — they’re emblematic of everything that is misguided about tech today. This mentality of wanting to ‘solve’ a problem that really didn’t need a solution (or didn’t need a high-tech solution) by throwing an outlandish amount of technology at it, and solving very little in the process. Meanwhile, every further step introduces a whole new set of problems that need to be addressed. How? Why, by throwing even more technology at them, of course. Self-driving car advocates will tell you that the noble goal is to reduce car accidents and make people safer on the road. That’s nice and all, but I think a more pragmatic (and cost-effective) solution would be to educate drivers better.

Getting a driver’s licence should be a stricter process instead of what amounts to a quick tutorial on the basics of driving and traffic rules. And people should really get rid of nasty habits while driving, like checking their smartphones all the time. Speaking of, I can’t shake the idea that a lot of tech bros just want self-driving cars to entirely eliminate the friction of having to drive themselves, so they can go places while fiddling with their smartphones, tablets, laptops, what have you. Just call an Uber, dude.

As for making people safer on the road, for now, just open a browser and search for “Tesla autopilot”…

AI and drinking the Kool-AI

There is nothing magical about AI, ChatGPT, and all this stuff that’s popping up everywhere like mushrooms. Computers were invented to process data faster. With time, computers have been getting faster and faster, and we have fed them more and more and more and more and more and more and more data. The result is that anything would seem ‘intelligent’ after such treatment. Once again, there may be truly good and useful use cases for AI, but so far I see a lot of people who seem happy to have a tool they can use to think less. Another shortcut that eliminates friction in ways that don’t look healthy to me. I’m not averse to technology or the many conveniences it affords today, but again, I firmly believe we shouldn’t remove that particular kind of friction that stimulates us to use our head and think for ourselves. Do we really need an AI assistant to search the Web, when we can basically find anything by simply using natural language in a query? Are we becoming this lazy and apathetic? One of the worst dystopian illustrations I’ve seen in recent years is the humans in WALL•E (watch the film if you still haven’t, it’s both really entertaining and edifying).

Augmented Reality: mind-goggling!

Do you see AR goggles or glasses in your future? Not really, as far as I’m concerned. I am indeed curious to see how Apple is going to introduce their AR goggles and what kind of picture they’re going to paint to pique people’s interest. I’m very sceptical overall. While I don’t entirely exclude the possibility of purchasing an AR or VR set in the future, I know it’s going to be for very delimited, specific applications. VR gaming is making decent progress finally, and that’s something I’m interested in exploring. But what Facebook/Meta and Apple (judging from the rumours, at least) seem interested in is to promote use cases that are more embedded in the day to day. 

Everything I’ve read so far points to ridiculous stuff, however. The idea of people wearing AR goggles to engage in videoconferences set in virtual shared spaces with hyper-realistic avatars of themselves is, again, an example of needless tech nerdification of something that can already be done without throwing additional technology at it: regular videoconferences where people can look at their real selves as they talk with one another! It can’t get more realistic than that, and no one needs to buy an expensive appendage to accomplish the same task! Seriously, I can’t wait to see what kind of use cases Apple will promote to make their AR goggles a compelling product. I still think the whole Google Glass fiasco was an excellent example of the line people draw when it comes to wearable technology in an everyday setting. Ten years have passed since then, and I don’t think anything has really changed in this regard from a social standpoint.

Coda: How have I been?

Apart from a period of illness, I’ve been fine. Like I said, this hiatus and tech detox interval wasn’t planned at all, and while I hated not having the time to write and publish anything here, I enjoyed being busy elsewhere and ignoring tech news and the Latest Hyped-up Thing for a while. There was work to do, books to continue reading, music to listen to, a novel to continue writing, and a chapter in my life to finally close after many bittersweet and some painful memories. Many thanks to all who reached out to ask me how I was, and apologies if my silence here made you worry. I’m back, and as sceptical as before, if not more.

Subscription fatigue and related musings

Software

“I’m still using the previous version. I really love its design and functionality, but no more subscriptions for me, sorry.”

The quote above is an App Store review of an app that I, too, have been using on my iOS devices for years. I’ve translated it because it only appears on the Spanish App Store. The ‘previous version’ the reviewer refers to is the last version of the app to use the ‘free with in-app purchases’ model. Since then, the developer has switched to a ‘free with strict limitations unless you subscribe’ model.

(In this piece I won’t mention any app or developer specifically because I don’t want to point fingers and single out people or apps. My criticism is focused on certain practices, and I simply want to look at things from a customer’s perspective.)

For the past few years, my purchases in the App Stores (iOS and Mac) have slowed to a trickle. The main reason is saturation. I’ve been using iPhones since 2008, and the first years after the App Store opened were rather wild. I remember constantly hunting for cool apps to add to my device, especially photo apps. I remember getting to a point where I had more than 40 different photo apps, and sometimes I would miss a crucial snapshot because I couldn’t decide which photo app to shoot with.

But obviously, sooner or later, we all reach a stage where we feel we have all the apps we need. Occasionally there’s a new entry in a certain category which looks and feels better than one of our favourite apps, and so we try it out and stick with it if it’s worthwhile. Other times something happens in the tech world that creates an interest in a new area, like what’s happening now with [airquote] AI [airquote], which spawns a new category of apps to check out. Or a particular platform loses popularity (Twitter) and another gains interest (Mastodon) and we see dozens of new clients pop up everywhere. And so forth. You get the idea: I still browse the App Store every so often, but the platform is quite mature now in terms of app selection, so I’m not constantly looking for something new like I used to back in, say, 2010–2012.

But I do routinely check the App Store. And today, whenever I find something of interest, 8 times out of 10 the app requires a subscription to function at its best, or even to function at all.

If you’ve been visiting the App Store for as long as I have, you’ll certainly be familiar with the expression race to the bottom. If you look for the definition on the Web, you’ll likely encounter serious economic explanations like this one from Investopedia: The race to the bottom refers to a competitive situation where a company, state, or nation attempts to undercut the competition’s prices by sacrificing quality standards or worker safety (often defying regulation), or reducing labor costs. A race to the bottom can also [occur] between governments to attract industry or tax revenues.

What happened in the App Store that created a ‘crisis’ in the pre-in-app-purchase and pre-subscription era wasn’t so stark; the race to the bottom there simply meant that, in an effort to stay competitive and drive purchases, developers kept lowering the price of their apps to levels that were unsustainable for them. Customers were happily buying apps priced at $1.99 or $0.99, but in the medium-to-long term it wasn’t really sustainable for the developers, except in those rare cases where an app sold literally millions of copies. Smaller developers with more modest success found themselves in a bit of a predicament. Sure, their app’s low price had attracted enough customers to make for a good launch in the App Store, but as interest and visibility waned, and with Apple taking its 30% cut of the pie, they realised that what they earned barely covered the cost of development (at best). Plus, the app might need further adjustments and fixes, as bugs and issues could appear down the road. And given how the App Store works, customers sort of expected free updates.

The most deleterious effect of this race to the bottom within the App Store has been the devaluation of software. If you accustom people to getting great-quality apps for a couple of dollars or less, many customers will keep expecting the price of most software to be that low. It’s a vicious circle that’s hard to break free from.

Again, before in-app purchases and subscriptions, if a small developer wanted to try and return to a more sustainable situation, at that point their options were quite limited — also due to the App Store’s inherent rigidity:

  • Raise the price of the app for new customers.
  • Release a paid major update of the app.

In an ideal world, both sound like reasonable options, and I saw a few developers choose one or the other. But it was hard. Hard because in a sea of $0.99 apps, your app that now costs, say, $6.99 makes people raise an eyebrow and go, What’s so special about this one? I’m sure I can find an alternative for less money. Hard because, in the case of the paid major update, most people’s attention stops at paid instead of at major update. So, unless the paid major update was accompanied by detailed release notes or blog posts on the developer’s website explaining all the new features and why they were worth the money and the new asking price, people were averse to the proposition. All too often, developers choosing this path were considered ‘greedy’. Not by tech enthusiasts or pundits, who better understood what was happening behind the scenes, but by the general public, which by then was accustomed to having great apps on their devices for pennies, and updated for free.

In-app purchases and now subscriptions have seemingly managed to make the situation for developers better than it used to be back in the era of the ‘race to the bottom’, and part of me is okay with that, because I know and understand that developing a good-quality app for any platform today is no small feat.

But lately I keep seeing significant abuse of the in-app purchase and subscription systems, to the point where it becomes a customer-hostile and extortionate practice.

For certain apps, the situation is almost the complete opposite of what it was back in the ‘race to the bottom’ era: we have mediocre or good-enough apps asking for $5 to $10 in monthly subscription fees. We have apps that, while good, still ask for steep monthly or yearly fees without offering a paid-upfront option. Then we have apps that do offer reasonable subscription fees, but punish customers who would prefer a one-time payment by setting one-time prices so artificially high that you’ll want to opt for a subscription instead. Again, I understand the costs of development, but eighty euros as a one-time purchase for a photo app with basic editing features and a bunch of filters? Come on. Maybe some people will cave and give you the 5.99 euros/month you’re asking. Me, I’ll look elsewhere. Not because I’m a cheapskate, because I really am not — I’ve been paying for software since it came in boxes and physical media — but because I truly despise these tactics. It’s like we’ve gone from a race to the bottom to a race to the top.

I am actually surprised that the subscription model has worked so well for many developers. Maybe it really is working for that group of ‘good guys’ providing truly essential apps for reasonable subscription fees. And I’m happy for them, of course. But I genuinely don’t know how long subscriptions in general will remain successful:

  • For one, subscription fatigue is real. A lot of people already pay monthly fees for several services that may be considered ‘essential’ today, like cloud storage, music streaming, and entertainment channels. Now that an increasing number of apps — sometimes even one-purpose little utilities — offer no other choice than a subscription, a lot of people are realising that they simply can’t afford to subscribe to everything. Sure, there are plenty of apps asking very reasonable monthly fees, and each of them is fine when taken individually, but cumulatively this becomes unsustainable for the regular customer rather quickly (see the rough arithmetic sketch after this list).
  • Secondly, and no less important, all those developers (and let’s not mince words, all those scammers) who are abusing the in-app purchase and subscription systems with extortionate prices in exchange for basic, mediocre features are ultimately giving subscriptions a bad reputation. And I hope this doesn’t end up hurting all the good and honest developers out there. There are still people falling for App Store scams, but there’s also an increasing number of people who are suspicious of subscriptions on principle. (I’ve been getting a lot of emails recently from people asking me things like, I’d like to get this app but it’s subscription-only. Do you know this dev? Should I trust them? or, I want to try this app, but I don’t want to be scammed. They offer a free trial period. Is it easy to cancel everything if I don’t want to commit?)
  • Thirdly, there’s another thing I’ve noticed. It got my attention only because of some feedback I received via email, so I don’t think it’s a widespread issue, but I feel it’s still worth a mention. A few readers of my blog and followers on social media wrote to tell me that they were considering cancelling their subscriptions to certain apps because — after a year or more — they didn’t feel the developer had made good on their initial promise of keeping the app bug-free or adding ‘new exciting features’ down the road. In one particular case, my interlocutor was rather upset that, and I quote, for the past 14 months the only new thing I’ve seen from this app was a .0.1 update with minor cosmetic fixes and pretty much nothing else. I kinda feel robbed even if it’s not one of those fraudulent apps I sometimes read in the tech news.

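To make that ‘cumulatively unsustainable’ point a bit more concrete, here is a back-of-the-envelope sketch in Python, with entirely hypothetical figures that don’t refer to any specific app or service: a few ‘essential’ services plus just five modestly priced app subscriptions already add up to roughly €50 a month, or about €600 a year.

    # Back-of-the-envelope arithmetic with hypothetical numbers;
    # no real app or service is implied.
    essential_services = {"music streaming": 10.0, "cloud storage": 12.0, "video streaming": 8.0}
    app_subscriptions = [2.99, 3.99, 4.99, 1.99, 5.99]  # five 'reasonably priced' apps

    monthly_total = sum(essential_services.values()) + sum(app_subscriptions)
    print(f"Monthly total: €{monthly_total:.2f}")        # €49.95
    print(f"Yearly total:  €{monthly_total * 12:.2f}")   # €599.40
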
Late last year I exchanged a few emails with a developer I had collaborated with, helping out with their app’s localisation. On the subject of subscriptions (they have subscription plans, rather fair for what they offer), they told me something along the lines of, Well, at least now consumers have a clearer idea when it comes to software value… And my reaction was, Do they, really? I think they are bombarded with so many different value/price propositions that they end up having no real clue. They start considering that, for instance, Spotify Premium costs them 10 euros/month in exchange for unlimited access to a vast library of music; that a Dropbox Plus plan gives an individual 2 TB of cloud space, unlimited device linking, plus other handy features for 12 euros/month; that a Netflix Basic plan gives them unlimited access to ad-free movies and TV series in HD for 8 euros/month… Then we have this photo app that (I’m not kidding) gives you unlimited save & export only if you pay 10 euros/month (but it was 12 euros/month a few weeks ago). What kind of ‘clearer idea’ about software value do we expect a regular consumer to have at the end of the day?

My impression is that people have no other choice than to put up with what the App Store throws at them. I’m sure some have understood that the prices they used to pay a few years ago were simply too low to be sustainable in the long run, and they’re happy to support their favourite apps or developers by starting a subscription. But I really wish more people would come to a better understanding of how much a piece of software costs to make, and of its overall value, through a more nuanced education than, Either subscribe to this app for $7/month or $75/year, or pay a one-time price of $99, or look elsewhere. I wish we could have avoided going from one extreme — great apps at ludicrously low prices — to the other — great and not-so-great apps with subscription plans that more often than not don’t feel particularly fair.

I’m not putting the entire weight of my criticism on developers’ shoulders, don’t get me wrong. With Apple still wanting (for the most part) their 30% cut, with the utter joke that is Apple’s review system, and with the clunkiness that still bafflingly plagues the whole process, developers do what they can to stay in business in an environment that should celebrate them instead of thwarting them. But as a customer, the current situation of ridiculous in-app purchases and subscription plans, and tactics that discourage one-time purchases to push me toward subscribing, do not make me a happy shopper.

I’m subscribed to a few services that I feel give me a fair amount of benefits for the asking price. But I still view the majority of apps as products, not services. And while I understand that services require periodic payments to cover the costs of constantly maintaining and providing them, products are things one should purchase and own; they’re not services that need constant upkeep to operate.

Whenever I say that I very much prefer paying a fixed amount upfront, and that I’m quite open to purchasing a paid update down the road, many are quick to point out that, by subscribing to the app, I’d be doing essentially the same thing: If you’re willing to pay $10–15 upfront for App X and then pay another $10–15 for App X 2.0 a few months later, then why aren’t you willing to subscribe to App X for $1.99/month? Isn’t it the same thing, in the end? It may be the same thing from a financial standpoint, sure. But it’s not the same when it comes to the options and choices I am given.
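
For what it’s worth, the arithmetic behind that objection does roughly check out, at least over a one-year window and with made-up figures of my own choosing:

    # Hypothetical figures for the comparison above; no real app is implied.
    upfront_v1 = 12.0          # one-time price for App X
    upfront_v2 = 12.0          # one-time price for App X 2.0, bought later
    subscription_per_month = 1.99
    months = 12                # compare the two options over one year

    one_time_total = upfront_v1 + upfront_v2               # 24.00
    subscription_total = subscription_per_month * months   # 23.88
    print(f"Two paid versions:        ${one_time_total:.2f}")
    print(f"One year of subscription: ${subscription_total:.2f}")

Stretch or shrink the time window and the comparison shifts, but the point stands: the real difference isn’t the total spent, it’s what you’re left with, as I explain below.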

When I have the choice of making a one-time purchase for $10, I express my commitment to what I have before my eyes. When a new, improved version of the app comes out, I may choose to delete the older version and buy the new one for another $10, or I may decide to skip the new version and keep using the old one (in my case, this is rarely because I don’t want to spend another $10, and more because I dislike a redesigned UI that reshuffles interface elements unnecessarily and messes with my muscle memory). But at least I get to keep the old version, however unsupported it may become. This is not the same as starting a monthly subscription and then cancelling it when the app changes in ways I don’t like, or for whatever other reason. And that’s because in most cases, when you cancel a subscription, either you stop having access to the ‘premium’ features you were paying for, or the app stops working altogether.

But suppose, for the sake of argument, that one is okay with whatever kind of subscription is thrown at them. Suppose one is okay with subscribing to every app that offers them a minimum of utility. How many subscriptions are you willing to start? How many subscriptions before things start getting to be a bit too much? At what point, after looking at your bank statement at the end of the month, do you draw the line? In the days of the race to the bottom in the App Store, things quickly became unsustainable for many developers. Now the scenario has changed in such a way that things can become unsustainable for many consumers probably just as quickly. And again, there are a lot of happy customers who have no problem subscribing to apps they love or find essential for their workflows. But I also wonder how many of those customers are not giving money to other apps they might enjoy because they had to make a choice and invest only in some apps rather than in others. (See the App Store review I quoted at the beginning.)

I have no real solutions to propose, because I’ve probably missed a few things in my analysis (not that it is a simple situation to analyse, mind you). The changes I’d love to see perhaps sound too idealistic, but I think it would be great if we could go back to an App Store that is more focused on purchases than on subscriptions: one-time purchases at more realistic prices, with an easy way of offering paid updates for subsequent major app releases, and more meaningful, less nickel-and-diming in-app purchases. A fairer system focused on app purchases would also be less exploitable than a subscription system, less prone to abuse and fraud. I also wish Apple did a better job of detecting scam apps and subscription schemes, and made the lives of legitimate developers easier after years of jumping through stupid hoops and being subjected to a volatile and seemingly random app review process.

 
Subscription fatigue and related musings was first published by Riccardo Mori on Morrick.me on 26 February 2023.