Goodbye iPhone SE, hello insipid rebranded iPhone SE

Handpicked

I was going to go with reblanded in the title as a provocative bit of wordplay, but then I was reminded of that special portion of my audience chronically lacking a sense of humour, who send me messages and emails like, There’s a typo in your title, etc.

Oh well. What an intro, eh?

In 2019, Samsung launched the Galaxy S10 line; there were two flagship models, the S10 and S10+, a bigger premium S10 5G, and later in 2020 Samsung introduced the S10 Lite, a midrange version of the S10. But this line also featured another model, perhaps the most interesting — the S10e. It wasn’t a ‘lite’ version of the S10, just a more compact variant that didn’t really skimp on features, apart from a slightly lesser-quality display, a smaller battery, and the lack of a telephoto camera. It had personality; it was the S10 for those who wanted a smaller phone. The title of The Verge’s YouTube review of the S10e sums it up pretty nicely — “Smaller, cheaper, better”. It is perhaps the last good small smartphone with a headphone jack.

I don’t know whether that ‘e’ meant something for Samsung; it’s the only occurrence of such a suffix in the Galaxy S line. I don’t know why, but this kind of suffix always suggests ‘economy’ to me, in the air-travel sense. But while the Samsung S10e did cost less than the other S10 flagships, it wasn’t a ‘cheap’ phone from a hardware quality standpoint.

The iPhone 16e isn’t either.

The Samsung S10e’s essence was probably best encapsulated by Engadget’s title for their video review: Smaller, but not lesser.

What is the iPhone 16e? To me, it’s confusion. I would add ‘aimlessness’, but then I’d have to read several rebukes in my email messages, from people who would tell me that Apple has a plan, a strategy behind it, like the company always has in everything they do. And yes, of course there is strategy here somewhere. But this latest iPhone, this new ‘addition to the family’ — a family made up of many models with too little differentiation — seems rather confusing to me.

And to Luke Miani, who, in his first-impressions video on the iPhone 16e, visibly shares the same kind of puzzlement. This is what he concludes, in a breathless exposition tour de force:

You or I [technology enthusiasts] might be able to sit here and go, “iPhone 15 has a mute switch, 16e has an action button, and 16 has an action button and a camera control. The iPhone 15 is on the A16 chip, the iPhone 16e on the A18 chip with one less GPU core than the iPhone 16 on the also A18 chip with an additional GPU core. iPhone 15 has a Dynamic Island, 16e doesn’t; the iPhone 16 does again, but the iPhone 16e without the Dynamic Island has the better battery life of all three phones, better than the 15 and the 16”.

The average consumer is going to be so freaking confused by this. Why is the cheapest phone better and worse than the newer and older more expensive phones? It’s just too much; it’s too much, dude… I don’t really know how to describe it, other than Too Much. The benefit of the iPhone SE was that it was cheap. You didn’t buy it because of features; you bought it because you wanted an iPhone that would get software updates for years to come at the lowest possible price, and the iPhone 16e is not that. It is yet another midrange iPhone with a confusing suffix and a list of features that doesn’t make sense to most people. 

It’s not a back-to-basics smartphone that you buy when you don’t know what else to get — it’s just another confusing addition to the middle ground, the $500–800 smartphone range, and frankly I think that this was a bad move. I don’t know, I really want to get my hands on this phone ’cause I think [that] as a phone it will probably be very good, but as a part of Apple’s iPhone lineup, I think it just adds confusion. 

Miani speaks of the now-defunct iPhone SE line in pragmatic terms, an iPhone model targeted at pragmatic, budget-conscious customers. But what I liked about the SE line was that, conceptually, it was a standalone line with its own release schedule and its own peculiarities. Whether you liked it or not, it maintained a sort of quaint distinctiveness through its first three generations.

In my October 2024 article on the iPhone SE trajectory, I mused: 

Now, imagine a hypothetical fourth-generation iPhone with an A18 Bionic chip (or perhaps a specially-designed A17 Bionic, sort of a nerfed-A18?), the single-camera setup and technology of the iPhone XR, and of course the external design of the iPhone XR, featuring a 6.1‑inch screen (maybe with a slightly updated display technology), Face ID, etc. Let’s say it would replace both the third-generation iPhone SE and the iPhone 14 in Apple’s current offering. Its trade-offs battle would be against the regular iPhone 15. And it would be a tough one. Yes, it would have a better chip, but given how recent performance gains in iPhones have become basically imperceptible in everyday use, would such an iPhone SE 4 be a better proposition over the 15 when all it had would be same or better CPU speed and a lower price? The display would have the same size, the display technology would be worse, it would feature a notch while the iPhone 15 has a dynamic island, it would feature a decidedly worse camera setup… Sure, $429 would be a bargain compared to the $699 of the iPhone 15. But its form factor is too similar and, apart from the CPU, all the rest would be the same stuff but worse in all respects. Unless Apple is planning to do some unexpected changes, like offering a single-camera setup but with a better camera than the XR’s 12-megapixel affair, to make the next iPhone SE more appealing, I don’t see anything particularly special or worth considering in it. […]

But you know what I think would make more sense? I know I come from a biased position, but to me it would make more sense if the design and form factor of the next iPhone SE would be those of the iPhone 12/13 mini. Maybe the 13 mini, since it has a smaller notch on the front and a better battery performance. […]

Overall, it would still feel like a ‘Special Edition’ phone: compared to the mainstream iPhone lineup, it would be different/special enough, appealing enough, modern enough, all the while maintaining that classic, truly iconic design that harks back to the lines of the iPhone 4 and 5. Apple could even sell it at $499 instead of $429.

What Apple managed to assemble is a sandwich of uninterestingness, and they raised its final price to $599. They discontinued a line of iPhone models that was ‘midrange with personality’ and released something that isn’t distinctive in any way, whose price positioning makes it difficult to recommend, and whose name ties it to a specific iPhone release — so you’re left wondering, Is this 16e a one-off thing, like the Samsung S10e was six years ago, or are we to expect an iPhone 17e, 18e, and so on?

I’m also left wondering, Who is it for? What was the reasoning behind this iPhone? But if there’s a device that best encapsulates the overall state of Apple today, it is, without doubt, this iPhone 16e.

People and resources added to my reading list in 2024

Tech Life

Welcome to the twelfth instalment of my annual overview of the most interesting discoveries I made during the previous year. Traditionally, the structure of this kind of post includes different categories of resources: blogs, YouTube channels, cool stuff on the Web, and so forth. That structure isn’t going to change, but if my previous instalment was perhaps unusually brief, I’m afraid the current one is going to be even briefer. There are a few reasons why:

One. For more than half of 2024, my attention was primarily focused on personal matters. Having to find a new place to live, the process of purchasing it, the move, and finally settling into the new apartment was a time and energy sink for both my wife and me. My time online was mostly spent working and engaging in some light social media activity, and not much else.

Two. What I wrote last year about 2023 still held true in 2024: I’ve often mentioned this low tide brought on by a general feeling of ‘tech fatigue’; as a consequence, [during 2023] my interest in adding technology-related sources to my reads was rather low. I even neglected to stay up-to-date with the people and blogs I was already following. That feeling of tech fatigue started receding a bit towards the end of 2024, when I received a Nothing Phone 2a as a birthday gift — an event that gave me the final push to switch to Android as my primary phone platform, leaving my iPhone SE 3 as a secondary device.

Three. Another thing I wrote in the previous instalment of this series was this:

This exhaustion stage, this tech burnout, is necessary as well. I’m more and more convinced that more people ought to reach this stage, to then try to approach tech in a different — hopefully healthier — way. Because the next stage is to focus on whatever good remains out there after the squeeze. That’s why I’m trying to approach 2024 with the goal of finding out who and what’s really worth following, who and what is truly distinctive, who and what is ultimately worth my (and your) time. Mind you, it’s what I’ve always been trying to do when compiling these yearly overviews; the only little thing that has changed is that from now on I’ll try to be even more selective. 

You know what happens when you get even more selective? Maybe you follow a link to a blog article, and you like the article, but then you explore that blog further and realise that the article in question — and perhaps a couple more — is the only highlight of that blog, and you start wondering, Is this website worth adding to my RSS feeds, or should I just share the link to that specific article and let others decide?

In most cases, I’ve ended up bookmarking & sharing articles instead of adding blogs to my reading list. But what if it turns out to be a mistake and I miss out on some good writers/bloggers? Well, if I bookmark something, chances are I’ll return to that article and website at a later date, and if I find enough stuff I like on my subsequent visits, I may decide to recommend the whole package. Also, if the author keeps writing good stuff, it’s very likely I’ll get other recommendations about them, so I don’t really miss out on anybody. And even if I do — let’s be real for a second — time is a finite resource; I’ll never be able to read or watch everything from everyone I cross paths with. 

Another thing that happens when you get more selective is that you start looking harder and harder at the resources you’ve already discovered — all those RSS feeds, all those YouTube channels, etc. — and you reassess them with a fresh pair of eyes. This is why, during 2024, I’ve been subtracting rather than adding to my resources’ reservoir, so to speak. Interests change, people change (or don’t — and that, sometimes, can be a problem), the quality of a blog or YouTube creator’s output may become less consistent or patently decline… And so it’s time for some pruning and tidying up.

Blogs

Just two:

  • Passo Uno, by Fabrizio Ferri-Benedetti. Fabrizio is a technical, UX, and programmer writer, and that should give you an idea of the main topics he covers in his blog. I like his clear, efficient writing style, and even when he talks about stuff I’m not super familiar with, I often feel stimulated to learn more about it. As for the blog’s name, as Fabrizio states in the About page: ’Passo uno’ is Italian for ‘stop-motion’. It also means ‘step one’.
  • The blog of Vitor Zanetti. I discovered it when Vitor started following me on Mastodon, as I’m always curious to check out other people’s profiles and websites when they follow me on social media. Vitor’s blog doesn’t seem to have a main focus: he may talk about technology in one post, then muse about design in another, or share observations sparked after watching a particular film. Like me, he doesn’t post frequently, but I find his writings to be inspiring and thought-provoking, and perhaps you will too.

Newsletters

I’m not typically a fan of the newsletter format; I can’t exactly explain why. The fact that, once you subscribe, the newsletter is something that comes to you instead of you going to it should be a convenient and preferable dynamic. Instead, I often end up treating it like advertising email, and ultimately ignore it or just skim the part that’s visible in my email client. Over the years I’ve subscribed to many newsletters on a whim — they were genuinely interesting and well written — but I’ve also ultimately unsubscribed from most of them due to lack of time and engagement.

The sole exception I made in 2024 was for Ed Zitron’s Where’s Your Ed At?, which I basically treat as a long-form blog. I receive the email updates, but I’m also subscribed to the feed. If technology and the tech industry are your main interests, you should already know who Ed Zitron is. But if you don’t, well, it’s best if I link to the newsletter’s About section. You’ll find everything you need to know there. I really, really recommend Ed’s newsletter. Each instalment is generally a long read, but very much worth your time.

I started following Ed on pre-Musk Twitter years ago, and was reminded of his work again in recent times when I was looking for materials and information about ‘AI’. And I found out that Ed and I share basically the same (negative) views about it, only Ed has the know-how to talk about it with much more clarity and authority than I have on the subject. A lot of people have asked me to talk more often and more at length about ‘AI’, LLMs, the industry, and why I think it’s largely bullshit. My advice is to subscribe to Ed’s newsletter if that’s a subject of particular interest to you. You’ll find a lot of information, and you’ll know that Ed and I are on the same page.

YouTube channels

Around September 2024 I looked at my YouTube subscriptions list and was horrified to realise that I was following 136 channels. Yeah, things had got rather out of hand, and so I started unsubscribing from a lot of channels I had added simply after discovering a single video or following a recommendation for a single video. For such a mature platform, YouTube has remarkably rudimentary tools for organising content, and I’m routinely baffled by this. For instance, I’d love the ability to categorise my subscriptions and put them in separate folders, like one does with RSS feeds, so that I could more easily get to those creators whose content could be filed under ‘photography’ or ‘tech’ or ‘gaming’ or ‘lifestyle’ or ‘cooking’ or ‘architecture’, and so forth. Instead, all YouTube offers is an unsorted list in the left sidebar of the home page, vaguely organised by creator activity/frequency of uploads. It gets messy, fast.
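A motivated user can fake that folder view client-side, by the way. Below is a minimal sketch in Python of one way to do it: fetch your subscriptions through the YouTube Data API and sort them with a hand-maintained category map. The OAuth token and the category map are assumptions on my part (subscriptions.list with mine=true requires an OAuth token with the youtube.readonly scope, not a plain API key), so treat it as an illustration, not a ready-made tool.

```python
# Minimal sketch: list YouTube subscriptions and group them into 'folders'.
# Assumes an OAuth 2.0 bearer token with the youtube.readonly scope;
# the CATEGORIES map below is hypothetical and maintained by hand.
import requests

API_URL = "https://www.googleapis.com/youtube/v3/subscriptions"

CATEGORIES = {
    "tech": {"The Verge", "Luke Miani"},          # hypothetical examples
    "photography": {"The Art of Photography"},
}

def fetch_subscription_titles(token: str) -> list[str]:
    """Return the channel title of every subscription, following pagination."""
    titles, page_token = [], None
    while True:
        params = {"part": "snippet", "mine": "true", "maxResults": 50}
        if page_token:
            params["pageToken"] = page_token
        resp = requests.get(API_URL, params=params,
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=10)
        resp.raise_for_status()
        data = resp.json()
        titles.extend(item["snippet"]["title"] for item in data["items"])
        page_token = data.get("nextPageToken")
        if not page_token:
            return titles

def group_into_folders(titles: list[str]) -> dict[str, list[str]]:
    """Place each channel in its category; unmapped channels go to 'unsorted'."""
    folders: dict[str, list[str]] = {}
    for title in titles:
        folder = next((name for name, members in CATEGORIES.items()
                       if title in members), "unsorted")
        folders.setdefault(folder, []).append(title)
    return folders
```

That this has to be scripted at all, of course, only underlines the point: the platform itself offers nothing of the sort.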

After spending the best part of an afternoon reviewing my subscriptions and mercilessly removing a lot of unwanted or uninteresting ones, I ended up with half the initial amount — which is still a lot, but way more manageable. Again, following my self-imposed Be more selective guideline, the only discovery really worth sharing is, in my opinion, Howtown.

The channel description is perhaps a bit terse: The “How Do They Know” show from journalists Adam Cole and Joss Fong. So it’s better if you watch their short introduction video. Essentially, Cole and Fong create video essays on different subjects to answer the question How do they know or How do we know about this particular fact or topic? In their words:

We want to tell you our guiding principles so you can hold us to them. First, we approach our stories with curiosity above all. So this isn’t a commentary channel. We’re here to make sense of the evidence. We rely on primary sources and interviews, and we’ll share those sources with you with each video. If we make any factual errors, we will post corrections that explain exactly what we got wrong. Finally, we never take money in exchange for coverage. Our sponsors don’t have any control over what we make. 

I find Cole and Fong to be entertaining, personable, and likeable; their videos are well researched and produced, and the fact that they don’t upload content frequently is a good sign in my book, because it means they’re taking the time to do their homework before presenting a new essay. If you’re an intellectually curious person like me, I think you’ll like their channel.

Podcasts

Another year, another round of copying-and-pasting the same quote from a few years ago:

In 2019 I unsubscribed from all the podcasts I was following, and I haven’t looked back. I know and respect many people who use podcasts as their main medium for expression. My moving away from podcasts is simply a pragmatic decision — I just don’t have the time for everything. I still listen to the odd episode, especially if it comes recommended by people I trust. You can find a more articulate observation on podcasts in my People and resources added to my reading list in 2019.

If you’re wondering why I keep the Podcast section in these overviews when I clearly have nothing to talk about, it’s because to this day I receive emails from people un-ironically asking me for podcast recommendations.

My RSS management

Yet again, nothing new to report on this front. I’m still using the same apps I’ve been using on all my devices for the past several years, and I haven’t found better RSS management tools/apps/services worth switching to. In my previous overviews I used to list all the apps I typically use to read feeds on my numerous devices, but since I’ve broken my habit of obsessively reading feeds everywhere on whatever device, I’ll only list the apps on the devices I’ve actually used over the past year or so. If you’re curious to read the complete rundown, check past entries (see links at the bottom of this article):

  • On my M2 Pro Mac mini running Mac OS 13 Ventura: NetNewsWire.
  • On my 17-inch MacBook Pro running Mac OS 10.14 Mojave, and on my 13-inch retina MacBook Pro running Mac OS 11 Big Sur: NetNewsWire 5.0.4, a slightly older version of this great RSS reader.
  • On my other Intel Macs running Mac OS 10.13 High Sierra: Reeder and ReadKit.
  • On my iPad 8: Unread, Reeder, NetNewsWire for iOS, and ReadKit.
  • On my Android phones — Nothing Phone 2a and Microsoft Surface Duo: the Feedly app.
  • On my iPhone SE 3, iPhone 8, iPhone 7 Plus, iPhone 5s, iPhone 5, iPad 3: Unread. (Though on the iPad 3 Reeder seems to be more stable and less resource-hungry).
  • On all my more recent Windows machines I use FeedLab. It’s not a bad app at all, but I’m still looking for something visually more elegant. Nextgen Reader used to be a great client, but its development appears to have been discontinued long ago.
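Incidentally, the folder organisation I wished YouTube had is something RSS gets essentially for free: every reader listed above can import and export OPML, which nests feeds inside named outline elements. Here is a minimal sketch of walking such an export with Python’s standard library; subscriptions.opml is a hypothetical file name, while the text and xmlUrl attributes are standard OPML.

```python
# Minimal sketch: print an RSS reader's OPML export as folders of feeds.
# "subscriptions.opml" is a hypothetical path; any reader listed above
# can produce an equivalent export.
import xml.etree.ElementTree as ET

def print_folders(opml_path: str) -> None:
    body = ET.parse(opml_path).getroot().find("body")
    for outline in body:
        if outline.get("type") == "rss":
            # A feed sitting at the top level, outside any folder.
            print("(no folder):", outline.get("text"))
        else:
            # A non-feed outline acts as a folder; its children are feeds.
            print(outline.get("text", "untitled folder"))
            for feed in outline.findall("outline[@type='rss']"):
                print("  -", feed.get("text"), "->", feed.get("xmlUrl"))

print_folders("subscriptions.opml")
```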

Past articles

In reverse chronological order:

I hope this series and my observations can be useful to you. Also, keep in mind that some links in these past articles may now be broken. And as always, if you think I’m missing out on some good writing or other kind of resource you believe might be of interest to me, let me know via email, Mastodon, or Bluesky. Thanks for reading!

A few insights by Don Norman from 30 years ago that are still relevant today

Tech Life

I was perusing some past issues of ACM Interactions magazine, and I stumbled on an interview with Don Norman, a figure I’ve always admired and one of the main forces inspiring me to delve deeper into matters of usability, design, and human-machine interaction.

The interview, titled A conversation with Don Norman, appeared in Volume 2, Issue 2 of the magazine, published in April 1995. And of course it’s a very interesting conversation between Don Norman and John Rheinfrank, the magazine’s editor at the time. There’s really very little to add to the insights I’ve chosen to excerpt. While discovering them, my two main reactions were either, How things have changed in 30 years (especially when Norman talks about his work and experience at Apple); or, 30 years have passed yet this is still true today. I’ll keep my observations to a minimum, because I want you to focus on Norman’s words more than mine.

1. Forces in design

Don Norman: […] John, you deserve much of the credit for making me try to understand that there are many forces that come to bear in designing. Now that I’ve been at Apple, I’ve changed my mind even more. There are no ‘dumb decisions.’ Everybody has a problem to solve. What makes for bad design is trying to solve problems in isolation, so that one particular force, like time or market or compatibility or usability, dominates. The Xerox Star is a good example of a product that was optimized based on intelligent usability principles but was a failure for lots of reasons, one of which was it was so slow as to be barely functional.

John Rheinfrank: Then your experience at Apple is giving you a chance to play out the full spectrum of actions needed to make something both good and successful? 

DN: […] At Apple Computer the merging of industrial design considerations with behavior design considerations is a very positive trend. In general, these two disciplines still tend to be somewhat separate and they talk different languages. When I was at the university, I assumed that design was essentially the behavioral analysis of tasks that people do and that was all that was required. Now that I’ve been at Apple, I’ve begun to realize how wrong that approach was. Design, even just the usability, let alone the aesthetics, requires a team of people with extremely different talents. You need somebody, for example, with good visual design abilities and skills and someone who understands behavior. You need somebody who’s a good prototyper and someone who knows how to test and observe behavior. All of these skills turn out to be very different and it’s a very rare individual who has more than one or two of them. I’ve really come to appreciate the need for this kind of interdisciplinary design team. And the design team has to work closely with the marketing and engineering teams. An important factor for all the teams is the increasing need for a new product to work across international boundaries. So the number of people that have to be involved in a design is amazing.

Observation: This was 1995, so before Steve Jobs returned to Apple. But Jobs’s Apple seemed to approach design with this mixture of forces. The results often showed the power of these synergies at play behind the scenes. Today’s Apple perhaps still works that way within the walls of Apple Park, but often the results don’t seem to reflect synergetic forces between teams or across one design team — it’s more like, there were conflicts along the way, and an executive decision prevailed. (No, not like with Jobs, because he understood design and engineering better than current Apple executives do.)

2. Design can only improve with industry restructuring

JR: You just said that there may be some things about the computer industry, or any industry, that make it difficult to do good design. You said that design could only improve with industry restructuring. Can you say more? 

DN: Let’s look at the personal computer, which had gotten itself into a most amazing state, one of increasing and seemingly never-ending complexity. There’s no way of getting out. Today’s personal computer has an operating system that is more complex than any of the big mainframes of a few years ago. It is so complex that the companies making the operating systems are no longer capable of really understanding them themselves. I won’t single out any one company; I believe this is true of Hewlett-Packard, Silicon Graphics, Digital Equipment Corporation, IBM, Apple, Microsoft, name your company — these operating systems are so complex they defy convention and they defy description or understanding. The machines themselves fill your desk and occupy more and more territory in your office. The displays are ever bigger, the software is ever more complex. 

In addition, business has been pulled into the software subscription model. The way you make money in software is by getting people to buy the upgrade. You make more money in the upgrade than in the original item. Well, how do you sell somebody an upgrade? First, you have to convince them that it’s better than what they had before and better means it must do everything they had before plus more. That guarantees that it has to be more complicated, has to have more commands, have more instructions, be a bigger program, be more expensive, take up more memory — and probably be slower and less efficient.

3. Why changing is hard in the tech industry

DN: […] Now, how on earth do you move the software industry from here to there? The surety of the installed base really defeats us. For instance, Apple has 15,000,000 computers out there. We cannot bring out a product that would bring harm to those 15,000,000 customers. In addition, if we brought out a revolutionary new product, there’s the danger that people would say the old one is not being supported, so they’ll stop buying it. But they don’t trust this new one yet. “Apple might be right but meanwhile we better switch to a competitor.” This story is played out throughout the computer industry. It’s not just true of Apple. Look at Microsoft, which has an even worse problem, with a much larger installed base. It’s been a problem for many companies. I think the reason why a lot of companies don’t make the transition into new technologies is that they can’t get out of their installed base. 

Mind you, the installed base insists upon the current technology. There’s a wonderful Harvard Business Review article on just this: Why don’t companies see the new technology coming? The answer is, they do. The best companies often are developing new technology. But look at the 8‑inch disk drive which has replaced the 14-inch Winchester drives. It was developed and checked with the most forward-looking customers, who said, “That will never work for us.” So the 8‑inch drive wasn’t pushed. Despite everything being done to analyze the market, in retrospect, the wrong decision was made. At the time, by the way, it was thought to be the correct decision. 

It’s really hard to understand how you take a mature industry and change it. The model that seems to work is that young upstart companies do it. Change almost always seems to come from outside the circle of major players in the industry and not within. There are exceptions, of course, of which IBM is an interesting one. IBM was once the dominant force in mechanical calculating machines and young Thomas Watson, Jr., the upstart, thought that digital computers were the coming thing. Thomas Watson, Sr. thought this was an idiotic decision. But actually Junior managed to get the company to create the transformation. It’s one of the better examples of change in technological direction, and it also was successful.

About Norman’s last remarks, see Wikipedia: “Watson became president of IBM in 1952 and was named as the company’s CEO shortly before the death of his father, Watson Sr., in 1956. Up to this time IBM was dedicated to electromechanical punched card systems for its commercial products. Watson Sr. had repeatedly rejected electronic computers as overpriced and unreliable, except for one-of-a-kind projects such as the IBM SSEC. Tom Jr. took the company in a new direction, hiring electrical engineers by the hundreds and putting them to work designing mainframe computers. Many of IBM’s technical experts also did not think computer products were practical since there were only about a dozen computers in the entire world at the time.”

4. “Personal computers”

JR: So it looks as though we have another transition to manage. It’s very strange that they call these devices ‘personal computers.’ 

DN: Yes. First of all they’re not personal and second, we don’t use them for computing. We’re using these things to get information, to build documents, to exchange ideas with other people. The cellular phone is actually a pretty powerful computer that is used for communication and collaboration. 

Observation: This brief remark by Norman about mobile phones is rather amazing, considering that it was made back in 1995, when smartphones didn’t exist yet — the functions of what we now consider a smartphone were still split between mobile phones and Personal Digital Assistants (PDAs). Also, the remark that these devices (personal computers) are not really personal still sounds especially relevant today, for different reasons. See for example this recent piece by Benj Edwards: The PC is Dead: It’s Time to Make Computing Personal Again.

5. Interface design, interaction, and building a personality into a device

JR: So in what direction do you think computer-interface design should go? Many companies are making moves to simplify entry and interaction (Packard Bell’s Navigator and Microsoft’s BOB). In the short term, how does this fit your vision? 

DN: The question really is, in what direction do I see our future computers moving? Microsoft has introduced BOB as a social interface, which they think is an important new direction. Let me respond to the direction and I’ll comment later on BOB. As I’ve said before, I believe our machines have just become too complex. When one machine does everything, it in some sense does nothing especially well, although its complexity increases. My Swiss Army knife is an example: It is very valuable because it does so many things, but it does none of the single things as well as a specialized knife or a screwdriver or a scissors. My Swiss Army knife also has so many tools I don’t think I ever open the correct one first. Whenever I try to get the knife, I always get the nail file and whenever I try to get the scissors, I get the awl, etc. It’s not a big deal but it’s only about six parts. Imagine a computer with hundreds or thousands of ‘parts.’ I think the correct solution is to create devices that fit the needs of people better, so that the device ‘looks like’ the task. By this I just mean that, if we become expert in the task, then the device just feels natural to us. So my goal is to minimize the need for instruction and assistance and guidance. 

Microsoft had another problem. Their applications are indeed very complex and their model is based on the need to have multiple applications running to do, say, a person’s correspondence, communication, checkbook, finances. How did they deal with the complexity with which they were faced? There has been some very interesting social-science research done at Stanford University by Byron Reeves and Clifford Nass, which argues that people essentially treat anthropomorphically the objects with which they interact, that is they treat them as things with personalities. We kick our automobile and call it names. Responding to computers in fact has a tendency to go further because computers actually enter into dialogues with people, not very sociable dialogues, but dialogues nevertheless. So from their research, Reeves and Nass did some interesting analysis (somewhat controversial, by the way) in the social-science community about the social interactions between people and inanimate objects. That’s all very fine, and you can take that research and draw interesting conclusions from it. It’s a very big step, however, to take that research and say that, because people impart devices with personalities, you should therefore build a personality into a device. That was not supported by the research. There was no research, in fact, about how you should use these results in actual device construction.

Observation: The bit I emphasised in Norman’s response made me wonder. And made me think that maybe this is one of the reasons why most automated ‘AI’ assistants — Alexa, Siri, etc. — remain ineffectual implementations of human-machine interaction to this day. Perhaps it’s because we fundamentally want to always be the ones in charge in this kind of relationship, and we don’t like devices (or even abstract entities such as ‘AI’ chatbots) to radiate perceived personality traits that weren’t imparted by us. By the way, I hope we’ll keep holding on to that feeling, because, among other things, it’s at the root of a healthy distrust towards this overhyped ‘artificial intelligence’.

In his response, Norman continues with another interesting remark (emphasis mine, again). Despite referring to a product we now know did not succeed — Microsoft BOB — I think he manages to succinctly nail the problem with digital assistants and offer a possible, radical workaround; though I seriously doubt tech companies today would want to engage in this level of rethinking, preferring to keep shoving ‘AI’ and digital assistants down our throats:

It’s very difficult to decide what is the very best way of building something which has not been studied very well. I think where Microsoft went wrong was that, first of all, they had this hard problem and they tried to solve it by what I consider a patch, that is, adding an intelligent assistant to the problem. I think the proper way would have been to make the problem less complex in the first place so the assistance wouldn’t be needed. I also think they may have misread some of the research and tried to create a character with an extra cute personality.

6. Making devices that fit the task

JR: It seems as if substantial changes in design will take a long time to develop. Will we have something good enough for the ten-year-old with ‘Nintendo thumb’ before he or she grows up? 

DN: I think for a while things aren’t going to look very different. The personal computer paradigm could be with us another decade. Maybe in a decade it will be over with. I’d like to hope it will be. But as long as it’s with us, there aren’t too many alternatives. We really haven’t thought of any better ways of getting stuff in or out besides pushing buttons, sound, voice, and video. Certainly we could do more with recognition of simple gestures; that’s been done for a very long time, but we don’t use gestures yet in front of our machines. I mean gestures like lifting my hand up in the air. We could, of course, have pen-based gestures as well and we could have a pen and a mouse and a joystick and touch-sensitive screens. Then there is speech input, which will be a long time in coming. Simple command recognition can be done today but to understand, that’s a long time away.

So in my opinion the real advance is going to be in making devices that fit the task. For instance, I really believe within five years most dictionaries will be electronic, within ten years even the pulp novel, the stuff you buy in the airport to read on the airplane, will have a reader. What you’ll do is go to the dispenser and instead of the 25 best-selling books, it will have 1,000 or 2,000 books for browsing. When you find a book that you like, you’ll put in your credit card and the book will download to your book reader. The reader will be roughly the size of a paperback book today and look more like a book than a computer. The screen will be just as readable as a real book. Then look at any professional, say a design professional. You couldn’t really do your design without a pencil. Look how many pencils good artists will use. They may have 50 or 70 or 100 different kinds of drawing implements. We have to have at least that kind of fine-detail variation in the input style in the world of computers. I don’t think we’ll have the power that we have today with manual instruments until we reach that level. I think the only way to get that power, though, is to have task-specific devices. That’s the direction in which I see us moving.

Observation: There was, indeed, a time when tech seemed to move in the direction envisaged by Norman, with devices designed for specific tasks. When Steve Jobs illustrated the ‘digital hub’ in the first half of the 2000s, the Mac was the central hub where we would process and work with materials coming from different, specialised devices: the digital camera, the camcorder, the MP3 player, the audio CD, the DVD, the sound-recording equipment. At the time, all these devices were the best at their designed tasks.

But then the iPhone came (and all the competing smartphones based on its model), and it turned this ‘digital hub’ inside out. Now you had a single device taking up the tasks of all those separate devices. Convenient, but also a return to the Swiss Army knife metaphor Don Norman was mentioning earlier in what I indicated as section №5: “My Swiss Army knife […] is very valuable because it does so many things, but it does none of the single things as well as a specialized knife or a screwdriver or scissors.”

If you think about it, the Swiss Army knife is also a good metaphor to explain a big part of the iPad’s identity crisis. A big smartphone, a small laptop, a smarter and more versatile graphic tablet, among other things; and yet, it tends to do better at the task it ‘looks more like’: a tablet you use with a stylus to make digital artworks.

After years of fatigue with smartphones and similar ‘everything bucket’ devices, it seems that we may be moving again towards task-specific devices, with people rediscovering digicam photography, or listening to music via specialised tools like old iPods and even portable CD and MiniDisc players. The e‑ink device market seems to be in good health, especially when it comes to e‑ink tablets for note-taking and drawing; products like the Supernote by Ratta or the BOOX line by Onyx; or the one that likely started the trend — the reMarkable. I have recently purchased one of these tablets, the BOOX Go 10.3, and it’s way, way better than an iPad for taking notes, drawing, and of course reading books and documents for long stretches of time.

I hope we’ll keep moving in this direction, honestly, because this obsession with convenience, the insistence on eliminating any kind of friction and every little bit of cognitive load, and the desire for single devices that ‘do everything’ are what’s making interfaces more and more complex, and making tech companies come up with debatable solutions to make such interfaces less complex. See for instance how Apple’s operating systems have been simplified at the surface level to appear cleaner, but in doing so have lost a lot of UI affordances and discoverability, burying instead of solving all the complexity these systems have inexorably accumulated over time.

Or see for example how digital assistants have entered the picture in exactly the same way Microsoft came up with the idea of BOB in the 1990s. As Norman says, an intelligent assistant was added to the problem, becoming part of the problem instead of solving it. So we have complex user interfaces, but instead of working on how to make these interfaces more accessible, less convoluted, more discoverable, intuitive, and user friendly, tech companies have come up with the idea of the digital assistant as a shortcut. Too bad digital assistants have introduced yet another interface layer, riddled with the usability and human-machine interaction issues we all know and experience on a daily basis. Imagine if we could remove this layer of awkwardness from our devices and had better-designed user interfaces that completely removed the need for a digital assistant.

[The full magazine article is available here.]

Recreating Delicious Library

Handpicked

Wil Shipley (via Michael Tsai):

Amazon has shut off the feed that allowed Delicious Library to look up items, unfortunately limiting the app to what users already have (or enter manually).

I wasn’t contacted about this.

I’ve pulled it from the Mac App Store and shut down the website so nobody accidentally buys a non-functional app. 

I closely follow Michael Tsai’s blog (and you should too; go add it to your feeds), but this bit of news somehow flew under my radar. That Delicious Library has had to be retired is indeed the end of an era. The app had been going strong (and was a great example of good UI) for 20 years, and it’s sad to see great apps die just because someone at a Big Tech company decides to flip a switch.

I remember downloading a trial version of Delicious Library in late 2004, and at the time I really thought it was a nice solution for cataloguing my stuff. I ended up not using it, but the problem was me, not the app. I simply had too many things to catalogue. I could have turned the data entry into a daily habit — you know, scanning a bunch of items every day when I had some free time, and seeing my digitised library slowly grow and mirror my physical library — but twenty years ago I was far more impatient than I am now. The task seemed too daunting and I simply chickened out.

Fast forward to a few days ago, and I receive an email from Ding Yu, a reader of my blog whom I also know via X/Twitter. He’s a software developer based in Tokyo. And he’s had an idea: 

I’m considering making a modern web version of the beloved Delicious Library, but I’m not sure this is something worth pursuing. I’ve put up my thoughts here: Recreating Delicious Library in 2025?

Prompted for feedback, I told him that I think it’s a very good idea. Despite not being a Delicious Library user myself, I’ve always thought it was a necessary application that would fit the cataloguing needs of a lot of people. 

In writing my response to Ding, I also remembered my experience with Shelfari: I discovered it at the peak of its popularity, and I decided to give it a try. Maybe this time I would be more patient, so I started cataloguing my extensive book library. Things were going well enough, but since on the Internet we can’t have good things for too long, Shelfari was acquired by Amazon, shut down, and subsequently merged with Goodreads. Imagine my disappointment, after patiently uploading and curating data about almost 400 books… 

In recalling that disappointment, the terrible feeling of having the rug pulled from under your feet after all that work, I think that a lot of Delicious Library users must feel the same right now, especially early adopters who have been using the app for twenty years, growing their carefully curated libraries of physical media. So I wrote back to Ding telling him that, for these people, having an alternative Web app/service that could replicate most (or all) of Delicious Library’s functions would be a welcome solution.

I also told Ding I would spread the word about his idea, so here I am. What do you think about it? If you’re interested, please go read his blog post, get in touch with him, and share your feedback. If you look at Ding’s past projects (outlined in the post), you’ll realise that he’s perfectly capable of coming up with a good product. And I understand his feelings and uncertainty when he writes, I really want to make this happen—it feels like something I’ve been working toward for years. But I’ve also built so many things that no one wanted before, so I’m not sure if this idea is worth pursuing. — I’ve been there. I know. So the more feedback he receives about this, the better.
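As a side note on feasibility: the item lookup that Amazon cut off doesn’t have to come from Amazon at all. Open Library, for instance, exposes book metadata through a public, keyless HTTP API. Here is a minimal sketch of the kind of building block a modern recreation could rest on; this is my assumption about a possible data source, not anything Ding has committed to.

```python
# Minimal sketch: look up basic book metadata by ISBN via Open Library's
# public API (no key required). Field names are Open Library's; error
# handling is kept to the bare minimum.
import requests

def lookup_isbn(isbn: str) -> dict:
    resp = requests.get(f"https://openlibrary.org/isbn/{isbn}.json", timeout=10)
    resp.raise_for_status()
    edition = resp.json()
    return {
        "title": edition.get("title"),
        "publishers": edition.get("publishers", []),
        "publish_date": edition.get("publish_date"),
        "pages": edition.get("number_of_pages"),
    }

# Example: lookup_isbn("9780465050659"), i.e. Don Norman's
# "The Design of Everyday Things", fittingly.
```

Books are the easy case, of course; other media types would need other sources. But it shows the lookup problem is solvable without a Big Tech gatekeeper.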

Do principles always have to lose when it comes to tech?

Handpicked

In Principles vs Pragmatism, Pete Moore writes:

It’s a mistake to judge others for their software choices, while still making exceptions for ourselves. Our hands aren’t exactly always clean. It would be akin to me blasting someone who is using HEY or Kagi, while disregarding my use of Apple products or occasionally ordering from Amazon. Case in point: I’ve seen discourse and uproar about Tim Cook donating $1M to Trump’s inauguration fund, while simultaneously ignoring others who are doing the same thing. No doubt it’s cringeworthy beyond words, and symptomatic of the larger, more pernicious issue of political lobbying and capitalist corruption. 

Calling Cook’s donation cringeworthy is an understatement. I have made my tiny contribution to that discourse and uproar by posting this on Mastodon a few days ago:

For me, this is the straw that broke the camel’s back. 

No more money spent for an Apple product until there’s some clear sign of a change in stance and direction with Cook’s successor. 

And I’m going to stand by that. I’m not ignoring the fact that Sam Altman, Jeff Bezos, and Mark Zuckerberg are doing the same thing; I am actually not surprised by that. But I also largely don’t care about their businesses or products. I don’t use any product by Meta. I stopped being active on Instagram the day after Facebook (now Meta) acquired it in April 2012. I’ve never had a Facebook account. I only order something from Amazon if there is no real alternative option. And so forth.

And for me, the issue here isn’t what others like Cook have done. The issue is that Cook didn’t act differently. He’s the CEO of the most valuable company in the world, a company that supposedly has thinking different in its DNA and culture. A company that certainly has all the resources to shoulder possible consequences from acting differently here.

Moore continues:

Are those who are lashing out at Tim because of their principles going to abandon Apple entirely, or does their pragmatism prevent them from doing so? I believe it would be naive and unwise to assume a change in leadership at Apple—or any of these other guilty parties—will prevent this from happening again. Spoiler alert: it won’t. This is the gaping wound that’s been allowed to fester and rot in our political systems. 

We have to clarify what ‘abandoning Apple’ means. In my case, it’s not the same as rage-quitting, which would be silly. Between current, older, and vintage models, I own about 40 Apple devices (Macs, iPhones, iPods, iPads, Newtons), purchased or acquired over the past 30 years or so. I’m not going to put them all in a crate and bring them to the recycler. Some of these devices hold sentimental value, and others were bought when Apple was overall a better company, innovation-wise and culture-wise. Plus, I study user interfaces. I want to keep having access to older versions of Mac OS/iOS/iPadOS to compare and contrast with the newer ones and analyse how they’re evolving (or not). I also need access to Apple devices for work reasons, though this requirement has become more relaxed over time.

Getting rid of all traces of Apple in my household as a reaction to Tim Cook donating $1M to Trump makes little sense and doesn’t really ‘boycott’ Apple in any meaningful way. However, as I said more succinctly in my Mastodon post, this is the straw that broke the camel’s back; I have been increasingly frustrated with Apple, their products, their design decisions, their software, their attitude towards third-party developers, their App Store policies, their attitude towards EU legislation’s requests, and in general with Cook’s direction over the years. This last gesture by Cook is something I find especially shameful considering the recipient of the donation and the ulterior motives behind it. So, my abandoning Apple is a process that’s starting now, by refusing to invest a single cent in an Apple product from now on, unless things were to dramatically change. And since, according to Moore, it would be naïve and unwise to assume they will, then okay, I will live with my principled decision.

I found Moore’s piece via Eric Schwarz, who comments:

Over the weekend, there have been a lot of words written about Apple CEO Tim Cook’s $1 million personal donation to the incoming administration’s inauguration fund and I think Moore nails a lot of aspects of how I’ve felt about it. It’s disgusting and we shouldn’t even be in a spot where it’s a necessity. However, while it is something that will potentially benefit Apple and its shareholders, it also benefits employees and customers. Apple may get a special carve-out from the threat of tariffs and not have to raise prices.

Or Apple may get absolutely nothing out of it. That wouldn’t surprise me in the least. Everyone I’ve talked with about this disgusting donation these past days has pretty much reacted the same way: Steve Jobs wouldn’t have donated anything and would have stood by his decision.

I’ve had plenty of instances where I swore off businesses for one reason or another, but if you keep writing them off for every little thing, you’ll run out of options. Bad customer service? Sure. Disrespect of your time and patronage? Fine. Institutional values that don’t align with your own? Okay. […] 

Except this isn’t ‘a little thing’, at least for me. It’s the cherry on top of the shit cake. 

I’ve stood by my principles and choices even when doing so has made my tech life a bit more difficult or increased friction in my workflows. I’m not completely inflexible, and I’m the kind of person who very rarely makes rash decisions. I tend to give people a second chance. In a technology context, I similarly tend to give companies, products, and services a second chance, except in cases of major screw-ups that ended up impacting me severely. But when I’ve really had enough of something, it’s unlikely that I’ll reconsider.

I think it’s increasingly important to have principles and to stand by them in a tech landscape that has never been as insidious as it is now. The only way to stand up to big tech companies is to refuse to be complacent, to refuse to play their games. We always tend to focus on how tech has made a lot of things better or easier, but we never really stop and consider the hefty price we have been paying in return. 

Moore ends his piece with this:

The choice between principles and pragmatism often means being content living within the grey area between them. 

More and more often, I feel that for many people ‘pragmatism’ essentially means ‘convenience’. Why bother taking a stand? — they rationalise — It’s not going to change anything, and I will have made my life harder for nothing. Tech companies, and every other entity that wields some kind of power over us, know this well. That’s why today only legislation seems to have enough power to make certain things change in the tech world. A lot of people like to spout about Voting with our wallet, but then they rarely act on their words.

Before you think I’m acting all holier-than-thou on this: I’m not judging anybody, and if you’re fine with what Cook did — or if you aren’t, but no longer supporting Apple feels like too much or is unfeasible for a dozen reasons — I totally understand. The people I do have a problem with, however, are those who keep whining about how Big Tech is increasingly shaping and controlling our lives, but effectively do nothing to oppose that trend, always choosing convenience while telling themselves they’re being pragmatic. By constantly choosing the path of least resistance, doesn’t this ‘pragmatism’ eventually morph into acquiescence?

If I publicise my stance and my decisions, it’s not out of a desire to virtue-signal, but to manifest my unrest and disagreement, hopefully in a meaningful way. I’m still on team Principles even if it’s clear we’re losing the battle, and I really wish more people joined our team. But in the end I do what I do because it matters to me, because it means something to me and my conscience.

Eric Schwarz:

If you’re mad at Tim Cook, that’s also fine, but what’s the alternative? Microsoft or Google products? Building your own computer and phone with open source tools and hoping it works as well? 

Well, it can be done. It takes patience and a healthy amount of tech-savviness, but it’s not impossible. There was a time when I thought, I don’t think I’ll ever touch Linux — it’s too ‘this’, and not enough ‘that’, and so forth. There is some friction, there is some workflow re-evaluation, some habits may change, but ultimately it’s a bit like those games that hit you hard in their first levels: as you familiarise yourself with their mechanics, you get more proficient.

But to respond to Eric more directly: Google or Microsoft aren’t necessarily better alternatives, but again, the crux of the matter here isn’t whether these and other big tech companies have donated money to Trump and how much. The matter is, quite simply, that Tim Cook did. Am I naïve and an idealist in thinking that he could have acted differently? Perhaps. That doesn’t change how I feel about it — disgusted and disappointed. 

Schwarz closes his commentary with this:

Politics are a disgusting game, probably even more now, but anyone who is tasked with running one of the largest companies has to unfortunately play a little. 

Should they, though? It’s one thing to have to comply with, say, international laws, and with what foreign governments require (e.g. China, Russia, the EU, etc.). It’s quite another to participate in this nauseating show of bringing offerings to the loose cannon who is now again U.S. president, in the hope that he shall be benevolent in return; all this while forgetting that he is, indeed, a loose cannon.