Who’s afraid of AI? Not I

Tech Life

Start a check in with Francesco
Let them know when you arrive at your destination.

— Siri Suggestion

A few days ago, out of the blue, I got this notification on my iPhone. Siri has been a constant letdown for me since its introduction 150 technology-years ago, and the few Siri suggestions I’ve received over time have never, ever been remotely useful. The closest Siri has ever come to displaying a modicum of smartness through pattern recognition was a year or so ago. For a few weeks, my wife and I had got into the habit of eating home-made pizza for dinner every Thursday. When the moment came to put the pizza in the oven, I would start a timer on my iPhone. Then it ceased to be every Thursday: one week it was Wednesday, another Friday, and so on. But for a month, every Thursday at about the same time, I would receive a Siri suggestion to start the timer for the oven.

What struck me as particularly hilarious about this quoted Siri suggestion was the utter randomness of it all. First and foremost, Francesco is one of my best friends, but he lives in Italy and I live in Spain. We rarely communicate via phone, preferring Google Meet chats. ‘Starting a check in’ with him is probably the last thing I would need to do, given the circumstances. ‘Let them know when you arrive at your destination’ is also hilarious, because that’s the kind of notification you would expect to receive while driving, maybe at night, maybe when you’re going back home after a night out. When I got this notification, it was mid-afternoon, and I had headed down to the parking lot of my building to retrieve a few items I’d forgotten in the car after the move. From a geolocation standpoint, I had not even left home. There really wasn’t a ‘destination’ I was ‘arriving’ at.

I know I’m being very pedantic about this. It is, of course, not a big deal at all. But this is what you get out of Siri today. A supposedly smart assistant. Something that was introduced as being quite innovative.

Now let’s look at the first paragraphs of the Wikipedia entry for ELIZA. ELIZA is not a person, and I’m not shouting her name in Caps Lock. ELIZA is

…an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, but had no representation that could be considered really understanding what was being said by either party. Whereas the ELIZA program itself was written (originally) in MAD-SLIP, the pattern matching directives that contained most of its language capability were provided in separate ‘scripts’, represented in a lisp-like representation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school (in which the therapist often reflects back the patient’s words to the patient), and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots (‘chatbot’ modernly) and one of the first programs capable of attempting the Turing test.

ELIZA’s creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised and shocked that some people, including Weizenbaum’s secretary, attributed human-like feelings to the computer program. Many academics believed that the program would be able to positively influence the lives of many people, particularly those with psychological issues, and that it could aid doctors working on such patients’ treatment. While ELIZA was capable of engaging in discourse, it could not converse with true understanding. However, many early users were convinced of ELIZA’s intelligence and understanding, despite Weizenbaum’s insistence to the contrary. 
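Weizenbaum’s trick is worth seeing up close, because it demystifies both ELIZA and, by analogy, its modern descendants. Here is a minimal sketch in Python of the kind of pattern matching and substitution the DOCTOR script performed. This is not Weizenbaum’s actual code (which was written in MAD-SLIP), and the rules below are invented for illustration; the point is that a handful of regular expressions plus some pronoun swapping is enough to produce that ‘illusion of understanding’:

    import random
    import re

    # A few Rogerian-style rules, invented for illustration: a regex and
    # some canned responses. '{0}' is filled with the user's own words,
    # reflected back at them.
    RULES = [
        (re.compile(r'i need (.*)', re.I),
         ['Why do you need {0}?', 'Would it really help you to get {0}?']),
        (re.compile(r'i am (.*)', re.I),
         ['Why do you think you are {0}?', 'How long have you been {0}?']),
        (re.compile(r'.*', re.I),
         ['Please tell me more.', 'Why do you say that?']),
    ]

    # Swap first and second person so the echo reads naturally.
    REFLECTIONS = {'i': 'you', 'me': 'you', 'my': 'your', 'am': 'are',
                   'you': 'I', 'your': 'my'}

    def reflect(fragment):
        return ' '.join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(sentence):
        for pattern, responses in RULES:
            match = pattern.match(sentence)
            if match:
                reply = random.choice(responses)
                return reply.format(*(reflect(g) for g in match.groups()))

    print(respond('I need a holiday'))  # e.g. 'Why do you need a holiday?'

That is the entire mechanism: no model of the conversation, no memory, no understanding. Text comes in, a pattern fires, text goes out.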

What’s happening with ‘AI’ today is pretty much the same thing. On the one hand, ‘AI’ tools — despite all the technological advances made since the 1960s — are at their core essentially, functionally the same as ELIZA. They are more sophisticated, sure, but simply because they have been fed enormous amounts of data processed by much faster chips. On the other hand, there’s also the alarming similarity in how such tools are perceived. Some people project sentience and understanding onto these ‘AI’ tools, but there simply is none.

I’m not afraid of ‘AI’, not because I think it’s a positive thing, but because there simply is nothing behind it to be afraid of. There is no intelligence, no sentience, no technical innovation. Only what people project onto it — both the companies producing such tools, and certain segments of their user base. In a way, what I find concerning about ‘AI’ is what surrounds it, and how gullible people can be.

Arthur C. Clarke’s third law is the oft-quoted ‘Any sufficiently advanced technology is indistinguishable from magic’. It’s a perfect fit for what ‘AI’ might look like today to some people. Language models, and large language models (LLMs) in particular, are fascinating, but there isn’t any magic behind them. Just data processing. Which, thanks to the inexpensive, powerful hardware we have today, provides incredibly short computing and response times.
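To show what ‘just data processing’ means in its most stripped-down form, here is a toy bigram text generator in Python. It is nothing like a modern LLM in scale or architecture (the ‘corpus’ below is invented for the example, and real models use neural networks rather than a frequency table), but the task is the same one: predict a plausible next token from statistics gathered over previously seen text.

    import random
    from collections import Counter, defaultdict

    # A tiny, invented 'training corpus'. Real models ingest a large
    # chunk of the Internet instead.
    corpus = ('the assistant sets a timer . the assistant suggests a '
              'check in . the user ignores the assistant . the user '
              'sets a timer .').split()

    # Count which word follows which. This table is the entire 'model'.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(word, length=10):
        output = [word]
        for _ in range(length):
            candidates = following.get(word)
            if not candidates:
                break
            # Pick the next word in proportion to how often it was seen.
            words, counts = zip(*candidates.items())
            word = random.choices(words, weights=counts)[0]
            output.append(word)
        return ' '.join(output)

    print(generate('the'))  # e.g. 'the user sets a timer . the assistant ...'

Scale that table up to billions of learned parameters and a large slice of the Internet as input, and the output becomes far more fluent; it is still statistical continuation, though, not comprehension.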

‘Artificial intelligence’ is a marketing label applied to what is actually just machine learning. And machine learning, per se, isn’t really a sellable product. The problem is, in fact, finding a purpose for these ‘AI’ tools, and finding useful applications for them. The ‘bet on AI’ — unlike past ‘bets’ in the tech industry — isn’t a direction tech companies are deciding to follow after some technological discovery, innovation, or breakthrough. This bet is based on fooling enough consumers and tech-oriented audiences into believing that ‘AI’ is the future and has some meaningful impact or utility in people’s lives. And while I don’t deny the developments and progress in the fields of large language models and neural networks over the past decades, ‘AI’ as it is today is more marketing than actual technology.

‘AI’ is the product of one of the most stagnant periods I’ve witnessed in the tech industry since the 1990s. I’m necessarily simplifying matters here, because the scope of the discourse would otherwise deserve an entire book, and a rather thick one too. The ingredients of what’s off in tech today are roughly these:

  • A lack of a proper breakthrough technology or product with an actual, useful, meaningful impact on common people’s lives.
  • A lack of visionary figures in tech, people with enough experience, brilliance, and acumen to come up with The Real Next Big Thing. Call me a Steve Jobs fanboy all you want, but after his passing, the void he left behind is only expanding.
  • A shift in focus on the part of most tech companies: providing good-quality tools and user experiences for their customers doesn’t seem to be their primary concern anymore, while their obsession with growth keeps increasing, reaching alarming and ultimately unsustainable levels.
  • A consequence of the previous bullet point is that, when a tech company reaches a point where it has little to offer but still wants profits and growth, it will hype whatever little it has to ridiculous proportions.
  • This in turn fills the tech landscape with a lot of smoke but little to no fire, so to speak. There’s a lot of empty talk, a lot of marketing pitches, a lot of buzzwords, a lot of hype for whatever product looks vaguely different and ‘out of the box’ at first sight — but very little substance behind it.

It’s amazing to me that, after the huge hype-wave surrounding cryptocurrencies, web3, and the like, after all the empty promises and the actual, severely-damaging frauds perpetrated by some ‘crypto gurus’, we are expected to treat ‘AI’ as something different or useful, when it is essentially a similar plant growing out of the same kind of soil.

‘Artificial intelligence’ is a potentially more appealing label than ‘machine learning’ because it is quite evocative. People are familiar with the concept through all the science fiction they’ve been consuming for years. Artificial intelligence is the spaceship’s central computer that can be queried using natural language, that promptly analyses and understands all kinds of data in a situation, and provides solutions or at least meaningful guidance to solve an issue, prevent a disaster, or gain some deep insight. Artificial intelligence is the sentient, helpful cyborg or android offering assistance or even doing all kinds of automated tasks that are too numbing or time-consuming for a human; again, fully understanding everything around it, perfectly contextualising tasks and queries, and acting accordingly. (But, as a comedian whose name now escapes me aptly put it in a stand-up routine, artificial intelligence is also HAL 9000 in Kubrick’s 2001: A Space Odyssey, and why would we want that!?)

The way AI works in these fictional scenarios is pretty much identical to the way human intelligence works; it’s a scenario where the singularity has already occurred and machines are self-sufficient pieces of equipment capable of researching data on their own, internalising it, processing it, and making deductions and projections. There’s a sort of implied omniscience, objectivity, and infallibility about them.

But there’s a fundamental problem the ‘AI industry’ is trying to sweep under the rug as quickly as possible, in the hopes that people won’t notice: ‘AI’ today is nothing like those evocative examples of ‘true’ artificial intelligence depicted in many sci-fi scenarios. Companies and ‘AI gurus’ desperately want people to believe that we’re getting closer and closer to those scenarios, because they want people to buy into the idea of ‘AI’ — and into the imperfect, unreliable, practically useless tools they’re presently concocting. If you want any proof of the extreme lack of vision in the tech industry right now, just look at how every tech company, no matter their size, has mindlessly jumped on the ‘AI’ bandwagon by either ‘developing’ some kind of ‘AI’ product or service, or slapping an ‘AI’ feature onto their current offerings. Instead of focusing on real technological progress for the betterment of humankind, they’re trying to legitimise the ‘AI’ snake oil by making it widespread. But a placebo is a placebo: the fact that you can find it everywhere doesn’t make it an actually effective cure for an illness.

I’ve only recently subscribed to the excellent Where’s Your Ed At? newsletter by Ed Zitron, and in the 16 July issue, titled Put Up or Shut Up, he makes great point after great point on the subject of ‘AI’. Subscribe and read the whole issue, which is definitely must-read material. I’ll quote some bits for now (there are a lot of links in the original text, which I have omitted here; subscribe to Zitron’s newsletter for more information):

These stories dropped around the same time a serious-seeming piece from the Washington Post reported that OpenAI had rushed the launch of GPT-4o, its latest model, with the company ‘planning the launch after-party prior to knowing if it was safe to launch,’ inviting employees to celebrate the product before GPT-4o had passed OpenAI’s internal safety evaluations. 

You may be reading this and thinking ‘what’s the big deal?’ and the answer is ‘this isn’t really a big deal,’ other than the fact that OpenAI said it cared a lot about safety and only sort-of did, if it really cared at all, which I don’t believe it does. 

The problem with stories like this is that they suggest that OpenAI is working on Artificial General Intelligence, or something that left unchecked could somehow destroy society, as opposed to what it’s actually working on — increasingly faster iterations of a Large Language Model that’s absolutely not going to do that. 

OpenAI should already be treated with suspicion, and we should already assume that it’s rushing safety standards, but its ‘lack of safety’ here is absolutely nothing to do with ethical evaluators or ‘making sure GPT-4o doesn’t do something dangerous.’ After all, ChatGPT already spreads election misinformation, telling people how to build bombs, giving people dangerous medical information and generating buggy, vulnerable code. And, to boot, former employees have filed a complaint with the SEC that alleges its standard employment contracts are intended to discourage any legally-protected whistleblowing. 

And:

The reality is that Sam Altman and OpenAI don’t give a shit, have never given a shit, and will not give a shit, and every time they (and others) are given the opportunity to talk in flowery language about ‘safety culture’ and ‘levels of AI,’ they’re allowed to get away from the very obvious problem: that Large Language Models are peaking, will not solve the kind of complex problems that actually matter, and OpenAI (and other LLM companies) are being allowed to accumulate money and power in a way that’s allowed them to do actual damage in broad daylight. 

And my favourite bit, later on (emphasis mine):

Generative AI’s one real innovation is that it’s allowed a certain class of scam artist to use the vague idea of ‘powerful automation’ to hype companies to people that don’t really know anything. The way to cover Thrive’s AI announcement [or any ‘AI’-related announcement — RM] isn’t to say ‘huh, it said it will do this,’ or to both-sides the argument with a little cynicism, but to begin asking a very real question: what the fuck is any of this doing? Where is the product? What is any of this stuff doing, and for whom is it doing it for? Why are we, as a society or as members of the media blandly saying ‘AI is changing everything’ without doing the work to ask whether it’s actually changing anything? I understand why some feel it’s necessary to humor the idea that AI could help in healthcare, but I also think they’re wrong to do so. 

[What’s Thrive? Zitron explains: “Last week, career con-artist Arianna Huffington announced a partnership between Thrive Global (a company that sells ‘science-backed’ productivity software(?)) and OpenAI that would fund a ‘customized, hyper-personalized AI health coach’ under Thrive AI Health […]. It claims it will ‘be trained on the best peer-reviewed science as well as Thrive’s behavior change methodology,’ a mishmash of buzzwords and pseudoscientific pablum that means nothing because the company has produced no product and may likely never do so.”]

And finally:

The media seems nigh-on incapable of accepting that generative AI is a big, stupid, costly and environmentally-destructive bubble, to the point that they’ll happily accept marketing slop and vague platitudes about how big something that’s already here will be in the future based on a remarkable lack of proof. 

The quote above is essentially a better-worded version of an observation I independently wrote down a month ago, before I became aware of Ed Zitron’s newsletter, but I prefer to quote him because he delivers the punch much more effectively than I would.

As I recently wrote on Mastodon, the greatest trick tech companies are trying to pull is convincing the world that real AI exists. I can’t wait to see the reckoning, especially when the markets come knocking at these companies’ doors. Every time you see some unspecified AI-related ‘accomplishment’, the correct reaction should be “Yes. And?” The AI house of cards can’t withstand two or three rounds of “Yes. And?”

Like, “Look what this chatbot is capable of!” 

(A chatbot that’s been fed literally millions of pieces of data and still comes up with bad approximations and incorrect answers. But you play along and just ask:)

“Yes. And?”

It all trails off. There’s nothing there. What’s the point of a tool that wastes insane amounts of energy by vacuuming up equally insane amounts of information, mostly stolen from the Internet without permission or compensation?

Last year I was starting to feel bad about my increasingly unenthusiastic and cynical outlook on tech. Now more than ever I believe more people should share this outlook and attitude. I’ll always cheer the indie developer coming up with great ideas for software. I’ll always cheer anything that is actually good, useful design, in hardware, software, and UI/UX — I’m not entirely blind to good stuff in tech. But people should really stop drinking the Kool-Aid offered by the big names in tech, and should look past the big plates of bullshit they serve on a daily basis. The current situation closely resembles Hans Christian Andersen’s famous folktale The Emperor’s New Clothes. People should cultivate a healthy scepticism, avoid buying into the ‘AI’ hype and pretence, and focus on what ‘AI’ actually does (very little), not on what it is said it might be capable of in some unspecified future. ‘AI’, in its current form, is like Andersen’s emperor — it has no clothes.
