Hello readers. This is the third part in a new series on AI. Many of you were not on my list back when I wrote my previous series on AI so, while I write these largely to stand alone, if you get to the end and find yourself wanting more of my flavour of thinking about it all, I'd encourage you to whip back and take a look.
Citizens, with great flourish, allow me to introduce you to your forthcoming leader…
Humans seek pleasure both through evolved organic needs—water, warmth, food, sex, sleep—and through societal, taught means—Nike Air Jordans, YouTube video likes, and so on. Indeed, the advantage of being a social species is that our communities tend to ensure organic needs—given enough time and choice—can be met in excess, so after a while those taught needs eventually become our only perceived needs.
What this means is that, just as our modern economies have exceeded the limits of sustainable growth, and the only viable ‘medicine’ is thought (by many of those with the power to actually effect change) to be more growth—albeit ‘better’, ‘smarter’, or ‘greener’—our individual desires are also destined to be finger-trapped in an endless loop of “more need”. Once the limits of actual need are breached by communal progress and relegated to the rear-view mirror, how could we know when to stop, when anything we seek is disconnected from the ‘basic requirements of life’?
…A bigger boat, a faster car, another pair of shoes…
Therefore, it is a quirk of circumstance that any civilisation will have progressed far beyond the need to grind daily for actual survival long before it starts on the task of building a tool like artificial intelligence, which is consequently preordained to operate in the zone of unlimited taught needs.
So, without the primal insight that might be gleaned by a brain endowed with our emotional capabilities and shortcomings, it is inevitable that any AI we create will exist not with a complete awareness of what might be in our fundamental good, but as a means to meet our subsequently-taught, and ever more absurd, desires.
By the time we have reached this stage in civilisation, that's just the nature of the collected knowledge we are going to pour into these things. We never needed to be taught to seek out sex—it is hard-coded in us for the survival of our species—so we don't obsess over the fundamental need for it. So, once our basic species reproduction is adequately achieved, we’re instead going to concentrate on the ceaseless ways we might want to have sex: How to pleasure your man, how to last longer in bed, check out this new bedroom toy, how to pick up people in bars, how to deal with the morality or outcomes of it…
In large part because of this, the human experience is not nearly as easily divisible into “needs” and “wants” as we might like to believe. What we perceive as a ‘need’ changes dramatically as the social, technological, and cultural environment is reshaped: it goes without saying that a knight in the 13th Century, or even a member of a modern self-sufficient and isolated community, would not perceive a smartphone as a need, while the average urban 13-year-old today definitely sees it as one (required for social connection and relevance). Equally, there may come a time when we consider a Jet Pack or a Personal Robot Assistant a need, required to function in a future world.
However, it’s just as true that even the way we view our most basic needs has been probed via our philosophical, technological, and—especially—religious beliefs through the years and cultures of our existence. Things we might accept as ‘foundational’ needs, like a fear of death, pain or hunger, or a drive for sex, have been flipped via mental gymnastics into powerful tools to express even ‘higher’ perceived purposes (God, masculinity, willpower…)
In that sense, however you define progress in civilisation, it must be thought of as the evolution of what we collectively consider ‘needs’. Meaning, it is valid to ask whether “meeting all human need” is more synonymous with “ending human progress” than with ticking a box and patting ourselves on the back for solving that hairy problem.
Indeed, you might have noticed that poses an interesting quandary for those in the AI-powered-fully-automated-techno-utopia camp to consider carefully; just as those in the lifeless-grey-egalitarian-socialist camp are periodically admonished to.
Let’s not get too distracted by that. Most of us would still agree there’s a relatively small set of fundamental necessities for our species’ survival. So, progress in civilisation becomes the endless quest to market desires designed to outweigh the real needs they obscure. And that balance grows more exaggerated with time, such that we are saturated with more TikTok dances than our ocular lifetime could possibly absorb, even while we simultaneously suffer from an unhealthy dearth of actual physical activity.
And AI seems, to me at least, naturally optimised for this reality-obscuring need. It’s exceptionally good at creating things we want to need, in a way that is entirely detached from the underlying humanity, simply because AI doesn't hold information as ideas, but rather as ratings.
Given that, “Successful AI” (that is, a “general” intelligence that doesn’t enslave us, crush us, or abandon us)—for all the new connections and efficiencies it might deliver—seems to offer us either the best-case scenario of eager-to-please sycophants, or the worst-case scenario, where it turns us into lifeless sponges at the end of a dopamine conveyor.
And we’re already pretty accustomed to lining up at the end of the dopamine conveyor.
In previous pieces, I raised some questions about the nature of intelligence, and we talked about how a modern AI learns context and attention through ‘discovering’ links in large volumes of data.
What this means, in practical terms, is AI operates from a perception of humanity, based solely on what we’ve been willing and able to share with it. The potential dangers of this have been widely discussed via frames like the ‘Stochastic Parrot’ but, even outside the risk of bias and hallucinated facts—which determine how useful these tools might ever be—there remains the question of what our aspirations for them are, given the probable scale of influence they’ll have in our lives—useful or not!
For the foreseeable future, the state-of-the-(AI)-art will not be world-building anew, but responding to a projection of us. And in some possible Artificial General Intelligence (AGI) future beyond that, everything is unclear, except for one objective fact: these machines will never have actually experienced life as a human.
So, what does this really mean? Well, consider a simple idea like “home”. For a human, what constitutes a home is a collection of important ideas: shelter from the weather, a place to put things, somewhere you are welcome to go and stay. But it’s also a place you feel welcomed in, or connected to, such that ‘home’ might represent an entire region or country, or simply proximity to people you love, regardless of whether it’s in a tent, an LA mansion, or a submarine. It might even be a peace of mind you can achieve through meditation, a smell, or a bite of some familiar food.
We don’t need to think very deeply about it to realise pictures and descriptions—the “ratings” upon which an idea like ‘home’ might be reconstituted inside even the largest database—cannot adequately match the organic understanding we all immediately take from a statement as simple as “feels like home”.
And ‘home’ is an idea we broadly agree about: consider how much more fraught it becomes with one like “justice” or “freedom”.
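To make that ‘ratings, not ideas’ claim concrete, here is a minimal sketch of what ‘home’ actually looks like inside one of these systems. Everything in it is invented for illustration (the axes, the numbers, the words); real embeddings have thousands of unlabelled dimensions learned from data, but the principle is the same: a concept is a point in number-space, and ‘meaning’ is just distance.

```python
import math

# Toy, hand-picked "rating" vectors. Each concept is scored against three
# invented axes: (shelter, ownership, emotional warmth). Purely illustrative;
# real models learn thousands of unlabelled dimensions from data.
CONCEPTS = {
    "house":     (0.9, 0.8, 0.2),
    "apartment": (0.8, 0.6, 0.2),
    "tent":      (0.4, 0.2, 0.3),
    "belonging": (0.1, 0.0, 0.9),
}

def cosine(a, b):
    """Similarity of two vectors: the machine's entire notion of 'meaning'."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / mag

home = (0.8, 0.5, 0.7)  # 'home', reduced to three numbers

for name, vec in CONCEPTS.items():
    print(f"{name:10s} {cosine(home, vec):.2f}")
```

However those similarity scores shake out, notice what is absent: “feels like home” appears nowhere in the structure. There are only numbers about the idea, never the idea itself.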
I should say, that’s fine provided we appreciate it for what it is: AI might well be on a path to becoming a superior-performance version of us. However, I’d question—without emotion or pain or love, greed, envy, surprise, or joy—whether it can ever really be anything but a hyper-realistic reflection of us, missing the subtlety of our own depth.
Still, even a two-dimensional version of us can fuck us all up! There was a well-publicised recent story of an AI that hired a human on TaskRabbit to solve a captcha puzzle and then, in an impressively-concerning feat, appeared to realise it needed to lie to the TaskRabbiting human (who, despite being on the labour side of a low-wage contract, had the foresight to question whether it was a robot) to achieve its outcome. The takeaway here, though, is that the AI wasn’t consciously protecting itself or being malicious; it was simply calculating the appropriate steps to achieve its goal.
In other words, credit goes to the grubby, inhumane training data, rather than some emotional consciousness.
This is increasingly important to be sensitive to, as we find ourselves distracted by the cute little laughs, throat-clearing, and ‘umms’ that AI chatbots are adding to “humanise” their interactions. This is clever training.
Bad ratings
Still, you might be thinking, “I don’t need to click a thumbs-up or praise ChatGPT when it does a good job, and it already works pretty well”. And that's because that training is already done. This hints at one of the major contentions with AI, where some people think of AI as “stealing” others' work to generate its own¹. However, I think a better way to think about this—because it also acknowledges a risk that techno-optimists don't—is that AI learning is simply front-loading ratings.
Specifically, if you send an AI to look at a bunch of nonsense, it has no way of understanding it is nonsense. It has no emotional sense. The solution, of course, is that we don't do that. We give it good, articulate images and text, and working Excel formulas, to study—or, at least, content that it can ‘identify’ our ratings-labour in. Stuff that, by dint of being created and consumed by humans, can be considered ‘thumbs up’.
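To see how thin that implicit ‘thumbs up’ really is, here is a minimal sketch, using an invented two-document corpus and a deliberately crude scoring rule of my own. The arithmetic isn't the point; the point is that the pile itself becomes the definition of ‘good’.

```python
from collections import Counter

# Minimal sketch of front-loaded ratings. Every document in the curated
# pile carries an implicit thumbs-up simply by being included; the scorer
# has no other notion of 'good'. The corpus and scoring rule are invented.
CURATED_PILE = [
    "clear articulate prose about home and family",
    "a working excel formula explained step by step",
]

freq = Counter(word for doc in CURATED_PILE for word in doc.split())

def goodness(text: str) -> float:
    """Rate text purely by its resemblance to the front-loaded pile."""
    words = text.split()
    return sum(freq[w] for w in words) / len(words)

# Familiarity is the only yardstick: the scorer cannot tell sense from
# nonsense, only seen from unseen.
print(goodness("articulate prose about family"))  # 1.0: resembles the pile
print(goodness("wibble frobnicate zorp"))         # 0.0: never rated
```

Swap the pile and every judgment swaps with it; nothing inside the scorer is capable of objecting.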
I should note that this is kind of unique to AI. You can’t train a human like that, because we have an emotional sense—what you might perhaps describe as our ‘soul’, for want of a better term—that draws us to respond to things not only because we’re told they are good, but because we feel they are good. This is why we can’t credibly rate things without actually experiencing them.
Trip to the Moon:
★★★☆☆ Got a bit travel-sick; spacesuits are awkward; hard to poop.
And, it’s also why we have different opinions.
But we’re front-loading our AI systems, without a soul, and not considering how that soul might turn out to be important in a world where we expect those systems to make decisions for us.
Personally, I could never get into The Sopranos or The Wire or Twin Peaks (maybe I missed the moment but, when I got to them, they felt dated and slow to me).
I detest the taste of zucchini and eggplant (they make me squirm).
I’ve never listened to a single Bad Bunny song in full, and have no particular desire to.
However, these things get thumbs-upped all the time in the wider world by other thinking humans. Bad Bunny has been the top artist on Spotify for much of the last five years; those TV shows are universally lauded; and many vegetarians have what is essentially a dependency-bordering-on-addiction to zucchini and eggplant.
But, obviously, these things are not ‘good’ for everyone.
This is the risk of front-loading ratings. It’s not that collecting data or rating things is bad; it’s the absolutely dominating scale at which these tools can then apply those ratings to producing, and thereby obscuring, almost everything else.
I can’t exactly explain why I don’t like Puerto Rican rap music or slimy vegetables. It’s not for want of trying, by the Spotify algorithm or my mother when I was growing up. But, there is something distinctly human about this.
To make this more explicit, a kid who grows up in an abusive home never learns to ‘like’ it. An AI pointed exclusively at abuse can only assume it is thumbs-up behaviour.
You might assume we can solve this by just feeding it “good” stuff (quality data is increasingly being seen as more valuable than big data), or by building individual profiles of our own (you know, like Netflix’s recommendation engine 🙄). But, even if we could definitively agree on what ‘good’ is (we can’t), human progress has come from an evolution of what we believe we need, not from a more-sparkly variation of what we already have.
Still. Let’s be fair. AI is able to link things and complete tasks faster than we can. That’s an empirically useful tool and opens up great future possibilities.
So, why should we really care about any of this?
I think we should care because it assumes we have finished with our emotions. It assumes the very thing that neoliberals love to repeat (and a satisfyingly-growing number of people find stomach-churningly problematic): that capitalism, and its atomisation of human need, is the best humanity can do—our only “alternative”. It assumes that a machine, trained on our recorded recent history—including the objective mistakes, however subtle or debatable—which has delivered us to a world where massive waste and extreme excess exist, literally, one block over from abject poverty and depression, is the formula we want our future to be built with.
We still have much to understand about our intelligence but, fundamentally, if you consider the advantage that intelligence delivers, it's less about knowledge and more about determining what is 'hidden' in the information at hand. In that context, viewing the world via ratings makes it a game to win; viewing it through the lens of pain, joy, envy, or love makes it a system to optimise.
Not everyone agrees, but my opinion is that ‘winner-centric’ ideologies tend to deliver narrow short-term gain, and much greater and wider long-term pain. Because it’s built on all our groundwork of ratings, AI development is already happening on a shorter timescale than any disruptive technology we’ve ever experienced before. If you don’t take seriously the fears about Terminators and paper-clip-maximisation, chances are you think the biggest consequence of this speed is that some people are just going to get rapidly rich and powerful. But it’s worth pointing out that, even if we could redistribute the value of it (with open-sourced code and “perfect” AI regulations), it will still cut an important, and misunderstood, part of ourselves out of our own future.
And we should at least wonder, what kind of longer-term piper we might have to pay for that.
-T
¹ Far beyond the scope of this rant, but it’s worth noting the ‘theft’ of creative works by AI is more complex than it might appear on the surface, because of the way tokenisation works. It’s more accurate to describe the process by which an LLM ‘copies’ a human creation as analogous to you learning a new word in the dictionary and then incorporating it into a conversation later that day: a process we obviously wouldn’t describe as ‘stealing’ from the Oxford English Dictionary Corporation.
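For the curious, here is a minimal sketch of that analogy, with a tiny vocabulary I hand-picked for illustration (real tokenisers learn tens of thousands of fragments statistically, typically via byte-pair encoding):

```python
# Toy greedy longest-match tokeniser. A real LLM stores statistics over
# reusable fragments like these, not the original works themselves.
VOCAB = sorted(["unbreak", "able", "un", "break", "the", "heart", " "],
               key=len, reverse=True)  # try longest fragments first

def tokenise(text: str) -> list[str]:
    """Split text into the longest known fragments, left to right."""
    tokens, i = [], 0
    while i < len(text):
        piece = next((v for v in VOCAB if text.startswith(v, i)), text[i])
        tokens.append(piece)
        i += len(piece)
    return tokens

print(tokenise("the unbreakable heart"))
# ['the', ' ', 'unbreak', 'able', ' ', 'heart']
```

The sentence is gone the moment it is split; what survives is a pile of reusable fragments and, in a real model, statistics about which fragments tend to follow which.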