“The quick brown fox jumps over the lazy dog”
On the off-chance you’ve never seen it before, this sentence is a pangram: a sentence that uses every letter of the alphabet.
Anyway, now that you know how it goes, you’ll find this puzzle easy:
“The ▬▬▬▬ brown fox ▬▬▬▬▬▬ over the lazy ▬▬▬.”
But notice how it’s not just easy for you to fill those gaps ‘correctly’ (i.e. quick / jumps / dog). It’s also easy to fill them ‘incorrectly’, and creatively:
“The hungry brown fox salivated over the lazy mouse”
“The yellow-brown fox rejoiced over the lazy afternoon”
Have you ever considered how you can just be creative like that?
You might never have faced this exact puzzle before, yet you instinctively know it still wouldn’t be accurate to say “The because brown fox purple over the lazy camera”: Those words don’t make sense there.
And you also know “The frail brown fox leapt over the lazy elephant” is not correct—not because it doesn’t make sense, but because it doesn’t logically follow; a frail animal is unlikely to “leap”, and a (realistic) fox couldn’t leap over an elephant anyway. We could make that sentence logical, but it would require creatively padding it out a bit: “The deceptively frail brown fox used his superpowers and leapt over the lazy elephant.”
It’s relatively straightforward—if time-consuming—to write rules that teach a computer to fill in blanks: This pattern requires a noun, this one a verb, and so on. What modern AI does differently is determine the rules by itself, by “watching”.
In other words, it learns.
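The hand-written-rules approach mentioned above can be sketched as a toy: each blank declares the category of word it accepts, and a program fills it from a fixed list. Everything here (the category names, the word lists, the template syntax) is hypothetical, made up purely to illustrate the contrast with a system that learns its own rules.

```python
# A hypothetical sketch of the "write the rules by hand" approach:
# each {SLOT} in the template names the word category it accepts.
RULES = {
    "ADJ": ["quick", "hungry", "yellow-brown"],
    "VERB": ["jumps", "salivated", "rejoiced"],
    "NOUN": ["dog", "mouse", "afternoon"],
}

def fill(template, choices):
    """Replace each {SLOT} with the chosen word from its category."""
    for slot, index in choices.items():
        template = template.replace("{" + slot + "}", RULES[slot][index], 1)
    return template

sentence = "The {ADJ} brown fox {VERB} over the lazy {NOUN}."
print(fill(sentence, {"ADJ": 0, "VERB": 0, "NOUN": 0}))
# → The quick brown fox jumps over the lazy dog.
```

The fragility is obvious: every category, word, and pattern has to be enumerated by a person in advance. The learning approach inverts this, inducing the equivalent of those lists and slots from observed usage.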
Of course, it’s hard to know to what degree AI learns like we do, because we didn’t exactly ‘design’ either method. Human learning processes evolved and, in a sense, so has AI learning. In a previous piece, I talked about how what AI ends up as is akin to an extensively cross-referenced book: a bunch of information the program has observed us using, with a rating for how each bit gets used in relation to each other bit.
This allows us to build ‘generative AI’, not because we can build an intelligent being but because we built a tool that can observe how we construct things—a giant reference book with billions of links. That’s how something that is built around scale—essentially brute force—can be leveraged to do refined and advanced things, including such expansive tasks as building out the story for us: “The sun was shining brightly as the quick brown fox jumped over the lazy dog. The dog barked, and shakily rose to his feet to chase the elusive fox”.
This clearly gives an AI a certain creative ability—or, depending on your level of cynicism, an illusion of creative ability—because a vast reference of rated possibilities allows it to reconstitute words that “work”; and even in ways unconventional enough to be considered creative. An AI music composer can, for example, mathematically rate all the interactions between familiar musical scales, modes, and rhythms, and come up with something genuinely novel, yet still entirely human-centric. Likewise, an AI image generator can reconstitute the details, from one pixel to the next, as required to create an image of a sparrow riding a motorbike.
We’ve gotten used to seeing pictures like that in the past few months. But let’s just take a moment to consider what it is.
None of us has seen a sparrow riding a motorbike. We have no photos to model how that might look, so the AI can’t just ‘copy’ one. Regardless, it built the image using an understanding of complex ideas, like a bird with short legs would need the foot brake to be higher on the bike frame, and it would need to turn the wings into “arms” to hold the handlebars. It added a helmet, jacket, and riding goggles, and decoded the quirky nature of the prompt to evoke a cartoon style, with exaggerated flowers and smiling creatures. It even added a cute bird feeder and sidecar without any instruction to do so.
An AI ‘hawk’ (all bird-puns intended!) might wonder whether it shrank the right-wing mirror in consideration of the fact that the position of the bird feeder makes that mirror functionally redundant. An AI ‘dove’, on the other hand, might point out almost all of the elements—including the genre—are nothing but fortuitous “hallucination”, given my sparse prompt.
More pointedly, we can’t satisfy either perspective, which raises the question of whether we should be able to understand how and why these models ‘decide’ these things—the “black box” dilemma, if you will.
Regardless, for a thing built out of big data and thumbs-ups, this is still impressive, right?
But, is it art?
It seems that question can be answered by acknowledging that the underlying tool could just as easily have generated nonsense, given a different set of training data or a less mature architecture. In that scenario, the hawk might—euphemistically—describe the problem as hallucination; while the dove might point out the technology is unfit for purpose.
Only, we’d still have to concede it is creating something.
When a human puts zero effort into something and the result is objectively garbage, we place it on a continuum between garbage and maliciousness. We don’t even try to judge it as creative, let alone art.
But does that mean an AI is just inherently creative? It’s designed to fill gaps in an innovative way—like, that’s its whole job. All other things being equal, it exerts an identical creative effort to produce something bad from bad data, as something good from good data. So, it can’t be anything except creative.
While a human can use creativity just like an AI to ‘fill gaps’, in a weirdly contradictory way, we generally measure human creativity against things like time, effort, or experience—not knowledge. A work of pre-Renaissance human art is as powerful as one created today, despite the newer piece benefiting from an additional 600 years of consolidated human knowledge. To put it another way, we wouldn’t describe a human’s ability to reel off Napoleon’s birth date (15 August 1769) as “creative”, yet a generative AI uses an essentially identical mechanism to deliver that piece of ‘trivial knowledge’ as it does to write a ‘creative’ short story about Napoleon’s birthday.
This makes creativity difficult to define. Might it be safest to concede AI is creative, but art is uniquely human? …Let’s come back to that.
All this is to say: while AI can creatively fill gaps, it’s not coming to its own conclusions about whether that gap-filling is good, because it has no means to do that. The result is a thing that appears increasingly ‘intelligent’ and ‘creative’, but that we’re still unable to hold to the same level of accountability for those attributes that we hold ourselves to.
The more it seeps into our lives, the more salient this becomes and, while this can be regulated at arm’s length, I can’t see how this deeper problem can be solved. However, as long as it continues to deliver ever more well-rated cruft, most people just won’t care. The fewer obvious mistakes it makes, the less conscious we’ll even be of the fact that it’s working without morality or fear or pain or joy—integrity, basically—at least, until something bad happens.
So, if you’re a tech baron looking for the key to sell AI to a world that doesn’t even understand what ‘intelligence’ is, you don’t need to go that deep. Just throw power at reducing the most obvious mistakes—context ones—by simply increasing the paper-thin surface area of its simulated world.
“The quick brown fox jumps over the lazy dog, then peeled banana skin tanning bed bugs.”
This is exaggerated, of course, but I just wanted to demonstrate how something as shallow as context can be perceived as creative ability. If you ask a hypothetical AI to expand on the ‘lazy dog’ story, but limit its power so it can only work with one word at a time, it could easily go off the rails after peeled. Instead of, say, producing “peeled off”, it might derive ‘banana’ from ‘peeled’ and so on, turning the entire message into nonsense: “Banana” leads to “skin”; “skin” leads to “tanning”; “tanning” to “bed”; “bed” to “bugs”… Each newly-added word is creative and accurate within the narrow confines of its single word context, but still useless.
Easy to fix though, right? Better “creativity” can be derived from adding computing power to widen the context window; all without needing any intellectual sense of what’s actually good.
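The one-word-context failure described above can be sketched as a toy word-chain. To be clear, the transition table here is entirely made up to reproduce the essay’s “banana” example; real models learn weighted probabilities over vast corpora and attend to far more than the previous word.

```python
# A toy sketch of the "one word of context" failure mode:
# each word is chosen by looking back at a single previous word.
bigrams = {
    "dog,": ["then"],
    "then": ["peeled"],
    "peeled": ["banana"],   # with one word of context, "peeled"
    "banana": ["skin"],     # suggests "banana", which suggests
    "skin": ["tanning"],    # "skin", and so on down the rabbit hole
    "tanning": ["bed"],
    "bed": ["bugs."],
}

def continue_story(start, steps):
    """Extend `start` one word at a time, seeing only the last word."""
    words = start.split()
    for _ in range(steps):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(options[0])  # locally plausible, globally blind
    return " ".join(words)

print(continue_story("over the lazy dog,", 7))
# → over the lazy dog, then peeled banana skin tanning bed bugs.
```

Each hop is defensible in isolation; the nonsense only emerges at the sentence level, which is exactly what a wider context window is meant to catch.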
My concern is this has as-yet-unknown implications, far beyond the novel creativity that automatically generates rap tracks and PowerPoint decks from single-sentence prompts. That’s because, in humans, this ‘sense’ is the backstop of social expectation. We are a social species, with a shared idea of what it feels like to be a human and, therefore, an inherent understanding of how feelings serve our survival. In simple terms, when we create something, we must—for comfort or survival reasons—apply at least some judgment about whether it’s better to put effort in, make some garbage up, or admit we don’t know. Intrinsically, we know there are consequences, for both our own state of mind and our ability to continue to live a safe and productive existence, to any creative choices we make.
This is important because, in the end, it is this unified experience that gives us the freedom we have in the world (even if it’s just the small freedom of private thought in an otherwise totalitarian, unjust, physical existence). This is freedom, evolved not just through an awareness of some responsibility for your fellow human, but as a biological inescapability. It is freedom that exists because, no matter how someone presents themselves to the world, you know they are there, alongside you, in the ark of mutually assured destruction that is our evolved social reality.
The effervescent money mania that is Generative AI—with its tech billionaire advocates excusing “mistakes” away as hallucination, small context windows, poor training data, or inadequate power, and promising “things can only get better!”—remains a machine. It is not, and never can be, in that same boat as the rest of us.
I guess what I’m trying to say is, I know some big part of you. I know broadly what you experience when you feel love, trust, fear, pain, anger, wonder, or hope. So, if there’s one thing I can trust, it’s that your decisions—while not always sensible or correct—will always be made with roughly the same guardrails as mine.
Of course, no well-designed modern LLM will make such obvious errors as the “banana” one above—that would make it useless. To operate in a generalised way, it needs access to a big surface area. But for a thing designed to be smarter and faster than us, and intended to lighten our mental load, we should hope it operates with depth. The content-mill churn is already a very real problem, but right now we can pretty easily identify it. What happens when these things are still “making up” things, sans any emotional guardrails, but in spaces where our hubris or lack of complete understanding makes it less obvious?
Even if it doesn’t prove dangerous, we’re just getting started on the tsunami of pointless nonsense that is coming, made without anything as human as asking: “Is this even worth creating?” For example, as I write this, the principal barrier preventing our TVs and smartphones from being filled up with entirely AI-generated video content is the computing power required to do something as basic as ‘continuity’. This basic ‘context’ task is performed on the majority of movie and TV sets by low-paid coffee-and-donut-powered interns, simply checking the same actor and same props appear from one shot to the next. To do that basic task, current AI tools require the sorts of monstrous data centres that tangibly shift the GDP of whole countries.
Still, they’ll make it work. And so much more. Generative AI will become so useful we’ll hand it more and more of our creativity, and then many of us will hand it our autonomy. And it will appear to do it all better and better. But it will be doing it all without humanity.
This is all to say that, fundamentally, AI can and will create. But is creativity enough?
Have you ever wondered why the risk and exertion that go into poetry, song, fiction, film, and faith have always proven so vital to our evolved civilisation? Our capacity to distinguish feelings about something from the raw knowledge and acceptance of that thing is—I believe—what lets us see new potential in the world, motivating us to toss away established knowledge and move forward.
Creativity made with a human cost. Is that perhaps art?
An unsynthesisable wellspring
I’m sure you’ve heard the phrase “the beautiful game” in reference to football (soccer). It’s not clear where the phrase originated, but it was popularised through the skill of Brazilian superstar Pelé in the 1960s. It basically made us notice how muddy and sweaty men kicking a ball around in a paddock can be a work of art, and so it followed that young boys and girls kicking a rolled-up sweatshirt in a narrow alleyway was its own art.
Indeed, if you ask 100 people about some “art” that was meaningful to them, you’ll get 100 different answers. An act is art, just as readily as a static sculpture or painting is: A dance, a musical performance, a cutting oration, an actor’s reaction.
Labour is art.
In an age of easier and endless content, we continue to forget this. Mozart’s performances were a marvel of improvisation and prodigious talent that inspired any musician who saw him, yet even though those acts changed musical trajectories, we hardly consider them now in the same terms as his compositions, which survive as “content” to be reproduced.
Long before we had the technology to capture the grace of a ballet dancer on film, or the skill of a great orator on wax, we had art moving society onward. The overwhelming majority of the art humanity has created is lost to time, destruction, or not being “in the room where it happened”. Every human interaction into which you bring some personal risk, honed skill, experience, or genuine energy introduces new art into the world—even when most of it is immediately lost for posterity.
Art is the application of life lived and laboured in, and a nudge that charts us forward.
You’re probably a human reading this (barely) so, you know, human intelligence is complicated. The point of these last few newsletters isn’t to dismiss the advantages that new computer tools can deliver us: I’ve fixed a bunch of spreadsheet formulas and produced a whole lot of ridiculous images for these newsletters using generative AI, so I appreciate it’s useful for something. The point is to make sure we don’t get so lost in the indistinguishable magic of AI that we fail to realise it, alone, is not the way we progress.
We can collectively drape the garments of humanity over an AI form. We will have conversations with it, and train it on obsessions from our own moment in the world. We can, together, marvel at its curious, sex-starved, God-fearing, megalomaniac responses, and write and read the inevitable shocked and compelling newspaper stories about it. We can ask it to do our bidding, and wonder if, or when, it will take our jobs.
We can even give it (“her”) a name.
This sort of understanding about the nature of AI is what leads to equal parts panic and techno-optimism, but it misses a key point: AI is a machine being built, and built for the benefit of a small subset of humans. As magical and remarkable as the AI tricks may seem, intelligence without feelings is just another tool of leverage, for those it is trained to be a cultural and practical fit for, and those either forced, or willing, to abdicate themselves to engage with it.
For my part, I think there’s still life and mystery left in our humanity. By all means, embrace technology, but let’s not front-load forever more. A good life still needs some impulsive emotion in it. AI might capture creativity; but don’t forget, art is ours.
-T