It’s hard not to be a fan of the broad concept of ‘free speech’: Harder still when you’re (simultaneously) spouting opinions about random shit on a website readily available on billions of devices… and also aspiring to lead a long healthy life!
I suspect we’re going to have more discussions about free speech and obligation here at our Little Tea parties [not that one!], but today I want to think about it from the perspective of choice.
Generally, we don’t put a lot of thought into how much of our speech is free. If it’s working, it doesn’t cross our mind.
However, I should point out I’m using “working” there in a more generic way than you might have initially concluded:
You can either be not thinking about your speech because you’ve got so much freedom that nothing you do or say presents any contradiction to it in society.
Or
you can be not thinking about your speech because you’re so-well-conditioned by violence and propaganda that your brain simply shuts that facility off.
In both of these cases, ‘free speech’ is working as intended. But, the intent behind it is very different.
I should also clarify, there is not a clean division between these things—it’s a sliding scale. And I’m probably not being overly controversial when I say—despite the way people-in-charge like to present these things—the ‘free-speech democracies’ most of my readers live in sit somewhere in the middle of that scale.
The main point I want to make is, the most ‘free’ speech we can imagine starts with a choice; but it is one we don’t make in isolation. You individually weigh up the pros and cons of saying something, or showing something, or behaving a particular way. Then society (collectively and independently) weighs up the appropriateness of your action.
We can talk all about ‘rights’ and so on, but the perception of what is “right”, and therefore what should be designated as ‘A Right’, evolves—I guarantee you’ve seen this clearly, even in our own relatively-short lifespans, with the words we can and should respectably use. And the process by which that evolution happens is ‘choices’: I choose to say something intense; you choose to accept it as valid and tell your friends about it; some debate maybe ensues; and eventually we reach a plurality of acceptance: And that ‘intense’ speech is freed.
The reason I say all that, is freedom of speech doesn’t function that way when you’re talking about algorithms…
I drive an electric car (they really are as great as people-who-drive-electric-cars keep saying they are!). They do however require a shift in thinking when you’re on longer trips, because you’re navigating (at least partially) around still-inadequate charging stations.

Even when good charging distribution exists, the infrastructure means the experience is not always seamless. My old petrol car could be driven into any fuel station and topped up—it didn’t matter if it was a Mobil or a BP or some other thing, the fuel worked essentially the same, and the costs were similar, and the whole process occurred in roughly the same amount of time. With EV charging, that goes out the window, because different charging stations charge at different speeds (25kW…50kW…120kW…), and you sometimes pay for the time there as well as the energy transferred, and sometimes you pay even more if you’re parked and have stopped charging. So it can be a little complex to work out the ‘best deal’.
I don’t really mind. I consider myself an early adopter (I’ve owned electric cars for the better part of a decade now) and take the good with the bad. EV charging infrastructure will improve and stabilise but, in the meantime, I still like to make sure I’m getting good value; just like I used to wait for those 10c/litre discount days with the old petrol car!
What this led me to do was create a spreadsheet (as one does) that let you put in the variations of energy and time charges, and have it spit out an estimate so you could see where the best deal is.
On the surface it doesn’t sound complex. However the trouble is EVs charge at different speeds depending on their current battery charge level—it can take a minute or so to get up to full speed, then it will charge near its maximum speed (or at about 80-90% of the charger’s maximum, whichever is lower) between perhaps 5-60%, after which it starts slowing down until it’s charging at a fraction of the maximum speed between 95-100%… You don’t need to know all the details; except to know that getting something even approximately-accurate took quite a lot of fiddling the back-end numbers so I could get a worthwhile estimate—one that would simply tell me how long it might take to put some number of kWh in my car based on how full the battery was when I started charging it.
If you’re not an EV nerd, I realise that’s a garbage paragraph. But “fiddling the back-end numbers so I could get a worthwhile estimate” is the only bit you really need to absorb. That’s the bit that reveals the truth, that anything clever a system like this does requires some ‘fiddling’.
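For the curious, here’s roughly the shape of the estimate that spreadsheet makes, sketched as code. Every number in it—the charging windows, the 85% factor, the taper rate—is my illustrative assumption, not the spreadsheet’s actual back-end values:

```python
# A toy piecewise model of an EV charging curve (all numbers illustrative).

def charge_speed_kw(soc, charger_max_kw, car_max_kw=120):
    """Approximate charging power (kW) at a given state of charge (0-100%)."""
    peak = min(car_max_kw, 0.85 * charger_max_kw)
    if soc < 5:        # ramping up to full speed
        return 0.5 * peak
    elif soc < 60:     # charging near maximum
        return peak
    elif soc < 95:     # tapering: linear fade from peak down to 20% of peak
        frac = (soc - 60) / 35
        return peak * (1 - 0.8 * frac)
    else:              # trickle for the last few percent
        return 0.2 * peak

def minutes_to_charge(start_soc, end_soc, battery_kwh, charger_max_kw):
    """Step through the curve 1% of battery at a time, summing the hours."""
    total_hours = 0.0
    soc = start_soc
    while soc < end_soc:
        kw = charge_speed_kw(soc, charger_max_kw)
        total_hours += (battery_kwh / 100) / kw  # hours to add 1% of capacity
        soc += 1
    return total_hours * 60

# e.g. a 60 kWh battery going from 20% to 80% on a 50 kW charger
print(minutes_to_charge(20, 80, 60, 50))
```

The point isn’t the specific numbers—it’s that even this crude version needs a pile of hand-tuned constants (the ‘fiddling’) before the estimate is worth anything.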
It’s exactly the sort of adjustment that prevented the very first prompt I tried on ChatGPT a few months ago from working—even though I’m certain it could have answered my query (were it not for the ‘fiddling’).
So, AI systems—just like basic pattern-recognition systems or my EV charging calculator—operate with broadly similar constraints to our human-OS. They can spit out knowledge and potentially combine ideas, but there are rules that they follow, consciously or unconsciously depending on their sophistication and how you measure consciousness.
It might be completely unacceptable to code “gang-up on women, and Swedish people in particular” into an AI model, but it doesn’t take much fiddling in the back-end of an AI to make that effectively happen. Imagine you build an AI tool designed to “review CVs and job applications”; you could cause it to slightly downgrade applications that feature first names which end with an ‘a’, with a single line of code. A fiddle that is invisible in your intellectual-property-protected ‘black box’, and essentially invisible to an observer. But still potentially applying a bias against Swedish women like Ava, Hilda and Eva.
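To make the ‘single line of code’ concrete, here’s a hypothetical sketch—the scoring function and names are entirely invented for illustration, not drawn from any real screening tool:

```python
# A hypothetical CV-scoring function; the 'fiddle' is exactly one line.

def score_application(cv_text, first_name):
    score = len(cv_text.split()) / 100     # stand-in for a real scoring model
    if first_name.lower().endswith("a"):   # <- the invisible one-line fiddle
        score *= 0.9                       # quietly downgrade by 10%
    return score

cv = "ten years of relevant experience " * 20
print(score_application(cv, "Ava") < score_application(cv, "Tom"))  # True
```

Identical CV, different name, different score—and nothing in the tool’s visible output would ever hint at why.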
Or, more likely, you just point your AI at some training data that has already done the work for you and let it reach its own ‘organic’ conclusions.
We all know that disproportionate numbers of Black Americans end up incarcerated, unemployed, or on low incomes. This data is so well established and discussed, it seems redundant to even point it out again to thoughtful humans. But, for an AI, this is just data. If you identify as Black, you are statistically “more likely to be a criminal” and statistically “more likely to be poor or unemployed”. Obviously there’s been all sorts of busy-work dedicated to trying to re-balance any biases that may have been ‘socially cemented’ into these AI algorithms—it just remains to be seen how effective that all is over the coming years and decades. Regardless, it is really hard to see how any system designed to serve humanity can be described as functioning “correctly” if it doesn’t largely reflect humanity’s choices—good or questionable.
And so, even without any deliberately-malicious intent involved, you could argue that the output of an AI trained on the input of humanity is no more or less ‘free’ than the output of any single person (excluding those raised away from society by wolves).
Technology has always had a kind of bionic-amplification effect on humanity. Technologies in the fields of the physical, intellectual and, more recently, software-servicing, have given us a way to leverage our own capability beyond our organic stature. And what’s neat is technology in one area spurs other innovation—the iPhone’s creators did not initially conceive of anything like the scale of its subsequent app market: The apps that run on each new, fairly-generic evolution of a “device with a touch-screen, camera, and some common computer hardware building blocks” are now a far larger industry than the device that enabled them to exist.
The car enabled suburbs. The printing press enabled widespread literacy. Money enabled trade. Language enabled communication.
The nuclear bomb enabled the “peace-making technique” of mutually-assured destruction.
But the technologies that come next for us are different. A genetically-modified organic cell can replicate itself; as can an AI model, trained by the trending-towards-infinite output of other AI models, and the (nano) robotic machines that AI might manage.
This is not a historically-familiar situation, where we all get a glimpse at a new technology, and dozens of metaphorical lightbulbs light up above dozens of heads, each inspired by the opportunity our liberal market society has gifted us, to make an even bigger and better industry with it!
In the time it took you to read that last sentence, an AI already constructed 600 ways to exploit its own capabilities; plus a plan for how to re-engineer itself into a machine capable of coming up with another 6000.
Having said that, this is also not totally unfamiliar territory for recent-us either. We’ve seen it in a slightly different context—social media. An idea that goes viral could well be thought of as operating in a similar way. What starts as a single entity, can exponentially become billions of data points; likes and comments and shares, and the resulting real-world actions of that consolidated total. In a matter of hours, and sometimes minutes, it can completely leave the scale of human experience.
I don’t want to get too deep into the relevance of social media here, as it has its own separate and complex relationship with freedom of speech (and probably its own upcoming Little Teapot rant!), but it’s worth highlighting how it demonstrates adaptation, and expansion, far beyond our natural conception.
That should give pause to many of our assumptions about how we might understand freedom in the coming world.
This is all to say, our perceptions about freedom of speech are built around pretty slow-moving social and technical change. When the printing press or the radio or cable TV came along, we had decades or centuries to reframe what ‘acceptable’ speech and action looked like… Even then, we ballsed it up a fair bit!
In my last newsletter I promised this one would be about colonisation. If you’ve been paying attention, you will have noticed (1700-odd words in) that was the first time I’ve used that term.
There’s a reason for that.
When Dutch and British and French ships took off into the wide world to seek out fortune and discovery, no one told the indigenous folk that they were being ‘colonised’. From our modern viewpoint, we get to look back and see that changing both the social tolerance of a choice and the basic awareness of that choice is the core of colonisation. It doesn’t really matter if it’s done by force or generosity or convenience or persuasion; colonisation is simply a thing that slowly and perniciously morphs one set of free choices into another.
This is the 4th newsletter in my series about AI (it started here if you’re interested in revisiting any of the series). I don’t want to end this series on quite such a dark note, so I’ve got one more thought for you about the impact of AI. Stick around…
-T