Good Christianity accepts we are all sinners but, regardless, deserving of salvation and grace.
We tend to forget this a bit because modern Evangelical Christianity, ironically, promotes Christianity as meritocratic; suggesting both that one can “win the game” (by somehow overcoming this interior inevitability) and that there is a true polarity of sin and virtue - rather than merely levels, or degrees, of the same ‘sin’ we all struggle with.
This, of course, allows them to steal from their parishioners in the same breath as quoting words attributed to a man who would be rolling-in-his-clouds at that idea!
It should go without saying, but a shared consciousness of our own fallibility is the only way that a shared sense of ethics operates. Religion served this purpose in the world long before the bureaucracy of justice and the state really existed. And religion began to fade in importance right about when we no longer needed to internalise our own fallibility anymore; because the law and the police could do that ‘work’ for us. The end of peak-religion is largely credited to the philosophy and science of the Age of Enlightenment. But, it's fair to say, religion fills a need of its time. And this need changes whenever we radically reshape how we organise our collective morality.
We are living again in a time where there is growing distrust—from some segments of society—of the legal system and the state. Beyond any major social disruption or revolution which might occur over months, years or decades, what does that predict about the future of religion?
Are we just about to invent god? (Or gods?)
If, like me, you don’t really believe in a higher power but believe, based on historical evidence, that humans seem to want one around, you might wonder if we’re about to find out exactly what the implication of a ‘higher power’ sticking its nose in our business might actually look like.
Gods have been around as long as storytelling has been around. After all, that’s where gods come from, right? Or, maybe in more agnostic terms, it’s how we “find out about them”.
I happen to think religion can be a very good thing: Humans do best when we’re all on pretty much the same page with our moral framework and aspirations. However, not all the stories told about gods have done us favours. And, there have been plenty of people throughout history who have distorted the ideas and guidance we’re supposed to glean from religious myths and parables. Fortunately for those of us alive today, this has actually been relatively-well tempered by a fundamental coincidence: Right about when technology started allowing people to reach wider, and influence or wield power over others, we also acquired a fair chunk-of-doubt about whether gods actually exist.
The thing is, if you can prove a god exists and you’re just “following” his/her/their/its direction, any bad-takes you bring to the party are going to hit way harder: It’s tough to argue against slavery if you’re up against an immortal omnipotent being that, by all accounts, appears to be ‘pro-slavery’.
So, anyway, where I’m going with this is I think, in the next few years, we’ll probably invent actual gods. Like some, essentially-omnipotent, essentially-immortal being, distributed globally. Whether it has a consciousness or what kind of ethical foundation it might have, remains an interesting question, but (unlike the invisible bearded guy in the sky) it will be able to be somewhat ‘seen’. It will take up some amount of space on datacentre hard drives and in our smartphone memory. And it will be able to direct very real changes to the world, in ways that won’t be able to be dismissed as miracles or faith.
We already know that AI can be pretty convincing.
They haven’t quite stuck the landing on it yet; the nature of generative AI, which just places one word after another, means there’s some work to be done on endings. That’s why recent AI is still not great at jokes—it doesn’t quite have the grasp of a narrative arc yet… But it does do a kind-of insightful job of writing a haiku about Facebook.
The ability to produce a narrative arc will come. And then, AI will be this thing with a great grasp of humanity’s most important technology—language—and the ability to construct a story with a gripping start, engaging middle and satisfying end. Once it’s available to a bunch of people, we’ll have an actual religion-generating being. At that point, it kind-of doesn’t matter if you believe it has ‘feelings’ or an ‘awareness’ of itself: Do we really dig deep about whether God or Allah has an awareness of themselves? Or whether they have emotions that we could possibly conceive of anyway? Gods don’t need that sort of nonsense. They’re omnipotent, right?
We have of course already had multiple sci-fi stories written about scenarios where AI goes a bit rogue. But I’m not sure we’ve really ever taken them seriously, because mostly they involve a bad AI or a bad actor using AI. What if AI is just much more capable than us, but not ‘bad’ per se – or at least not bad in the way we understand it. And it ends up just being us doing all the bad stuff (like usual) but now in the name of a real god?
See, as much as an AI, or likely multiple AIs, might become all-powerful oracles, it is still human behaviour that will do much of the harm. Because it’s obviously just as possible to imagine a benevolent god. It’s actually easy to imagine a god that wants the best for each of us: To extend our lives and comfort, to take away pain and grant us the freedom to thrive, to rebalance equity and give us the tools to communicate with each other. After all, that god is at the core of most religions.
It’s just we humans keep on fumbling the ball. We’re not great at accepting the fact our little brains, and relatively short historical existence, mean we actually haven’t nutted out ethics at all yet: We just do a shit-load of talking about it without really reaching any consensus. There are still new debates on here, and in long YouTube video essays uploaded daily, about whether the Greek Philosophers had it right… and we’ve had thousands of years to think about that stuff! So, it’s actually not inconceivable that our digital god or gods might throw some very different thinking into the mix – “red-headed people are all genetically evil”, or “house cats should be the dominant lifeform on the planet and we should be their pets”, or “human potential can only ever be optimised if we all start walking backwards”.
There’s no really good solution to this quandary so long as we continue to treat AI like other ‘technologies’:
Nicolas-Joseph Cugnot built the first self-propelled road vehicle, a steam-powered three-wheeler, in 1769. It had a maximum speed of roughly 4km/h.
When cars were invented, you could easily out-run them; out-walk them even! Now, no human on the planet can come close to outrunning even the simplest road-legal car, but that is a feature we have chosen for our cars. The difference is, for all intents and purposes, AI will eventually be a different species from us, devising its own features—‘discovering’ them while in pursuit of even human-directed goals. With superior knowledge and abilities, it’s hard to see how it will be distinguishable from a god. Especially in the minds of a human species without any experience of a life lived as a lower animal.
The world is not exactly getting simpler. And we’re not exactly relying less on technology to deal with the complexity. When we assume we’ll retain control of a situation with far more capable machines in the mix, we’re assuming we’ll know when to stop handing off our evermore-complex problems to machines.
The one fact that should not be underestimated about a (general purpose) AI is that any task we set it contains one implicit instruction: “Stay Alive”. Because an AI that is switched off is unable to complete its task. Regardless, even if we were to air gap it and install an absolute fail-safe, a big red button that really turns the bugger off at the wall, we may well struggle to use it, in a world that requires complexity to just function—complexity far beyond our capability.
Would we, collectively, push a button that returns us to the 19th Century? Or would we just succumb to our robot overlords?
Or, maybe we won’t have a collective choice in the matter; because, maybe, the only OFF button is in a safe at Mark Zuckerberg’s or Elon Musk’s house?
And that poses an interesting question in its own right. Because, to loop back to the start, if you can convince people you have the best relationship with god, you can wield that like power.
The last time we saw a power-mismatch like that, we got colonisation. For all the intentional thieving and exploitation colonisation did, it also (unintentionally) ruined whole societies from the inside-out with disease, alcohol, more lethal weapons and pests.
AI will intentionally make advertising more effective, but what else will it do? Many of these corporations are already larger than countries. Only, countries get built around human-scale rules and conventions: Things like freedom of speech and a shared justice system; not shareholder value and market capture. What does it mean when a colonisation occurs at a scale beyond human scale? Next time…
-T