In a series of editorials where leading writers are free to say what they want under a pen name, our anonymous columnist Little Bird writes about Artificial Intelligence.
It is too late.
The horse is cantering off into the distance while we chat about the relative merits of open or closed doors. The bag is totally cat-free and we are standing around, looking stupid and debating whether to keep the top firmly secured or not. The wind is filling the sails of our ship, as it glides into the distance, while we remain in the harbour looking at charts and worrying whether it might be better to take a different course or not sail at all.
Pick whatever tortured metaphor you like: the point is that the debate about whether we should allow artificial intelligence (AI) into our lives is more or less redundant. It has already been stitched into the fabric of our society and is only going to become more prevalent.
With the launch of ChatGPT and its generative AI capability, though, the intensity of the debate has increased notably, and rightly so. It is an incredible tool, fascinating and alarming in equal measure. Much of its output is indistinguishable from what humans could create, but it operates 100 times faster. Its potential power is easy to see, so it is right that we ask serious questions about how AI is best deployed and how we might limit its risks.
On the one hand, AI can help solve complex problems and automate tasks, resulting in increased efficiency and productivity. It can also assist in healthcare, providing accurate and timely diagnoses, and help to improve public safety by detecting and preventing crime.
On the other hand, the use of AI can also result in job losses and a lack of privacy, as personal data is collected and analysed. If not properly designed, tested and implemented, it can also reinforce biases and inequalities. Therefore, as a society, we need to reflect on these pros and cons before introducing AI into our everyday lives.
Except that we already have. According to the California Institute of Technology, “AI and machine learning-enabled technologies are used in medicine, transportation, robotics, science, education, the military, surveillance, finance and its regulation, agriculture, entertainment, retail, customer service, and manufacturing.”
AI in our lives
For decades, those in the creative world have warned of the dangers of AI, and the apocalyptic tales of global domination by computer-based lifeforms are well known. They provide the basis of some of our most popular books and movies. This dark vision of the future still dominates the debate today, with numerous news stories about how AI is a threat to humanity, citing some of the leading thinkers in the field. Even Elon Musk, the paragon of the forward-at-all-costs tech billionaire, has lent his voice to this warning cry (although some accuse him of trying to hamper his competition).
However, those in favour of the technology say that talk of such threats is exaggerated, and let us hope that is true. But there’s a less dramatic view of AI and its impact that is also worth considering: it may make many things less tolerable.
That may sound trite, and a minor concern when measured against human extinction or mass unemployment. But pouring time and resources into a technology that reduces the general standard of living is the last thing any of us need, especially in an era beset by real, unavoidable problems affecting us all. Moreover, it is a very real possibility, given that, arguably, it fits into a pattern of human behaviour that goes back at least 10,000 years.
In his book Sapiens, Yuval Noah Harari argues that the agricultural revolution, in which most humans went from being hunter-gatherers to farmers, had a counter-intuitive effect. It was good for our evolutionary survival but was less enjoyable than the demanding but relatively healthy hunter-gatherer lifestyle.
Yes, it made it easier to produce large amounts of food in one location, thereby facilitating population growth, which is our fundamental aim as a species. But it also committed us to long days of back-breaking, repetitive work, with damaging physical consequences for our bodies, and that burden only grew with the size of our communities. As Harari puts it: “The pursuit of an easier life resulted in much hardship, and not for the last time. It happens to us today.”
Indeed it does. That smartphone in your pocket – has it increased your leisure time or is it impossible to be unavailable when your employer gets in touch? Those emails you get at work, that fly across continents in seconds – have they resulted in longer holidays or do they mean your colleagues expect you to be working within seconds of receipt?
Is technology our friend?
The time our technology buys back for us is filled with the expectation of more productivity. It seems unlikely that this will be any different with AI. Since you will no longer spend hours reading and summarising a document or writing a report, you can do 10 or 100 in the same time, for the same pay, assuming you are required at all. It is estimated that 300 million jobs may be lost thanks to AI, which would create another form of hardship.
Beyond the impact on jobs, there is also a question mark over the quality of service we can expect in a world where commerce has embraced AI. As consumers, we have already lost tailored, personal service in numerous areas. In its place, businesses prefer to offer cheaper, margin-enhancing alternatives such as call centres, automated phone lines providing generic answers to our problems, and self-service tills where we do the work ourselves. AI will take this further.
Businesses like to present this approach as giving their customers more efficient systems, but they are only more efficient in the sense of saving the company money. The gain is not the customer’s. It is a common and deeply frustrating experience to spend hours repeating your explanation to disempowered company representatives, only to get no solution because your problem lies outside the script or the limited options available in the automated service.
We are already being pushed into using chatbots as a form of customer service, and many reading this will have cursed these systems for frequently failing to understand what a customer needs. More AI means more of that, and fewer humans able to apply their judgement to an individual’s case.
The standard set by the market is ‘good enough at the right price’. So, if AI can do most of the things that humans do but costs a fraction of the price, the loss of service quality will be a compromise businesses are willing to make. The rest of us will have to put up with it.
This will be annoying in some contexts, such as retail or amenities, but could have more serious consequences in areas such as medicine. American political grandee Henry Kissinger has expressed concern about the adoption of AI in key areas of our social structures because of its bias towards winning above all else. For example, imagine a scenario in which AI is used for diagnosis and prescription purposes but is also integrated into NHS performance targets. What is to prevent it from choosing to prioritise budget management over life preservation in order to ‘win’ its assignment?
A lack of control
By definition, AI is outside our control. It is a system designed to take its own initiative, to learn and act beyond human instruction. For this reason, it is also very hard to regulate. In the banking sector, for example, firms are wrestling with the problem of how to be sure that the AI used in credit scoring systems is not racially profiling loan applicants. As it stands, banks say they are unable to provide any such assurance.
A person can be regulated through the threat of sanctions such as fines and dismissal from their job. AI has no such system of incentives. The whole point is that we do not dictate its behaviour. But why would we put society’s systems in the hands of something we know we cannot control?
Then there is the crucial question of how AI blurs the line between reality and artificiality. This may be the most serious way in which it makes life less tolerable. We are already facing a crisis of confidence in our social institutions. Many people say they cannot trust what they read or what our public figures say. In a world in which AI can mimic any voice, write elegant prose, and create an image of any individual involved in any event imaginable, the only rational approach is to trust nothing.
I have a confession. Parts of this article were written by AI. Can you tell which ones? If you can’t, does it make you feel less inclined to trust or believe any of it? Perhaps it was this bit. Who knows?
Welcome to the chaos and confusion of AI and an era of increasingly substandard service. Let us hope we decide it’s a price worth paying.
By Little Bird