Liability Laughs Last
Debate raged this week when concerned technologists lit up the Bat-Signal over the steep progress LLMs have recently made. Such is the pace of innovation that we need to talk in days and weeks, not months. Those timescales underscore a tsunami of change that promises incredible advancement but also threatens life as we know it.
The request, led by Elon Musk, who notably parted ways with OpenAI and its co-founder Sam Altman in 2018, is to pause progress on this technology for six months. The potential risks and negative consequences of this advancement are becoming apparent, but amid the debate over whether we should pause, many are missing a more basic point: I argue that a pause is practically impossible to enforce. Consider the global response to Covid-19: the rollout of vaccines, the lockdowns and the lifting of travel restrictions all happened at different times and at different paces, and that was at a national, governmental level where regulation is strong. AI advancement is happening in the private sector, where regulation is weak and the actors are many, which makes a pause practically impossible to police.
History rhymes: blue-collar workers resisted automation at the invention of the textile mill, and a similar movement is now plain to see amongst white-collar workers and AI. In the face of an existential crisis moving at breakneck speed, how is society expected to respond? Italy's regulators have already banned ChatGPT, and I wouldn't be surprised to see other governments try to constrain the rollout of AI in the near future. At a governmental level, however, will all governments feel the same? If advancement in AI presents a new arms race, would governments in the East wilfully adhere to a pause? The Chinese think in decades, not weeks: over the past decade they have brought a sizeable share of their population out of poverty while much of the West has stagnated, and China has turned fishing villages into ecommerce hubs in less time than it has taken the UK to build Crossrail.
Has AI outgrown geographical borders? In the fight between the public and the private sector, one could argue that several corporates (and they're the ones in the AI fight!) wield significantly more political power than some nation states. What becomes abundantly clear is that the fear resides in whose hands hold the technology, whose hold it first and what the implications of that are.
The arms-race metaphor lends itself to some interesting comparisons, in the sense that innovation in weaponry and warfare is not unfettered: after all, we don't want everybody inventing new and innovative ways to kill people. Yet some argue that the invention of the nuclear bomb significantly reduced the death toll of war, and the same promise might hold for other advancements. Behind the door of AI lie potentially untold advances in medicine, with benefits for disadvantaged people, as envisioned by Bill Gates.
Where does liability fit into this new world? Slow as the public sector is in comparison to private-sector tech, it brings with it an understanding of liability as well as a regulatory framework to enforce it. Our laws and social norms were formulated around previous generations of technology, and the fast-forming storm around AI will deeply unsettle them. There is growing discussion of how universities and government research agencies, which typically lead on new technology, are being priced out of AI: governments can't afford the compute or the talent, nor do they have the data. AI is thus becoming increasingly privatised with no public analogue, and the result is a limit on the understanding required to formulate laws and norms.
When we consider liability, the concerns originate from a lack of understanding of how LLMs like GPT-4 work. As with a lot of technology, we accept that it works without understanding how. Many people drive cars, for example, but a decreasing number fully understand how they work. It has been argued that if a cataclysmic event wiped out all major technology, fewer than 1,000 people on the planet would actually know how to rebuild the internet properly. And given that the easily accessible fossil fuels, like surface coal, have already been extracted, a post-cataclysmic world would be hard to resuscitate.
Given that, to most people, AI is a black box, what would need to happen for there to be more transparency around it? If the black box means that the builder of the AI assumes liability, or even culpability for things like death, I imagine that box will be opened quickly to shift the blame. There are already AI tools that surface certainty scores in their UI as a proxy for flagging when humans should get involved, and that learn from the humans' manual input at that stage.
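As a rough illustration of that pattern, here is a minimal sketch of a human-in-the-loop gate: a model returns a label with a certainty score, anything below a threshold is escalated to a person, and the human's answer is retained as future training data. The names, the threshold and the stubbed model are all assumptions for illustration, not any particular product's API.

```python
import random
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off; real systems tune this per use case


@dataclass
class Decision:
    item: str
    label: str
    confidence: float
    decided_by: str  # "model" or "human"


@dataclass
class HumanReviewQueue:
    """Collects low-confidence cases; the human labels become retraining data."""
    training_data: list[tuple[str, str]] = field(default_factory=list)

    def review(self, item: str, model_label: str, confidence: float) -> Decision:
        # Stand-in for a real review UI where a person inspects the case.
        human_label = f"human-verified:{model_label}"
        self.training_data.append((item, human_label))  # fed back into retraining
        return Decision(item, human_label, confidence, decided_by="human")


def classify(item: str) -> tuple[str, float]:
    """Stubbed model call returning a label and a certainty score in [0, 1]."""
    return "approve", random.random()


def decide(item: str, queue: HumanReviewQueue) -> Decision:
    label, confidence = classify(item)
    if confidence < CONFIDENCE_THRESHOLD:
        return queue.review(item, label, confidence)  # escalate to a human
    return Decision(item, label, confidence, decided_by="model")


if __name__ == "__main__":
    queue = HumanReviewQueue()
    for claim in ["claim-001", "claim-002", "claim-003"]:
        print(decide(claim, queue))
    print(f"{len(queue.training_data)} case(s) captured for retraining")
```

The design choice matters for liability: the threshold is an explicit, auditable line between decisions the machine owns and decisions a person owns.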
This is a fascinating time to be in law, and in insurance. Insurance companies need to quantify risk in order to insure it, and that will require transparency. Insurers therefore face two choices: make cover unaffordable, which seems like an unacceptable outcome, or force that transparency. Some white-collar industries, like law, will find the adoption of AI quite troublesome: it is possible to sue a law firm, but no case has yet been brought against ChatGPT. If an employee uses an LLM to come to a decision, which party bears the brunt of the legal liability? Law firms will increasingly claim on professional indemnity insurance, raising their hourly rates to cover the increase in premiums.
Overall, the main points are clear. The AI genie cannot be put back inside the lamp, so even thinking about trying is futile. There are valid concerns over who drives the advances in AI, and towards what end. Regulators cannot keep pace with the speed and implications of an innovation that has the potential to upend life as we know it. And when it comes to liability, we need transparency into how LLMs work so that we can plan around them.
At the dawn of the new millennium, mobile computing held so much promise: the world's knowledge in our pockets would change the game for everyone. It didn't happen immediately, but the advent of AI has now forced this new world upon us, whether we like it, and are prepared for it, or not.