A leading technology academic is suggesting that “a little bit of fear” is an appropriate emotional response to the existential threat posed by developments in Artificial Intelligence (AI): that humanity could lose control over machines.
This, says Professor Stuart Russell (founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley), is because fear stimulates action now, rather than risking a wait until it is too late.
How to avoid losing control over AI is one of the themes the professor explores in the latest series of Reith Lectures, entitled ‘Living With Artificial Intelligence’. A rational extension of his proposition is that, unless we are fearful, there will be little to impede the march toward an autocracy of machines.
For his part, Richard Watson, author and ‘Futurist-in-Residence’ at the Judge Business School, Cambridge University, also prompts us to consider the democratic jeopardy to our futures posed by the power of an agenda-setting Silicon Valley community whose demographic profile is far removed from the diverse world to which it directs its ingenious innovations.
Like Russell’s advocacy for action, Watson’s prompt is not a counsel of futility but a call for both an individual and a collective response to rein in a future rushing headlong out of control.
In another sphere of technological development, as far back as 1996, long before machine learning had entered public consciousness as a reality (at that point, AI was still the realm of science fiction), the UK Advisory Committee on Genetic Testing was established to advise on ethical, social and scientific aspects of genetic testing and the requirements to be met by suppliers of genetic testing services.
Since then, in the UK and in most developed economies with genomics-associated academic communities, the field of genetics has been regulated.
The regulatory scheme has aimed to strike a balance: on the one hand, facilitating the considerable benefits to the health and wellbeing of humankind that advances in our applied scientific understanding of the building blocks of life can bring; on the other, mitigating the threat posed by a loss of control to unchecked ‘mad science’.
Whilst debate continues to rage as to whether the right balance has been struck, at least in the field of genetics technology checks against excess exist within established democratic structures: a perceived threat met by a collective societal response, itself the product of action by individuals.
The same cannot yet be said of the field of AI. The levers and drivers for political action, or for an ethical response to the risk of a loss of power to machines, do not presently offer the prospect of intervention.
Where existing political structures are disinclined, or seemingly unable, to respond to the existential threats that AI might present, the place for action has to be at a subsidiary level: ultimately, on the stages where the drama of the arm-wrestle for control will be played out.
What Professor Russell is telling us, at macro and micro levels, is not to play a ‘wait and see’ game but to harness our very human concerns and use those concerns as a positive driver to act.
So then, where the stage in question is occupied by the legal profession, what should be our response?
Perhaps counter-intuitively, where we are invited to be fearful by those who know something of the potential powers of AI, and where there are understandable concerns about redundancy within the legal profession as technological advances accelerate, the starting point has to be to embrace the opportunities that AI offers, and to do so with confidence. It is only when we are open to the possibilities that we can influence and frame the agenda for their application to what we do.
AI offers the prospect of extraordinary advances in process-oriented decision making. It promises to do much of the heavy lifting of legal work that does not engage the relationship capital clients need or want us to bring to our services.
It follows that the confidence to see and engage with the potential for AI in legal practice can only be fully realised if accompanied by a conviction in the centrality of human relationships to our role as legal advisers and representatives.
The law provides a framework for the management and discipline of human relationships and it is how we deploy AI to serve our clients’ needs within that relationship-oriented framework that will define the profession’s future.
Add to that the profession’s role as a stalwart of ethics in society and its skillset in advocacy, and it is uniquely placed to make the case for principled controls against an autocracy of machines.
So, is the fear that Professor Russell suggests should provoke a response a call to action for the profession? A summons to take ownership of its own technological future and to play its part in establishing conditions that ensure machines serve humanity rather than overpower us? These are questions worth asking … and worth answering nimbly, too.