The Prime Minister, Theresa May, this week used her speech at the World Economic Forum meeting in Davos to talk about the ethics of artificial intelligence (AI) and the launch of a new Centre on Data Ethics. But why, many might be asking, did she choose to make this seemingly obscure topic the centrepiece of an appearance on a global stage?
The point is that AI ethics is not a niche issue anymore. It has moved from the confines of academia to the foreground of political attention as a result of growing awareness of the ways in which automation is affecting all of our lives, and the impact it could have on our society in the future. This has led to a debate on how we should shape the development of AI to avoid some of the potential ethical and moral pitfalls.
The UK has already positioned itself as a world leader in “responsible AI”. Alongside a number of high-profile academic institutions (e.g. The Alan Turing Institute and The Leverhulme Centre), there are active groups on AI in both the House of Commons and the House of Lords. And the addition of the new national centre announced by the PM will strengthen the UK's role further.
But charities are absent
The charity sector, however, is currently conspicuous by its absence from this debate. The Charities Aid Foundation submitted a response to a House of Lords call for evidence on AI last year, but we were one of only a handful of charities to do so.
In our submission we argued for the importance of engaging charities in this debate, and I think it is more vital than ever that our sector gets to grips with these issues, for a number of reasons.
Firstly, the subject is too wide-ranging in its impact to be the sole preserve of technical AI experts, who are not necessarily best placed to address the ethical or political questions. Charities obviously do not have a monopoly on morality, but they are mission-driven, and as such often have a long track record of dealing with thorny social issues and may have valuable insight to add.
Secondly, many charities exist to represent the most marginalised individuals and communities in society. Since these groups are likely to be most affected by the negative consequences of AI – such as the impact of automation on the workplace, or algorithmic bias, where people find themselves on the wrong end of automated decisions – the onus is on charities to understand the issues and equip themselves to speak up for their beneficiaries in this new context.
Lastly, even if they are not interested in the policy-level issues, many charities will have to deal with the practical impact of the technology. For instance, the new GDPR will introduce a raft of new rights and responsibilities pertaining to algorithmic decisions. Charities need to understand how this affects them and those they serve.
You don’t have to be an expert
So what do charities actually need to do at this point? One thing is just to make sure you are up to speed with the issues. This doesn’t mean you have to become an expert in AI, as there are many accessible resources for non-experts (this article from BBC News Labs is a good introduction with further links). Often all that is required is a bit of lateral thinking to relate the issues directly to charities. This is what we are doing through our work on technology at CAF, and we are keen to work with others who want to explore these issues.
But what, you might ask, is it that charities actually need to be thinking about? Recent work on ethical AI is helpful here, as it can start to break the slightly nebulous idea of “playing a role in oversight” into much more tangible elements such as fairness, accountability and transparency.
What is fair?
In terms of fairness, we can immediately pose a number of distinct questions that charities could take a position on: Is it fair to build a given AI system at all? Assuming we do build it, is there a fair technical approach? And once it is built, how do we test the system for fairness?
Perhaps the most challenging area, however, will be around transparency, as it is here that there is often least clarity about what is actually meant. It is true that many algorithmic processes operate as opaque “black boxes”, but what would straightforward transparency concerning their inner workings actually achieve?
If most of us were presented with vast reams of technical data on machine learning systems, would we actually be any the wiser when it comes to understanding why we had been refused health insurance or been identified as a possible suspect in a crime? Probably not.
Perhaps, then, “explanation” is a more relevant concept than straightforward transparency. And this is an area in which some fascinating work is taking place: from algorithms that are able to “explain themselves” to the idea of using counterfactuals (i.e. statements of how things could have been different) to explain algorithmic decisions without having to “open the black box”.
Given that the forthcoming GDPR introduces a “right to explanation” with regard to algorithmic decisions (although it must be said that this is not legally binding), it seems particularly pertinent for charities to start educating themselves about this area now. (Along with the many other elements of GDPR we are all struggling to get our heads around, I hear you say...)
We must engage meaningfully in debate
These examples of ways in which charities can start thinking about AI ethics hopefully give a sense of how we can start to engage meaningfully in this debate.
And we need to do so as a sector, because these issues will affect our whole society profoundly. If we, as charities, want to remain relevant and vital in the future, then we need to get to grips with them now – or risk becoming ineffective, or even irrelevant.
Rhodri Davies is head of policy and programme leader at CAF’s Giving Thought