2019 needs to be the year in which human rights sit at the heart of AI governance

By Lorna McGregor

At the beginning of 2018, the MIT Technology Review forecast that one of the ten ‘breakthrough’ technologies for the year would be ‘AI for Everybody’, underscoring the transformational potential of AI for sectors such as health. In a new report by the ESRC Human Rights, Big Data and Technology project, we argue that it is critical for everyone to benefit from AI’s advances, particularly those most marginalised in society. To do otherwise only risks widening existing inequality, a point underscored by this year’s World Economic Forum.

However, despite stark warnings, the wide-ranging risks that AI presents to individuals and societies have not received enough attention. They are often treated reductively, as ‘only’ risks to privacy (although, as we argue in our report, privacy risks are serious), or dismissed as dystopian threats posed by spectres such as the ‘killer robot’. In our report, we map the full impact of AI on human rights, showing how AI is already affecting many areas of life, from education to work, health and care for the elderly. While AI has great potential to enhance the delivery of services in these areas, it already presents serious threats to our lives.

The right to education is one of the fundamental human rights set out in the Universal Declaration of Human Rights, which has just celebrated its 70th anniversary. In our report, we show how big data and AI can enhance the availability and accessibility of education. They can provide a tool to help teach students in areas where there are no schools or where schools are hard to reach. Through adaptive learning software, they can also enable teachers to adopt more effective approaches to the particular learning needs of individual students.

However, this is only possible if the existing digital divide is overcome. Moreover, some of these tools rely on access to user data, creating risks to privacy that are particularly acute in an educational context, where students are developing and testing thoughts and opinions. There is also a risk of an inappropriate rights ‘trade-off’, whereby some rights may be compromised in order to enable the realisation of others. Further, while these tools can supplement and enhance access to education for many individuals, thus addressing inaccessibility and inequality, there is a risk that, due to resource constraints, technology becomes a substitute for traditional forms of learning and for human teachers. This could exacerbate existing inequality and entrench a digital divide in the delivery of education.

In order to address the harms to human rights and enable everyone to benefit from AI, an effective approach to AI governance is needed. In our report, we suggest that such an approach will only be effective if it places human rights at its core. A human rights-based approach marries well with calls for AI ethics, as both are grounded in human dignity. However, it goes further: it requires states and businesses not only to respect human rights but also to develop strategies and safeguards to prevent human rights from being infringed in the first place; to build frameworks for the oversight and monitoring of AI applications so as to identify any risks to human rights; and to ensure accountability and remedies if things go wrong.

Last year, states and large tech companies began to focus more closely on options for AI governance. A number of states published national AI strategies, and some of the large tech companies developed AI plans. In some of these strategies, states such as Canada and companies such as Google and Microsoft acknowledge that human rights need to be part of the solution.

This is an important start. However, much more needs to be done, given that the harm AI can cause is not in the distant future but is here now. In our report, we argue that states and businesses need to assess the impact their existing uses of AI are having on human rights. We also argue that whatever form(s) AI governance takes – whether traditional state regulation, international agreements or industry co-regulation – a human rights-based approach needs to underpin and shape how AI is dealt with. This is necessary to guard against the risks to human dignity and well-being that we identify in our report, and to ensure that AI is for ‘everybody’. We argue that at the heart of the development and use of big data and AI should be the right of everyone to benefit from scientific progress. This right was set out in the Universal Declaration of Human Rights seventy years ago and remains critical today in a world of big data and AI.

Professor Lorna McGregor is the Principal Investigator and Director of the ESRC Large Grant on Human Rights, Big Data and Technology, and a co-author of the report.


Disclaimer: The views expressed herein are those of the author(s) alone.
