Why We Need to Stop Talking About ‘Killer Robots’ and Address the AI Backlash

By Lorna McGregor

This post originally appeared on EJIL: Talk!

In the field of artificial intelligence, the spectacle of the ‘killer robot’ looms large. In my work for the ESRC Human Rights, Big Data and Technology Project, I am often asked what the ‘contemporary killer robot’ looks like and what it means for society. In this post, I offer some reflections on why I think the image of the ‘killer robot’ – once a mobiliser for dealing with autonomous weapons systems – now narrows and distorts the debate, drawing attention away from the broader challenges posed by artificial intelligence, particularly for human rights.

To address these challenges, I argue that we have to recognise the speed at which technology is developing. This requires us to be imaginative enough to predict, and be ready to address, the risks of new technologies ahead of their emergence; self-driving cars are a good illustration of technology arriving before the regulatory issues around it have been resolved. To do otherwise means that we will be perpetually behind the state of technological development, regulating retrospectively. We therefore need to future-proof regulation, to the extent possible, which requires much more forward thinking and prediction than we have engaged in so far.

Continue reading


Algorithmic Decision-Making and Human Rights

By Vivian Ng

‘Man Vs. Machine’. ‘How algorithms rule the world’. ‘How algorithms rule our working lives’. ‘How Machines Learn to be Racist’. ‘Is an algorithm any less racist than a human?’ ‘Machine Bias’. ‘Weapons of Math Destruction’. ‘Code-Dependent’. These are some of the recent headlines about the age of artificial intelligence, and they seem to foreshadow a not-so-promising future for the human race as the technology rapidly advances. One thing is clear: algorithms have become increasingly prominent in our everyday lives. What is less clear is what we can do about that, and how we can deal with both the opportunities and the risks this brings.

Algorithmic accountability is currently a topical issue in the space where discourses about technology and human rights intersect. As part of its work analysing the challenges and opportunities presented by the use of big data and associated technologies, the Human Rights, Big Data and Technology Project is looking into the human rights implications of algorithmic decision-making. We contributed written evidence to the UK House of Commons Science and Technology Committee inquiry on ‘Algorithms in decision-making’. The submission outlined when and how discrimination may be introduced or amplified in algorithmic decision-making, and how the various stages of the algorithmic decision-making process offer opportunities to regulate algorithms so as to prevent and/or address potential discrimination. We also discussed these precise issues at RightsCon 2017, where we followed the track on Algorithmic Accountability and Transparency and organised a panel discussion on ‘(How) can algorithms be human rights-compliant?’ We gained new insights and ideas from the experts who joined us. This post sets out some of our preliminary thinking, the issues we are working through from an interdisciplinary perspective, and some critical questions to be addressed.

Continue reading

Lethal Autonomous Robots and the Dehumanization of War

By Afonso Seixas-Nunes, SJ

Over the last week, several newspapers around the world have highlighted the second round of meetings in Geneva, held under the auspices of the Convention on Certain Conventional Weapons (CCW), regarding the legal future of so-called Lethal Autonomous Robots (LARs). For some, who argue that LARs can be more ethical than human soldiers, this new technology represents the future of warfare (R. Arkin). For others, LARs are ‘killer robots’ that should be subject to a prohibition similar to that applicable to Blinding Laser Weapons, which were prohibited by Protocol IV to the CCW (Human Rights Watch; Article 36).

Continue reading