By Vivian Ng
The House of Commons Science and Technology Committee recently released its Fourth Report of Session 2017-2019 on ‘Algorithms in decision-making’. The Committee’s findings and recommendations for the government are particularly timely, following the recent revelations regarding Cambridge Analytica and Facebook and the growing recognition that these issues extend far wider. This post unpacks how human rights featured in the Committee’s analysis, and argues that human rights should underpin and centre our understanding of how algorithms affect individuals and groups in society, as well as the responses to the risks and challenges they pose.
By Lorna McGregor, Daragh Murray, and Vivian Ng
This blog originally appeared on The Conversation
Whether or not you realise or consent to it, big data can affect you and how you live your life. The data we create when using social media, browsing the internet and wearing fitness trackers are all collected, categorised and used by businesses and the state to create profiles of us. These profiles are then used to target advertisements for products and services to those most likely to buy them, or to inform government decisions.
Big data enable states and companies to access, combine and analyse our information and build revealing – but incomplete and potentially inaccurate – profiles of our lives. They do so by identifying correlations and patterns in data about us, and people with similar profiles to us, to make predictions about what we might do.
But just because big data analytics are based on algorithms and statistics does not mean they are accurate, neutral or inherently objective. And while big data may provide insights about group behaviour, these are not necessarily a reliable way to determine individual behaviour. In fact, these methods can open the door to discrimination and threaten people’s human rights – they could even be working against you. Here are four examples where big data analytics can lead to injustice.
By Daragh Murray and
This blog originally appeared on The Conversation
Amnesty International has raised a series of human rights issues in connection with the “gang matrix” developed and run by London’s Metropolitan Police, in a recent report. According to the report, appearing on the database could affect the lives of 3,806 people, 80% of whom are between 12 and 24 years old.
There are no specific details about how the matrix operates and is used by police. It exists, at least in part, to address the difficulties in policing gang activities across different districts. But it’s suspected that – because of government data sharing – appearing on the database will “follow” young people around, affecting their access to housing, education or work.
The Met said in a statement, “The overarching aim of the matrix is to reduce gang-related violence and prevent young lives being lost”, but added that it was working with Tottenham MP David Lammy, Amnesty International and the Information Commissioner’s Office to “help understand the approach taken”.
The Long Read Series
By Aoife Duffy
Just over 40 years after its famous Ireland v United Kingdom judgment, the European Court of Human Rights ruled on the Irish government’s request to review its 1978 finding that the United Kingdom had committed an Article 3 violation of the European Convention on Human Rights. Article 3 states that “[n]o one shall be subjected to torture or to inhuman or degrading treatment or punishment.” The historical context of the original ruling was the violent conflict in Northern Ireland; the contemporary context of the revision judgment is intense debate about European institutions and standards following the Brexit referendum. Whereas the European Commission found that the United Kingdom’s combined use of five techniques – hooding, wall standing, exposure to white noise, reduced diet and sleep deprivation – amounted to torture, the European Court categorised the system of interrogation not as torture, but as inhuman and degrading treatment. In 2014, the Irish government submitted a revision request under the Rules of the Court on the basis of fresh evidence – a dossier of declassified files released under the 30-year rule that seemed to corroborate the Commission’s finding of torture. In short, the Irish government argued that had these facts been known at the time, the European Court would not have diverged from the Commission’s finding of torture. This post demonstrates that the revision judgment was settled along weak procedural lines, which can easily be picked apart by reference to the declassified files that triggered the revision request. In addition, it questions the utility of situating history-making in this type of legal forum.
By Amy Dickens and Linsey McGoey
In November 2015, the Royal Free NHS Foundation Trust transferred over 1.6 million identifiable patient records to DeepMind, an artificial intelligence subsidiary of Google. Earlier that year, the Trust had privately signed an agreement commissioning DeepMind to develop an early warning system to detect Acute Kidney Injury (AKI). The resulting smartphone application, called Streams, is now in clinical use across the Royal Free and will soon be rolled out to other NHS Foundation Trusts.
When news of the deal broke, privacy advocates expressed astonishment that the agreement had been signed privately, and outrage at the Trust’s willingness to share confidential patient data without consent or other necessary protections. The Information Commissioner’s Office subsequently launched a year-long investigation into the deal, concluding in July 2017 that the Royal Free had violated the Data Protection Act 1998. Despite the ruling, DeepMind has since partnered with multiple NHS Trusts. The company’s co-founder, Mustafa Suleyman, has declared his ambition to expand further into the NHS and to develop a digital platform that could support artificial intelligence technologies in the future.
Criticism of the DeepMind-Royal Free partnership has centred on privacy rights and data protection issues. While privacy concerns are important, our research suggests that a narrow focus on privacy has diverted attention from related human rights concerns, particularly the right to the highest attainable standard of health.
By Carmel Williams
The UN Human Rights Council’s Universal Periodic Review (UPR) operates on a five-year cycle in which every country must demonstrate compliance with all of its human rights obligations – not just those arising from treaties it has ratified. It is therefore quite a departure from reporting to the individual UN treaty monitoring committees, and it gives rise to a more cohesive, comprehensive overview of a country’s human rights performance. This coherence reflects the UPR’s principle of promoting the universality, interdependence, indivisibility and inter-relatedness of all human rights. While the UPR has generally been well received by states, civil society and human rights institutions, and it promotes a collegial, ‘judgement by peers’ approach to human rights reporting, it is not yet clear whether it is “changing human rights on the ground” (J. McGregor, S. Bell and M. Wilson, “Human Rights in New Zealand – Emerging Fault Lines”, 2016). UPR recommendations are not specific: they ask states to take varying types or degrees of action, but they do not quantify the results expected.
In an effort to gain clarity about its effect ‘on the ground’, this research project is examining the impact of the UPR recommendations made to New Zealand at the conclusion of its second reporting cycle, which ended in 2014. The New Zealand Human Rights Commission has developed a comprehensive online tool for monitoring UPR recommendations, which lists all 155 of them. The 121 accepted recommendations have been categorised by the Commission, in consultation with government agencies, according to the issues, government actions, population groups and UN treaty bodies to which they relate (Fig 1).
By Tara Van Ho
This is my final post on last week’s Jesner v Arab Bank decision from the US Supreme Court. Earlier posts can be found here (where I critique the Court’s confusion over international criminal law versus international human rights law) and on Nadia Bernaz’s Rights as Usual blog here (where I argue that law- and policy-makers need to recognize that corporations are not simply tools for evil, but that their structure can encourage it).
My final criticism of the Jesner decision is that the Court, and Justice Kennedy in particular, does not evidence a clear understanding of customary international law – how it develops or how it binds states. Kennedy repeatedly suggests that recognizing corporate accountability for breaches of customary international law would be a ‘judicial invention’ that usurps the role of the legislative and executive branches in developing foreign policy. There are reasons particular to US law that may have led Kennedy to suggest that the recognition of existing customary international law standards would still be a ‘judicial invention’ for the purposes of the Alien Tort Statute (ATS). Those reasons are set out much more persuasively in the Court’s 2004 Sosa v. Alvarez-Machain decision (start reading from Section IV.C) than they are in Jesner. Kennedy’s approach, however, raises questions about how he understands customary international law and its development.