By Vivian Ng and Sabrina Rau
On 6 August 2018, Apple, Facebook and Spotify removed content from Alex Jones’s Infowars pages and accounts on their platforms, which were seen to be spreading conspiracy theories and hate speech. YouTube also terminated Alex Jones’s channel. These actions followed the takedown of four Infowars YouTube videos the previous month. While Twitter did not immediately take any action, it later suspended Alex Jones’s account for seven days on 15 August, citing violations of its rules on abusive behaviour and inciting violence. These companies removed content or terminated accounts on the grounds that they violated their terms of service.
Much of the reporting has been critical of companies perceived not to have done enough, or acted quickly enough, to remove content from Alex Jones and Infowars. The attention centres on whether such platforms have acted appropriately and adequately to combat misinformation and disinformation spread by entities like Infowars. For example, while Twitter has since taken action against Alex Jones’s account, it was criticised for not also suspending the separate Infowars Twitter account. Google and Apple have likewise been criticised for not removing the Infowars app from their app stores. These issues are important, but commentary has been lacking on the broader significance of the actions platforms are taking regarding the content and accounts of Infowars and Alex Jones. More fundamentally, what role do companies have in content moderation, and how should that role be carried out? This post will look at the role that social media platforms play in the realisation of the right to freedom of expression in particular, and consider if and how content moderation by the private companies that own and control such platforms can comply with human rights standards and norms.
By Emily Jones
Technology is vastly changing contemporary conflict. While international lawyers have recently focused a great deal on topics such as drone warfare and autonomous weapons systems, very little has been published on these issues from a gender and law perspective. Seeking to bridge this gap, I recently co-edited a Special Issue of the Australian Feminist Law Journal on Gender, War and Technology: Peace and Armed Conflict in the 21st Century, alongside Yoriko Otomo and Sara Kendall. The issue brings together a wide array of voices. Several different technologies are discussed, from drone warfare to lesser-known technologies being used in conflict settings, such as evidence and data collection technologies and human enhancement technologies.
As the introduction to the Special Issue notes, gender is used throughout the Special Issue in multiple ways, highlighting women’s lived experiences in conflicts as combatants, victims, negotiators of peace agreements, military actors and civilians, as well as serving as a theoretical tool of analysis, ‘considering issues of agency, difference, and intersectionality, and contesting gendered constructions that presuppose femininity, ethnicity, and passivity.’ Intersectionality is also a key theme throughout the issue, with articles also ‘considering issues of race, colonialism, ability, masculinity and capitalism (and thus, implicitly, class).’ War is understood in light of feminist scholarship on conflict, noting how war and peace operate on a ‘continuum of violence’, with neither war nor peace being as easy to define as legal categorisations suggest.
By Carmel Williams
Big Data is transforming health care in multiple ways, from patient management to diagnostic and treatment methods. These new technologies are changing the health and public health landscapes, offering improved public health and clinical care. However, careful oversight of proposed uses of Big Data technologies is needed to protect against discrimination and increasing health inequities. In this blog, I propose that governments should undertake human rights impact assessments, including assessments that integrate right to health impacts, before using Big Data driven technologies in health. These assessments provide a structured approach to examining multiple ways in which the right to health could be at risk, including, but moving beyond, privacy issues.
Examples of the use of Big Data in healthcare include personalised medicine, where a patient’s treatment is tailored to their genetic and environmental profile; DNA sequencing, which results in vast amounts of data stored in biobanks; and forensic, genetic or medical databases, including data from public health studies and clinical trials, all of which can be repurposed for various technical applications. The artificial intelligence (AI) industry in health care is booming, with economic growth rates of around 40% per annum, projected to exceed US$6 billion by 2021. All of this depends on access to huge data sets.
Concerns about the use of patient data, whether for patient management or clinical purposes, have focused predominantly on privacy and breaches of security. Although these concerns are crucially important, here I examine broader social and economic rights issues through an abridged right to health framework (see the Oxford Textbook of Global Public Health, chapter 3.3, new version due in 2020).
By Paola Limón
Between 1960 and 1996, Guatemala suffered a violent internal armed conflict. During this time, it also managed to become a full member of the Inter-American Human Rights System: signing the American Convention on Human Rights (ACHR) in November 1969, ratifying it in April 1978 and accepting the jurisdiction of the Inter-American Court of Human Rights (IACtHR) in May 1978.
Since 1996, the IACtHR has decided 25 cases against Guatemala; making it the country with the second highest number of contentious cases decided by the IACtHR (Peru is first, with 42 cases since 1995). Of these 25 cases, Guatemala has only fully complied with one. In this regard, on 21 September 2017, the IACtHR notified its monitoring compliance resolution of 30 August 2017, declaring that Guatemala had fully complied with its judgment of 3 May 2016 in the Case of Maldonado Ordóñez.
Although this might seem like an inconsequential matter at first glance, it is unprecedented that Guatemala fully complied with an IACtHR judgment, and even more so that it did so in approximately 14 months. But a closer look into the merits proceedings and reparations orders in this case reveals that full implementation of this judgment was only possible, in time and substance, due to an error attributable (though not exclusively) to the IACtHR. This post, a product of the ESRC Human Rights Law Implementation Project (HRLIP; see endnote), seeks to explore those aspects of the IACtHR’s proceedings and orders in this case that facilitated implementation of the judgment.
By Vivian Ng
This post originally appeared on the Human Rights, Big Data and Technology (HRBDT) Project Blog.
On 2 July 2018, the Human Rights, Big Data and Technology (HRBDT) Project co-hosted a panel discussion with the International Law Program at Chatham House, on ‘What Next After the Facebook and Cambridge Analytica Revelations?’. Experts on the panel included Silkie Carlo, Director of Big Brother Watch, David Kaye, the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Lorna McGregor, the Principal Investigator and Co-Director of the HRBDT Project, and James Williams, a writer and academic at the University of Oxford, and formerly a strategist at Google. The discussion was chaired by Harriet Moynihan, an Associate Fellow of the International Law Program at Chatham House.
In her opening remarks, Harriet Moynihan set out how recent discussions have focused on two key issues: first, how accountability and effective remedy can be achieved, and second, what regulation should look like. Much emphasis has been placed on the introduction of the General Data Protection Regulation (GDPR), but what are the implications for international law? How can standards in human rights and data protection respond?