By Vivian Ng and Sabrina Rau
On 6 August 2018, Apple, Facebook and Spotify removed content from Alex Jones’s Infowars pages and accounts on their platforms, which were seen to be spreading conspiracy theories and hate speech. YouTube also terminated Alex Jones’s channel. These actions followed the takedown of four Infowars YouTube videos the previous month. While Twitter did not immediately take any action, it later suspended Alex Jones’s account for seven days on 15 August, citing violation of its rules on abusive behaviour and inciting violence. These companies removed content or terminated accounts on the grounds that they violated their terms of service.
Much of the reporting has been critical of companies perceived not to have done enough, or to have acted quickly enough, to remove content from Alex Jones and Infowars. The attention seems to centre on whether such platforms have acted appropriately and adequately to combat the misinformation and disinformation spread by entities like Infowars. For example, while Twitter has since taken action regarding Alex Jones’s account, it had been criticised for not also suspending the separate Infowars Twitter account. Google and Apple have also been criticised for not removing the Infowars app from their app stores. These issues are important, but commentary has been lacking on the broader significance of the actions platforms are taking regarding the content and accounts of Infowars and Alex Jones. More fundamentally, what role do companies have in content moderation, and how should that role be carried out? This post will examine the role that social media platforms play in the realisation of the right to freedom of expression in particular, and consider whether and how content moderation by the private companies that own and control such platforms can comply with human rights standards and norms.
By Emily Jones
Technology is profoundly changing contemporary conflict. While international lawyers have recently devoted considerable attention to topics such as drone warfare and autonomous weapons systems, very little has been published on these issues from a gender and law perspective. Seeking to bridge this gap, I recently co-edited a Special Issue of the Australian Feminist Law Journal on Gender, War and Technology: Peace and Armed Conflict in the 21st Century alongside Yoriko Otomo and Sara Kendall. The issue brings together a wide array of voices and discusses several different technologies, from drone warfare to lesser-known technologies being used in conflict settings, such as evidence and data collection technologies and human enhancement technologies.
As the introduction to the Special Issue notes, gender is used throughout the Special Issue in multiple ways, highlighting women’s lived experiences in conflicts as combatants, victims, negotiators of peace agreements, military actors and civilians, as well as serving as a theoretical tool of analysis, ‘considering issues of agency, difference, and intersectionality, and contesting gendered constructions that presuppose femininity, ethnicity, and passivity.’ Intersectionality is also a key theme throughout the issue, with articles also ‘considering issues of race, colonialism, ability, masculinity and capitalism (and thus, implicitly, class).’ War is understood in light of feminist scholarship on conflict, which notes how war and peace operate on a ‘continuum of violence’, with neither war nor peace being as easy to define as legal categorisations suggest.
By Carmel Williams
Big Data is transforming health care in multiple ways, from patient management to diagnostic and treatment methods. These new technologies are changing the health and public health landscapes, promising improvements in public health and clinical care. However, careful oversight of proposed uses of Big Data technologies is needed to protect against discrimination and increasing health inequities. In this blog, I propose that governments should undertake human rights impact assessments, including assessments that integrate right-to-health impacts, before using Big Data-driven technologies in health. These assessments provide a structured approach to examining the multiple ways in which the right to health could be at risk, including, but moving beyond, privacy issues.
Examples of the use of Big Data in healthcare include personalised medicine, where a patient’s treatment is tailored to their genetic and environmental profile; DNA sequencing, which results in vast amounts of data stored in biobanks; and forensic, genetic or medical databases, including data from public health studies and clinical trials, all of which can be repurposed for various technical inventions. The artificial intelligence (AI) industry in health care is booming, growing at around 40% per annum in economic terms and expected to exceed US$6 billion by 2021. All of this depends on access to huge data sets.
Concerns about the use of patient data, whether for patient management or clinical purposes, have focused predominantly on privacy and breaches of security. Although these are crucially important, here I examine broader social and economic rights issues through an abridged right-to-health framework (see the Oxford Textbook of Global Public Health, chapter 3.3, new version due in 2020).