What Next After the Facebook and Cambridge Analytica Revelations?

By Vivian Ng

This post originally appeared on the Human Rights, Big Data and Technology (HRBDT) Project Blog.

On 2 July 2018, the Human Rights, Big Data and Technology (HRBDT) Project co-hosted a panel discussion with the International Law Programme at Chatham House on ‘What Next After the Facebook and Cambridge Analytica Revelations?’. Experts on the panel included Silkie Carlo, Director of Big Brother Watch; David Kaye, the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression; Lorna McGregor, Principal Investigator and Co-Director of the HRBDT Project; and James Williams, a writer and academic at the University of Oxford and formerly a strategist at Google. The discussion was chaired by Harriet Moynihan, an Associate Fellow of the International Law Programme at Chatham House.

In her opening remarks, Harriet Moynihan set out how recent discussions have focused on two key issues: first, how accountability and effective remedy can be achieved, and second, what regulation should look like. Much emphasis has been placed on the introduction of the General Data Protection Regulation (GDPR), but what are the implications for international law? How can human rights and data protection standards respond?

Putting the Facebook and Cambridge Analytica incident in context, Lorna McGregor observed that the revelations were a catalyst for increasingly explicit recognition, in public and policy debates, of how big data, smart technology and artificial intelligence can affect human rights. She highlighted that these effects are not limited to privacy but extend to other rights, including the rights to freedom of thought, opinion and expression, and to democracy itself. While the GDPR is important, it is not sufficient to address the implications for civil and political rights, or for economic, social and cultural rights. The involvement of global businesses also requires a global analysis. The many restrictions and exemptions within the GDPR mean that there are areas where it does not bite, and using data protection as the means of implementing the right to privacy risks reducing the GDPR to a compliance tool. International human rights law involves asking whether human rights have been affected, and examining the frameworks needed to ensure that obligations are met, including prevention, monitoring and oversight, and post-facto accountability and remedies. She emphasised the need for a clear articulation of what international human rights law already requires of states and businesses. Proposals for doing so include a new General Comment by the United Nations Human Rights Committee on Article 17 of the International Covenant on Civil and Political Rights on the right to privacy, or a thematic General Comment by the Human Rights Committee and the Committee on Economic, Social and Cultural Rights. The key is to work out these implications while still enjoying the benefits of big data, smart technology and artificial intelligence.

While efforts to address the implications for the right to privacy are consolidating, with developments such as the GDPR coming into force, David Kaye contrasted this with developments in the protection of the right to freedom of expression. He discussed the perspectives of states and companies regarding content regulation, and how these affect the regulation and moderation of content in practice. He noted differences between authoritarian content regulation and state practice in democratic spaces, and highlighted that governments are moving towards content regulation in a variety of areas. State practice also indicates that new rules are evolving rapidly, with a significant reorientation of what the right to freedom of expression means on platforms. For example, he contrasted upload filters that use artificial intelligence to block content before it is even uploaded with the prior model of notice and takedown. At the same time, companies increasingly perceive that they can play a greater role in content moderation, and see artificial intelligence as part of the solution. Companies’ rules for content moderation remain relatively open-ended and opaque, with a wide margin of discretion; they typically favour the takedown of content but lack systems of appeal and remedy. David proposed exploring the possibility of a social media council, modelled on press councils in Commonwealth countries, to achieve greater public accountability for content moderation in protecting the right to freedom of expression.

There was also appreciation of how the Facebook and Cambridge Analytica revelations connected to the broader context of the technology sector. Silkie Carlo posited that the Cambridge Analytica scandal was a symptom of a bigger problem with the business model of Facebook and companies like it, which exploit data for commercial purposes. She emphasised that the deep analysis of personal information and the trade in information such as political leanings and psychological traits, even of individuals who are not users of certain platforms, pose risks to human rights and democracy. In her view, the Cambridge Analytica scandal was not an aberration but an exposé of the status quo, the tip of the iceberg of a much bigger problem. She perceives Facebook as a ‘for-profit global surveillance machine’ within a broader context of ‘modern colonialism’ by technology companies. The question then is how to deal with evolving risks. She raised both the business practices of companies like Facebook and the role of governments in applying pressure on companies for the use of such data, citing the Investigatory Powers Act and the UK government’s ability to collect bulk datasets as an example of how hard law can be used to apply such pressure. In her view, individuals’ awareness of how they use platforms and how they protect their own private sphere is important, and regulation can play a part, for example by banning micro-targeting, especially political micro-targeting.

In terms of solutions, a key obstacle James Williams identified is the lack of a common language for talking about the management of the ‘attention economy’. The collision of what Silkie referred to as the ‘colonialisation of the internet’ by internet companies with the growing use of psychology in advertising has created an abundance of information competing for attention. He agreed that Cambridge Analytica is not the exception but the rule, emblematic of the practices of the digital persuasion industry as a whole. In his view, the real challenges posed by this highly sophisticated, persuasive and unbounded industry are therefore systemic. Missing that connection risks a mere ‘change of guards’ in the technology industry and the perpetuation of incrementally tolerable design that still captures and manipulates attention. The response to these challenges should include developing language to map the nature of the problem, re-evaluating what advertising is and what it is for, and reframing the harms and risks. Narrowing the problem to data protection and privacy sidesteps more fundamental questions about companies’ business models, yet society must urgently address those questions because the methods of commercial persuasion inevitably become those of political persuasion as well. Deeper democratisation of design is needed, involving conversations between designers and users that draw on actual input from individuals to achieve genuinely user-centred design. He also emphasised, however, the importance of pushing back on the framing of the political problem as simply a problem of design.

The discussion covered issues including how the concerns identified in the technology sector relate to other sectors that also process and amalgamate data, the complexity of assessing the real benefits and risks of artificial intelligence applications in healthcare, the state of public opinion, and the trajectory of development in this area. It also explored various avenues for accountability, such as what individuals can do and the particular challenges facing certain minority groups that rely on social media platforms for communication, the role that technological solutions can play in design and remedy, the operationalisation of and coordination between various monitoring and oversight mechanisms, and priorities for different stakeholders.


Disclaimer: The views expressed herein are those of the author(s) alone.
