Privacy: System Failure | An Interview with Gus Hosein

 

By Vivian Ng and Daniel Marciniak

Gus Hosein has worked at the intersection of technology and human rights for over fifteen years. He has acted as an external evaluator for the United Nations High Commissioner for Refugees (UNHCR) and has advised the UN Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, as well as a number of other international organisations. He is the Executive Director of Privacy International. We interviewed Gus when he spoke at the University of Essex Talk Big Data seminar on ‘Big Data, Big Brother?’ on 17 November 2016.

Could you tell us a bit more about the work you are currently doing, projects you are currently focusing on, and Privacy International’s strategy?

The field is changing, the world is changing and we always have to deal with the change. Privacy as a right has always been contingent on and defined by the environment around it. A year and a half ago, we evaluated where we are involved in the fight for privacy and thought about where we need to be involved. We identified three programme areas where we can make the largest contributions: (1) surveillance and human rights; (2) the Global South; and (3) data exploitation.

On the first area – surveillance and human rights – we have been continuing the work sparked by the Snowden revelations. We have noticed an increased focus on communications surveillance, but recognise that this is only a fraction of the issues relating to surveillance and the contemporary debates around it. Hence, we are also anticipating upcoming surveillance issues such as Trump’s proposed registration of Muslims and issues around immigration surveillance.

With regard to the second area – the Global South – we have noticed that technology adoption in the Global South is very advanced, while the strength of civil society institutions and media organisations, the awareness of politicians, and the regulatory environment around privacy lag behind developments in Western Europe and North America. Thus, we are looking at supporting and funding existing local advocacy organisations, as these small organisations have the potential to make a huge impact where little discourse currently exists.

We have decided to call the third area ‘data exploitation’ in order to cover terms such as ‘big data’, ‘machine learning’ and ‘artificial intelligence’ that go in and out of fashion, but basically describe the application of analytics to datasets. Our strategy in this area was to first identify the trends in the world that are most worrying, which include big data, the Internet of Things, the growing currency of algorithms, and datafication. These are the areas in which we want to be competent and seek change, acknowledging that we might not have the solutions, but committing to identifying the legal and technological solutions, and to raising the field’s awareness of potential dangers that have to be addressed.

How do you prioritise the issues that Privacy International works on?

The landscape has changed a lot. When I started out at Privacy International 20 years ago, we could choose what to work on and it would have an almost guaranteed impact. Now, given the visibility of Privacy International’s profile, we are engaged on so many different issues that prioritisation has become a challenge. Right now, we are still focusing on the work arising from the Snowden revelations. Because we chose to fight this fight on legal terms, we are somewhat bound to the timeline of cases, how they develop, and when they are scheduled for hearing. In relation to this work on surveillance, we are also tracking the surveillance industry – companies who build and sell surveillance technologies for which there is no governing legal framework. We are seeking remedies against it, looking at media coverage, legal reform, or possible litigation.

We are planning to focus on the new fights – migration, digital border controls, biometrics, data-sharing in Europe, and merging of administrative data.

How effective is the international regulation of companies in relation to the surveillance industry?

Our initial strategy in this area relied on creating public outrage. However, this is no longer sufficient for addressing the regulation of the surveillance industry. When we first started working on this, export controls seemed to be a good measure in response. In turn, a number of technologies were added to the restrictions contained within the relevant instrument – the ‘Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies’ (Wassenaar Arrangement). This also opened up the opportunity to lobby governments on the compliance of companies within their jurisdiction. However, other problems became apparent, including failures on the part of the regulators.

First, listing technologies was not an effective way to regulate; a better approach would have been to list the capabilities of technologies and link them with intent, which is what we decided to pursue in our work with the European Union (EU). The test the EU has now established in this regard benchmarks against human rights, assessing whether the receipt of a particular technology by a recipient country could result in a breach of human rights.

Second, the overregulation of intrusion software affected the security research community adversely. Regulations created under the Wassenaar Arrangement hindered the export of their research and their collaboration with other researchers. It created a perception that this regulation was deliberately targeted at the security research community to prevent competition with national governments’ security research into vulnerabilities.

Together, these problems meant that the export of these technologies was not effectively regulated, while a chilling effect was created on the security research community – an effect that still needs to be remedied, as that community is important for the future of our security infrastructure.

This is the reality of this field: Technology is contingent, law is variable, and its application is never knowable.

In relation to human rights as a benchmark, how do you think the human rights concept of privacy has developed over the years, and is that normative concept still adequate?

Setting aside the U.S. framework of privacy, it is only relatively recently that privacy surfaced in the human rights discourse within the UN space and was identified and prioritised as a human right. However, framing privacy as a human right does not by itself bring force to its protection or its understanding. Privacy does share common roots with human rights – the protection of autonomy and dignity – and, essentially, the defence of a democratic system of checks and balances. Privacy is a lens onto the world and an enabler of human rights. In relation to technology, a privacy lens gives a firmer grasp on the roles that technology can play. Beyond the power disparities between the individual and the state, a privacy perspective pays attention to the ways in which (digital) technologies reshape these relationships. This concept of privacy is distinct from human rights in that it moves along with changes in the environment and calls not only for legal but also technological measures.

What do you think about the normalisation of encryption and the possibility that it creates a dead-end to legitimate breaches of privacy?

Within the communication chain are competing interests of the user, the provider or enabler of the technology (such as Apple), and the state(s) trying to intercept communication. These interests are impossible to reconcile short of mandating insecurity. Even within a single state, there are competing interests between law enforcement and national security, as well as opposing views on the protection of domestic versus foreign communications.

Governments have never been able to intercept all private communications that occur in our daily lives, but now they can. Nobody should have that ability. It is the duty of the enabler of that communication to make those communications secure. The more interesting fight, however, is over malware and hacking. That is how governments are going to compromise user devices, since they cannot get the cooperation of the enabler. However, this raises further questions of cybersecurity.

What is your sense of public perception about data exploitation (or big data)? Is there an information asymmetry between individual and states or companies, and how does consent play into that relationship?

Consent is a very important concept, but the discourse around consent is also misguided because it creates a blind spot where data beyond the individual’s control is concerned. People may hand over a lot of information to companies, but big data, machine learning, and artificial intelligence have very little to do with the information that people knowingly give away or entrust to companies. It is not about the content of the information that people create, but more about the metadata – the data that one has no ability to control. It is about devices collecting and generating data beyond our control, which is the grist for the mill of the future. Therefore, it is not about consent, because it is not about what one can meaningfully understand and disclose. It is about the things that are beyond one’s control. The question about asymmetry is entirely right: Are we going to inform everyone about everything that is beyond their control and give them control over it? It is not always possible. The solution has to be something beyond consent, and even beyond control.

Instead of looking at solutions beyond consent and control, such as auditing and transparency, Privacy International has been approaching this as a systemic problem. Devices that create data beyond our control and turn such information into intelligence about us are a security fault in our society. They can be hacked to provide intelligence to malevolent actors, and they can create various undesirable inequalities. It is not for individuals to change their behaviour in order to preserve their rights; this is a systemic problem that requires a societal response.

How can the public be engaged with a problem that has such a systemic character to it?

We have relied on public outrage as a catalyst for change, but it is a poor starting point. The strategy requires constant fomenting of public outrage, and it is a problematic approach because the public can quickly forget, and the outrage can be too narrowly focused and ignore a large part of the problem. What, then, is the solution? It is about creating an ethos among individuals, organisations, and governmental bodies that our technology should not betray us or go against our personal best interests, even as it works for us beyond our understanding.


Disclaimer: The views expressed herein are those of the author(s) alone.
