Big Data, Mass Surveillance, and The Human Rights, Big Data & Technology Project


Editor’s note: This post forms part of a larger series addressing key issues related to human rights, technology & big data. 

Digital technologies have permeated our everyday lives. Nothing exemplifies this more than the smartphone. These small handheld devices help us communicate with one another, schedule our activities, and procure knowledge from the Internet. While smartphones bring new levels of convenience to our lives, it is too often overlooked that they are also excellent surveillance devices. Smartphones produce extensive personal data about whom we communicate with and how frequently (email, text, SMS, social media apps), the places we visit and how often we visit them (GPS, map apps), our well-being and medical history (health, psychology, and fitness apps), our Internet browsing patterns (web apps), our friend lists and social networks, and much more. This data is constantly being harvested, potentially enabling others to peer into our personal lives and decide what kinds of products and services to recommend to us, whether to employ us, or whether we pose a threat to national security.

Despite the pivotal role played by digital technologies in our everyday lives, little is known about how they document our experiences and how the resulting data is collected, by whom it is collected, and how it is ultimately used. Work stream two of the Human Rights, Big Data & Technology Project aims to help fill this gap in our knowledge. This post introduces readers to the controversial relationship between digital technologies, big data, and mass surveillance, as well as the research conducted by work stream two.

Big Data and Mass Surveillance

The amount of data we produce on a daily basis is staggering, so much so that this is now referred to as the era of “big data”. To appreciate just how “big” big data is, consider the following: Google’s Eric Schmidt claims that we now create and store more data every 48 hours than was created from the beginning of human civilisation to 2003.

Big data offers significant benefits for a range of sectors, including crime prevention, health, and commercial enterprise. For example, in the security sector, agencies like GCHQ have collected big data (although at times unlawfully and without adequate supervision) to locate and identify terror threats. In the health sector, big data is used to detect flu outbreaks and allocate medical resources. In the commercial sector, online retailers like Amazon use big data to identify consumer trends and personalise advertising and recommendations.

Despite these benefits, the quality and quantity of modern data collection also mean that our most private, intimate, and personal information has become more accessible than ever before. Furthermore, the development of data-sharing technologies means that our personal information no longer exists in isolated silos, but is traded like a currency between data collectors. Information such as communications patterns, social media behaviour, and Internet histories – with associated inferences drawn about our consumer preferences, political affiliations, and sexual orientations – is amalgamated and made available to third parties. These are the features of an advanced “surveillance society”, in which it is difficult to engage in any behaviour without producing traceable data.

Life in an advanced surveillance society is defined by significant threats to human rights. For instance, big data and mass surveillance pose a threat to liberty by empowering security organisations that excessively interfere in the private lives of innocent citizens. Big data and mass surveillance may also pose a threat to democratic freedoms by “chilling” free expression, especially among political activists and journalists. Furthermore, big data and mass surveillance can threaten social, political, and economic opportunities: our deepest and most personal secrets are now “discoverable” online, where they have the potential to influence decisions made by employers, security agencies, advertisers, and the many other organisations that shape our experiences. To drive this point home, consider how your life would be affected if the worst thing you’ve ever done were documented online where friends, police officers, and employers could access it. Below we’ve included two real-world examples further illustrating the risks of big data and mass surveillance.

Target’s Advertisements and Predictive Policing   

In 2002, Target (the second-largest general merchandise retailer in the US) began to analyse consumer data to learn about consumer behaviour and improve its advertising. Target’s statisticians quickly noticed a number of specific consumer trends. For example, the data showed that in the first 20 weeks of pregnancy women consistently purchased vitamins and supplements, and that as they approached their delivery date they began to purchase scent-free soap and various cleaning supplies. These trends were so reliable that Target’s statisticians realised they could use purchasing behaviour to predict what stage of pregnancy a woman was in. Target’s advertisers took advantage of this information by sending advertisements for baby products to customers who fit the statistical trends. Approximately a year after this new advertising strategy was put into practice, Target began receiving complaints about its campaign and realised that its predictive model was so accurate that its ads could reveal a young daughter’s pregnancy to her parents. Although Target’s advertising campaign was successful if measured in terms of accurate predictions, it is also an example of the follies of big data, and it seems to have been somewhat successful in grabbing the attention of individuals who are otherwise dismissive of privacy concerns. After all, it is difficult to maintain the infamous “I have nothing to hide” argument when asked to consider situations in which big data is used in a way that exposes the details of one’s sex life to one’s parents. It is even more difficult to maintain that argument when considering how this same data – once shared with third parties – might be used by employers to discriminate against pregnant women when assessing job applicants.
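To make the mechanics concrete, here is a minimal sketch in Python of the kind of purchase-based scoring described above. It is illustrative only: the feature names, the toy data, and the choice of logistic regression are our assumptions, since Target’s actual model and features have never been published.

```python
# A minimal, hypothetical sketch of purchase-based prediction in the spirit
# of the Target example. All feature names and data are invented; Target's
# actual model and features are not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: weekly purchase counts for [vitamins, unscented_soap, cotton_balls].
# Label: 1 if the (hypothetical) customer was later confirmed to be pregnant.
X = np.array([
    [4, 0, 0],   # heavy vitamin purchases, early-pregnancy pattern
    [3, 2, 1],   # vitamins plus unscented soap, later pattern
    [0, 0, 0],   # no signal
    [1, 3, 2],
    [0, 1, 0],
])
y = np.array([1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new shopping basket: the model outputs a "pregnancy likelihood"
# that an advertiser could use to target baby-product mailings.
new_basket = np.array([[3, 1, 1]])
print(model.predict_proba(new_basket)[0, 1])
```

The point is not the model itself but what it enables: once a score like this exists, it can be attached to a customer record and acted on, whether by a marketing department or by any third party the data is shared with.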

A similar lesson about the social costs of big data and surveillance emerges from the use of predictive analytics in policing and security. Many police services around the world now use big data to study crime trends. For example, Chicago police use big data to catalogue potential criminals in “heat lists”, police in Fresno, California use big data to calculate “threat scores”, London police track social media data to identify potential terrorist threats, and police in Kent employ the controversial PredPol system to predict crime. The accuracy, effectiveness, and explanatory value of each of these programmes have been called into question by research finding that data-based policing produces high numbers of false positives, which result in the excessive policing of innocent people. Illustrative of this problem is the fact that the Chicago police’s “heat list”, which claims to identify the 400 most dangerous people in the city, has included innocent people with no criminal records. False positives also raise questions about the biased nature of supposedly objective big data-based policing, given the overrepresentation of members of marginalised groups. One of the central issues here concerns socio-economic status: poorer groups, particularly those accessing welfare, are likely to have engaged with the state more extensively than other sections of society. As such, these groups will have provided more data to official bodies than others, resulting in a cycle where predictive policing algorithms encourage police organisations to target marginalised groups precisely because they are the groups the police already have data about; the sketch below illustrates this feedback loop.
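To see how such a feedback loop can sustain itself, consider the following minimal simulation in Python. Everything in it is hypothetical: the two districts, the offence rates, and the patrol numbers are invented, and real deployments such as PredPol are far more complex than this sketch.

```python
# A hypothetical two-district simulation of the data feedback loop described
# above. Both districts have the SAME true offence rate; district 0 simply
# starts with more historical records. All numbers are invented.
import random

random.seed(0)

true_rate = [0.05, 0.05]   # identical underlying offence rates
records = [120, 100]       # district 0 begins with more records on file
patrols = 50               # patrols available each week

for week in range(20):
    total = sum(records)
    for d in (0, 1):
        # "Predictive" allocation: patrols assigned in proportion to records.
        assigned = round(patrols * records[d] / total)
        # Offences are only recorded where patrols are sent, so more patrols
        # produce more new records -- regardless of the true offence rate.
        encounters = assigned * 20   # each patrol observes ~20 incidents
        records[d] += sum(random.random() < true_rate[d]
                          for _ in range(encounters))

print(records)  # district 0's file grows faster, attracting yet more patrols
```

Even though the two districts are identical in their true offence rates, the district with the larger historical file receives more patrols, generates more records, and therefore attracts still more patrols the following week. No one in this loop needs to act in bad faith for the outcome to be discriminatory.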

The Human Rights, Big Data and Technology Project

While the potential risks are clear, not enough is known about the relationship between digital technologies, big data, and mass surveillance to accurately assess their implications. Further research is needed into how state and non-state actors use big data, how it is shared between organisations, and what implications – positive and negative – it may have for human rights protections. Accordingly, the University of Essex’s Human Rights, Big Data & Technology Project includes a work stream dedicated to the study of big data and mass surveillance. Using empirical research techniques across international sites, work stream two will examine how big data facilitates new forms of mass surveillance, with special attention paid to questions of security and discrimination and to the impact on the human rights that protect privacy and free expression.


Disclaimer: The views expressed herein are those of the author(s) alone.
