Human Rights in the Digital Age: The Perils of Big Data and Technology (Part II)

Editor’s note: This post forms part of a larger series addressing key issues related to human rights, technology & big data. 

Last week, we discussed the promises of big data and technology in relation to human rights. This week, we turn to the perils and consider the risks to human rights, including but extending beyond the right to privacy.

Today, there is little that corporations and states do not know about us. Our smartphones and other devices log who we communicate with, track our locations and record our lifestyle preferences. We routinely share information on social media and with applications that we use on our smartphones or other devices, such as healthcare-related monitors, fitness tracking applications, and navigation services. Furthermore, as Snowden revealed, states have an unprecedented ability to conduct mass surveillance, including through the interception of communications and communications data. The United Kingdom's intelligence agencies, for example, have been covertly collecting bulk personal data since the 1990s, a practice that was recently found to be unlawful by the Investigatory Powers Tribunal (the full judgment can be found here). State surveillance activities are often supplemented by collaboration with other states through intelligence-sharing arrangements, or by engaging, or attempting to compel, technology companies to hand over information – as the dispute between the FBI and Apple attests.

Prior to the digital age, we could preserve a certain degree of privacy even if we conducted our business in the public domain. We could protect or control our privacy to some extent by altering our words and actions. Nowadays, our traditional assumptions about the contours of private life are constantly challenged. For example, new technologies can be vulnerable to interception and mass electronic surveillance. Furthermore, there has been an increase in so-called ‘soft surveillance’ by corporations. Information is commercially valuable in the digital age, to the point where personal data has been monetised and commoditised, resulting in the trade of data capital. Data brokers buy and sell rich datasets, which combine multiple sources of data, such as search histories and location data, to provide corporations with valuable insights about user preferences. These datasets often reuse information that we have given elsewhere and repurpose it for uses other than those for which it was initially collected. Increasingly detailed and intimate information is consolidated, shared, reused and repurposed to build comprehensive profiles of our identities and lives, often without our knowledge and much less our consent.

This raises several systemic issues surrounding our interaction with states and corporations. First, how much of our personal information do states and corporations really have? Our names, birthdays, addresses, social security numbers, billing information, email addresses, employment information, lifestyle preferences? Second, we have trusted (albeit sometimes unknowingly) states and corporations alike to store the information they collect about us securely, but do they in fact do so? Third, there is an inherent power imbalance. We cannot anticipate the full spectrum of how our information may be used by states and corporations, or what the consequences will be, yet we have no real alternative but to accept technological change.

Given the potential for interference with the right to privacy, the scope of the right and how it may be ensured needs rethinking. Can existing standards be translated into the digital age to ensure that privacy is adequately protected? If not, how should privacy, ownership, consent and human dignity be reconceptualised in a digitised era that both threatens and offers opportunities for human rights protection and implementation? Is it enough to simply trust that states and corporations will secure our information, manage our data responsibly, and treat sensitive data appropriately? These issues require a more robust response than placing the burden on the individual to effectively consider the full potential implications of sharing their personal data. This is a challenging task, particularly for the newly appointed United Nations Special Rapporteur on the Right to Privacy.

Beyond privacy, there are concerns in relation to the freedom of expression and information. Mass electronic surveillance can have a chilling effect on the freedom of expression and association if we live in fear of our communications being monitored by both the state and fellow citizens. Furthermore, the so-called ‘filter bubble’ can potentially impede the full enjoyment of the freedom of expression and information. Online platforms provide individually tailored search results, news stories, job postings, and advertisements through big data analytics that draw inferences from search and browsing histories, click behaviours, and location, among other information. This enables corporations to selectively funnel information to the individual, making assumptions about what they want to see and feeding them information that conforms to their past activity. This can isolate individuals from information that could challenge and broaden their worldview.

There are also concerns around the potential for algorithmic discrimination. The widespread use of data-driven algorithms in our daily lives by states and corporations ranges from minor decisions, such as mapping the fastest route to a destination, to decisions with serious consequences, such as predicting a defendant’s likelihood of reoffending in the context of sentencing and/or parole decisions. In the latter context, this can have positive outcomes, such as eliminating overly cautious and/or inconsistent decisions and personal prejudices. However, decisions based on algorithms can have direct or indirect discriminatory effects. For example, algorithmic risk assessments used in criminal sentencing decisions in the United States have been found to be disproportionately biased against African-American defendants. Furthermore, the variables used in data-driven algorithmic decision-making processes are often not disclosed, and how the inputs are prioritised and weighted is obscure, making it difficult to challenge the outcome. A system with due process safeguards needs to be developed in order to ensure algorithmic accountability.

Fundamentally, the paradigm shift in what is made possible by big data and technology challenges how human rights in the digital age are conceptualised. We are facing a paradoxical position of accruing practical benefits from technological advances, while also being vulnerable to potential human rights risks. This underscores the importance of questioning how rights may be protected in the digital age, and what role regulation can play. 

Challenges for Regulation

The human rights protections set out in various international, regional and national regimes were mostly written before the digital age. Evidently, the world has changed significantly since then, raising numerous challenges for regulation and, correspondingly, accountability.

First, today’s globalised and technological lifestyles transcend state boundaries, the traditional regulatory demarcation. Communications are transmitted across the world and data is transferred to other countries for storage or processing. Additionally, interferences with communications and communications data no longer just involve a single national intelligence agency – nowadays, multiple states collaborate to share intelligence information. The fact that states can interfere with the human rights of those who are not within their territory raises questions around the scope of extraterritorial human rights obligations. As the success of covert surveillance is argued to rest on its secrecy, relying on intelligence agencies to self-regulate does not seem to be a promising tool for accountability. So, how should national, regional and international regulatory efforts best interact? At present, legislation has been developed or reformed gradually to rectify some of the oversights of legacy laws.

Second, the prominent role of corporations in the 21st century challenges the efficacy of existing regulatory responses. States have traditionally been the duty-bearers of human rights. However, given the evolving and increasingly significant role that businesses play in the realisation of human rights, there is growing recognition and acceptance that businesses also have human rights obligations. The accountability of corporations needs serious consideration, as their compliance or complicity with government demands in the context of surveillance or other activities has profound implications for human rights. The cooperation and involvement of businesses is important not only to ensure that new standards are implementable and adaptable for different sectors, but also in fostering consumer trust – a particularly critical element for the success of business-client relationships in this big data era.

The rise in global communications and transborder data transfers, with its increased potential for extraterritorial human rights violations, together with the prominence of corporations, challenges the development of a comprehensive regulatory system. How can transnational regulation of multiple states and businesses be connected within a human rights framework that is used to dealing with single state responsibility? These unprecedented developments require a collaborative effort from various stakeholders, including states, businesses, international organisations, civil society and academia, to create a stable environment in which international, regional and national regulatory efforts interact, adapt and innovate in pace with technological and commercial change, in order to effectively respect, protect and fulfil human rights in the digital age.

Conclusion

The digital age requires further thought around how to maximise the promises of technology and big data for the fulfilment of human rights, while mitigating the perils. The Human Rights, Big Data and Technology Project seeks to:

  • Consider the human rights implications associated with the use (or abuse) of big data and associated technology, as the actual and potential consequences, whether positive or negative, are still not fully comprehended;
  • Map and evaluate regulatory responses, assess existing gaps, and consider how reform can be harmonised and regulation can be sustainable in relation to the interception, collection, storage, use, amalgamation, re-purposing and sharing of data by both states and corporations;
  • Explore what remedies are needed, and how these can be effectively developed and implemented, to ensure an avenue of redress in cases of violations.

Disclaimer: The views expressed herein are those of the author(s) alone.
