Mobile phone theft and EU eprivacy law: the CJEU clarifies police powers

By Lorna Woods

This post originally appeared on EU Law Analysis, and is reproduced here with permission.

Introduction

This week’s CJEU judgment in Case C-207/16 Ministerio Fiscal is part of the jurisprudence on the ePrivacy Directive, specifically Article 15, which broadly allows Member States to permit intrusions into the confidentiality of communications for certain specified reasons. Article 15 is part of the legal framework for the mass retention of communications data, from Digital Rights Ireland (Joined Cases C-293/12 and C-594/12, EU:C:2014:238) (“DRI”) onwards, a line of case law in which the Court has affirmed that retention schemes could be justified only in the case of “serious crime” (Tele2/Watson, Joined Cases C-203/15 and C-698/15, ECLI:EU:C:2016:970). This left open the question of what “serious crime” might be, and whether there would be EU law standards circumscribing the scope of this term. It is this question that the reference here seeks to address, though it should be noted that the facts at issue were very different from those in the earlier cases. Continue reading


Big Brother Watch and Others v. the United Kingdom – Some initial thoughts

By Daragh Murray & Vivian Ng

On Thursday, 13 September 2018, the European Court of Human Rights (ECtHR) handed down its decision in Big Brother Watch and Others v. the United Kingdom. The decision, prompted by the 2013 Snowden revelations, addressed the legality of the United Kingdom’s (UK) bulk interception programme, its intelligence sharing, and its obtaining of communications data from communications service providers.

This is a complex decision which is likely to have significant ramifications for mass surveillance programmes. As such, it is too early to offer detailed analysis, and this post intends to highlight some of the interesting elements that we will be thinking over in the coming weeks and months. Our focus here is on the bulk interception programme.

For an excellent initial post on the implications for the UK’s Investigatory Powers Act, please see ‘Big Brother Watch v UK – implications for the Investigatory Powers Act?’ at Cyberleagle. Continue reading

A Brief Review of the OHCHR Consultation on the Right to Privacy in the Digital Age

By Vivian Ng

The Office of the United Nations High Commissioner for Human Rights (OHCHR) conducted a consultation on the right to privacy in the digital age, convening an expert workshop in Geneva on 19-20 February 2018 and inviting relevant stakeholders to submit contributions for a report on the topic. The Human Rights, Big Data and Technology Project (HRBDT) participated in the expert workshop and submitted inputs to OHCHR. The report has now been published. This post highlights the key elements of the expert workshop, outlines HRBDT’s contributions, and summarises OHCHR’s outcome report.

Continue reading

Quick Comment on UK Draft Data Retention and Acquisition Regulations 2018 and the definition of ‘serious crime’ for bulk surveillance powers

By Daragh Murray and Pete Fussey

The UK Government has published the Draft Data Retention and Acquisition Regulations 2018, which propose changes to the Investigatory Powers Act 2016 (IPA) and the Regulation of Investigatory Powers Act 2000 (RIPA). Both the IPA and RIPA provide a legal basis for Government surveillance, including bulk surveillance techniques.

The changes included in the draft were brought about, in large part, as a result of adverse findings by the Court of Justice of the European Union in the Watson case, which held that the EU Charter of Fundamental Rights:

…must be interpreted as precluding national legislation governing the protection and security of traffic and location data and, in particular, access of the competent national authorities to the retained data, where the objective pursued by that access, in the context of fighting crime, is not restricted solely to fighting serious crime, where access is not subject to prior review by a court or an independent administrative authority, and where there is no requirement that the data concerned should be retained within the European Union. (para 125)

Continue reading

Police are using big data to profile young people, putting them at risk of discrimination

By Daragh Murray and Pete Fussey

This blog originally appeared on The Conversation.

In a recent report, Amnesty International raised a series of human rights issues in connection with the “gang matrix” developed and run by London’s Metropolitan Police. According to the report, appearing on the database could affect the lives of 3,806 people, 80% of whom are between 12 and 24 years old.

There are no specific details about how the matrix operates and is used by police. It exists, at least in part, to address the difficulties in policing gang activities across different districts. But it’s suspected that – because of government data sharing – appearing on the database will “follow” young people around, affecting their access to housing, education or work.

The Met said in a statement, “The overarching aim of the matrix is to reduce gang-related violence and prevent young lives being lost”, but added that it was working with Tottenham MP David Lammy, Amnesty International and the Information Commissioner’s Office to “help understand the approach taken”.

Continue reading

Members of HRBDT Project Submit Evidence on Proposed Amendments to Investigatory Powers Act


On 30 November 2017 the Home Office issued an open consultation regarding proposed amendments to the UK Investigatory Powers Act 2016. These amendments were proposed in response to an adverse judgment by the Court of Justice of the European Union in Joined Cases C-203/15 and C-698/15.

The full scope of the proposed amendments is discussed in the Government’s Consultation Document.

Dr. Daragh Murray, Prof. Pete Fussey, and Prof. Maurice Sunkin QC, members of the Human Rights, Big Data and Technology Project based at the University of Essex Human Rights Centre, submitted written evidence. Their submission focuses on the Government’s proposal to amend the statutory purposes for which communications data may be retained or acquired, and argues that the proposals are overly broad, add uncertainty to the law, and have not been adequately justified. Continue reading

Amnesty International’s Tanya O’Carroll on privacy & the ‘nothing to hide, nothing to fear’ argument

By Ajay Sandhu 

I recently interviewed Tanya O’Carroll, a Technology and Human Rights advisor at Amnesty International, to discuss government surveillance and its impact. I framed our discussion around the most common response researchers studying surveillance receive from the public: the “nothing to hide” argument, which holds that government surveillance programs serve a security purpose and should not be opposed by innocent people. This blog outlines O’Carroll’s thoughts on the nothing to hide argument and its flaws, the importance of privacy rights, and the ‘encryption mentality’ she thinks should replace it. Continue reading

“People just don’t get it”: an interview with Kade Crockford of the ACLU of Massachusetts about why surveillance issues aren’t getting the attention they deserve

By Ajay Sandhu 

The precarious state of privacy often fails to stir public attention. For example, the Investigatory Powers Act (IPA), a piece of legislation granting police and intelligence agencies sweeping surveillance powers in the UK, is said to have passed into law “with barely a whimper.” What explains this lukewarm response? How does the US install bulk surveillance programs like Total Information Awareness (TIA), or the UK pass privacy-threatening bills like the IPA (sometimes called the “snooper’s charter”), without receiving the level of attention one might expect from a society that claims to value privacy rights?

To help answer this question, I spoke to Kade Crockford, the director of the Technology for Liberty Program at the American Civil Liberties Union of Massachusetts (ACLUM). I spoke to Crockford because of her expert knowledge on issues related to privacy, security, and surveillance, as well as her recent experience leading a campaign against the Boston Police Department’s plan to buy social media spying software. Crockford played a central role in the pro-privacy advocacy which likely encouraged the Boston PD to scrap its plans. I thought that Crockford could offer insights into why surveillance practices aren’t earning a critical response and how to reverse this trend. Continue reading

The Police’s Data Visibility Part 2: The Limitations of Data Visibility for Predicting Police Misconduct

By Ajay Sandhu

In part 1 of this blog, I suggested that raising the police’s data visibility may improve opportunities to analyse and predict fatal force incidents. In doing so, data visibility may offer a solution to problems related to the high number of fatal force incidents in the US. However, data visibility is not without limitations. The (un)willingness of data scientists and police organisations to cooperate, and the (un)willingness of police organisations to institute changes based on the findings of data scientists’ work, must be considered before optimistically declaring data visibility a solution to problems related to fatal force. In this blog, I discuss two additional limitations of data visibility: low-quality data and low-quality responses to early intervention programs. Both are problems related to the prediction and intervention stages of using data to reduce fatal force incidents. Future blogs can discuss issues related to the earlier stages, such as the collection and storage of data about police work.

  Continue reading

The Police’s Data Visibility Part 1: How data can be used to monitor police work and how it could be used to predict fatal force incidents

By Ajay Sandhu

Editor’s note: This is the first of a two-part blog post examining the potential impact of data visibility on law enforcement.

The Counted, Fatal Force, and Mapping Police Violence websites each collect, store, and display data about people killed by police in the United States. These websites are just a few of the emerging platforms designed to address the significant gap in information left by US police organisations’ failure to create, maintain, and publicly disclose data about “fatal force” incidents. Visitors to any of the three websites can access in-depth statistics, charts, graphs, and maps detailing the number of fatal force incidents that have occurred, their locations, the identities of the officers involved, and the demographics of victims. The availability of this information has prompted questions about whether and how digital data can address persistent problems related to a lack of transparency and accountability in policing, and the lack of information about fatal force incidents:

  • Can data enable new opportunities to scrutinize fatal force incidents?
  • Can data provide an opportunity to discover trends associated with fatal force incidents?
  • Can data analysis provide the police with the knowledge required to reduce fatal force incidents?

This two-part blog focuses on the last question by considering the opportunities and limitations of using digital data to monitor police work, document fatal force incidents, and create intervention programs designed to reduce fatal force incidents.

  Continue reading