Dehumanising the dogfight: the next step in the unmanned arms race

by Pauline Canham

Last month, an artificial intelligence (AI) algorithm was pitted against a human pilot in simulated F-16 fighter jet dogfights. The AI pilot won, 5-0. The US Defense Advanced Research Projects Agency (DARPA) hosted the ‘AlphaDogfight’ Trials as part of its Air Combat Evolution (ACE) program, which explores future possibilities for teaming machines with humans to enhance defence capability through “complex multi-aircraft scenarios”.

This article will look at the issues raised by removing the human element from lethal action, before outlining the growing calls, from the human rights community, for a ban on autonomous weapons.  First, though, it is worth taking a step back to understand how we got here, through a brief history of the use of unmanned drones, the precursor to fully autonomous weapons.


A brief history of Unmanned Aerial Vehicles (UAVs)

The use of pilotless aircraft for surveillance during conflict emerged during the Vietnam War, when the US flew what it called “Lightning Bugs” on reconnaissance missions. The Israeli Defence Force (IDF), too, has used drones, or Unmanned Aerial Vehicles (UAVs), since the 1970s, as decoys and intelligence-gathering vehicles during wars with Egypt, Syria and Lebanon.

The merging of these robotic eyes-in-the-sky with lethal weaponry would prove a pivotal moment for post-9/11 policymaking, playing a significant role in what President Bush called “a different kind of war”: one in which the risk to American military personnel was removed by delivering death by remote control.

The use of remotely piloted drones to assassinate the enemy, rather than risking troops on the ground, found favour, particularly after the catastrophe of the Iraq war and the deeply damaging CIA torture program, and became a go-to counter-terrorism tool for Obama. The low risk to American lives and the oft-sold precision of drones gave them an ‘ethical’ flavour that appealed to those who wanted revenge with a clean conscience. They allowed Obama to appear tough on terrorists while maintaining his Nobel Peace Prize-winning status as a man who espoused human rights and the rule of law.

Along with indefinite detention without trial at Guantanamo Bay, the drone program is one of the few surviving policies of the War on Terror, now into its 20th year. Claims of the precision accuracy of drones, though, have been challenged by various studies in the countries of their operation, including Yemen and Afghanistan, where drone strikes were found to be “10 times more likely to kill civilians than conventional aircraft”. In July 2020, on publishing her report into the drone assassination of Iranian General Soleimani, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Agnès Callamard, described the surgical precision of drones as a “myth”.

Removing the human from lethal action

Operating states present the removal of the human from battlefield operations as a significant advantage, claiming that machines are less likely to make mistakes, offer higher levels of precision, and pose a lower risk to military personnel. The AlphaDogfight Trials also exposed the fear, or instinct for “self-preservation”, of the human pilot as a limiter on risky manoeuvres that might provide an edge in battle. The Pentagon’s Director for Research and Engineering for Modernisation, Mark Lewis, said that the advantage of an AI pilot is that it will be prepared to “do things that a human pilot wouldn’t do”.

Whilst this lack of fear may appear advantageous, it also illustrates the argument against fully autonomous weapons: they lack human attributes, which include not only fear for oneself but compassion towards others. They are, in effect, weapons of dehumanisation, with no ability to recognise the humanity in those they fight against, and no way to distinguish between combatants and civilians. As things stand, the use of remotely controlled drones, operated by ‘pilots’ stationed thousands of miles from the target, has already produced lethal strikes with catastrophic civilian casualties, through the misinterpretation of activities including weddings, funerals and jirgas (traditional community assemblies), wrongly assumed to be terrorism-related.

Jeremy Scahill, author of The Assassination Complex, refers to this as the ‘tyranny of distance’, a phrase borrowed from Geoffrey Blainey’s 1966 book about the precariousness of Australia’s isolation and distance from its coloniser. The lives of the Yemeni, Pakistani, Afghan and Somali targets of drone strikes are indeed permanently precarious, and the distance of the innocent victims of robotised drone violence renders them invisible, not just to the drones’ ‘pilots’, who initiate the strikes, but also to the publics of the governments that deploy such weapons.

Michael Walzer, political theorist and author of Just and Unjust Wars, has voiced his concerns about drones, stressing that their advantages make their use easier and more likely, and that this should trouble us, since the traditional reciprocal risks of going to war add weight to jus ad bellum considerations. Removing the human from one side of the battle has been described by some as “remote controlled hunting”, the moral equality of combatants destroyed by the lack of risk reciprocity.

Further, as the development of these hi-tech weapons depends on the depth of defence budgets, asymmetries of power and violence have resulted in violations of human rights in Afghanistan, Pakistan, Palestine, Somalia, Yemen, Iraq and Libya, where communities live in constant fear of strikes. These are communities that have been psychologically traumatised, their privacy denied and their cultural and religious practices undermined. As a Stanford Law School study in Pakistan concluded, innocent men, women and children have been killed simply because behaviour such as gathering in groups or carrying weapons was considered by the United States to be consistent with terrorist activity.

Imagine, then, the spectre of full autonomy in the use of armed drones, offering the prospect that such behavioural ‘signatures’ could be programmed into targeting algorithms that disregard cultural context altogether.

Calls for a ban

As yet, there is little international law specifically regulating the use of drones or autonomous weapons, other than International Humanitarian Law (IHL) – a.k.a. the Law of Armed Conflict (LOAC) – which covers operations within zones of existing armed conflict, and International Human Rights Law (IHRL), which requires that any counter-terrorism operation be justified as self-defence and limited by necessity and proportionality.

Furthermore, there are ambiguities around the extraterritorial application of IHRL, which allow the US to sidestep accountability on a technicality, namely Article 2 of the International Covenant on Civil and Political Rights (ICCPR), which limits the obligations of a state to “all individuals within its territory and subject to its jurisdiction”. In addition, the state of exception that ushered in Bush’s “different kind of war” has become permanent, and the flouting of international law under the guise of a universal project of global security and human rights has slipped quietly under the radar, with the suffering of thousands of innocent victims kept out of sight.

There are now growing calls for a ban on fully autonomous weapons, and for a treaty to ensure that humans retain control over the use of force and lethal decision-making. A 55-page report released by Human Rights Watch in August 2020, “Stopping killer robots: Country positions on banning fully autonomous weapons and retaining human control”, listed the positions of 97 states involved in discussions on the topic since 2013.

The United States’ position is that negotiating a new international treaty on fully autonomous weapons is “premature”, arguing that IHL as it currently stands is sufficient. Rather interestingly, China supports a ban on the use of autonomous weapons but not on their development, as it seeks to establish itself as a hi-tech military superpower with a focus on machine learning, AI and autonomous weapons systems. The United Kingdom, meanwhile, has joined the United States in insisting that existing IHL is adequate and “has no plans to call for or to support an international ban” on such weapons. In Germany, opposition parties have called on Chancellor Merkel to take a tough stand on the issue, arguing that without restraints there is a very real danger of a new arms race. However, Merkel’s coalition voted down the motion, and critics point to German sales of “new weapons with autonomous functions” as playing a key role in that vote.

Conclusion

The Vice President of Heron Systems, the small Maryland company that developed the algorithm that won the dogfight competition, said that despite ethics concerns it is important to forge ahead with employing AI in military hardware because “if the United States doesn’t adopt these technologies, somebody else will.” Such a position simply accelerates the race towards a global proliferation of robotic violence, a danger noted by UN Secretary-General António Guterres in his 2020 Report on the Protection of Civilians in Armed Conflict. In the report, he stressed the “moral and ethical issues in allowing technology to decide whether to take a human life”, adding that the current absence of debate “leaves a policy vacuum that has to be addressed by Member States.”

In his Nobel Peace Prize speech, Obama warned that “modern technology allows a few small men with outsized rage to murder innocents on a horrific scale”; that warning-cum-national security strategy would become the modus operandi of the War on Terror. If the international community does not come together to curtail the further development of unmanned and autonomous lethal weapons, those few small men will become many.

ABOUT THE AUTHOR

Pauline Canham is the HRC Blog’s student editor. Pauline is studying a Masters degree in Human Rights and Cultural Diversity at Essex, after 20 years in the broadcasting sector working for the BBC and Al Jazeera, with a focus on large change projects, including the BBC’s move into the new Broadcasting House in 2013 and the relaunch of Al Jazeera’s Arabic Channel in 2016.

International Human Rights Weekly News Roundup

by Pauline Canham

In focus

Police violate human rights in their use of facial recognition technology

Three senior judges in the UK Court of Appeal have ruled that police in South Wales violated the right to privacy under the European Convention on Human Rights through the unlawful use of facial recognition technology. The ruling comes after a legal challenge by the civil rights group Liberty, which took up the case of a man whose face was scanned while he was Christmas shopping in Cardiff in 2017 and while attending an anti-arms protest in 2018. Mr Bridges, who is a civil rights campaigner, argued that his human rights were breached when his biometric data was used without his consent.

Facial recognition identifies people through distinguishing features of the face, comparing them with identities on watch lists of criminal suspects, missing persons or people of interest. Bridges had lost his original case at the High Court, but the Court of Appeal held that his right to privacy, under Article 8 of the European Convention on Human Rights, was violated because the police had been allowed too much discretion in applying the technology. The Court also found that South Wales Police had failed to investigate racial and gender bias in their facial recognition algorithms.
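To make the mechanics concrete, the sketch below shows how watch-list matching typically works: a model converts each face into a numerical ‘embedding’, and the system flags a match when a scanned face is sufficiently similar to an entry on the list. This is a generic illustration, not the system used by South Wales Police; the `embed_face` function is a hypothetical stand-in for a trained neural network.

```python
# Minimal illustration of watch-list matching with face embeddings.
# NOT the South Wales Police system; `embed_face` stands in for a
# proprietary model that maps a face image to a feature vector.

import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: map a face image to a unit-length
    feature vector (real systems use a trained neural network)."""
    vec = image.astype(float).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> str | None:
    """Return the watch-list identity most similar to the probe face,
    or None if no similarity exceeds the threshold."""
    best_id, best_score = None, threshold
    for identity, ref in watchlist.items():
        score = float(np.dot(probe, ref))  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Example usage (illustrative):
#   watchlist = {"suspect A": embed_face(photo_a)}
#   match = match_against_watchlist(embed_face(camera_frame), watchlist)
```

The choice of similarity threshold matters: set it too low and the system produces false matches, which is precisely where the racial and gender bias the Court highlighted can do real harm.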

Mr Bridges, who is a former Liberal Democrat councillor for Gabalfa in Cardiff, said that he did not set out to make a case on the issue, but after the protest at an arms fair at Cardiff International Arena, where he felt the police were surveilling people to intimidate protestors, he decided to get in touch with Liberty. The 37-year-old, who used crowd-funding to pay for the legal costs, said: “We have policing by consent in this country”.

Liberty lawyer Megan Goulding described the judgment as a “major victory in the fight against discriminatory and oppressive facial recognition”, and the civil rights campaign organisation Big Brother Watch said it “should deter police from lawlessly rolling out other kinds of oppressive technologies”. The Surveillance Camera Commissioner, an independent appointee of the Home Office, welcomed the judgment, saying the “use of this technology will not and should not get out of the gate if the police cannot demonstrate its use is fair and non-discriminatory.”

Meanwhile, South Wales Police are playing the judgment down, reiterating their commitment to the “careful development and deployment” of the technology, but Daragh Murray, Senior Lecturer here at the Essex Human Rights Centre, has said: “It means that any use of facial recognition must be stopped until an appropriate legal basis is established.”

 


Analysis of India’s contact tracing application vis-à-vis digital rights

by Ritwik Prakash Srivastava

Introduction

In the wake of COVID-19, the Indian government released a contact-tracing application, Aarogya Setu (“the application”). The Indian Prime Minister, Mr. Narendra Modi, in his address to the nation on 14 April 2020, urged citizens to download the application to supplement the State’s struggle against the contagion. What started as a voluntary step was first made mandatory for employees, including in the private sector; a directive then extended the requirement to entire districts, with failure to comply attracting a criminal penalty.

This brings to the forefront the conflict between public health and an individual’s right to privacy. While the effectiveness of contact-tracing has been proven, it is also pertinent that such a mechanism be developed within the framework of existing laws and with regard for human and constitutional rights. Interestingly enough, the Supreme Court of India, in its landmark 2017 judgment in K.S. Puttaswamy v. Union of India (“the judgment”), made the right to privacy a fundamental right in India, even stating that “if the State preserves the anonymity of the individual it could legitimately assert a valid state interest in the preservation of public health…”.

This piece seeks to address the viability of the Indian government’s order making the download of the Aarogya Setu application mandatory, tested against the touchstone of the right to privacy.

 

Analysis

The Court in its judgment recognised every individual’s right to decide for themselves the extent of information about them that may be shared with others. However, no fundamental right in India is absolute; each comes with reasonable restrictions (see Article 19(2) of the Constitution of India). Permissible grounds of restriction include preserving public order, maintaining the sovereignty and integrity of India, and protecting the security of the State. Such restrictions must be imposed in accordance with procedures established by law (see Maneka Gandhi v. Union of India).

As per paragraph 180 of the section of the judgment authored by the then Chief Justice of India, Justice Khehar, Justice R.K. Agrawal and Justice Dr D.Y. Chandrachud, before restrictions on the right to privacy can be placed, the State must first show the existence of a valid law which permits the restriction; secondly, the restraint must be in pursuit of a legitimate aim; thirdly, it must have a rational nexus with that aim; fourthly, it must be the least restrictive means of achieving that aim; and lastly, it must be proportionate to the aim sought to be achieved.

The Aarogya Setu application fails on the first prong itself. Not even the Epidemic Diseases Act, 1897, currently in force in India, grants such powers to the State. In the absence of any legislative framework to restrict its ambit, there is no guarantee that sensitive data about individuals’ health and movements will not be used for mass surveillance, or will not be stored and used for profiling once the pandemic subsides.


As the Terms and Conditions of Aarogya Setu currently stand, a user has no mechanism to seek deletion of their data once uploaded to the application’s servers. Removing the application merely means they can no longer use its services, not that their data is erased. Without a comprehensive framework to regulate data protection, a contact-tracing technology may as well mutate into a system of movement control and data profiling. The possibility of this grows in the absence of any protocol limiting the time for which such sensitive personal data of citizens can be stored by the government.

These shortcomings might have been eliminated if India had a dedicated privacy framework, as demanded in the judgment. However, even after substantial discussion and the pressing need for such a law, the framework is yet to be enacted; it currently exists merely as a bill. Measured against international standards and European regulations on contact-tracing, the Aarogya Setu application fails on various counts.

The European Data Protection Board (“EDPB”) has issued “Guidelines on the use of location data and contact tracing tools” (“Guidelines”). The foremost caveat the Guidelines raise is that contact-tracing tools are a grave intrusion into the privacy of an individual, and they make very clear that use of such an application must be voluntary. The Indian government’s orders making download mandatory go directly against this provision. There is also an inherent lack of transparency about how the accumulated data is to be processed, or for how long it will remain in the government’s possession; the government has not shared any policies on data retention or on grievance redressal in respect of the collected data.

A basic technical requirement for any application which seeks to collect and process data is security. The Guidelines mandate “state-of-the-art” cryptographic techniques to secure the data collected. However, serious questions have already been raised about the application’s sophistication, after an ethical hacker took to Twitter to reveal flaws in its security. There have also been reports of the Aarogya Setu application exposing users’ location data to third-party actors like YouTube.
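The Guidelines do not prescribe a particular design, but the privacy-preserving protocols European regulators point to (DP-3T and the Google-Apple exposure notification scheme, for example) rotate the identifiers a phone broadcasts, so that no observer can track a device over time or reconstruct who met whom. The sketch below illustrates that general idea; it is a minimal, assumed design for illustration, not Aarogya Setu’s actual implementation.

```python
# Minimal sketch of privacy-preserving contact tracing with rotating
# ephemeral IDs (in the spirit of DP-3T / Google-Apple exposure
# notification). Illustrative only; NOT Aarogya Setu's design.

import hmac
import hashlib
import secrets

def new_daily_key() -> bytes:
    """Each device draws a fresh random key every day."""
    return secrets.token_bytes(16)

def ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast ID for a 10-minute interval.
    Observers cannot link IDs across intervals without the key."""
    msg = b"broadcast-id" + interval.to_bytes(4, "big")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

def exposure_check(reported_keys: list[bytes],
                   heard_ids: set[bytes],
                   intervals_per_day: int = 144) -> bool:
    """If a user tests positive, they upload only their daily keys.
    Other devices re-derive the ephemeral IDs locally and compare them
    with the IDs they overheard via Bluetooth."""
    for key in reported_keys:
        for interval in range(intervals_per_day):
            if ephemeral_id(key, interval) in heard_ids:
                return True
    return False
```

Because only random daily keys are ever uploaded, the server learns neither location nor social graph; that data minimisation is exactly the kind of safeguard the EDPB asks for and that a mandatory, centralised application without retention limits lacks.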

 

Conclusion

Since the Supreme Court’s reasoning in the Puttaswamy judgment, the Indian government has collided with the concept of privacy multiple times: first with the nationwide citizen identification scheme Aadhaar, then with the inordinate delay in the delivery of the personal data protection law. While the current circumstances of the pandemic are nowhere near normal, the concerns arising out of unwarranted surveillance cannot be set aside.

The threat that the pandemic poses to digital rights was specifically addressed in a joint statement issued by the United Nations, the Inter-American Commission on Human Rights, and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe. The joint statement provided that the use of any technology for surveillance should conform to the strictest standards of protection provided by domestic law and the principles of international human rights.

New privacy concerns arise every day out of ever-developing technologies, be it facial recognition, mass surveillance, or the tracking of citizens’ online activities. The digital ecosystem has become an integral part of the personal life of every citizen. While the current situation with the coronavirus pandemic is far from ordinary, it is important nonetheless that governments remember that the privacy rights of citizens cannot be suppressed even in unusual circumstances. Now more than ever, it is important that any derogation from or limitation of digital rights remains lawful, and is appropriately scrutinised by states and their respective courts.

 

ABOUT THE AUTHOR

Ritwik Prakash Srivastava is a third-year B.A.LL.B. (Hons.) student at National Law Institute University, Bhopal. He is currently the Co-Convenor of the Centre for Research in International Law at NLIU, Bhopal. His research interests include technology and media law, cyber law, and public international law. He may be reached at ritwiksrivastava.ug@nliu.ac.in.

2019 needs to be the year in which human rights sit at the heart of AI governance

By Lorna McGregor

At the beginning of 2018, the MIT Technology Review forecast that one of the ten ‘breakthrough’ technologies of the year would be ‘AI for Everybody’, underscoring the transformational potential of AI in sectors such as health. In a new report by the ESRC Human Rights, Big Data and Technology project, we argue that it is critical for everyone to benefit from advances in artificial intelligence (AI), particularly those most marginalised in society. To do otherwise only risks widening existing inequality, a point underscored by this year’s World Economic Forum.