Lethal Autonomous Robots and the Dehumanization of War

Over the last week, several newspapers around the world have highlighted the second round of meetings in Geneva, under the auspices of the Convention on Certain Conventional Weapons (CCW), regarding the legal future of so-called Lethal Autonomous Robots (LARs). For some, who argue that LARs can be more ethical than human soldiers, this new technology represents the future of warfare (R. Arkin). For others, LARs are ‘killer robots’ that should be subject to a prohibition similar to that applicable to Blinding Laser Weapons, which were prohibited by Protocol IV to the CCW (Human Rights Watch; Article 36).

By Afonso Seixas-Nunes, SJ.

The possibility of lethal autonomous systems operating on the battlefield raises some very important questions. The history of weapons and military technology is characterised by increasing physical distance between fighters, but is it legitimate to take this one step further and completely delegate the risks of combat to autonomous systems? If war was called the ‘human thing’ by the Greeks, are we able to transform war into a ‘robot thing’? Some authors understand this question as an ethical and moral one, but others prefer to postulate that LOAC demands ‘meaningful human control’, and this concept takes us to the key issues that arise when dealing with LARs: what is autonomy, how does this concept apply to weapons systems, and what are the implications (i.e. can these systems work)?

In this regard it is very important to understand the Dynamic OODA Loop (DOODA Loop), a concept familiar to roboticists and military engineers. The concept refers to an observe, orient, decide and act decision-making process, undertaken in a very dynamic and unpredictable environment, such as a battlefield. This means that the decision to launch an attack is based on the observation (O) of information, which is collected and processed in order to orient (O) the best choice between several possibilities (D). Finally, action is taken on the ‘best’ decision (A). In our case, this is the decision to launch, cancel or suspend an attack. The capacity of a system to perform the four stages of the DOODA Loop without human intervention is called autonomy: the more ‘autonomous’ a system is, the less human control there is.
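To make the loop concrete, the sketch below maps the four stages onto a simple control loop. It is a minimal illustration under stated assumptions, not a model of any fielded system; all function names and the toy environment are hypothetical.

```python
from dataclasses import dataclass
import random

# Hypothetical sketch of the four DOODA stages as a control loop. It only
# illustrates how observe -> orient -> decide -> act maps onto code, and
# where human control disappears as each stage is automated.

@dataclass
class Observation:
    sensor_data: dict

def observe(environment: dict) -> Observation:
    # Observe: collect raw information from sensors.
    return Observation(sensor_data=environment)

def orient(obs: Observation) -> list[str]:
    # Orient: process the raw data into candidate courses of action.
    if obs.sensor_data.get("target_detected"):
        return ["launch", "suspend", "cancel"]
    return ["cancel"]

def decide(options: list[str]) -> str:
    # Decide: select the 'best' option. In a fully autonomous system
    # this choice is made without human intervention.
    return options[0]

def act(decision: str) -> None:
    # Act: carry out the chosen decision (launch, suspend or cancel).
    print(f"Executing decision: {decision}")

# One pass through the loop on a toy environment.
environment = {"target_detected": random.choice([True, False])}
act(decide(orient(observe(environment))))
```

Automating only the observe and orient stages leaves a human ‘in the loop’; automating all four is what the debate over autonomy is about.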

Of course immediate problems come to mind. As the working paper submitted by the Austrian Government highlighted, the basis for the lawfulness of new weapons can be found in Article 36 API to the Geneva Conventions, which stipulates the obligation of every State party ‘to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party’. As the International Court of Justice stated in the Nuclear Weapons Advisory Opinion, the principle of distinction is one of the ‘cardinal principles’ enshrined in LOAC. Looking at the asymmetry of modern armed conflicts, it is clear that autonomous systems will face not just the challenge of determining ‘who is who?’ (i.e. is an individual a combatant or a civilian?) but also ‘what is what?’. That is, autonomous systems must have the capacity to interpret different behaviours in order to distinguish between civilians, civilians directly participating in hostilities, and possibly also members of an armed group with a ‘continuous combat function’. This problem engages the very first stage of the DOODA Loop: the capacity to observe and process information, which is very much grounded in understanding human behaviour with accuracy.
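A short sketch makes the difficulty visible. The categories below follow the legal distinctions just described, but the classification rule is a deliberately naive placeholder of my own invention; no reliable algorithmic test for these statuses exists, which is precisely the problem.

```python
from enum import Enum, auto

# The statuses an autonomous system would have to distinguish under the
# principle of distinction. The enumeration is easy; classify() is not.

class Status(Enum):
    COMBATANT = auto()
    CIVILIAN = auto()
    CIVILIAN_DPH = auto()                 # civilian directly participating in hostilities
    CONTINUOUS_COMBAT_FUNCTION = auto()   # member of an organised armed group

def classify(observed_behaviour: dict) -> Status:
    # Hypothetical, deliberately crude rule: real status turns on context,
    # intent and conduct over time, none of which reduces to a checklist.
    if observed_behaviour.get("carrying_weapon") and observed_behaviour.get("uniformed"):
        return Status.COMBATANT
    if observed_behaviour.get("hostile_act"):
        return Status.CIVILIAN_DPH
    return Status.CIVILIAN

# An armed but non-uniformed individual is classified as a civilian here,
# which may be right or catastrophically wrong depending on context.
print(classify({"carrying_weapon": True, "uniformed": False}))  # Status.CIVILIAN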

The principle of proportionality gives rise to further difficulties. This principle requires that anticipated military advantage must be weighed against the potential loss of civilian life, injury to civilians, or damage to civilian objects that may be expected from an attack. This harm must not be excessive in relation to the concrete and direct military advantage anticipated (Article 51 (5) (b); Article 57 (2) (b) API). The possibility of autonomous systems performing the inherently subjective evaluations demanded by the principle of proportionality is necessarily a matter of concern. Additionally, I would highlight another aspect: if autonomous weapons are expected to have no emotions and to be fearless, how exactly are the value of human life and military advantage going to be balanced?
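The sketch below shows what reducing the proportionality test to code would look like; it is a hypothetical illustration, not a proposed method. The inputs have no agreed common unit, so the numeric threshold is an arbitrary assumption, which is exactly the objection.

```python
# Hypothetical encoding of the proportionality test. The point is that any
# numeric 'excessiveness' threshold is an arbitrary stand-in for a legal
# judgement that resists quantification.

def attack_is_proportionate(expected_civilian_harm: float,
                            anticipated_military_advantage: float,
                            excessiveness_threshold: float = 1.0) -> bool:
    # Article 51(5)(b) API: harm must not be 'excessive in relation to the
    # concrete and direct military advantage anticipated'. Reducing that
    # standard to a ratio is precisely the contested step.
    if anticipated_military_advantage <= 0:
        return False
    return expected_civilian_harm / anticipated_military_advantage <= excessiveness_threshold

print(attack_is_proportionate(expected_civilian_harm=2.0,
                              anticipated_military_advantage=5.0))  # True
```

How the two quantities are estimated, and by whom, is left entirely open by such a function, which is the author's point about subjectivity.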

These issues give rise to significant challenges. It is evidently difficult to translate into algorithms criteria that are themselves based on subjective judgements dependent upon the irregularities of the battlefield. Equally, the dynamism and uncertainty inherent in the battlefield give rise to concerns regarding the predictability of future autonomous systems, and uncertainty as to what these weapons may do once deployed.

At this stage some may ask: if autonomous systems give rise to such significant concerns, why should we not pursue an absolute ‘ban’ as proposed by Chile, Pakistan, the Holy See and others?

Given the potential benefits in relation to precision and efficiency, I personally believe that we should not exclude military technology’s evolutionary trend towards autonomy. It is true that LARs may mean a ‘dehumanization’ of the battlefield, but is this necessarily a bad thing?

Pragmatically, we cannot ignore the fact that semi-autonomous systems are already deployed and utilised by Israel, South Korea, the USA, and the UK. Equally, it must be noted that States do not have an interest in unsupervised ‘autonomous systems’: the battlefield is too unpredictable to leave LARs acting without any human supervision.

I believe that the real concern in relation to the deployment of LARs is the absence of social responsibility. Throughout history we have seen – and felt – the ‘human costs’ of war: soldiers not returning to their families. The presence of LARs on the battlefield might lower the threshold at which States go to war and – given their ability to operate in remote locations – could raise concerns regarding State transparency.

Disclaimer: The views expressed herein are those of the author(s) alone.
