Vicarious Trauma of the Private Counter-Terror Workforce: Extending the Duty of Care

By Vivian Ng

Communities used to gather on street corners and sidewalks, in parks and public squares. Today, social media platforms are increasingly the forum of choice for individuals seeking to express themselves, communicate, interact, organise, and even mobilise. These online platforms are today’s public square, where the free exchange and development of opinions and ideas can happen. However, there are concerns that social media has also become a forum where terrorists, racists, misogynists, and child abusers can thrive. As a result, and particularly in light of recent terror attacks, there is pressure on social media companies to be more proactive in preventing their platforms from being used to radicalise and incite violence. In response, social media companies are investing more resources in moderating content on their platforms, particularly by expanding their teams of content moderators. A critical reflection on the human rights implications of this evolving role, and on the responsibility of technology companies, is necessary. This post focuses on one specific element of the wider debate: an interrogation of the duty of care owed to the so-called ‘private counter-terror workforce’.

The phenomenal size of online social media communities and the staggering amount of content generated every day present significant challenges to keeping online spaces safe for users. Content moderation stretches human resources and technological expertise, making it operationally demanding. From a human rights perspective, it also calls into question the scope and content of technology companies’ human rights due diligence, bringing new challenges to the human rights framework. Online platforms may fill the role of public forums, but they are in fact privately owned and controlled. Others have raised concerns about how problematic content moderation policies and practices of social media companies can pose risks to users’ freedom of expression. However, a little-addressed issue is the impact that these operations can have on the rights of the employees (and sub-contractors) who carry out content moderation work.

Recent reports on the privatisation of counter-terrorism activities have highlighted how technology companies rely on teams of content moderators, composed of staff and subcontractors, to monitor, restrict, or remove content reported by users. These teams are also responsible for identifying, investigating, and reporting suspicious accounts to law enforcement. These tasks are incredibly challenging. With apparently limited training and support, content moderators must apply complicated rules from prescriptive manuals and exercise judgment on reported content under time pressure. Crucially, the psychological toll of such work has been raised as a serious concern. Repeated exposure to distressing content, with limited mental health support, can have damaging psychological and emotional consequences for content moderators, with significant adverse effects on their professional and personal lives.

It is not unreasonable to suggest that the vicarious trauma content moderators experience from reading, hearing, or watching traumatic material goes far beyond the typical workplace health and safety risks anticipated in a technology company. Yet these risks have not been appreciated in terms of their rights implications, particularly for the right to just and favourable conditions of work and the right to the enjoyment of the highest attainable standard of physical and mental health. The provision of, and access to, appropriate mental health treatment and care is one of the conditions that enable the realisation of the right to health. Business enterprises have responsibilities regarding the realisation of these rights, which is particularly important in the context of occupational safety and health.

The conversation about business and human rights is not new, but the increasingly prominent role of technology companies brings new dimensions to the responsibility of businesses to respect human rights throughout their operations. How can the duty of care be formulated and operationalised in relation to social media companies and their counter-terrorism content moderation operations? There needs to be a change in institutional attitudes towards the real demands and risks of such work, which is a leap from the status quo. Perhaps parallel lessons can be learned from other situations where such a change has been necessary and is materialising.

Consider the ‘traditional’ frontline as an example: those deployed on missions to the physical frontline in conflict zones, such as journalists, staff of human rights and humanitarian organisations, and peacekeepers, work in hazardous and hostile environments. It is recognised that organisations have a duty of care towards personnel on field assignments, and that staff care, encompassing physical and mental health and psychological wellbeing, needs to be prioritised. Only recently, however, has it been recognised that this duty of care should extend to staff of the same organisations based in offices or headquarters. There is increasing recognition that staff who work with eyewitness media and review disturbing footage can also experience vicarious trauma, despite being physically removed from the frontline.

If content moderators at social media companies are working with material similar to that handled by staff of news, human rights, and humanitarian organisations, it is not unreasonable to suggest that they can experience comparable vicarious trauma. Yet this connection has not been drawn between the private counter-terror workforce and those in ‘mission-driven’ organisations. If technology companies are being thrust to the forefront of the global counter-terror effort, their duty of care and the parameters of their human rights due diligence cannot be an afterthought.


Disclaimer: The views expressed herein are those of the author(s) alone.