The Police’s Data Visibility Part 2: The Limitations of Data Visibility for Predicting Police Misconduct

By Ajay Sandhu

In part 1 of this blog, I suggested that raising the police’s data visibility may improve opportunities to analyse and predict fatal force incidents. In doing so, data visibility may offer a solution to problems related to the high number of fatal force incidents in the US. However, data visibility is not without limitations. Problems including the (un)willingness of data scientists and police organisations to cooperate, and the (un)willingness of police organisations to institute changes based on the findings of data scientists’ work, must be considered before optimistically declaring data visibility a solution to problems related to fatal force. In this blog, I discuss two additional limitations of data visibility: low-quality data and low-quality responses to early intervention programs. Both are problems related to the prediction and intervention stages of using data to reduce fatal force incidents. Future blogs can discuss issues related to the earlier stages, such as the collection and storage of data about police work.

Low-quality data: Because algorithms designed to predict fatal force are still relatively new, they are hard to assess. However, we can learn about their limitations by drawing on research about the police’s attempts to use algorithms to predict and pre-empt crime. “Predictive policing” uses digital data about previous crimes to predict where crime is most likely to occur in the future and who is most likely to engage in criminal behaviour. Despite police departments’ recent and rapid adoption of predictive policing software such as PredPol, the effectiveness of predictive policing has been critiqued for several reasons. Among these are concerns about the low accuracy of the data fed to predictive policing software. This “input data” has been described as inaccurate and incomplete due to systemic biases in police work. For example, police officers’ tendency to focus on certain types of crime, certain types of spaces, and certain social groups, while leaving other crimes and other spaces unaddressed, creates unrepresentative data suggesting problematic correlations between impoverished spaces, racial groups, and crime. When analysed by predictive software, this biased data is likely to produce predictions which contain both false positives and false negatives. Accordingly, if high-quantity but low-quality input data is used to predict fatal force incidents, similar problems may arise. For example, input data which is inaccurate, incomplete, or skewed (the result of police organisations’ failure to accurately document police work, especially use of force incidents) may produce inaccurate calculations from predictive software. These inaccurate predictions may then lead early intervention programs to target low-risk officers and/or to neglect high-risk officers.
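To make this mechanism concrete, here is a minimal sketch in plain Python (a hypothetical simulation; the two areas, the incident rate, and the patrol shares are invented for illustration and are not drawn from PredPol or any real department’s data). It shows how where data is collected, rather than what is actually happening, can drive a count-based prediction.

```python
# Toy, hypothetical simulation: two areas with IDENTICAL underlying
# incident rates, but patrols (and therefore records) are concentrated
# in area A. All numbers are invented for illustration.
import random

random.seed(42)

TRUE_RATE = 0.05        # same true per-encounter incident rate in both areas
PATROL_SHARE_A = 0.9    # but 90% of recorded encounters happen in area A

def simulate_records(n=10_000):
    """Generate (area, incident) records under biased patrol allocation."""
    records = []
    for _ in range(n):
        area = "A" if random.random() < PATROL_SHARE_A else "B"
        incident = random.random() < TRUE_RATE
        records.append((area, incident))
    return records

records = simulate_records()

# A naive "predictive" score: raw incident counts per area.
counts = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for area, incident in records:
    totals[area] += 1
    counts[area] += incident  # bool adds as 0 or 1

for area in ("A", "B"):
    rate = counts[area] / totals[area]
    print(f"Area {area}: {counts[area]} recorded incidents, "
          f"per-encounter rate {rate:.3f}")

# Raw counts suggest area A is roughly 9x "riskier" than area B, even
# though the per-encounter rates are nearly identical: a false positive
# for A and a false negative for B, driven entirely by data collection.
```

The same logic would apply to officer-level input data: if use of force incidents are documented more thoroughly in some units than in others, count-based risk scores will tend to flag the well-documented officers and overlook the rest.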

Low-quality response: In addition to concerns about their ability to accurately identify high-risk officers, there are several concerns about the practicalities of early intervention programs. For instance, there are reasons to believe that, even if high-risk officers are accurately identified by data scientists, interventions will not be taken seriously. A police department in New Orleans, for example, faced difficulties persuading its officers to take interventions seriously after officers began to mock data collection efforts and even treated being flagged as high-risk by an algorithm as a “badge of honour.” Some officers began to refer to interventions such as re-training programs as “bad boy school” and saw inclusion as a matter of pride rather than something to be taken seriously. These problems suggest that even if data scientists can construct an algorithm which accurately flags high-risk officers, there is no guarantee that ensuing attempts to improve police behaviour will be effective, especially if police are unwilling to accept interventions. Furthermore, even if officers do not ridicule interventions, there is no guarantee that interventions will receive the support from police organisations that they may require. For example, studies show that interventions often suffer from administrative neglect and delays, and can be error-ridden and sloppy, leading to a failure to transform the organisational and social culture of a police department.

Conclusion

By raising police officers’ data visibility, police organisations, with the help of data scientists, can engage in comprehensive analysis of fatal force incidents and produce programs designed to identify high-risk officers and intervene successfully through re-training, counselling, or substantive changes to use of force policy. However, several unknowns play a key role in determining the implications of data visibility and predictive analytics, including the inclusion/exclusion of data, false positives/negatives, and the social forces which determine whether interventions will be taken seriously by officers. Each of these unknowns requires detailed study before we can walk a logical pathway from data visibility to a reduction in fatal force incidents.


Disclaimer: The views expressed herein are those of the author(s) alone.
