Balancing Justice and Technology: The Ethical Landscape of Law Enforcement AI

Harshita Rai
6 min read · Jun 8, 2024


Image source: https://www.zebra.com/us/en/solutions/industry/government/sub-vertical/law-enforcement-technology.html

Introduction

Technology has proven to be a promising tool for law enforcement; however, the ethical concerns raised by its application have largely gone unexamined. Artificial intelligence tools such as predictive policing, big data analytics, and smart machines have been adopted to make policing more effective while simultaneously contributing to several ethical problems. According to Jade McClain, a New York University public affairs officer with a background in law and public administration, today’s policing technology records time-stamped video and audio of police encounters, quickly pinpoints the sites of gunshots, and uses AI to forecast the location and timing of crimes (McClain 2019). Law enforcement began incorporating technology because “it [had] promising applications… to make law enforcement work less time consuming, less prone to human error and fatigue, and more cost effective” (The Security Distillery). Overall, technology was expected to aid law enforcement in its work. Unfortunately, these positive outcomes come with ethical costs born of insufficient understanding and caution (Magalhaes). In practice, many of these technologies raise racial equity issues, privacy issues, social costs, and threats to constitutional rights, which can breed mistrust among citizens, governments, and businesses.

Ethics of Predictive Policing

Predictive policing is a technology that can reinforce racial bias and discrimination, and these risks need to be weighed before such tools are deployed so that existing biases are not perpetuated. According to Daniel Munro, senior fellow at the University of Toronto’s Innovation Policy Lab, predictive policing means using and analyzing data such as the numbers and types of past arrests and public reports to predict future crimes (Macleans). This data may itself be biased, however, as shown by Lum and Isaac’s (2016) simulation study of drug offenses in Oakland. They found that the policing model’s algorithm unfairly targeted African American and Latino neighborhoods, producing “almost 200 times more drug-related arrests than any other area” (Lum and Isaac). This suggests that police data may overrepresent crimes committed in communities with higher concentrations of non-White and low-income residents. Rolland supports Lum and Isaac’s claim by describing the discriminatory results of COMPAS, an AI algorithm police departments have used to conduct risk assessments. Brisha Borden, a Black 18-year-old with no prior offenses, and Vernon Prater, a white 41-year-old who had served a five-year prison sentence for armed robbery, were both assessed with the COMPAS tool; Prater was deemed low risk, while Borden was deemed to have a higher risk of committing a future crime (The Security Distillery). Certain places and populations are clearly overrepresented in the data these systems learn from. These results raise ethical questions about biased algorithms, historical data, and the potential abuse of such tools (GovTech). As a solution, Farhang Heydari (2019) proposes that “one of the best means to ensure policing technology is …to enlist vendors in designing technology that way,” since insight into how technology is rolled out can have a substantial impact (McClain). Heydari, executive director of the Policing Project at NYU Law, calls for bringing tech companies, a significant stakeholder, into the conversation around predictive policing and urging them toward data collection practices that are both efficient and nondiscriminatory.
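The feedback loop Lum and Isaac describe can be made concrete with a toy simulation (this is an illustrative sketch, not their actual model): two neighborhoods have the same true crime rate, but one starts with more recorded arrests, so the "predictive" system keeps sending patrols there, which generates more records there, which justifies more patrols.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true crime rate, but neighborhood A
# starts with more recorded arrests due to historically heavier patrols.
TRUE_CRIME_RATE = 0.05          # identical in both neighborhoods
recorded = {"A": 40, "B": 10}   # biased historical arrest counts

for day in range(1000):
    # "Predictive" step: patrol wherever past records show more crime.
    total = recorded["A"] + recorded["B"]
    weights = [recorded[n] / total for n in recorded]
    patrolled = random.choices(list(recorded), weights=weights)[0]
    # Crime happens equally often everywhere, but it is only *recorded*
    # where police happen to be patrolling.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Share of recorded arrests in A after 1000 days: {share_A:.0%}")
# Despite equal true crime rates, A's share of the record stays inflated.
```

Because the algorithm only ever sees recorded arrests, not true crime, the initial disparity is self-sustaining, which is the core of Lum and Isaac's Oakland finding.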

Ethics of Biometrics

Biometrics, especially facial recognition, is another popular law enforcement tool, but it raises ethical concerns about citizens’ privacy. Biometric face recognition is the use of computerized facial comparison to identify unknown suspects in photographs (Smith, Miller). In a study by Smith and Miller (2021) (Miller is a research associate at the Oxford Uehiro Centre for Practical Ethics and an authority on criminal justice ethics), the authors analyzed a real-life misuse of facial recognition: the legal action against Clearview AI, an application holding three billion facial images scraped from social media and used to identify unknown individuals from videos or photographs. They concluded that the subjects never authorized the use or redistribution of their pictures and biometric data, a clear violation of privacy that compounds the ethical and legal concerns. Privacy here refers to what one consents to share, and what is socially acceptable to share, in specific circumstances (Fontes, Perrone). According to Catarina Fontes, a researcher at the Technical University of Munich specializing in artificial intelligence ethics, this “remote biometric identification of individuals… poses a high-risk of intrusion into individuals’ private lives” and erodes citizens’ trust in government and businesses. In response to these concerns, Axon, the nation’s largest retailer of police body cameras, “promised to halt development on face-matching software,” and many companies followed suit. Heydari’s Policing Project collaborated with Axon to create an “AI and Policing Technology Ethics Board” to advise companies on the limitations of their products and services (McClain). The Policing Project’s proposed solution is making real progress in reducing the dangers of biometric technology by urging companies to make law enforcement both tech-driven and just.
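The "computerized facial comparison" step typically works by reducing each face image to a numeric embedding and comparing embeddings by similarity. A minimal sketch of the matching step follows; the embedding vectors and names here are invented for illustration (real systems use neural networks that produce vectors of hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical gallery of enrolled face embeddings (toy 3-D vectors).
gallery = {
    "person_1": [0.9, 0.1, 0.3],
    "person_2": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.28]  # embedding of the unknown face

THRESHOLD = 0.95  # tuning this trades false matches against misses
scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
best = max(scores, key=scores.get)
if scores[best] >= THRESHOLD:
    print(f"match: {best} (score {scores[best]:.3f})")
else:
    print("no match")
```

The threshold choice is where the ethical stakes concentrate: against a gallery the size of Clearview AI's three billion images, even a tiny false-match rate produces many wrongly identified people.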

Ethics of Big Data and Smart Machines

Big data is another form of artificial intelligence that could maximize law enforcement’s capabilities, but it shares the limitations of the strategies above. Big data includes the use of smart machines that can “collect and analyze probabilistic information for past crimes to predict future crimes,” much like the predictive policing method (Magalhaes). A study by Brownsword and Harel (2019), professors at King’s College London and Bournemouth University, found that the rise of smart machines and big data produces multiple waves of disruption in the criminal justice system. Furthermore, high-powered computing aided by machine-learning algorithms threatens citizens’ constitutional rights to privacy and freedom from discrimination, as shown by the supposedly “color-blind” algorithms used for sentencing, such as COMPAS (Brownsword, Harel). Black people make up 54% of Newark’s population but 85% of pedestrian stops, indicating that they are 2.5 times more likely to be contacted and subjected to smart machine searches using AI (Magalhaes). Although unbiased AI and fewer disruptions to the justice system may be achievable, the public debate about technology in law enforcement tends toward unrealistic expectations on one side and overblown fears on the other. The studies and data documenting disruption to the criminal justice system and constitutional rights cast doubt on the benefits of smart machines and urge officials to devise a viable solution.
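Disparities like the Newark figures are often summarized as a ratio of per-capita stop rates between groups. The sketch below shows one common way to compute such a ratio from population share and stop share; note that the multiplier a study reports depends on which baseline it chooses, so this illustrative computation is not expected to reproduce the 2.5x figure Magalhaes cites.

```python
def disparity_ratio(stop_share, pop_share):
    """Per-capita stop rate of a group relative to everyone else.

    stop_share: fraction of all stops involving the group
    pop_share:  the group's fraction of the population
    """
    other_stop = 1 - stop_share
    other_pop = 1 - pop_share
    return (stop_share / pop_share) / (other_stop / other_pop)

# Newark figures cited above: 54% of residents, 85% of pedestrian stops.
ratio = disparity_ratio(0.85, 0.54)
print(f"Relative per-capita stop rate: {ratio:.1f}x")  # → 4.8x under this baseline
```

The gap between this 4.8x and the cited 2.5x illustrates a methodological point the debate often misses: the size of a reported disparity depends on the comparison group and denominator chosen.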

Conclusion

Despite the positive intentions behind incorporating technology into law enforcement, its rollout has been followed by serious negative consequences. To prevent violations of citizens’ privacy in the United States, ethics boards that connect the various stakeholders involved, such as citizens, governments, and businesses, need to be established to build a mutual understanding of these technologies’ effects. As a solution to the ethical concerns of policing, Heydari (McClain) proposes investing resources in further research to assess the privacy-related aspects of the products and services businesses provide, and then recommending better alternatives. With such a variety of stakeholders, it is unclear who will supply the money and resources this work requires. However, Heydari’s Policing Project operates out of NYU, and similar ethics boards could exist in the form of nonprofit organizations. It is now up to those who are aware of the crisis to find a solution for citizens, who deserve to live in privacy and safety.

Works Cited

Brownsword, Roger, and Alon Harel. “Law, Liberty and Technology: Criminal Justice in the Context of Smart Machines.” International Journal of Law in Context, vol. 15, no. 2, 2019, pp. 107–125.

Fontes, Catarina, et al. “Ethics of Surveillance: Harnessing the Use of Live Facial Recognition.” Technical University of Munich, Dec. 2021.

Magalhaes, Marcelo, et al. “A Perspective on Ethics & Law in AI and Big Data Analytics.” The Harvard Law Record, 3 Sept. 2020.

McClain, Jade. “Can Law Enforcement Be Both Tech-Driven and Just?” NYU, 1 Nov. 2019.

Munro, Daniel. “The Ethics of Police Using Technology to Predict Future Crimes.” Macleans.ca, 18 June 2017.

Rolland, Appoline. “Ethics, Artificial Intelligence and Predictive Policing.” The Security Distillery, 23 July 2021.

Smith, Marcus, and Seumas Miller. “The Ethical Application of Biometric Facial Recognition Technology.” AI & Society, vol. 37, no. 1, 2021, pp. 167–175.

Lum, Kristian, and William Isaac. “To Predict and Serve?” Significance, Royal Statistical Society, 7 Oct. 2016.

Westrope, Andrew. “In 2020, a Reckoning for Law Enforcement and Tech Ethics.” GovTech, 16 July 2021.
