Researchers at the University of Chicago have developed an algorithm that predicts crime up to a week in advance with roughly 90 percent accuracy. The model identifies patterns in the time and location of publicly reported violent and property crimes in several cities in the United States.
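The sketch below illustrates the general spatiotemporal idea with a toy baseline: incidents are binned into grid cells and days, and each cell is scored for the coming week by its trailing event rate. This is a simplification for illustration only, not the University of Chicago team's actual algorithm; the city bounds, grid size, and data here are all assumed and synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic incident log: (latitude, longitude, day index).
# Real systems would use public crime records instead.
incidents = pd.DataFrame({
    "lat": rng.uniform(41.64, 42.02, 5000),   # rough Chicago bounds (assumed)
    "lon": rng.uniform(-87.94, -87.52, 5000),
    "day": rng.integers(0, 365, 5000),
})

# Discretize space into a coarse 20x20 grid of cells.
incidents["cell"] = (
    pd.cut(incidents["lat"], bins=20, labels=False) * 20
    + pd.cut(incidents["lon"], bins=20, labels=False)
)

# Daily event counts per cell.
counts = incidents.groupby(["cell", "day"]).size().rename("n").reset_index()

def weekly_risk(cell_history: pd.DataFrame, today: int, window: int = 28) -> float:
    """Score next-week risk as the cell's trailing event rate (events/day)."""
    recent = cell_history[(cell_history["day"] >= today - window)
                          & (cell_history["day"] < today)]
    return recent["n"].sum() / window

today = 300
risk = {c: weekly_risk(g, today) for c, g in counts.groupby("cell")}
top = sorted(risk, key=risk.get, reverse=True)[:5]
print("Highest-risk cells for the coming week:", top)
```

A trailing-rate baseline like this is deliberately crude; published models layer far richer time-series structure on top of the same grid-and-count representation.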
The use of predictive models raises many ethical questions. Critics have argued that the models reflect existing biases in policing: models built on biased data risk replicating systems of oppression and intensifying over-policing in already marginalized communities. Argentina's plan to use AI to predict crime, for example, has prompted debate over privacy and civil liberties, with experts warning that mass surveillance and the profiling of human beings of any kind set an unjust and dangerous precedent.
AI has the potential to be a powerful tool for protecting people and keeping them safe. However, the application of predictive crime tools raises serious ethical considerations, along with broader questions about their social impact. These tools should be developed responsibly: built on sound data, with accountability, transparency, and fairness maintained throughout, so that human and democratic rights are not jeopardized and societal good is not undermined.
While the potential of predictive policing with AI is promising, its implementation can lead to serious ethical and legal dilemmas. Predictive policing models are typically trained on historical crime data, which can carry systemic biases. PredPol, for example, has repeatedly faced accusations of systemic discrimination against Black and minority communities and is alleged to contribute to the over-policing of these groups.
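A simplified simulation can make the feedback-loop critique concrete. In the sketch below (every parameter is an assumption, and this is not PredPol's algorithm), two districts have identical true crime rates, but one starts with a slightly larger recorded history. Allocating patrols in proportion to recorded counts then locks in the initial disparity, because more patrol presence records more incidents.

```python
import numpy as np

# Toy simulation of the feedback loop critics describe: patrols allocated
# by past recorded incidents generate more recorded incidents where they
# are sent, re-confirming the allocation even though the true underlying
# crime rates are identical across districts.
rng = np.random.default_rng(1)

true_rate = np.array([10.0, 10.0])   # identical real crime in districts A, B
recorded = np.array([12.0, 8.0])     # district A starts slightly over-policed

for step in range(10):
    # Allocate patrol share proportionally to recorded history.
    patrol_share = recorded / recorded.sum()
    # Recorded incidents scale with both true crime and patrol presence.
    observed = rng.poisson(true_rate * patrol_share * 2)
    recorded = recorded + observed
    print(f"step {step}: patrol share = {np.round(patrol_share, 3)}")
```

The initial 60/40 split never corrects toward 50/50, even though the districts are identical: the data the model sees is a product of where it already sent officers.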
The application of AI in policing also carries considerable potential to violate individuals' privacy rights. Active surveillance technologies deployed without accountability or safeguards have already raised concerns about how to balance civil liberties against security.
There are also ethical concerns about AI contributing to racial discrimination. A recent study found that prediction tools that leverage AI can perpetuate society's historical biases against specific communities and individuals.
To address these issues, researchers propose transparency in how agencies create, use, and test predictive policing models and AI. Independent audits should also be performed on each model to locate and eliminate biases before it is deployed in policing, and ethical guidelines for predictive policing and AI should be established. The goal is for AI to advance justice, not to deepen inequality.
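One concrete form such an audit could take is a disparate impact check on a model's outputs. The sketch below is a minimal illustration on synthetic data; the group labels, flag rates, and the four-fifths (0.8) threshold are assumptions, and a real audit would examine far more than this single ratio.

```python
import numpy as np
import pandas as pd

# Minimal audit sketch: compare the rate at which a model flags subjects
# (or areas) across demographic groups, and apply a four-fifths-style
# screen. Synthetic data; a deployed audit would use the model's real
# outputs and many additional fairness metrics.
rng = np.random.default_rng(2)

audit = pd.DataFrame({
    "group": ["A"] * 500 + ["B"] * 500,
    "flagged": np.r_[rng.random(500) < 0.30,   # group A flagged ~30% of the time
                     rng.random(500) < 0.15],  # group B flagged ~15% of the time
})

rates = audit.groupby("group")["flagged"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}"
      + (" -- below 0.8, investigate for bias" if ratio < 0.8 else ""))
```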