Algorithmic fairness in predictive policing

DOI: 10.1007/s43681-024-00541-3
Publication Date: 2024-09-02
ABSTRACT
The increasing use of algorithms in predictive policing has raised concerns regarding the potential amplification of societal biases. This study adopts a two-phase approach, encompassing a systematic review and the mitigation of age-related biases in predictive policing. Our review identifies a variety of fairness strategies in the existing literature, such as domain knowledge, likelihood function penalties, counterfactual reasoning, and demographic segmentation, with a primary focus on racial bias. However, the review also highlights significant gaps in addressing biases related to other protected attributes, including age, gender, and socio-economic status. Additionally, it is observed that police actions are a major contributor to model discrimination. To address these gaps, our empirical work focuses on mitigating age-related bias within the Chicago Police Department's Strategic Subject List (SSL) dataset, which is used to predict the risk of being involved in a shooting incident, either as a victim or an offender. We introduce Conditional Score Recalibration (CSR), a novel bias mitigation technique, alongside the established Class Balancing method. CSR involves reassessing and adjusting the risk scores of individuals initially assigned moderately high-risk scores, categorizing them as low risk if they meet three criteria: no prior arrests for violent offenses, no previous narcotic arrests, and no involvement in shooting incidents. Our fairness assessment, utilizing metrics such as Equality of Opportunity Difference, Average Odds Difference, and Demographic Parity, demonstrates that our approach significantly improves fairness without sacrificing model accuracy.
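The CSR technique described above amounts to a rule-based post-processing step on model scores. The following Python sketch illustrates one plausible reading of it; the column names (risk_score, violent_arrests, narcotic_arrests, shooting_involvement), the score band treated as "moderately high", and the value used for "low risk" are all hypothetical stand-ins, since the abstract does not specify the paper's exact thresholds.

import pandas as pd

# Hypothetical band for "moderately high" risk scores and a hypothetical
# "low risk" value; the paper's actual cutoffs are not given in the abstract.
MODERATE_BAND = (250, 400)
LOW_RISK_SCORE = 100

def conditional_score_recalibration(df: pd.DataFrame) -> pd.DataFrame:
    """Reassign individuals in the moderately-high score band to low risk
    if their record meets all three CSR criteria (column names assumed):
    no violent-offense arrests, no narcotic arrests, and no prior
    shooting-incident involvement."""
    out = df.copy()
    in_band = out["risk_score"].between(*MODERATE_BAND)
    clean_record = (
        (out["violent_arrests"] == 0)
        & (out["narcotic_arrests"] == 0)
        & (out["shooting_involvement"] == 0)
    )
    out.loc[in_band & clean_record, "risk_score"] = LOW_RISK_SCORE
    return out

The fairness metrics named in the abstract have standard definitions, so a minimal sketch may help fix ideas: given binary predictions, binary outcomes, and a binary protected-group indicator, Demographic Parity compares positive-prediction rates, Equality of Opportunity compares true-positive rates, and Average Odds averages the true-positive and false-positive rate gaps. This is a generic illustration, not the paper's evaluation code.

import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Return (Equal Opportunity Difference, Average Odds Difference,
    Demographic Parity difference) between group 1 and group 0.
    Assumes both groups contain positive and negative outcomes."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        m = group == g
        tpr = y_pred[m & (y_true == 1)].mean()  # true-positive rate
        fpr = y_pred[m & (y_true == 0)].mean()  # false-positive rate
        ppr = y_pred[m].mean()                  # positive-prediction rate
        rates[g] = (tpr, fpr, ppr)
    eod = rates[1][0] - rates[0][0]
    aod = 0.5 * ((rates[1][0] - rates[0][0]) + (rates[1][1] - rates[0][1]))
    dp = rates[1][2] - rates[0][2]
    return eod, aod, dp

Values near zero on all three metrics indicate parity across groups, which is the sense in which the abstract reports improved fairness after applying CSR and Class Balancing.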