
Using machine learning to reduce domestic violence

Using machine learning to forecast which accused perpetrators of domestic violence (particularly those whose crimes result in injuries) will be re-arrested on similar charges can cut such recidivism in half, according to a recent report.

Machine learning used during the arraignment process prevented “well over” 1,000 domestic violence incidents annually in at least one large metropolitan area, according to authors Richard Berk, a professor of criminology and statistics in the School of Arts & Sciences and the Wharton School, and Susan B. Sorenson, director of the Evelyn Jacobs Ortner Center on Family Violence.

For their study, “Forecasting Domestic Violence: A Machine Learning Approach to Help Inform Arraignment Decisions,” Berk and Sorenson analyzed 28,646 domestic violence arraignments that led to official charges and the corresponding releases.  

“Under current practice, about 20 percent of the individuals released after arraignment are arrested for domestic violence within two years. If magistrates only released offenders our forecasts identified as good bets… [f]ailures could be cut in half.” In the jurisdiction studied, that would translate to “well over 1,000 fewer domestic violence arrests per year,” Berk and Sorenson noted in their report.

The computer analyzed more than 35 characteristics, including age and gender; prior domestic violence, murder, DUI or weapons charges; previous jail or prison sentences; and age at first adult charge. These data points help the computer learn which characteristics are associated with projected risk, offering extra information to a court official deciding whether to release an offender.
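The report does not publish its model, so the workflow it describes can only be sketched. The minimal example below trains a logistic-regression classifier (a stand-in; the authors' actual method may differ) on entirely synthetic arraignment records. All feature names, data, and weights are illustrative assumptions, not figures from the study.

```python
import math
import random

# Hypothetical features loosely echoing the characteristics the study lists
# (age, prior domestic violence and weapons charges). Data is synthetic.
def make_synthetic_case(rng):
    age = rng.randint(18, 60)
    prior_dv = rng.randint(0, 5)        # prior domestic violence charges
    prior_weapons = rng.randint(0, 2)   # prior weapons charges
    # Assumed ground truth: more priors -> higher re-arrest probability.
    p = min(0.9, 0.1 + 0.15 * prior_dv + 0.1 * prior_weapons)
    label = 1 if rng.random() < p else 0
    # Scale features roughly to [0, 1] for stable training.
    return [age / 60.0, prior_dv / 5.0, prior_weapons / 2.0], label

def train_logistic(X, y, epochs=200, lr=0.5):
    """Plain stochastic-gradient-descent logistic regression (stdlib only)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            pred = 1.0 / (1.0 + math.exp(-z))
            err = pred - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def risk_score(w, b, x):
    """Predicted probability of re-arrest for one case."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(0)
cases = [make_synthetic_case(rng) for _ in range(2000)]
X = [c[0] for c in cases]
y = [c[1] for c in cases]
w, b = train_logistic(X, y)

# A case with no priors should score lower than one with many priors.
low_risk = risk_score(w, b, [0.6, 0.0, 0.0])
high_risk = risk_score(w, b, [0.4, 1.0, 1.0])
```

In a real deployment the score would be one input among many at arraignment, as the authors stress, rather than an automatic release decision.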

Unlike other risk-assessment tools that emphasize the victims’ needs, this report focused on trying to identify perpetrators who will reoffend, Sorenson told the Penn Current.

While using machine learning to predict future offenders holds promise, opponents say using this type of risk assessment technology can generate too many false positives, perpetuate stereotypes and potentially create “untoward consequences” for offenders later found innocent, Penn Current reported.

Sorenson and Berk noted the technology is simply a tool to help judicial officials make better choices. “It doesn’t make the decisions for people by any stretch,” Sorenson told Penn News. These choices “might be informed by the wisdom that accrues over years of experience, but it’s also wisdom that has accrued only in that courtroom. Machine learning goes beyond one courtroom to a wider community.”

"In all kinds of settings, having the computer figure this out is better than having us figure it out," Berk said. The algorithms are not perfect...but, as we say, you can't let the perfect be the enemy of the good."

Predictive analytics in law enforcement is an idea that's gaining steam, with Florida using such techniques to reduce juvenile recidivism and child abuse.

About the Author

Kathleen Hickey is a freelance writer for GCN.
