Outside reviews can limit bias in forensic algorithms, GAO says
Independent evaluations that establish scientific validity can curtail the impact of bias and human error in forensic investigations, the watchdog agency says.
Forensic algorithms play a crucial role in modern criminal investigations, helping law enforcement determine whether an evidentiary sample can be matched to a specific person.
While technology can curtail subjective decisions and reduce the time it takes analysts to reach conclusions, it comes with its own set of challenges. In a follow-up to a May 2020 report on how forensic algorithms work, the Government Accountability Office outlined the key challenges affecting the use of these algorithms and the associated social and ethical implications.
Law enforcement agencies primarily use three kinds of forensic algorithms in criminal investigations: latent print comparison, facial recognition and probabilistic genotyping, GAO said.
All three compare evidence from a crime scene against a database. However, several factors, including the quality of the evidence, the size of the database and the age, sex and racial demographics it represents, can reduce the accuracy of the results. Analysts themselves are subject to human error, and biases differ from person to person.
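The shared matching pattern the report describes can be sketched in a few lines: an evidence sample is scored against every entry in a database, and candidates above a threshold are returned for human review. This is a hypothetical illustration only; the feature vectors, similarity measure and threshold are invented and do not reflect any real forensic system.

```python
# Hypothetical sketch of the matching loop common to these systems.
# All names, vectors and thresholds are illustrative, not drawn from
# any real forensic tool.
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def candidate_matches(evidence, database, threshold=0.9):
    """Return (record_id, score) pairs above the match threshold,
    highest score first, for an analyst to review."""
    hits = [(rid, similarity(evidence, features))
            for rid, features in database.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)

# Invented reference database and evidence sample.
db = {"person_a": [0.9, 0.1, 0.3], "person_b": [0.2, 0.8, 0.5]}
print(candidate_matches([0.88, 0.12, 0.31], db))
```

A degraded sample (a partial print, an off-angle photo) shifts its feature vector away from the true entry, which is why the quality factors GAO lists can push a genuine match below the threshold or a coincidental one above it.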
The accuracy of a latent print comparison often depends on how much of the fingerprint the sample captures and whether the print is smeared or distorted, making precise conclusions difficult when the quality of the evidence is compromised. Similarly, facial recognition algorithms can lose accuracy when individuals wear glasses or makeup, or when the image was captured from an extreme angle.
According to GAO, law enforcement also runs into problems assessing the validity of probabilistic genotyping, or the technology used in DNA profiling. Most studies evaluating probabilistic genotyping software have been conducted by law enforcement or software developers themselves, the report stated. A report from the President's Council of Advisors on Science and Technology noted that independent evaluation is often required to establish scientific validity, but there have been few such studies.
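Probabilistic genotyping software typically reports its conclusion as a likelihood ratio: how much more probable the DNA evidence is if the suspect contributed to the sample than if an unrelated person did. The toy sketch below shows how per-locus ratios combine under an independence assumption; real tools model contributor mixtures, allele dropout and drop-in with far more sophisticated statistics, and every number here is invented for illustration.

```python
# Toy illustration of the likelihood-ratio framework used in
# probabilistic genotyping. Each pair is (P(evidence | suspect
# contributed), P(evidence | random person contributed)) at one
# genetic locus; all probabilities are invented.

def likelihood_ratio(per_locus):
    """Combine per-locus likelihood ratios by multiplication,
    assuming independence between loci (a simplification)."""
    lr = 1.0
    for p_hp, p_hd in per_locus:
        lr *= p_hp / p_hd
    return lr

loci = [(0.9, 0.05), (0.8, 0.10), (0.95, 0.02)]
lr = likelihood_ratio(loci)
print(f"Likelihood ratio: {lr:.0f}")
```

The validation problem GAO flags lives inside the per-locus probabilities: they come from statistical models whose accuracy on degraded, mixed samples is exactly what independent studies would need to establish.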
GAO offers policymakers three options for improving the reliability of forensic algorithms. The first involves increased training for law enforcement analysts and investigators to deepen their understanding of the algorithms and how to interpret their results. To reduce the risk of misuse and improve consistency, GAO says policymakers could also support the development and implementation of standards and policies governing law enforcement’s testing, procurement and use of such algorithms.
Finally, GAO suggests that increased transparency related to testing and performance could improve the public’s knowledge of the technologies and help address corresponding challenges.
“By automating assessment of evidence collected in criminal investigations, forensic algorithms can expand the capabilities of law enforcement and improve objectivity in investigations,” GAO said. “However, use of these algorithms also poses challenges if the status quo continues.”