Using AI in the Courtroom Is Problematic | Opinion

A recent criminal case in Washington captured national attention for the judge's "first of its kind" ruling excluding AI (artificial intelligence)-enhanced video evidence. In the written order, the judge stated that although the software used for the enhancement was popular in the commercial market, it had neither been peer reviewed nor independently tested for reliability, and the algorithms underlying it were "opaque and proprietary."

News like this leaves the impression that the judiciary is carefully monitoring the use of AI-generated evidence in criminal cases, but in reality, nothing could be further from the truth. When an analyst cannot interpret a crime scene DNA sample, it's now routine for prosecutors to introduce results generated by AI probabilistic genotyping software. And courts accept such evidence, often even dismissing out of hand arguments by defense lawyers that such software and its underlying algorithms remain—to paraphrase the Washington judge—opaque, proprietary, and untested by independent experts. The same can be said for location data evidence generated by tools powered by AI, as well as information gleaned from AI facial recognition tools, even though we know such data can be faulty and at least six people have been wrongfully arrested or jailed as a result.

Even more alarmingly, judges and defendants do not always have an opportunity to test such evidence because law enforcement frequently hides its use of AI software from courts and the public. Investigators have made arrests after combing through recreational DNA databases or using DNA to generate supposed facial images, without disclosing those searches as the basis for the identification. Alerts from AI gunshot detection software have justified the deployment of police to scenes, and have even been upheld by courts as providing a legal basis to briefly detain and question people. Never mind that one study found that 90 percent of alerts turned up no corroboration, that the alerts led to three men being falsely imprisoned, and that they propelled thousands of unnecessary stop-and-frisks in predominantly Black and brown neighborhoods.

What distinguishes the Washington case is not the use of AI—it is that the AI-based evidence was excluded after careful scrutiny by the court. That scrutiny might be explainable by another recurrent pattern in criminal cases: namely that forensic techniques offered by the defense are carefully picked apart, while forensic methods offered by the government skate by unexamined. Perhaps the most famous forensic exclusion of all time was polygraph evidence offered by a defendant in a case called Frye, which then set the standard for judicial review of all scientific evidence. Tellingly, however, Frye and its federal counterpart have routinely failed to preclude dubious methods, sloppy work, and overclaiming testimony—so long as they are offered by the government. That is why scandals routinely pop up in the field of forensic science, such as the recent announcement that a DNA analyst in Colorado had manipulated data in at least 652 cases over a 15-year period, or the publication in 2009 of a report by the National Academy of Sciences that concluded that familiar methods like field testing of drugs, firearm or bullet identification, and hair or bite-mark analysis all lacked a firm scientific basis.

Given the checkered history of forensic evidence, it is in fact refreshing to see a court give it more careful attention. But even that is not enough.

Our current systems of justice—and in particular our procedural and evidentiary rules—were created for the 19th century, not the 21st. They assume that evidence is a self-evident, discrete piece of information that can be physically handed over and fully assessed and reviewed by a lawyer, judge, or jury on its own terms. The scrutiny that sophisticated AI technologies require in order to safeguard their integrity demands much more: mandatory disclosure of the use of AI in any aspect of the case; access to the algorithms and inputs, not just outputs; the assistance of experts; disclosure of quality assurance and systems integrity measures; special scrutiny of methods created for, sold to, and deployed primarily for law enforcement purposes; and so on. As with all aspects of our lives, AI is coming whether we like it or not. It is time for our rules of justice to catch up.

Erin Murphy is the Norman Dorsen Professor of Civil Liberties at New York University School of Law.

The views expressed in this article are the writer's own.
