Recent AI applications show how technology is increasingly being enlisted to tackle stubborn social problems, from detecting fake news to diagnosing internalizing disorders in children. San Francisco now has a fresh approach of its own: the city has announced plans to use an AI-powered “bias mitigation tool” that automatically redacts race-signifying information from police reports as a means of fighting racial bias in the courts.
The tool is designed to keep prosecutors from being influenced by racial bias when they decide whether a suspect should be charged with a crime. According to the San Francisco district attorney’s office, the tool will go beyond stripping out explicit descriptions of race and will be implemented on July 1st.
It will also remove details that might consciously or unconsciously tip prosecutors off to a suspect’s racial background, such as people’s names, neighborhoods, and specific locations.
San Francisco District Attorney George Gascón explained that seeing a name like Hernandez can bias the outcome, because the name tells prosecutors the suspect is likely of Latino descent. A DA told The Verge that the tool will also redact details about police officers, including their badge numbers, in case a prosecutor happens to know them.
San Francisco has been fighting racism in court manually
Until now, San Francisco has relied on a much more limited manual process to keep prosecutors from seeing information that could bias court outcomes: the city usually just removes the first two pages of a police report. That manual process inspired the city to build the machine-learning tool, making San Francisco the “first-in-the-nation” jurisdiction to use AI to fight biased rulings and racism in its courts.
The tool is being built by Alex Chohlas-Wood and a team at the Stanford Computational Policy Lab. Chohlas-Wood described it as a lightweight web application that runs several algorithms to automatically redact a police report, using computer vision to recognize words in the report and replace them with generic versions such as “location” and “suspect #1.”
Chohlas-Wood says the bias mitigation tool was built at no cost to San Francisco and is in its final stages. It will be open-sourced in a few weeks so that others can adopt it, and it will rely on a method called “named-entity recognition,” alongside other components, to identify which words to remove.
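The actual code has not yet been released, but the named-entity recognition approach Chohlas-Wood describes can be illustrated with a short sketch. The example below uses the open-source spaCy library, and the entity labels, placeholders, and helper function are assumptions for illustration only, not the Stanford team’s implementation.

```python
# Minimal sketch of NER-based redaction of a police report.
# Illustrative only: NOT the Stanford Computational Policy Lab's tool;
# the label-to-placeholder mapping below is an assumption.
import spacy

# Small English model with a built-in named-entity recognizer
# (requires: python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

# Map entity types that could signal race or identity to generic placeholders.
PLACEHOLDERS = {
    "PERSON": "PERSON",
    "GPE": "LOCATION",   # cities, neighborhoods, countries
    "LOC": "LOCATION",
    "FAC": "LOCATION",   # buildings, streets
    "NORP": "GROUP",     # nationalities, religious or political groups
}


def redact(text: str) -> str:
    """Replace identifying entities with numbered generic labels, e.g. 'PERSON 1'."""
    doc = nlp(text)
    counters, assigned = {}, {}

    # First pass (in document order): give each distinct entity a numbered placeholder.
    for ent in doc.ents:
        label = PLACEHOLDERS.get(ent.label_)
        if label is None:
            continue
        key = (label, ent.text.lower())
        if key not in assigned:
            counters[label] = counters.get(label, 0) + 1
            assigned[key] = f"{label} {counters[label]}"

    # Second pass (in reverse): splice placeholders in so character offsets stay valid.
    redacted = text
    for ent in reversed(doc.ents):
        label = PLACEHOLDERS.get(ent.label_)
        if label is None:
            continue
        replacement = assigned[(label, ent.text.lower())]
        redacted = redacted[:ent.start_char] + replacement + redacted[ent.end_char:]
    return redacted


if __name__ == "__main__":
    report = "Officer Smith stopped Mr. Hernandez near the Mission District."
    print(redact(report))
    # e.g. "Officer PERSON 1 stopped Mr. PERSON 2 near LOCATION 1."
```

A production system would also need to handle scanned documents (hence the computer-vision step described above), misspellings, and entities the model misses, which is presumably where the tool’s “other components” come in.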
It is unclear how well the tool will work; no one has yet seen such a system run on real police reports. When a journalist asked whether the tool would redact other descriptions, such as cross-dressing, Gascón would only say that it will evolve over time. The tool will also be applied only to the first charging decision in a given arrest, and if that initial decision rests on video evidence, the suspect’s race will of course be plain to see.
The technology will help make our system of justice “more fair and just” by reducing the threat that implicit “bias poses to the purity of decisions which have serious ramifications for the accused,” Gascón said.
It will be interesting to see whether the new tool actually helps fight racism in the courts; another controversial AI-driven technique, known as “predictive policing,” has been shown to introduce biases rather than remove them.