China continues to pour massive resources into developing artificial intelligence that will have a greater reach into everyday life and functions of the state. Now, even Chinese courts are using AI to assist with making legal decisions.
A court in the city of Hangzhou, southwest of Shanghai, began employing AI in 2019. The judge's assistant program, called Xiao Zhi 3.0, or "Little Wisdom," first assisted in a trial of 10 people who had failed to repay bank loans.
Previously, it would have taken 10 separate trials to settle the issue, but with Xiao Zhi 3.0, all the cases were resolved in one hearing with one judge and a decision was available in just 30 minutes.
At first, Xiao Zhi 3.0 took over repetitive tasks such as announcing court procedures during hearings.
Now, the technology is used to record testimony with voice recognition, analyze case materials, and verify information from databases in real time.
Xiao Zhi 3.0 is mainly used in cases involving simple financial disputes. However, similar technology has been applied by a court in Suzhou to settle disputes over traffic accidents. The AI examined the evidence and wrote the verdicts, saving judges time.
Xiao Baogong Intelligent Sentencing Prediction System, another legal AI platform, is also used by judges and prosecutors in criminal law.
The system is able to suggest penalties based on big data analysis of case information and prior judgments from similar cases.
“I can see the temptation for Chinese courts to adopt AI even in criminal cases. One of the challenges for Chinese criminal justice is to ensure uniformity. They want to make sure that across different regions of China, the penalties are consistent with one another,” Shitong Qiao, professor of law at Duke Law School in the US, told DW.
However, Zhiyu Li, an assistant professor in law and policy at Durham University, said using AI to assist with more complicated legal decisions raises ethical issues, particularly if a decision based on AI calculations comes to be seen as more credible than one made by a human.
“While judges and prosecutors have the freedom to ignore or reject these suggestions for criminal punishments, we don’t know if it may nevertheless sway their decision-making unconsciously due to cognitive biases,” Li told DW.
Putting the law in the hands of tech companies?
Around the world, AI-based solutions are mostly used to optimize legal databases and make them more accessible for both professionals and the public.
In Canada, the negotiation app Smartsettle ONE managed to resolve a three-month-long dispute over unpaid fees in less than an hour.
The parties had to move flags on a screen to indicate the possible space for compromise. Then the application used bidding tactics to nudge the stakeholders into settlement without revealing their secret bids.
However, only a few countries are currently ready to go further with using AI in legal matters.
France prohibited any development of AI-based predictive litigation in 2019. One of the reasons was to avoid the commercialization of judicial decision-making data, as courts do not have the capacity to develop AI by themselves.
The process would be outsourced to private technology companies. For example, Alibaba, a Chinese e-commerce corporation and one of the biggest tech companies in the world, participated in the development of AI for online transaction disputes.
“Motivations of these companies must be different from public institutions. The process needs to be made accountable. Making sure that the data itself is not biased and the algorithms are fair is a fundamental challenge not only for China but for the whole world,” said Qiao.
The limits of automated law
In China, people can use smartphones to file a complaint, track the progress of a case and communicate with judges.
AI-based automated machines found in so-called “one-stop” stations provide legal consultations, register cases, and generate legal documents 24 hours a day. They can even calculate legal costs.
However, there is debate over the reliability of information provided by these automated lawyers. The machines are said to consider material, emotional and time costs involved in cases and provide users with calculated information on predicting the outcome. The limitation is that automation can miss nuances and lead people to make the wrong decision.
“Based on our interview results, some disputants were quite skeptical about the reliability and usefulness of the machine-generated predictions because the predictions were based mainly on answers to multiple-choice questions instead of face-to-face, interactive communications,” Li said.
Another issue is that the AI systems make assessments based on an incomplete public record, due to the uneven digitization of China’s regions.
Some controversial cases have been removed from the government database China Judgments Online after public outrage over what was seen as inadequate punishments handed down to the alleged perpetrators. This has raised concerns about whether AI based on fragmented data can make unbiased decisions.
“I think it’s both a special Chinese and a universal problem to make the best use of AI and at the same time to ensure accountability. Even judges do not understand the mechanism of AI decision-making because it is a black box. With AI, it will be so much more difficult for individual citizens to hold judges and government officials accountable,” Qiao said.
Edited by: Wesley Rahn