In a move that has sparked controversy and concern within Taiwan’s legal community, the Judicial Yuan recently announced its intention to integrate artificial intelligence (AI) into the drafting of judicial rulings. While proponents argue that the technology could streamline court work and improve the efficiency of the legal system, legal professionals and justice advocates are calling for transparency, accountability, and safeguards to prevent erroneous and biased judgments.
Calls for public disclosure
Several prominent groups, including the Taiwan Bar Association and the Judicial Reform Foundation, have voiced apprehension about the Judicial Yuan’s decision. They emphasize the need for clear communication with the public regarding how AI will assist judges in crafting rulings and what measures will be implemented to prevent potential errors and biases.
Expanding AI’s role
Initially, the Judicial Yuan intends to employ generative AI in cases involving fraud, money laundering, and unsafe driving, following the completion of system testing. It then plans to expand AI’s involvement to civil lawsuits over compensation for traffic accidents, as well as to debt cancellation and liquidation cases. The Judicial Yuan asserts that its AI system is specifically designed to produce draft rulings rather than merely generate boilerplate content. It is touted as a “smart tool” capable of generating content based on relevant indictments and swiftly analyzing criminal facts and evidence.
Unanswered questions
Despite the Judicial Yuan’s enthusiasm for integrating AI, the legal community remains skeptical. Several pressing questions demand answers:
Data sources and training
The Judicial Yuan has not disclosed the time range from which legal documents were sourced to train the AI system. Legal professionals are concerned about the system’s ability to instantly analyze criminal facts and evidence, especially if it primarily relies on court judgments and indictments rather than case-specific files.
Third-party contractor and data management
The Judicial Yuan has not provided clarity on how a third-party contractor constructed the database used to train the AI system. Furthermore, it remains uncertain whether the data are stored and managed by the contractor or by the Judicial Yuan.
The public has a right to understand how the contractor will address issues arising from incomplete or biased information in the database when tweaking AI parameters. Additionally, there is a need for transparency regarding how liability disputes between the Judicial Yuan and the contractor will be resolved.
Judges’ authority and oversight
While the Judicial Yuan claims that judges will retain ultimate authority over determining facts, applying the law, and issuing sentences, there are concerns about whether edits made to AI-generated drafts will be accurate. Clarity is needed on whether judges will genuinely have the final say over the factors that lead to their judgments.
Demands for transparency
Legal professionals and advocates argue for greater transparency and access to the AI training database. Allowing attorneys, prosecutors, and other stakeholders to examine the database would enable collective scrutiny of potential bias and risks inherent in the system.
International precedent and concerns
While AI has been employed in other countries to assist in compiling evidence for legal cases, using it to help judges draft rulings and make decisions remains relatively uncharted territory internationally. Concerns have been heightened by a Purdue University study indicating that a significant share of people fail to detect errors in AI-generated content, identifying them correctly less than half the time. Such AI-generated errors are known as “artificial hallucinations.”
Risks of AI in justice
There is growing awareness of the risks associated with extensive AI use in the justice system, particularly in the United States, where AI tools used to assist judges in sentencing based on defendants’ risk of reoffending may inadvertently reinforce existing prejudices and racial inequalities. AI, however powerful, can overlook crucial cultural factors, the circumstances of the parties involved, and intent, all of which are essential considerations for judges and cannot be replicated by a machine.
The integration of AI into the judicial system holds the promise of increased efficiency, but it also raises complex questions about transparency, bias, accountability, and the role of judges. The legal community in Taiwan is calling for a cautious approach that prioritizes public disclosure, safeguards against bias, and preserves the essential role of human judgment in the legal process. As the use of AI in the legal field continues to evolve, addressing these concerns will be essential in ensuring fair and just outcomes for all.