Clifford Chance

International Arbitration Insights

How does guidance from arbitral institutions help navigate the challenges of using AI in arbitration?

The recently published Guide to the use of AI in cases administered under the SCC Rules addresses some key issues arising from the use of AI in arbitration. It adds to a growing body of material that will help parties and arbitrators identify the issues they need to consider and possibly make provision for in their arbitral procedures.

Guidance for use of AI in dispute resolution

The SCC Arbitration Institute published the SCC Guide in October 2024. It is a short, high-level document that highlights issues the parties and tribunal should consider when using AI during an arbitration. The SCC Guide follows similar guidance produced in the last year by the Silicon Valley Arbitration and Mediation Center (the SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration, the "SVAMC Guide") and the Judiciary of England and Wales (the AI Guidance for Judicial Office Holders, the "Judicial Guide").

The growing volume of guidance is unsurprising given that the use of AI by arbitrators (or judges) gives rise to particularly acute issues for the rule of law. The EU AI Act identifies AI systems intended to be used in the administration of justice as "High-Risk AI Systems", and its Recitals state that AI tools should be used to "support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity". These concerns are reflected in the guidance.

Who is the guidance aimed at?

The guides have slightly different audiences. The SCC Guide is addressed primarily to arbitrators, save on the issue of confidentiality, which is addressed to all participants. The Judicial Guide, as expected, is targeted at those discharging judicial office. The SVAMC Guide sets out principles applicable to parties, party representatives and arbitrators.

The SVAMC Guide also envisages its incorporation into a procedural order, and includes a model clause for doing so.

In terms of style, the SCC Guide is a very high-level document that assumes a degree of background knowledge regarding the functionality and common pitfalls of AI tools. The SVAMC and Judicial Guides provide more detail on the functionality, limitations and risks of AI tools, and also include practical guidance. For example, the Judicial Guide sets out (at page 6) various indicators to assist judicial office holders in recognising when AI tools may have been used to create content, while the SVAMC Guide includes detailed commentary and suggestions for how its principles may be implemented in practice.

Each of the guides uses a different definition of AI and acknowledges, to varying degrees, broader potential uses of different types of AI. However, it is clear that all three are primarily concerned with the use of generative AI. As such, the three guides are broadly consistent in identifying the main risks of using AI in dispute resolution, namely: (i) security and confidentiality; (ii) the perpetuation of biases and the failure to identify inaccuracies in AI outputs (quality and integrity); and (iii) the substitution of human reasoning and decision-making by AI tools.

Respecting confidentiality when using AI

The SCC Guide encourages arbitrators to inform themselves as to how data inputs are used and retained when working with AI tools, acknowledging that the use of some tools may have "unintended consequences" for confidentiality. The risk that AI tools retain (and reuse) information inputs, or obtain rights over them, is recognised in the other guides as well (SVAMC Guide, Guideline 2; Judicial Guide, paragraph 2). The Judicial Guide gives very practical tips (e.g. deleting prompt history) that are clearly aimed at protecting public figures and the integrity of the judiciary. None of the guides goes so far as to say that public AI tools should not be used, but for many use cases that will likely be a key threshold question.

Maintaining quality and protecting integrity of proceedings

There is a risk that AI will perpetuate biases present in its training data and generate unreliable information or evidence. To address this, the SCC Guide states that arbitrators should apply appropriate levels of review and verification to AI output before using it, and should ensure effective human oversight. All three guides encourage users to educate themselves on the capabilities and limitations of AI tools, and stress that users remain accountable for any AI outputs. This is all common sense and best practice.

The SCC Guide also notes that transparency and accountability are key components of integrity and encourages arbitrators to disclose any use of AI, both to protect the parties' right to be heard and to ensure that arbitrators do not exceed their mandate. The SVAMC Guide encourages (but does not oblige) all participants to disclose their use of AI (Guideline 3). It also gives arbitrators guidance on how to deal with the consequences of errors in submissions that arise from the (mis)use of AI, including drawing inferences.

The Judicial Guide reminds judges that they retain personal responsibility for any materials produced in their name (Judicial Guide, paragraph 6). However, it imposes no disclosure requirement on judges, noting that they are not typically required to describe the research or preparatory work undertaken in producing a judgment. It also notes that AI outputs display a general bias towards US law because of the datasets on which many tools were trained.

No delegation of decision-making authority or mandate

The arbitration-focused guidance is clear that delegation of decision-making is a red line that arbitrators should not cross. The SCC Guide makes clear that arbitrators should not delegate their decision-making or reasoning to anyone or anything. The SVAMC Guide explicitly prohibits an arbitrator from delegating "any part of their personal mandate to any AI tool" and states further that AI tools "must not replace the human judgement, discretion, responsibility, and accountability inherent in an arbitrator’s role" (Guideline 6). This issue is not addressed directly in the Judicial Guide, as it goes without saying, but the Judicial Guide does list legal analysis and research among the tasks for which the use of generative AI is not recommended.

Conclusion

It is early days for the use of AI in dispute resolution. In arbitration, it will largely fall to users, in conjunction with arbitral institutions and other industry groups, to develop a set of best practices that strikes the right balance between efficiency and accountability. The three guides discussed above are a welcome starting point: they help identify the practical issues that will arise in arbitrations, the principles that parties may wish to agree should apply to their arbitrations, and the disclosures that parties or arbitrators should consider making.
