CIArb Issues Its First Guidelines for the Use of AI in Arbitration

By Patrick Lapierre, Gina Murray and Emma Tucker (Summer Student)
June 4, 2025

The Chartered Institute of Arbitrators (CIArb) recently published its Guideline on the Use of AI in Arbitration (2025) (Guidelines) to provide guidance on how to take advantage of the growing benefits of artificial intelligence (AI) while mitigating some of the associated risks.

The Guidelines, broken down into four parts, address issues that participants in arbitration proceedings should consider when resorting to AI tools and resources. The Guidelines also include a model agreement on the use of AI in arbitration and two model procedural orders on the use of AI.

Part I: Benefits and Risks of the Use of AI in Arbitration

The Guidelines highlight the rapidly growing potential of AI tools to improve the efficiency, quality and accessibility of arbitration proceedings. AI can support a wide range of tasks, including legal research, document analysis, evidence collection, transcription, translation and even case outcome prediction. These capabilities can reduce cost and delay, enhance consistency and assist under-resourced parties in presenting their case.

However, the Guidelines also underscore serious legal, procedural and ethical risks. Chief among these are threats to confidentiality, data security and the enforceability of arbitral awards. The use of AI by arbitrators raises further concerns over bias, transparency and accountability. The Guidelines emphasize the importance of human oversight and responsibility, particularly in relation to the integrity of decisions and compliance with due process, highlighting the continued risk of inaccurate outputs relating to academic sources, legislation, case law and even evidence.

Considering rapid regulatory developments and increasing reliance on digital tools, the Guidelines advise parties and tribunals to assess the nature of any AI tool used, its impact on procedural fairness and its compatibility with applicable rules and laws. Overall, while AI offers considerable benefits to arbitration practice, its use should be transparent, proportionate and aligned with core principles of justice, impartiality and independence.

Part II: General Recommendations About the Use of AI in Arbitration

The Guidelines recommend that parties and arbitrators conduct reasonable due diligence on any AI tools proposed for use in arbitration. Reasonable due diligence includes understanding the tool's function, data inputs and technological design. Stakeholders are further encouraged to weigh the potential benefits of AI against its legal and procedural risks, particularly those affecting due process, the rule of law and the integrity of arbitration.

Participants should also consider the legal and regulatory frameworks governing AI in the relevant jurisdictions. Importantly, unless explicitly agreed otherwise in writing, and subject to any mandatory rules, the use of AI tools does not absolve any party or arbitrator of their underlying duties and responsibilities in the arbitration process.

Part III: Parties’ Use of AI in an Arbitration

The Guidelines address four key areas that must be considered where parties intend to use AI in an arbitration:

  • Arbitrator’s Procedural Powers: The Guidelines affirm that arbitrators have broad procedural powers to regulate the use of AI in arbitration, including issuing directions, making rulings and appointing experts to understand AI technologies. While parties may privately use AI tools, arbitrators may require disclosure where AI impacts the integrity of the process. Arbitrators are encouraged to document AI-related decisions in procedural orders and address unresolved issues in the final award.
  • Party Autonomy: Parties retain autonomy to agree on whether and how AI is used, including identifying specific tools or limits. Where the arbitration agreement is silent, arbitrators should raise the issue early in proceedings to allow parties to express their views.
  • Rulings on AI Use and Admissibility: When disagreements arise, arbitrators may rule on the use of AI based on overall case circumstances. They should weigh cost and efficiency benefits against risks to fairness, evidence quality, confidentiality and legal compliance. Rulings should consider the nature of the AI tool, including bias, data quality and links between machine-generated content and the arbitral record. Arbitrators must be guided by applicable laws, including privacy, cybersecurity and institutional rules, which may affect the award’s enforceability.
  • Disclosure Requirements: Disclosure of AI use may be required where it affects evidence, decision-making or involves the delegation of duties. Arbitrators can mandate disclosure by parties, witnesses or experts, including the type of AI, timing and scope. Disclosure obligations are generally ongoing and must be balanced against confidentiality or legal limitations. Failure to disclose can prompt inquiry, procedural remedies or cost consequences.

Part IV: Use of AI by Arbitrators

Arbitrators may use AI to help sift through the information submitted by the parties, provided that they do not abdicate their decision-making role to the AI tool. Regardless of whether or how an arbitrator uses AI to reach their conclusion, the arbitrator remains fully responsible for the outcome of the proceeding.

Arbitrators should discuss the use of AI with the parties and refrain from using AI if the parties object.

Takeaways

A few central themes emerge from the Guidelines:

  1. Whether and to what extent a given AI tool can be used is subject to the discretion of both the parties and the arbitrator.
  2. Transparency about the use of AI is crucial to the legitimacy of the arbitration proceedings and their outcomes.
  3. The AI space is novel and rapidly evolving, and what is considered best practice now may change with advancements in AI or the introduction of AI-focused legislation.

For more information, please contact the authors or any other member of our Arbitration group.