The Use of AI Tools in Valuation and Damages Assessments
Balancing Innovation With Professional Responsibility
September 26, 2025
Could relying on artificial intelligence (“AI”) in developing expert opinions turn a defensible position into a credibility crisis? For business valuators and damages experts, that question is no longer hypothetical. AI is already reshaping how professionals conduct research, draft reports and analyze complex data. The technology promises speed, efficiency and deeper insights — but it also raises difficult questions about accountability, confidentiality and credibility, particularly when expert evidence is scrutinized in the courtroom.
There has been significant and ongoing discussion around the use of AI among damages and business valuation experts. At the Chartered Business Valuators (“CBV”) Institute Conference “Connect” in June 2025, a poll of participants revealed just how rapidly AI is being adopted in the valuation community: 87% of respondents said they already use AI tools, with 79% leveraging them for research, 55% for report drafting, 23% for analysis, 9% for math checks, and 21% for other uses. Despite this uptake, 59% cited privacy and confidentiality as their top concern, highlighting the tension between innovation and risk.1
Evidence of AI’s growing importance in Canada came with the appointment of the country’s first Minister of Artificial Intelligence and Digital Innovation in May 2025, a move signaling that AI’s development, governance and ethical use are now matters of public policy.2
As AI capabilities evolve at a rapid pace, the real challenge for business valuators isn’t whether to use these tools, but how to harness their capabilities responsibly and effectively. While the risks are real, particularly in the context of expert testimony, the opportunities are equally significant.
Understanding where AI can provide practical value is the first step towards integrating it into professional practice in a way that strengthens, rather than undermines, credibility.
How AI Can Help Business Valuators
When properly integrated, AI offers more than just research assistance; it can fundamentally enhance the valuation process:
- Accelerate efficiency gains: Automate routine tasks such as data identification and extraction and document summarization. For instance, AI tools can rapidly scan documents such as financial statements and shareholder agreements for relevant valuation inputs.
- Enhance complex analysis: AI excels at processing large and/or unstructured data sets, enabling valuators to perform faster, more complex analysis and giving them more time to focus on interpretation and insights.
- Strengthen decision support: AI can help model various valuation scenarios, run sensitivity analyses and offer comparative insights that sharpen professional judgment rather than replace it (a simple illustration of such a sensitivity analysis follows this list).
- Improve report quality: With careful prompting and review, AI can assist in generating clear, well-structured narrative sections while preserving the expert’s insights.
These benefits are tangible and measurable, but they are realized only when AI is used as a supporting tool rather than relied upon to form conclusions or opinions in place of professional judgment.
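To make the decision-support point above concrete, the sketch below shows the kind of sensitivity analysis an AI tool might help automate: a simple grid of capitalized cash flow values across discount rates and long-term growth rates. It is a minimal, hypothetical illustration (the cash flow amount, the rates and the Gordon-growth formula are assumptions chosen for this example, not figures or tools referenced in this article), and any such AI-assisted output would still need to be verified independently by the expert.

```python
# Minimal, illustrative sensitivity grid for a single-stage (capitalized cash flow) value.
# All inputs are hypothetical and chosen purely for illustration.

def dcf_value(cash_flow: float, discount_rate: float, growth_rate: float) -> float:
    """Gordon-growth value: CF x (1 + g) / (r - g)."""
    if discount_rate <= growth_rate:
        raise ValueError("Discount rate must exceed long-term growth rate.")
    return cash_flow * (1 + growth_rate) / (discount_rate - growth_rate)

cash_flow = 1_000_000                 # assumed maintainable after-tax cash flow
discount_rates = [0.12, 0.14, 0.16]   # assumed discount rate range
growth_rates = [0.01, 0.02, 0.03]     # assumed long-term growth rate range

# Print a simple sensitivity table: rows = discount rate, columns = growth rate.
print("r \\ g " + "".join(f"{g:>12.0%}" for g in growth_rates))
for r in discount_rates:
    print(f"{r:>6.0%}" + "".join(f"{dcf_value(cash_flow, r, g):>12,.0f}" for g in growth_rates))
```

The point of such a grid is not the arithmetic itself, which is trivial, but that automating it frees the expert to focus on whether the assumed ranges and the resulting spread of values are reasonable.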
The Critical Importance of Human Oversight and Adherence to Professional Standards
Efficiency should never come at the expense of diligence.
Valuation professionals, and expert witnesses in particular, must ensure that all AI-assisted work meets professional standards: it must be based on sufficient facts, thorough analysis and reliable methods. Failure to maintain rigorous oversight can have serious consequences.
The courts have already delivered sharp warnings. In 2024, in Zhang v. Chen, the British Columbia Supreme Court (“BCSC”) in Canada sanctioned a lawyer for citing fake cases generated by an AI tool (often referred to as AI “hallucinations”).3 Similarly, in Matter of Weber in New York, a damages expert admitted to using AI tools for calculations but could not explain the basis or sources behind them, rendering their testimony unreliable.4
In both cases, the result was the same: AI’s capabilities were overshadowed by its risks and by the user’s lack of awareness of its limitations, undermining rather than advancing the case.
Navigating the Regulatory Landscape: Stay Ahead
Courts, arbitral institutions, and professional bodies worldwide are issuing guidance and protocols to ensure the ethical and responsible use of AI. These directives share consistent themes:
- Transparency: Disclose when and how AI tools are used, particularly in documents prepared for court or arbitral submissions. Some jurisdictions have mandatory AI disclosure requirements.5
- Accountability: Professionals remain fully responsible for content, regardless of AI involvement.
- Verification and oversight: All AI outputs must be independently reviewed for accuracy, reliability and appropriateness. For example, AI platforms may be trained using flawed or unrepresentative data.
- Confidentiality: Sensitive client data should never be entered into public AI platforms without clear safeguards or consent.
These directives are becoming more prescriptive. For example, in March 2025, the Chartered Institute of Arbitrators (“CiArb”) released a detailed guideline on AI in arbitration.6 Not only does it address issues such as those listed above, but it also introduces procedural orders for AI use and outlines risks to due process, data security and enforceability, of which expert witnesses should also be aware.7
For CBVs, the CBV Institute has not yet issued official guidelines specific to the use of AI in engagements. However, in June 2024, it published a “Primer on Artificial Intelligence”8 (the “CBV Primer”), intended to outline considerations for CBVs incorporating, or contemplating the incorporation of, AI into their professional practice.9,10
The CBV Primer offers a timely reminder: Even as AI evolves, existing professional obligations still apply. Key points include:
- Accountability and competence: Do not rely on AI to replace professional judgment. It should serve as a tool to distil information, aid research and facilitate analysis, rather than being relied upon for decision-making. CBV Institute Practice Standard No. 120 requires that “valuations are adequately planned and properly executed, with due care and with an objective state of mind.”11
- Verify all inputs and outputs: Biases and hallucinations are well-known issues in generative AI. In accordance with Section 201 of the CBV Institute’s Code of Ethics, CBVs must not make, or be associated with, any statement that they know, or should know, is false or misleading.12
- Maintain confidentiality: Section 500 of the CBV Institute Code of Ethics still applies, requiring that CBVs safeguard clients’ confidential information.13 Input data may be visible to technology providers and used to train their systems, raising concerns about confidentiality and data protection. Some firms are already taking steps to address these concerns. For example, FTI Consulting has expended significant resources to develop its own proprietary AI platforms, allowing its experts to benefit from AI’s capabilities without compromising data security.
- Document clearly: Practice Standard No. 110 requires transparency about sources relied upon.14 Business valuators should document their methods and rationale, including how AI-supported content was verified.
Until AI-specific valuation standards are released, the foundational principles in existing practice standards and codes of conduct must guide the use of AI by business valuators and damages experts.
Best Practices for Implementation
In light of the issues outlined above, business valuators should reflect on best practices to adopt prior to integrating AI into their work. Some examples include:
- Establish clear AI usage policies: A formal policy provides a governance framework detailing when, how, and under what conditions AI may be used in professional work. Such policies should address, for example, approved and prohibited use cases, rules for handling client information while maintaining confidentiality, safeguards against bias or threats to independence, guidance on disclosing AI use in reports or testimony, and a process for regularly reviewing the policy as laws, professional standards, and AI tools evolve.
- Develop practical checklists: Checklists help translate policies into actionable, day-to-day guidance. They can cover pre-engagement steps (including obtaining client consent when AI is used with client data), data handling protocols, AI tool selection, workflow integration, output verification, and reporting and disclosure procedures (a simple illustration follows this list).
- Implement structured training: Training ensures AI tools are used effectively, ethically, and in line with professional and legal obligations. Given the rapidly evolving AI landscape, periodic refresher sessions are essential to maintain awareness of emerging tools, regulatory developments, and evolving best practices.
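As a purely illustrative sketch of the checklist idea above, the example below encodes a hypothetical pre-issuance AI-use checklist as a simple data structure that could accompany an engagement file. The item wording, field names and sign-off logic are assumptions for illustration only; they are not drawn from this article and do not represent a CBV Institute or firm template.

```python
# Hypothetical, illustrative AI-use checklist for an engagement file.
# Item wording and structure are assumptions, not an official template.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    question: str
    completed: bool = False
    notes: str = ""

@dataclass
class AIUseChecklist:
    engagement: str
    items: list = field(default_factory=lambda: [
        ChecklistItem("Client consent obtained for use of AI with client data?"),
        ChecklistItem("Tool approved under the firm's AI usage policy?"),
        ChecklistItem("Confidential data kept out of public AI platforms?"),
        ChecklistItem("All AI outputs independently verified against source documents?"),
        ChecklistItem("AI use documented and disclosed where required?"),
    ])

    def outstanding(self) -> list:
        """Return items still requiring sign-off before the report is issued."""
        return [item.question for item in self.items if not item.completed]

# Example use: list the items still open on a hypothetical engagement.
checklist = AIUseChecklist(engagement="Example Engagement 2025-001")
print(checklist.outstanding())
```

Whether such a checklist lives in software, a spreadsheet or a working paper matters far less than the discipline of completing and documenting it on every engagement.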
Final Thoughts
AI offers extraordinary potential to elevate the work of valuation and damages experts. Those who use it thoughtfully can gain a competitive edge in both efficiency and insight. Misuse of the technology, however, can carry serious consequences — from reputational damage to evidentiary exclusion.
The AI landscape is evolving rapidly with courts, regulators, and professional bodies paying close attention. Staying current with evolving standards and guidelines and embedding responsible AI practices into workflows is not just advisable — it’s essential.
Looking ahead, the differentiator will not just be who uses AI, but who uses it well. Truly responsible adopters view AI not as a replacement, but as a carefully managed complement. They will leverage AI to streamline tasks like data gathering, calculations and report drafting, while maintaining full accountability for the outcomes, verifying outputs rigorously and safeguarding client confidentiality. By doing so, they will set the benchmark for credibility in legal proceedings and for leadership in the profession.
Footnotes:
1: Audience response at CBV Institute Connect 2025 presentation: Greg Endicott, “The AI Revolution: Why It Matters to Valuators,” Strategic Value Group LLC, June 12, 2025.
2: CBC News, https://www.cbc.ca/news/politics/artificial-intelligence-evan-solomon-1.7536218, accessed on August 29, 2025.
3: BC Courts, https://www.bccourts.ca/jdb-txt/sc/24/02/2024BCSC0285cor1.htm, accessed on August 29, 2025.
4: Justia U.S. Law, https://law.justia.com/cases/new-york/other-courts/2024/2024-ny-slip-op-24258.html, accessed on August 29, 2025.
5: For example, a number of Canadian courts (including the Federal Court), require written disclosure if AI is used in court filings. Federal Court of Canada issued Notice to the Parties and the Profession, The Use of Artificial Intelligence in Court Proceedings on December 20, 2023 (updated on May 7, 2024); Court of King’s Bench of Manitoba issued a Practice Direction Re: Use of Artificial Intelligence in Court Submissions on June 23, 2023; Supreme Court of Yukon issued a Practice Direction Use of Artificial Intelligence Tools on June 26, 2023; Provincial Court of Nova Scotia issued Use of Artificial Intelligence (AI) and Protecting the Integrity of Court Submissions in Provincial Court on October 27, 2023.
6: Chartered Institute of Arbitrators Guideline on the Use of AI in Arbitration (2025).
7: The new guidelines specify that “the arbitrators may impose certain AI-related disclosure obligations on the parties including any party-appointed experts or factual witness. In this context, arbitrators may make directions as to the type of AI covered by the obligation to disclose, circumstances in which disclosure is, to whom disclosure is to be made and within which timeframe.” The guidelines contain a template “Agreement on the Use of AI in Arbitration” as well as a template “Procedural Order on the Use of AI”, which reference party appointed experts.
8: https://cbvinstitute.com/wp-content/uploads/2024/06/AI-Primer-June-2024-Final-EN.pdf.
9: CBV Primer, pg. 3.
10: The International Valuation Standards Council has also released commentary on AI, including its July 2025 paper Navigating the Rise of Artificial Intelligence in Valuation: Opportunities, Risks, and Standards, a forward-looking perspectives paper that explains how recent AI and valuation technologies are reshaping valuation practice, assesses the opportunities and threats AI presents to business valuators, and offers guidance on using AI tools responsibly, emphasizing transparency, human judgment and compliance.
11: CBV Primer, pg. 5.
12: CBV Primer, pg. 5.
13: CBV Primer, pg. 5.
14: CBV Primer, pg. 5.