
Managing the Risks of AI in Expert Evidence: 10 Practice Standards for Lawyers and Experts - Ramune Mickeviciute, Hugh James LLP & Geoffrey Simpson-Scott, Hodge Jones and Allen LLP

23/10/25. We hope we do not exaggerate by saying that one of the biggest nightmares that any lawyer faces is being penalised for using or submitting false or inaccurate information.

Most of you will have heard about good cases stumbling not because of the law itself, but because of miscommunication or unclear expectations. In a world where technology is supposed to ease our professional lives, one might question how this happens.

Artificial Intelligence (AI) has been introduced as a perfect tool to ease the burden for many of us. However, it is not without its faults.

The use of AI is beginning to appear more often in expert evidence, particularly in clinical negligence litigation. From basic proofreading through to data analysis and even drafting, AI tools are becoming part of the professional landscape.

We consider how this can create issues in legal proceedings, so that you do not end up in the same position as some of us who have been unfortunate in this field. We are going to discuss the effects of AI in clinical practice and, in particular, its use when preparing expert evidence.

What is AI?

The term ‘AI’ is used very often these days; however, how many of us know exactly what it means? We often have to remind ourselves what it is, how we can benefit from it, and what dangers it might pose.

A fancier explanation is that AI is ‘a technology that enables machines to perform tasks that typically require human intelligence’. It encompasses a broad range of techniques, such as machine learning and deep learning, which allow systems to learn and adapt from data. Models are created by training an algorithm to make predictions or decisions based on data, without being explicitly programmed for specific tasks.

In more casual terms, it is a technology that teaches itself from large amounts of data and is there to help us in some situations. AI can appear in the form of different apps that help to tackle specific or broad-ranging tasks.

For instance, when it comes to legal practice, AI can assist us in analysing information; our written communication; our drafting; extracting information from research; reviewing documents; summarising information; and spellchecking (amongst others).

Also, just to show it in numbers, 96% of law firms are reportedly now using AI in some capacity.

While AI has the potential to speed up processes and uncover insights, it also comes with risks. Experts and lawyers alike need to be alert to these challenges, since unreliable or poorly explained AI use can damage credibility and even make evidence inadmissible. The reported cases and insights from England and other common law jurisdictions show that this is already a familiar story.

AI Use in Expert Evidence

One of the challenges that lawyers might face is trying to control the use of AI by third parties involved with the case. One of these is likely to be our medical experts.

Doctors and medical staff started using AI years ago, and its use continues to grow in many areas of their profession, which is more difficult for us to keep track of. This is not limited to their clinical practice alone: experts are starting to use their AI facilities to draft reports.

Experts use AI tools to make calculations, predictions or gather some additional data as well as to draft the body of their report.

While the use of AI is not prohibited, the danger lies in using it without double checking that the data produced is indeed accurate for the specific case that we are dealing with. If evidence is submitted to the court and/or our opponent containing false data, we face serious sanctions.

We have identified several pitfalls and proposed solutions to help you address the problem by bridging the gap between what we intend to do and what then actually happens.

Pitfalls for expert evidence

To address the issues that we might face when dealing with expert evidence, we propose ten practice standards. Each one sets out a practical expectation followed by the reasoning behind it and, of course, a practical solution to deal with it.

  1. Ask Experts to Disclose AI Use Early

Practice Standard: Ask experts in your Letter of Instruction to confirm if they plan to use AI and how they intend to use it.

Reason: You need to know if AI has been used at all.

Transparency is the starting point. AI can influence many parts of a report, sometimes in subtle ways. If lawyers do not know where AI has been used, they cannot properly review its reliability. Asking the question up front, at the instruction stage, ensures everyone starts from a position of openness. The best lawyers do not necessarily work harder; they communicate earlier, document clearly, and anticipate uncertainty.

  2. Scrutinise AI Outputs for Accuracy

Practice Standard: Check AI-generated content carefully and make sure experts confirm that all key points are accurate.

Reason: False or made-up information can damage credibility.

AI is prone to “hallucination,” where it generates convincing but false information. This has already caused embarrassment in the courts. Every fact, citation, or figure derived from AI must be double-checked against reliable sources. If something cannot be independently verified, it should not form part of expert evidence. The answer is not more regulation; it is more trust by building credibility through transparency.

  3. Push for Explanations of the Process

Practice Standard: Ask experts to explain how the AI reached its conclusions and what human checks they carried out.

Reason: AI processes can be hidden and hard to understand (the ‘black box’ issue).

Without knowing the inputs, prompts, or checks involved, it is impossible to judge whether AI results are credible. Experts should document how they used AI and what they did to confirm its accuracy. This mirrors the expectation that experts explain their methodology when carrying out specialist tests. Clarity is not just good ethics; it is good business. The more transparent our processes, the fewer disputes we face and the stronger the expert evidence becomes.

  4. Question the Data Sources

Practice Standard: Ask experts to explain where their AI tool got its data and how well it matches the patient group in question.

Reason: Biased data can lead to misleading or unfair conclusions.

AI is only as good as the data it is trained on. If that data excludes certain groups or is unrepresentative, its results will be skewed. Experts must show why the dataset is appropriate to the case at hand. Vague reassurance that “this tool is commonly used” will not withstand scrutiny. Trust grows in the quiet moments. The most lasting expert relationships are often built between deadlines via the clarity of our updates and the honesty of our expectations.

  5. Require Human Oversight and Clinical Judgement

Practice Standard: Make sure experts explain how their own clinical judgement or experience supports the use of AI in the report.

Reason: Relying too heavily on AI without human input is risky.

AI cannot replace the role of the expert. Judges decide the facts; experts apply their knowledge to assist the court. If an opinion is shaped by AI, the expert must still show how their personal experience and clinical judgment underpin the conclusions. The irony of modern practice is that whilst we have never had more data, we have never been more in need of human judgement and empathy.

  6. Confirm the AI Tool Was Current and Reliable

Practice Standard: Confirm that the AI tool used was up to date and reliable at the time, and that the expert accounted for any changes in the tool or data.

Reason: Older or inconsistent tools can give inaccurate results.

AI systems evolve quickly. A report based on an outdated or uncalibrated tool risks producing flawed conclusions. Experts should identify which version of a tool they used, when it was last updated, and how they ensured its reliability. Every challenge here is really a test of systems, not people. When errors arise, it is rarely about intent; instead it is about process design, communication, or follow-up.

  7. Distinguish Between Correlation and Causation

Practice Standard: Ask experts to clearly show why each step in their reasoning is more likely than not, rather than just a possible link.

Reason: Avoid confusing coincidence (correlation) with cause.

Spotting patterns is AI’s strength; but not every pattern demonstrates causation. In law, the test is probability, not possibility. Experts must explain why AI-identified links genuinely represent cause and effect, not just coincidence. It is often not about more expertise; but the better use of it. Applying expert insight consistently is what earns trust, both internally and with judges.

  8. Challenge Anything That Looks Odd

Practice Standard: Follow up on anything in the expert’s report that seems odd, unlikely, or hard to follow.

Reason: Blind trust in AI output leads to missing obvious weak links in an opinion.

When AI results seem to “fit” too neatly with assumptions, there is a risk they are accepted without question. Lawyers should treat AI outputs like any other evidence: probe unusual or unclear points and insist on clear reasoning. Behind every assumption lies a question of trust. Experts’ assumptions are only as effective as the confidence practitioners and judges place in those who apply them.

  9. Agree Standards of Compliance in Advance

Practice Standard: Agree early on with your expert what legal and professional standards apply to the use of AI.

Reason: Poor understanding of the rules can make evidence inadmissible.

Both legal requirements (such as CPR 1998, Part 35) and professional guidance are evolving. Agreeing standards at the outset avoids problems later. We can help experts with the legal framework, while both experts and lawyers must ensure their professional obligations are met and problems headed off. Every practitioner remembers a case that made them rethink how they communicate with experts and changed their perspective.

  10. Develop and Use a Consistent Checklist

Practice Standard: Use a simple checklist to assess how AI is used in expert evidence and whether it meets the required standards.

Reason: A consistent approach helps manage risk and avoid surprises.

It is one thing to understand the practice standard; it is another to see how it plays out under pressure. AI is here to stay, so ad hoc responses are not enough. A checklist gives both experts and lawyers a structured way to assess reliability, transparency, and compliance. Judges are more likely to value evidence that comes from a careful, documented process; moving us from principle to practice.

Cost Effectiveness

In many firms, it is not the big policy changes that build trust with new technology; it is the everyday habits that show reliability.

If you think about how you ran your cases before AI, it was not unorthodox to check all of the work done by others, including junior members of the team and experts. Using AI does not mean that we should abandon our routine; only that we sharpen our eyesight to notice some of those possible pitfalls. Small habits create big results.

We consider that the easiest way to start is to continue knowing the facts of our cases and keeping an eye out for anything that seems unusual. As always, do thorough checks and ask those questions that you consider relevant. This is definitely cost effective and rewarding in the long term.

Sanctions

Sanctions for using false data are severe, so it is mandatory to ensure that your expert evidence is completely accurate. Just to give some flavour to all that we have raised above, reputational damage and losing cases are potential consequences. The sanctions that lawyers could face include, but are not limited to, fines imposed on law firms; unfavourable judicial rulings and comments; wasted or adverse costs orders; and regulatory outcomes like being struck off. 

We can face these sanctions even where it is the expert who has improperly used AI. The cases paint a clear picture: lawyers start with good systems until real deadlines, health issues, workload and costs pressures test them and their ‘errors’ become public.

Conclusion

AI is reshaping the way expert evidence is prepared and challenged. While it can support data analysis and improve efficiency, it also introduces risks that cannot be safely ignored.

These ten practice standards provide a framework for responsible use: promoting transparency, insisting on verification, and keeping human judgment at the centre. Handled in this way, AI can be a useful tool without undermining the credibility of the experts or the fairness of proceedings.

At the end of the day, we are in control of managing our use of AI and experts. It is our duty to check everything and ensure that evidence is accurate. The turning point comes when we stop feeling ‘managed’ by AI and start to understand how we can use it confidently. We hope that these practice standards assist you with your day-to-day job as much as they do us.

Ramune Mickeviciute, Solicitor at Hugh James LLP and Co-Author of ‘A Practical Guide to Fixed Costs in Clinical Negligence Cases’.

Geoffrey Simpson-Scott, Partner at Hodge Jones and Allen LLP and Author of ‘A Practical Guide to Clinical Negligence’ (Third Edition).

Both are available from Law Brief Publishing.


Chaos Beats Causation: The Limits of Accident Reconstruction - Michael Brooks Reid, Temple Garden Chambers

17/10/25. Michael Brooks Reid comments on the High Court’s approach to the neurosurgical evidence in the case of MW (a child) v Wilkinson & Anor [2025] EWHC 2300 (KB).

Facts

A car vs pedestrian accident took place near a school causing a young child, ‘M’, to suffer life-changing injuries. At trial, the Claimant argued that even if the collision was unavoidable, M’s injuries would have been significantly reduced had the Defendant been driving at a lower speed (he was driving at around 20mph, the advisory speed limit). The Claimant relied on neurosurgical evidence which posited that a marginal reduction in impact speed would, on the balance of probabilities, have avoided the severe head injury M sustained.

Neurosurgical Dispute

The experts were both eminent neurosurgeons. The Claimant’s expert relied on generic statistical data from paediatric pedestrian studies showing a dramatic decrease in severe injury risk at speeds below 20 mph. He argued that even a 1-2 mph reduction would have altered the dynamics of the impact, giving M time to rotate his body, leading to a different—and less severe—injury profile. He asserted that, statistically, a child struck at such low speeds typically avoids significant head injury.

The Defendant’s expert rejected this as overly speculative. He highlighted that the...



The Price of Change: Niprose Investments Limited and 30 Other Claimants v Vincents Solicitors Limited [2025] EWHC 2084 (Ch) - Georgina Pressdee, Temple Garden Chambers

29/09/25. On 6 August 2025, His Honour Judge Hodge KC handed down his judgment in Niprose Investments Limited and 30 Other Claimants v Vincents Solicitors Limited [2025] EWHC 2084 (Ch).

Issues

The central issue before the High Court was who should bear the costs arising from two linked applications:

  1. The Defendant's unsuccessful application to strike out the claim or obtain summary judgment, which failed because the Claimants were allowed to amend their pleadings.
  2. The Claimants' opposed but ultimately successful application to amend their Particulars of Claim.

Together the applications generated two full days of hearings and substantial costs.

Background
The Claim

The proceedings concerned a professional negligence claim brought by purchasers against their former conveyancing solicitors, Vincents. The claimants had invested in a failed residential development scheme and consequently lost substantial up-front payments.

The Applications

In March 2024, Vincents applied for strike out/summary judgment. Judgment was reserved until April 2024, when the Court directed the Claimants to serve draft amended Particulars of Claim, with the Defendant to indicate its position on those amendments. Costs were reserved.

By the time the matter returned to Court in December 2024, limitation had expired. Vincents opposed the amendments first on the basis that they introduced new, time-barred claims and second as a matter of discretion. Judgment was handed down in January 2025. The majority (but not all) of the amendments were permitted, with the Court finding no jurisdictional bar based on limitation grounds.

At the subsequent CCMH in July 2025, Vincents argued it should recover most of its costs of both its own application and the Claimants' amendment application, amounting to nearly £35,000. The Claimants sought their own costs of the applications (almost £100,000) while accepting they should bear the costs associated with the amendments themselves.

Ruling

HHJ Hodge KC held that the amendments had amounted to a "comprehensive reformulation" of the case. Following Bellhouse v Zurich Insurance Plc [2025] EWHC 1551 (Comm), he ruled that where a claim is only allowed to proceed because of wholesale amendment, the starting point is that the respondent should pay the applicant's costs of the strike-out/summary judgment application. Accordingly, the Claimants were ordered to pay 71% of Vincents' costs up to 22 July 2024. The 29% reduction represented the value of the Claims which had settled by that point. However, the Court awarded the Claimants 90% of their costs thereafter, reflecting both their substantial success at the second hearing and Vincents' failure to engage constructively with the proposed amendments.

The Court summarily assessed the parties’ costs and ordered Vincents to pay the net of just over £2,000 within 14 days. The Claimants were ordered to pay Vincents’ costs occasioned by the amendments (to be agreed).

Comment

The decision illustrates the risks on both sides in strike-out/summary judgment and amendment disputes.

For respondents facing an application, poorly drafted pleadings should be proactively addressed by a formal application to amend, ideally after seeking the other side's agreement. This can neutralise the application for strike-out/summary judgment and may prevent an adverse costs order.

For applicants, the judgment highlights the dangers of overplaying opposition to amendments. While a defective pleading may justify an initial application, unreasonable opposition to proposed amendments and/or a failure to engage with them risks costs penalties.


Fundamental Dishonesty? Court needs to see the Homework, not just the answer - Michael Brooks Reid, Temple Garden Chambers

25/09/25. Michael Brooks Reid comments on an aspect of the recent High Court judgment in Brown v Morgan Sindall Construction and Infrastructure Ltd [2025] EWHC 2204 (KB).

The FD argument

Following trial, the Defendant alleged that the Claimant, Mr. Brown, was fundamentally dishonest in exaggerating his psychological injuries, arguing for the well-known consequences that this entails. The Defendant’s FD argument rested predominantly on the expert evidence of the Defendant’s psychiatrist, Dr. Wise, who had administered a series of proprietary psychological "validity tests", said to be consistent with a 99% probability of malingering.

The Issue

The issue was that the tests themselves, and the Claimant's answers, were not disclosed to the Court or the Claimant's legal team. This was due to licensing conditions imposed by the test publishers, intended to protect the tests' integrity by preventing them from becoming public knowledge and susceptible to...



QOCS in mixed claims: Sex, lies and a £100,000 costs bill - Michael Brooks Reid, Temple Garden Chambers

21/08/25. In Samrai and Ors v Rajunder Kalia [2024] EWHC 3143 (KB), seven claimants brought claims against the defendant, a religious leader, alleging that they had been financially and sexually exploited by him. Four of the claims included claims for both personal injury (“PI”) and non-PI losses (i.e. “mixed claims”).

Each of the claims was either dismissed or struck out.

The Defendant’s costs bill had run to some £2 million and the matter came back to the Judge ([2025] EWHC 1449 (KB)) to deal with, inter alia, the extent to which the First to Fourth Claimants (“the Mixed Claim Claimants”) were entitled to Qualified One-Way Costs Shifting (“QOCS”) protection.

The Law

The Judge set out the relevant QOCS provisions, namely CPR 44.13 and the exception under CPR 44.16(2), which provides:

“Orders for costs made against the claimant may be enforced up to the full extent of such orders with the permission of the court and to the extent that it considers just where … 

(b) a claim is made for the benefit of the claimant other than a claim to which this section applies.”

The Judge considered authorities set out in the CPR 44.16(2) White Book commentary. In Brown v Commissioner of Police for the Metropolis [2019] EWCA Civ 1724, it was held that if proceedings can fairly be described “in the round as a PI case” then, unless there are exceptional features (such as a “grossly exaggerated hire claim”), the court will usually exercise its discretion to apply QOCS to the whole claim. In Siddiqui v University of Oxford [2018] EWHC 3536 (QB) the court applied a broad-brush approach to separating PI and non-PI elements of the claim, ordering the claimant to pay 25% of the defendant’s costs.

The Arguments

The Defendant argued that the Court should apply the broad-brush approach endorsed in Siddiqui and order that the Mixed Claim Claimants pay 60% of the Defendant’s costs. In support, the Defendant noted that as little as 5% of the damages claimed by the Mixed Claim Claimants arose from the PI elements.

The Claimants, on the other hand, argued that this had been, “in the round”, a PI claim. Sexual exploitation was at the heart of the case and occupied the majority of the judgment, and the other claims were ancillary. Applying Brown, QOCS protection should apply to the whole claim.

Alternatively, the Court should exercise its discretion under CPR 44.16(2) to apply QOCS to the whole claim for reasons including:

  • The Defendant’s false denials which affected how the trial was run.
  • The disparity in status and financial positions of the parties.
  • The psychological consequences that a costs order would entail for the First Claimant who had mental health difficulties.
  • A public policy interest in not deterring individuals from making allegations of misconduct in the religious context.
  • The fact that a large and substantial part of the reason for the failure of the claims related to the negligence of the Claimants’ previous legal team.

The Decision

The Judge found that the mixed claims could not be described, in the round, as PI claims. Although PI was an important aspect and took up a large proportion of the trial, there was also significant time taken up on non-PI aspects.

The fact that the Fifth to Seventh Claimants brought claims on broadly the same basis but without any element of PI showed that the PI aspects and non-PI aspects could clearly be distinguished.

Further, the Judge declined to exercise his discretion to apply QOCS to the whole claim, rejecting each of the arguments put forward on behalf of the Mixed Claim Claimants.

Noting that the most expensive part of litigation is the trial itself, which was weighted heavily in favour of the PI claim, and applying a broad-brush approach, the Judge ordered the Mixed Claim Claimants to pay 40% of the Defendant’s costs.

Comment

The most important consideration will always be whether the claim can be fairly described, in the round, as a PI claim. If not, the Court will take a broad-brush analysis, particularly bearing in mind the costs devoted to the non-PI elements.

Claimants will note that, despite the Court having discretion, the Judge was unmoved by arguments on public policy, mental health and the financial disparity between the parties, and the Mixed Claim Claimants were left with a £100,000 costs bill each.

Claimants should be robustly advised of costs risks in mixed claims, and may do well to take out ATE insurance. Representatives should consider how best to plead and present a claim to amplify the PI elements and minimise the risk of a large adverse costs order.

Defendants will take comfort in the fact that the Court declined to exercise its discretion to apply QOCS to the whole claim, notwithstanding some potentially attractive arguments raised by the Claimants.


All information on this site was believed to be correct by the relevant authors at the time of writing. All content is for information purposes only and is not intended as legal advice. No liability is accepted by either the publisher or the author(s) for any errors or omissions (whether negligent or not) that it may contain. 

The opinions expressed in the articles are the authors' own, not those of Law Brief Publishing Ltd, and are not necessarily commensurate with general legal or medico-legal expert consensus of opinion and/or literature. Any medical content is not exhaustive but at a level for the non-medical reader to understand. 

Professional advice should always be obtained before applying any information to particular circumstances.

Excerpts from judgments and statutes are Crown copyright. Any Crown Copyright material is reproduced with the permission of the Controller of OPSI and the Queen’s Printer for Scotland under the Open Government Licence.