Safe Use of AI in Legal Proceedings - Dr Mark Burgin

26/06/25. Dr Mark Burgin uses the Gemini LLM to review a recent paper on AI and the UK Judiciary, from the point of view of a medical expert with wide experience of AI.

Artificial Intelligence (AI) in the legal field has been generally welcomed, but judges have raised concerns. There have been a number of reports of lawyers being caught out citing non-existent cases. This suggests that AI use in law is already widespread and largely unregulated, whether by legal firms or through formal guidance. The judges in this paper have a unique viewpoint, and their work is a key step towards safe AI use.

AI's Role in Law: Early Lessons

AI is increasingly discussed for tasks like legal research and case management. However, direct experiences from UK judges show significant issues. Current AI models often "hallucinate," meaning they invent facts or cases, rendering them unreliable for critical legal work. They can also struggle with the nuances of UK law, frequently drawing on different legal systems.

The judges have noted that errors often creep into the courtroom without being noticed. They consider some types of output to be unusable and raise an interesting concern about AI’s lack of sensitivity. They have limited confidence that AI will be able to assist judgment writing. These flaws highlight that AI is far from infallible and that its outputs require rigorous verification.

Safe vs. Unsafe: A Deliberate Distinction

AI's safe applications are limited to tasks that enhance, rather than replace, human intellect and precision. These include:

  • Error Checking: AI can effectively double-check documents for grammatical errors, typos, or inconsistencies that a human might miss. This is akin to an advanced spell-checker.
  • Idea Generation/Development from Prompts: When given a clear prompt, AI can help brainstorm or expand on existing ideas, acting as a thought partner to refine arguments or explore different angles.
  • Style Transformation (Simplification): AI is useful for altering the presentation of information, such as simplifying a complex legal judgment for public understanding or adapting it for a specific audience. This focuses on communication, not core legal reasoning.

An example from my own practice: when I have written short notes about a medical condition, I will ask AI to create a well-written version that explains the subject simply, and to check that I have not made any significant omissions.
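
As a hedged illustration of this workflow, the sketch below (in Python) wraps the request in a reusable prompt. The ask_llm() function is a hypothetical stand-in for whichever chat interface or API is actually used, and the prompt wording is illustrative rather than a tested template.

    def ask_llm(prompt: str) -> str:
        """Hypothetical stand-in: send a prompt to an LLM and return its reply."""
        raise NotImplementedError("Connect this to your preferred LLM service.")

    def rewrite_and_check(rough_notes: str, condition: str) -> str:
        """Ask the model to (1) rewrite rough notes simply and (2) flag possible
        omissions as suggestions for the author to verify, never as facts."""
        prompt = (
            f"Below are my short notes about {condition}.\n"
            "1. Rewrite them as a clear explanation a non-medical reader can follow.\n"
            "2. Separately, list any significant omissions you notice, marked as "
            "suggestions for me to verify against the literature.\n\n"
            f"NOTES:\n{rough_notes}"
        )
        return ask_llm(prompt)

The author, not the model, remains responsible for checking both the rewrite and any suggested omissions.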

Unsafe Uses (AI as a Decision-Maker or Primary Information Source)

Any use of AI that involves judgment, critical legal analysis, or unverified information is considered unsafe:

  • Summarising Documents: While seemingly benign, relying on AI to summarise legal documents is risky. Current AI can turn a negative comment into a positive one, fail to recognise the significance of important details, and struggle with pictures.
  • Deciding Cases: Judges emphasised that this is fundamentally an unsafe use. Legal judgment is not pure logic; it requires human reasoning, empathy, and an understanding of societal values. Delegating this to AI would erode fairness, trust, and the essential human connection within the justice system.
  • Legal Research: Despite the potential for efficiency, direct reliance on AI for legal research is currently unsafe. The risk of AI generating false citations or factually incorrect information ("hallucinations") means that any AI-generated research output cannot be trusted without extensive, time-consuming human verification against authoritative sources. This defeats the purpose of efficiency and introduces unacceptable risk.

I have experimented with asking AI to summarise computerised medical records using confidentiality protocols, and despite detailed prompts I still need to read all the records myself, as the AI finds only around 50% of the key records.
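
That 50% figure can be made concrete: the hit rate is the recall of the AI’s output measured against a manually compiled gold-standard list of key records. A minimal sketch, using invented record identifiers purely for illustration:

    # Gold standard: key records identified by reading everything manually.
    manually_identified = {"GP-2019-044", "A&E-2020-012", "MRI-2021-003", "GP-2022-061"}
    # Records the AI summary flagged (invented for the example).
    ai_identified = {"GP-2019-044", "MRI-2021-003", "GP-2018-999"}

    found = manually_identified & ai_identified      # key records the AI caught
    recall = len(found) / len(manually_identified)   # fraction of key records found
    precision = len(found) / len(ai_identified)      # fraction of AI picks that were key

    print(f"Recall: {recall:.0%}")        # 50% here: half the key records were missed
    print(f"Precision: {precision:.0%}")  # 67%: one of the AI's picks was not key

A recall of 50% means the reviewer must still read everything, which defeats the time saving the summary was meant to provide.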

Responsible AI in Law: Key Steps

At present AI is an experimental technology, and everyone, including ‘AI experts’, is on a learning curve. High-risk cases, such as those in family law, should wait until the technology matures and safety protocols are put in place.

Training, skills and assessment

  • Training: All legal professionals should have basic training in prompt analysis and in the safe and unsafe uses of the technology.
  • Skills: Lawyers should do the work twice, first using their legal skills and second using the AI to collaborate on and improve their work.
  • Assessment: Assessment of the work is necessary even for those who are competent: while they are learning how to work with AI, their performance will dip until they gain experience. A sketch of how the two versions of the work might be compared follows this list.
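
As a minimal sketch of that comparison step, assuming both drafts are kept as plain text, Python’s standard difflib module can make the AI’s changes explicit for an assessor (the filenames are illustrative):

    import difflib

    def show_ai_changes(pre_ai_draft: str, post_ai_draft: str) -> None:
        """Print a unified diff between the lawyer's own draft and the
        AI-assisted draft, so every change can be individually assessed."""
        diff = difflib.unified_diff(
            pre_ai_draft.splitlines(),
            post_ai_draft.splitlines(),
            fromfile="draft_before_AI.txt",
            tofile="draft_after_AI.txt",
            lineterm="",
        )
        for line in diff:
            print(line)

Each change the diff surfaces becomes a concrete assessment item: an improvement, a neutral change of style, or an error the lawyer must correct before the work is relied upon.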

Openness & Accountability

  • Disclosure: Any use of AI in legal processes or documents must be explicitly and specifically disclosed (e.g., "AI assisted with initial research" or "AI simplified this judgment's language").

Leadership & Guidance

  • Leadership: Leadership must have written policies covering safe AI use and how they will support those who make mistakes.
  • Iteration: Reports and analyses of successes, failures and errors allow the firm to iterate towards more effective guidance.
  • Collaboration: Collaboration between lawyers and AI will develop with time and can be assessed by the effectiveness of their output.

Conclusions

The paper provides a detailed assessment of current difficulties with AI in the UK justice system. Judges are uniquely placed to assess the effectiveness of AI in a legal case, and their concerns mirror those raised by professionals experienced with AI.

This article offers SMART objectives that legal firms can implement immediately:

  • Ask for both copies of the lawyer’s work (before and after AI).
  • Formally assess the work against a range of criteria.
  • Have a house style for AI use notifications.
  • Use this article as a guide to safe and unsafe uses of AI.
  • Develop a portfolio of evidence of organisational learning and increasing collaboration.

AI is already revolutionising law and education; soon it will also change the way that medicine, and perhaps medical experts, work. My experience of similar changes is to be prepared, but to wait until you can see what you are doing before diving in.

Doctor Mark Burgin, BM BCh (Oxon) MRCGP, is a Disability Analyst and is on the General Practitioner Specialist Register.

Dr Burgin can be contacted by email, on 0845 331 3304, and via the websites drmarkburgin.co.uk and gecko-alligator-babx.squarespace.com.

Your PGCME: An Amateur Guide to Medical Education by Mark Burgin (about AI-supported learning).

Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary. CHIWORK ’25, June 23–25, 2025, Amsterdam, Netherlands.

This is part of a series of articles by Dr. Mark Burgin. The opinions expressed in this article are the author's own, not those of Law Brief Publishing Ltd, and are not necessarily commensurate with general legal or medico-legal expert consensus of opinion and/or literature. Any medical content is not exhaustive but at a level for the non-medical reader to understand.

Image ©iStockphoto.com/style-photography
