The wrong type of legal spectacles: authenticity in litigation

    In the recent High Court case of UAB Business Enterprise & Anor v Oneta Limited & Ors, ICC Judge Agnello KC rejected a witness's evidence entirely after he was found to have been using smart glasses connected to his mobile phone to receive real-time coaching during cross-examination. The case highlights the growing challenges that AI and digital technology pose to the authenticity of evidence in litigation. As AI tools become more sophisticated and accessible, courts, legal practitioners, and their clients must adapt to identify and address authenticity risks.

    Background

    The case concerned a dispute over the ownership of a property development company. During cross-examination of one of the claimants' witnesses, defence counsel noticed audio interference coming from around the witness. Suspecting that his glasses were smart glasses, counsel asked the witness to remove them. When he did so, his mobile phone began to broadcast a voice aloud. The witness denied that he was being coached through the cross-examination and agreed to an inspection of his phone and its metadata. However, before the inspection could take place, he claimed that his phone had been stolen. The Judge concluded that the witness had been receiving assistance during cross-examination through smart glasses connected to his mobile phone. She held that his evidence was "unreliable and untruthful" and rejected it entirely.

    A rapidly evolving landscape

    The UAB v Oneta case is part of a broader trend. The use of AI and digital technology in litigation is a rapidly evolving area, with new tools being developed at pace. Recent guidance issued to the judiciary on AI (October 2025) warns judges that AI tools are being used to produce fake material, including fabricated text, images and video. As the technology becomes more sophisticated, the challenge of identifying manipulated evidence grows. Authenticity challenges are arising in several jurisdictions. For example:

    France

    • AI-assisted statements of case and submissions: Courts, tribunals and clients are receiving statements of case and submissions that appear to be AI-generated. These documents are often lengthy and superficially plausible but legally incoherent. For example, the Administrative Court of Rennes[1] rejected an application that was "clearly drafted using a generative artificial intelligence tool" on the basis that it was not "accompanied by the necessary details to enable the judge to assess their merits". Such pleadings can make it difficult for defendants to determine how to respond, leading to significant wasted costs.

    The United States

    • Deepfake video evidence: In Mendones v Cushman & Wakefield,[2] a US case, a self-represented litigant submitted deepfake videos and altered images in support of a motion for summary judgment. The Court found that the witness in the video lacked natural expressiveness and displayed "robotic" behaviour. The case was dismissed as a result.
    • AI-assisted expert evidence: In the Matter of Weber,[3] another US case, an expert witness used Microsoft Copilot to cross-check calculations for a supplemental damages report but could not recall the prompts he had used or explain what sources Copilot had relied on. The Court found the expert's evidence to be "unreliable".

    In our own recent experience

    • AI-assisted disclosure: We have encountered claimants using AI to compile and produce email evidence for disclosure. Many of the emails produced were ones to which our client was allegedly a party, so we cross-checked them against our client's records and identified numerous factual inaccuracies: for example, incorrectly dated emails, and emails containing words and phrases that the originals did not (a simple illustration of this kind of cross-check appears below).
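
    For illustration only, the following is a minimal sketch of how such a cross-check might be automated, assuming (hypothetically) that both the disclosed emails and the client's own records are available as .eml files and can be matched on their Message-ID headers. It is not a description of the process used in the matter above; the directory names and matching logic are assumptions made for the example.

        # Hypothetical sketch: flag discrepancies between disclosed emails and a
        # party's own records. File locations, fields and matching strategy are
        # assumptions for illustration; a real disclosure review involves far more.
        import difflib
        from email import policy
        from email.parser import BytesParser
        from pathlib import Path

        def load_email(path):
            """Parse an .eml file into (message_id, date, plain-text body)."""
            with path.open("rb") as f:
                msg = BytesParser(policy=policy.default).parse(f)
            body = msg.get_body(preferencelist=("plain",))
            text = body.get_content() if body else ""
            return msg.get("Message-ID"), msg.get("Date"), text

        def cross_check(disclosed_dir, records_dir):
            """Match disclosed emails to originals by Message-ID and report differences."""
            originals = {}
            for p in Path(records_dir).glob("*.eml"):
                mid, date, text = load_email(p)
                originals[mid] = (date, text)
            for p in Path(disclosed_dir).glob("*.eml"):
                mid, date, text = load_email(p)
                if mid not in originals:
                    print(f"{p.name}: no matching original found")
                    continue
                orig_date, orig_text = originals[mid]
                if date != orig_date:
                    print(f"{p.name}: date mismatch ({date} vs {orig_date})")
                if text.strip() != orig_text.strip():
                    print(f"{p.name}: wording differs from the original")
                    diff = difflib.unified_diff(
                        orig_text.splitlines(), text.splitlines(), lineterm=""
                    )
                    for line in list(diff)[:10]:  # show the first few differing lines
                        print("   ", line)

        cross_check("disclosed_emails", "client_records")

    Even a basic comparison of this kind can surface the sorts of discrepancies described above, such as altered dates or inserted wording, which can then be investigated properly.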

    What can parties do?

    In the current environment, parties should consider taking proactive steps to safeguard against AI-related authenticity issues:

    • Warning signs: Be alert to indications that evidence may have been manipulated, for example where a document lacks its original metadata, or where you are provided with an image or screenshot of a document rather than the original file. If the other side cannot produce the original, ask why. If you suspect that evidence has been doctored, consider using the CPR 32.19 procedure, which allows a party to serve notice requiring the other side to prove the authenticity of a document at trial. You may also want to include an allegation of forgery or fabrication in your statement of case.
    • Expert evidence controls: Ask experts to seek your permission before using any AI tools in preparing their reports, and require them to maintain logs of how AI has been used. This promotes transparency and helps avoid situations like the Matter of Weber case, where the expert's inability to recall AI prompts undermined his evidence.
    • Cross-examination: If you suspect that a witness statement is not in the witness's own words, targeted questions in cross-examination about whether the witness used AI in preparing their evidence, and if so, how it was used, may clarify the issue.

    Anticipating misuse: CJC consultation

    The Civil Justice Council (CJC) has published a consultation on the use of AI in preparing court documents. In doing so, it recognises that whilst AI has "enormous potential to be used for social good", these benefits "do not come without significant risk" to the integrity of the litigation process. A central concern is the authenticity of factual and expert evidence.

    The existing rules require witness statements to be, so far as practicable, in the witness's "own words". The CJC recognises that generative AI poses a threat to this requirement. Its consultation proposes that legal representatives should be required to declare that AI has not been used "for the purposes of generating [the] content" of a trial witness statement, "including by way of altering, embellishing, strengthening, diluting or rephrasing the witness's evidence".

    The CJC's proposals extend to expert evidence, where similar authenticity concerns arise. The consultation proposes that experts should be required to "identify and explain any use of AI" in preparing their reports, other than for administrative purposes such as transcription. Where experts have delegated calculations or technical exercises to AI, parties will need to understand what level of supervision and verification was applied. As Matter of Weber illustrates, an expert's inability to explain how AI was used can render their evidence unreliable.

    Conclusion

    The UAB v Oneta case serves as a reminder of how inappropriate use of AI and digital technologies can undermine a case. Practitioners should be ready to spot the warning signs that evidence has been manipulated and take steps to safeguard the integrity of the litigation process. With the CJC's consultation signalling potential regulatory change, now is the time for parties to embed robust practices for managing AI-related risks in their cases.

    Other author: James Hughes, Trainee Solicitor


    1. Administrative Court of Rennes, 28 January 2026, 2506364

    2. Superior Court of California, County of Alameda, 9 September 2025, 23CV028772

    3. Matter of Weber, 2024 NY Slip Op 24258

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.