
Litigation Trending: Judge in Alabama uses AI to assist in determining the meaning of a contractual term  


    What is the ordinary meaning of a word? Ask a large language model.

    On 28 May 2024, Judge Kevin Newsom of the U.S. Court of Appeals for the Eleventh Circuit handed down a concurring opinion which has gained a lot of attention (Snell v. United Specialty Insurance Company, 11th U.S. Circuit Court of Appeals, No. 22-12581). As far as we are aware, it is the first time on record that a judge in the United States has relied on a Large Language Model (LLM) to determine the "ordinary meaning" of a contractual term.

    Although unprecedented (and with the potential to appear gimmicky), Judge Newsom's use of the technology available to him was careful and reasoned. His opinion gives a helpful indication of how AI might be used effectively in cases involving the interpretation of contracts.

    While some lawyers and judges may be unwilling to trust any aspect of their practice to AI, others are developing their confidence in the use of this technology as part of their day-to-day practice.

    What did Judge Newsom want to know?

    The case concerned the interpretation of an insurance contract: was the claimant covered by his insurer for the claims made, or not? Judge Newsom noted that the "ordinary meaning rule" is the most fundamental semantic rule of interpretation. In that context, he sought to determine whether the installation of an in-ground trampoline falls within the ordinary meaning of "landscaping".

    According to his opinion, Judge Newsom asked ChatGPT 'What is the ordinary meaning of "landscaping"?'. The response was: '"Landscaping" refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.' Seeking further clarification, the Judge asked whether the specific act of installing an in-ground trampoline was "landscaping", to which ChatGPT responded 'yes' and then provided further commentary. The process was repeated on Google Bard, which provided a very similar response.

    Why did Judge Newsom use AI?

    Judge Newsom explained that, initially, he had carried out more traditional research to define the term using dictionaries but was left feeling 'frustrated and stuck'. The Judge explained this 'ultimately led [him]—initially half-jokingly, later more seriously—to wonder whether ChatGPT and other AI-powered large language models (“LLMs”) might provide a helping hand'.

    Judge Newsom highlighted the following advantages and disadvantages of relying on LLMs to define the ordinary meaning of terms.


    Advantages

    • LLMs are trained on ordinary-language inputs: data intended to reflect the language people use in their everyday lives.
    • LLMs can “understand” context, to some extent. Judge Newsom acknowledged that while they are not infallible, they are able to recognise how words are used in context and detect language patterns.
    • LLMs are accessible and can provide individuals and companies with an inexpensive research tool (for example, for the purposes of drafting contracts).
    • LLM research is relatively transparent: the questions asked and the answers given can be recorded and audited, which may enhance the transparency and reliability of the interpretive exercise itself.
    • LLMs hold advantages over other empirical interpretive methods such as dictionaries, which may not be reflective of an ordinary citizen's understanding of a word.


    Disadvantages

    • LLMs can "hallucinate". It is no secret that LLMs can produce answers which seem plausible but are factually incorrect.
    • LLMs do not capture offline speech and may replicate biases inherent in their training data. If that data is not fully inclusive, there is no way of knowing how well the model reflects an "ordinary" person's usage.
    • Lawyers, judges, and would-be litigants might try to manipulate LLMs. The prompt given to an LLM determines its answer and, as anyone who has used AI-based tools will know, small differences in prompts can lead to markedly different responses. There is a risk that individuals may master the skill of prompting to produce answers which work to their advantage.

    Could an English judge use AI in this way?

    Many English judges may feel uncomfortable following Judge Newsom's novel approach to contractual interpretation. However, others may feel more inclined to "give it a go", particularly given the recent comments of senior judiciary who have advocated for the adoption of AI by the courts. Last year, we reported on Sir Geoffrey Vos' lecture to commercial litigators in which he discussed embracing the use of AI to resolve disputes. Court of Appeal judge Lord Justice Birss has praised the "real potential" of LLMs and has spoken about his own use of AI to draft parts of a judgment.


    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.
