At the outset of any investment transaction, it is important to identify the core assets that support the business, particularly in the context of AI. In the investment market, the term 'AI' is used much more broadly, and often more liberally, than the definition used in this guide. Investors should take time to understand, assess and allocate the AI-associated risks before moving ahead with the transaction.
Investors should conduct a thorough due diligence exercise into the target and its use of AI, with any resulting report informing the appropriate provisions to include in the transaction documentation.
General Commercial
As part of any due diligence process, an investor should look at the market in which the target operates and at its competitors. This section deals with the general commercial questions an investor should ask as part of its due diligence process.
In the context of an AI transaction, investors should ask at least the following:
- How is AI being used by the target and what form does it take?
- Is the target using an AI model that can be easily explained or is it using a 'black box' solution?
- Are third parties involved in the provision of the AI system (e.g. open-source frameworks such as TensorFlow, Torch or PyTorch)? If so, what elements are used, and what are the hosting, maintenance and pricing positions and the underlying terms of use?
- Where did the target obtain its data and how was that data cleaned?
- Who owns the IP in the AI output?
- Assess the target's regulatory compliance, including compliance with data protection legislation.
- What AI model provides the backbone of the AI system, and what is the rationale for using this model over others? Was the model developed in-house by the target or by a third party? Investors will want to understand the capability of the hardware or cloud infrastructure on which the AI system runs, and how the system is supported and maintained.
- Consider reviewing the target’s insurance coverage.
- Consider reviewing the target’s internal governance documentation in relation to:
- accountability for AI related issues;
- transparency in relation to the AI system; and
- ethical considerations made by the target in relation to its AI.
- Consider reviewing whether the target has embedded AI considerations into its organisational structure. At a high level, the target's organisational structure should incorporate the following roles and responsibilities:
- Product managers (giving assurances to senior management and compliance teams that AI systems are developed and deployed in compliance with applicable laws).
- AI development team (responsible for collecting, procuring and analysing input data, selecting an appropriate AI model, and bringing in domain experts (such as doctors or scientists for medical use cases) as required by the context in which the AI system is being used).
- Implementers (humans who use AI systems to make evidence-based decisions), who should be adequately trained and cognisant of both the inherent benefits and the limitations of AI.
- Compliance and data protection officers (responsible for monitoring the ongoing compliance of the AI system with applicable laws).
- Senior management (with oversight of, and ultimate responsibility for, AI-assisted decisions).
- Check whether the target has policies and procedures in place that foster consistent and standardised AI-assisted decision-making.
- Request information on litigation, including, where appropriate, in the context of product liability.
- Review customer terms and conditions to determine how the target apportions liability through indemnities, liability caps and other terms.
Current at 20 November 2020