
From principles to practice: AI & Data Governance in the Boardroom


    Shifting from governance design to enterprise implementation

    Many organisations have moved quickly over the past year to develop AI principles, policies, and high-level governance frameworks. In 2026 the issue is no longer whether such frameworks exist, but whether they are implemented, operational, and defensible in practice.

    AI is increasingly embedded across core business functions, from customer engagement and decision support to risk management and internal operations. The "AI imperative" is heightened by stakeholder expectations and public hype alike: pressure from above to "do more with less" abounds, as does a strong belief in the transformative potential of the technology. Yet in many organisations governance remains fragmented: frameworks sit alongside discrete processes for IT procurement, data governance, cybersecurity controls, and third-party risk management, rather than being integrated across the organisation. For Boards, this creates a rapidly growing assurance gap, especially as adoption outpaces effective risk management. It also raises questions around accountability, oversight, and legal risk, none of which can be addressed through policies alone.

    Agentic AI as a governance stress test

    Agentic AI is sharpening these challenges. Unlike machine learning or generative AI systems, agentic tools are defined by autonomy: they can initiate actions, interact with other systems, and operate with limited human intervention. This places intense pressure on governance arrangements that are already unsteady, and exposes weaknesses in controls around delegation of authority, auditability, explainability, and record-keeping.

    Where governance has not been embedded into enterprise processes, agentic AI amplifies existing risks relating to data protection, cyber security, misuse of confidential information, digital law compliance, and reputational harm. Look at it this way: agentic AI does not create entirely new risks; rather, it makes pre-existing governance failures more consequential.

    Regulation as a signal, not a safety net

    Recent regulatory developments reinforce this point. The European Union’s digital omnibus proposals and the delays to AI Act implementation reflect recalibration rather than deregulation. The United Kingdom continues to pursue a principles-based, sector-led approach that places responsibility firmly on organisations to demonstrate effective governance. Australia’s National AI Action Plan similarly emphasises capability, guardrails, and responsible deployment. Compared with the United States, where Executive Orders abound, policy positions shift rapidly, and regulatory frameworks diverge sharply at state and federal level, it is only natural for multinational Boards to be left scratching their heads.

    Compounding the issue is regulatory intersectionality. Even in jurisdictions outside the European Union, sector-, market-, and product-specific regulations still apply to AI systems. When data, products, and services flow across borders and regulation moves faster than roadmaps, Boards must find a way to see the wood for the trees.

    The direction of travel is clear: relying on regulatory clarity alone is not enough. Boards should be prepared to evidence robust internal governance, and strive to embed compliance across all aspects of their business.

    Data as a strategic asset and a governance challenge

    Data is the foundation for informed decision-making: it underpins AI and advanced analytics, demands nuanced (yet intuitive) data accessibility frameworks, and increasingly depends on participation in data-sharing ecosystems. New regulations such as the European Union Data Act place greater emphasis on strategic thinking about how to innovate through the smarter use of data, by unlocking data access and sharing rights, introducing cloud switching rules, and promoting the role of data intermediaries. For Boards, realising the value of data depends on strong governance across its management and use, privacy and data protection, cyber security and resilience, and the protection of trade secrets and intellectual property.

    AI governance and data governance are now inseparable, and together they should be a standing boardroom topic.

    AI literacy as a Board-level capability

    Against this backdrop, AI literacy has become a Board-level requirement. Boards, General Counsel, and risk leaders do not need to be technologists, but they do need to understand where AI is deployed, what risks it creates, how those risks are mitigated and documented, and how the governance around it operates in practice. This demands a deeper level of understanding, especially of how different teams and parts of the organisation use AI.


    Read about the other Board Priorities for 2026


    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.