Artificial intelligence (AI) has rapidly become one of the defining issues in corporate reporting. What was once the preserve of technology companies is now shaping business models, risk management frameworks and governance disclosures across many sectors.
Investors, regulators and stakeholders are increasingly asking not only how companies are deploying AI, but also how they are ensuring its responsible use. Against this backdrop, transparent and balanced reporting on AI is becoming an important marker of governance and resilience.
In this blog we look at how companies are addressing AI in their annual reports.
Current landscape: how AI is being reported today
A growing number of listed companies now reference AI in their annual reports, though the quality and depth of disclosure varies. Common approaches include: placing AI within strategic narratives; identifying it in risk registers; linking it to environmental, social or ethical issues in sustainability sections; and describing oversight within governance reports.
AI is increasingly presented in strategy sections as a driver of innovation and efficiency.
- United Utilities (p.4–5) discusses AI as a market trend in its market review, explaining how the company is harnessing the technology to improve operational performance.
- Experian (p.12–13) features Q&A pullouts in its business model section, highlighting AI as a key differentiator.
- Informa (p.24–25) includes a concise overview of the benefits of AI within its investment case, supported with data to illustrate impact.
Recommendation
Where AI is material, companies should explain its role in supporting strategic priorities and overall business objectives.
Risk disclosures often acknowledge AI as either an emerging or a principal risk. Some companies also describe mitigations or governance processes.
- Compass Group (p.28) identifies AI as a principal risk and notes the development of comprehensive policies and frameworks for responsible use of AI.
- National Grid (p.41) places AI within its emerging risks table, connecting it to strategy using a key and indicating time horizons with a bar chart.
- Ocado (p.79) includes a pull-out case study on how the company advanced its approach to AI by moving it from the emerging risks register to a dedicated AI risk register. The company describes how its AI Governance Group presents deep-dive discussions to the Risk Committee.
- Coca-Cola HBC (p.189) includes AI as an emerging risk and sets out the key drivers, consequences, indicators and adaptations in a table format. In its 2023 report (p.106) the company refers to both the benefits and the risks of AI, supported by policies, guidelines and training.
Recommendation
Clear positioning of AI within risk disclosures, alongside mitigation activities, helps to show how companies are addressing both operational and ethical considerations.
Some companies are linking AI to sustainability and ESG, highlighting both its potential and the need for safeguards.
- Ocado (p.49, 59) identifies AI ethics as a material issue through its materiality assessment. The company outlines commitments on the responsible use of AI and robotics, structured around principles of fairness, transparency, governance, robustness and impact, supported by oversight from a cross-functional AI Governance Group.
- United Utilities (p.70) discusses the use of AI within its “How we’re delivering our purpose: greener” section, highlighting how the technology helps to predict and address flooding and water pollution and to support water conservation. The disclosure presents AI as a tool for improving environmental performance and operational resilience.
Recommendation
Sustainability sections can highlight how AI is applied to material ESG topics, supported by ethical frameworks and reporting processes.
Governance disclosures increasingly note board involvement in overseeing AI, skills within board composition and training activities.
- Hays (p.111) includes AI as a key skill in its board skills matrix, highlighting the importance of relevant expertise for overseeing emerging technologies.
- Tesco (p.87) includes a data spotlight case study in its audit committee report, outlining the benefits of AI in automating processes.
- ICG (p.66) includes AI in its “Governance at a glance” section on how the board spends its time, detailing how the board assesses both the benefits and risks associated with AI as part of its wider oversight activities.
- United Utilities (p.110–111) includes commentary in the chair’s letter on the board’s oversight of cyber security and AI, reflecting how the board monitors emerging technological risks alongside its broader governance responsibilities.
- J Sainsbury’s (p.84–85) lists AI as a topic the board received training on, with more details on the specific training discussed in the board development section.
- Weir (p.84) highlights how AI was used to obtain feedback from employees based on factors such as gender, job type and location, allowing the company to identify trends across different employee groups as part of the board’s employee engagement.
Recommendation
Disclosures on how boards consider AI – through skills, training or specific activities – provide useful context on governance oversight.
Balanced reporting
Including a balanced view of AI in reporting is important to provide a realistic understanding of both its potential benefits and associated risks. Highlighting opportunities demonstrates how AI can support efficiency, innovation and value creation, while simultaneously acknowledging risks such as ethical considerations, cybersecurity, bias or operational impacts. Presenting AI consistently across different report sections and recognising both opportunities and risks supports more complete and credible reporting, reinforcing transparency and the company’s approach to responsible AI use.
Conclusion
AI is now referenced across multiple sections of annual reports, from strategy to governance. Disclosures vary in approach, but examples show how companies are beginning to set out the opportunities, risks and safeguards involved. Clear classification in risk sections, integration into strategy narratives, links to sustainability and evidence of board oversight are all becoming part of reporting practice.
If you would like to explore how to position AI more effectively in your own reporting – whether to understand emerging trends, benchmark sector practices, or identify practical steps for reflecting AI across your disclosures – speak to us. We can help you navigate this fast-evolving area with clarity and confidence.