What are the risks and benefits of AI in corporate reporting?
Daniel Redman





As a personal experiment, I didn’t ask ChatGPT to answer the above question until I had finished writing my own response. My goal was to see how differently (if at all) my measly human brain could address such a nuanced subject compared with the might of a natural language generation (NLG) tool. At the end of this article, I asked ChatGPT the same question, and you can see a summary of our differences.

How AI is being used in corporate reporting 

The opportunity for AI to impact all elements of corporate reporting is far too large to address in a single blog. So, I have decided to focus on the role of AI, namely an NLG tool like ChatGPT, in writing the strategic report of an annual report. From my conversations with clients, auditors and investors, it seems there are already many case studies where AI is being used in the assurance space for data analysis, audit reviews, fraud detection and trend identification.

In terms of the production of an annual report, we are seeing case studies where AI can roll forward prior-period numbers, send out notifications to section owners or source data. However, successful case studies of ways in which AI can improve the creation of an annual report’s narrative, which explains a company’s financial and sustainability performance as well as its future focus and strategy, are a little less clear.

In this blog I aim to look at how AI can impact annual reporting in three areas:

  1. The benefits 
  2. The risks 
  3. The annual report content 

How companies can benefit from AI when writing the strategic report

The most obvious and well-known way that AI can impact the writing of the strategic report is, of course, content creation. There is no doubt that ChatGPT can provide efficiencies when you are staring at a blank page with the goal of writing a section of an annual report; however, later in this blog I address the challenges and risks this poses. It is a benefit that needs to be weighed carefully against them.

Although there are risks in having AI write the whole report, there are benefits in how it can be used to shape and test the content of the report. AI can improve the effectiveness of corporate communications through sentiment analysis, helping companies anticipate how the report will be read, identify key themes and measure the impact of communications. All this knowledge allows corporates to tailor their messaging accordingly and proactively address concerns.
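To make the idea of sentiment analysis concrete, here is a deliberately simplified, lexicon-based sketch. Real-world tools use trained language models rather than word lists; the tiny positive/negative vocabularies below are invented purely for illustration, not drawn from any actual product.

```python
# Minimal lexicon-based sentiment scoring sketch.
# Illustrative only: production sentiment analysis uses trained models,
# and these word lists are invented for this example.

POSITIVE = {"growth", "strong", "resilient", "improved", "delivered"}
NEGATIVE = {"decline", "risk", "uncertain", "impairment", "weak"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: above 0 reads positive, below 0 negative."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Strong growth delivered despite uncertain markets"))  # 0.5
```

Even a toy scorer like this shows how a draft leadership statement could be screened for an unintentionally negative (or implausibly rosy) tone before publication; a real tool would do the same with far richer language understanding.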

One area that needs a blog of its own is the potential for AI to make greater strides in the integrated reporting challenge. AI may be able to finally provide an integrated solution for combining financial and non-financial management information, an area with which many organisations struggle, helping to truly drive integrated thinking.

The risks of AI in developing reporting

In the current climate, the consumption of annual reports by machine-reading software is growing year on year and may well become the core driver of report consumption in the future. Investors have been utilising AI support for analysis since long before ChatGPT existed. However, if we consider the future for a moment…

What one investor can do with AI, others can do too. Over the longer term, the commoditisation of AI within the investment analysis sector could erode any competitive advantage from using AI and/or alternative data analysis. This could mean that human engagement and tailored reporting become more important than ever. It is for this reason that reporting should stay authentic and content should be human led, so that it continues to stand out amongst the competition.

Businesses are complex, as is the data that flows from them. Making sure we fully trust that data involves a mix of using our judgement and applying scientific methods. Whilst AI will certainly play a role in both internal and external audit processes in the future, the overall complexity and uniqueness of each business requires a human + AI approach for the foreseeable future. In the same vein, the narrative that supports this data needs to appeal to humans and their judgement, and therefore should be a human + AI approach.

Another consideration is the risk of power centralisation when constructing stakeholder views and content. In a recent essay, accounting professors at the University of Auckland note that the use of AI-generated text could centralise power in the hands of those with technological expertise, while marginalising the perspectives and voices of other stakeholders. Power can manifest in decisions about which data to include or exclude, how metrics are prioritised and the overall shaping of the report’s narrative.

The same essay from the University of Auckland cites the danger of AI amplifying greenwashing. AI assistants such as ChatGPT are trained on data from multiple sources. Given the prevalence of greenwashing, this training data can include “greenwashed” sustainability information alongside more factually accurate accounts of performance. ChatGPT may therefore replicate “greenwashed” content and style if tasked with writing sections of sustainability reports.

Additionally, recent research comparing AI-generated and human reporting has shown that AI tends to make broadly positive claims and often fails to weigh the positives and negatives of performance, as leadership statements and annual reports are required to do.

Given that external reporting is a highly sensitive area, AI is likely to be adopted only once management are comfortable with it and trust the technology. Automated text-generation software cannot be held accountable for anything and does not care how stakeholders are likely to use the information it provides; this creates many risks for management, who may be liable for any issues AI-generated text causes.

Furthermore, the detection of AI-generated text is coming on in leaps and bounds, with many software solutions claiming to be able to “detect” AI use. If stakeholders can detect that a leadership statement, for example, was written by AI, this may erode trust.

I won’t profess to be a cybersecurity expert, but I do know that AI presents many ethical and privacy concerns.

At Design Portfolio we have long held the mantra that the biggest benefit of corporate reporting is the process. When we kick off a corporate reporting project, we often find that the team we are taking a brief from has not been in a room together all year. The annual report project provides unique opportunities to talk to and learn from one another, producing a narrative that is carefully considered and validated by leadership. It’s the only time management has to zoom out and consider what their business model would look like if they had to explain it, how their strategy will be presented and what has changed from a governance point of view. If this process were handed entirely to an AI tool, companies would lose that valuable opportunity to break down silos and work together.

How to leverage AI in your next annual or sustainability report

Taking a different angle on the impact of AI in narrative reporting, I want to look at how and where companies should be considering addressing AI in their annual reports. 

Board training 

In a similar way to how cybersecurity knowledge on the board has become a key focus for investors, companies should consider explaining any plans to train the board on AI or to have some level of AI expertise represented at leadership level.

Risk reporting 

The risk section of the report is meant to address both the opportunities and the risks that could lie ahead for the business, so there seems no better place to address AI. This year we have seen leading reporters treat AI as an emerging risk; however, as time goes by, we expect it to sit within principal risks, which are continuously managed.

Business model

Of course, AI can drive huge efficiencies within businesses and even alter business models. This area of reporting should be used to explain how this is happening, as well as how potential positive and negative impacts on stakeholder value creation are being managed.

The role of wider communication channels

In a world of AI-generated text, stakeholders will look for more authenticity. Corporates should begin looking at the annual report as a hybrid document which drives the reader to digital content that can be optimised to provide authentic messaging. This can include online case studies, videos and leadership interviews.

The experiment 

Unsurprisingly, when I asked ChatGPT the question in the title of this blog, it gave an overwhelmingly positive response. It provided ten bullet points that all broadly said that NLG capabilities can help convert complex financial and operational data into easily understandable and well-structured narratives. I actually agree, but I also hope that my response was slightly more nuanced. However, I will let you be the judge. 
