Opinion

RIAAZ MAHOMED: Internal audit’s important journey from AI hype to practical assurance

Organisations rush into AI without strategy, leaving governance gaps and rising audit challenges


Riaaz Mahomed

Before we rush to audit AI, we need to understand why organisations are adopting it, says the writer. (Zulfugar Karimov/Unsplash)

In almost every board meeting I attend these days, executives are asked what the company is doing about AI. There is rarely a comprehensive response grounded in a clear AI strategy, yet every organisation is feeling the pressure to stay ahead of the AI curve — and none of them want to be left behind.

As internal auditors, we have a responsibility to help our clients move beyond the hype and excitement and ask the harder questions about how AI initiatives actually link to strategy, what risks they introduce, and who remains accountable when things go wrong.

This presents a challenge and an opportunity. Organisations are looking to us for increased guidance on AI, whether as advisers on risks and controls or as assurance providers on processes that use AI. The good news is that we already possess the foundational skills for success, such as critical thinking, process mapping, risk assessment, control evaluation and an understanding of how organisational strategy links to technology. Now we need to adapt these skills to this new AI reality.

Of course, before we rush to audit AI, we need to understand why organisations are adopting it. Too often, I see clients jumping on the “typical” AI use cases, such as developing an agent or deploying a tool, simply to avoid being left behind. But when we ask how this AI journey links to organisational strategy, the answers are seldom clear.

To help businesses make this link, internal audit should be asking challenging questions: what strategic objectives do the AI efforts support? Are they trying to create efficiencies, reduce costs or improve customer experiences? And, critically, who is accountable when something goes wrong?

This last question is vital because accountability cannot be delegated to an algorithm. As humans, we retain responsibility for the decisions AI systems help us make — which means we must be able to explain the rationale behind those decisions and understand the risks.

AI introduces a range of risks that internal auditors need to understand and that we at SNG Grant Thornton are increasingly seeing evidenced. At the entity level, we often see a lack of clear ownership and incomplete AI governance frameworks. Many organisations have not developed AI policies or identified approved platforms for controlled use. This vacuum creates the very significant risk known as shadow AI.


Shadow AI occurs when employees use uncontrolled, unsanctioned AI tools — for instance, adding AI agents to meetings to take minutes, uploading sensitive data to public platforms, or using generative AI without considering privacy implications. This is often done without any consideration for where these recordings will be stored or the real danger of data exfiltration.

Data governance is equally critical. Incomplete, inaccurate, biased or out-of-date data will drive the wrong AI outputs, which is why internal auditors must assess data quality, validity, accuracy and completeness, particularly when AI agents connect to organisational data sources.

Privacy and compliance

Of course, cyber-risk cannot be overlooked. An unsecured AI agent performing critical activities becomes a target for threat actors. And privacy and regulatory compliance are paramount, especially where AI processes personally identifiable information. Then there’s ethical risk around transparency, explainability and bias. In the end, one of the worst possible outcomes a business can face is a CFO making a decision based on an AI recommendation that cannot be explained or justified.

Fortunately, regarding AI compliance, businesses don’t need to reinvent the wheel. Structures such as NIST’s AI risk management framework and the relevant ISO standards provide solid starting points for AI governance. These just need to be leveraged to develop clear AI policies, data governance standards around master data management, secure coding practices, robust access and change controls, and comprehensive testing processes for fairness, bias and model explainability.


For organisations using third-party AI platforms, vendor assurance is also critical. Comprehensive service provider reports are non-negotiable and must provide evidence that vendors have robust controls in place.

As compliance becomes embedded in AI agents rather than performed by people, internal audit has a responsibility to transform its own approach. Consider a procure-to-pay process run by an agentic AI system managing vendor files, purchase orders, three-way matching and workflow. When compliance is embedded in the agent rather than performed by people, we can no longer rely on sampling transactions and requesting audit evidence. Instead, we must audit the agent itself, looking closely at its design, architecture, governance and decision-making rationale.

This shift needs to happen quickly. We cannot audit what we do not understand ourselves. So, we need to invest the time, effort and resources to fully grasp AI in the business context, embrace the challenges and opportunities it presents, and find our sweet spot in helping our clients navigate their own AI journeys responsibly.

• Mahomed is director of business risk services at SNG Grant Thornton.
