The AIs have it: regulatory bodies weigh in on new technology

The audit profession is no stranger to the explosion of optimism and excitement around the possibilities afforded by AI. The barrage of new tools and ideas can feel overwhelming, however, and clarity on why and how we should be adopting new technology is often in short supply. Two recent publications – the FRC’s AI in audit paper and the ICAEW’s summary of its first AI assurance conference – offer timely insight into the potential future direction of the profession.
‘Black box’ issue
The concept of a ‘black box problem’ is not new to audit. Anybody who has worked on a complex estimate will be familiar with the detailed audit procedures which go into understanding, explaining and challenging the underlying process by which such estimates are made. Until now, though, that process has been operated principally by humans and, as such, engagement teams have been able to discuss it with experts or review assurance reports produced on their behalf. As AI-powered tools become more sophisticated, the processes they undertake can become increasingly opaque to those without specialist skillsets. The ability of AI-powered tools to undertake analyses and spot patterns at a level of subtlety unachievable by rules-based, human-driven processes is clearly an invaluable opportunity for auditors, but it equally presents a black box issue on a different level from anything we have seen previously.
Exercising scepticism
Much has been made in the industry of the opportunity for AI to alleviate pressure on both preparers and auditors by taking on mundane tasks such as AP/AR data entry, repetitive calculations and the audit of both, thereby freeing up time for both groups to work on more value-added, subjective areas. However, that same automation on both sides opens up another series of questions for accountants and auditors. As AI-powered tools begin to operate at a level of sophistication and subtlety which defies explanation by all but highly expert individuals, how will users be able to exercise scepticism over the outputs provided, understand when a product is not functioning correctly, or address possible independence issues where tools are potentially powered by the same underlying model? And how can regulators and reviewers gain assurance over the same?
Considering new papers
This is where the concepts of explainability and AI assurance come in, as set out by the FRC and ICAEW this week. The former’s paper is split into two parts: an illustrative example of a hypothetical firm designing and rolling out an AI-powered tool for use in journals testing, and a breakdown of regulatory expectations with regard to documentation of these tools and their output. The first element of the paper gives valuable insight to firms looking to undertake a similar process. With a bewildering array of tools available, it is understandable that practitioners may feel a key point – their utility and applicability to audit – is being overlooked. The FRC’s example is a useful walk-through of the key considerations in selecting and justifying the development of a tool, with a strong focus on human inputs: even with highly sophisticated tools, professional judgment will still be required around the suitability and quality of data available for analysis, and scepticism will still need to be applied to the results.

The second part of the paper emphasises the importance of explainability. Whilst AI is developing at a frenetic pace, anything audit firms do needs to remain grounded in the established auditing standards, so ensuring that documentation of work performed is consistent with the requirements of ISA (UK) 230, amongst others, is key. The FRC’s paper provides welcome clarity on its expectations in this regard.
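By way of illustration only, the sketch below shows one way a journals-testing tool of the kind described above might work in practice: an anomaly-scoring model shortlists unusual entries, and the engagement team reviews what it flags. The field names, features, model choice and threshold here are all assumptions made for the example, not details taken from the FRC’s paper – but they show where human judgment (which inputs to use, how sensitive to make the model) and scepticism over the output still sit.

```python
# Illustrative sketch only: anomaly scoring over journal-entry features,
# with flagged entries routed to an auditor for review. Field names,
# features and the contamination threshold are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical journal-entry extract: amount, posting hour, and whether
# the entry was posted manually or by an automated interface.
journals = pd.DataFrame({
    "entry_id":    [1, 2, 3, 4, 5],
    "amount":      [1200.00, 540.50, 98000.00, 310.75, 2250.00],
    "post_hour":   [10, 14, 23, 11, 9],       # hour of day posted
    "manual_flag": [0, 0, 1, 0, 0],           # 1 = manual journal
})

features = journals[["amount", "post_hour", "manual_flag"]]

# Unsupervised anomaly model; the contamination rate is a judgmental
# input -- exactly the kind of human decision that needs documenting
# and justifying on file.
model = IsolationForest(contamination=0.2, random_state=0).fit(features)
journals["anomaly"] = model.predict(features)  # -1 = flagged as unusual

# The tool only shortlists entries; the engagement team still applies
# professional scepticism to each flagged item before concluding.
for_review = journals[journals["anomaly"] == -1]
print(for_review[["entry_id", "amount", "post_hour"]])
```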
The latter point also segues into the concept, and likely future importance, of AI assurance. Auditors have long needed to gain assurance over third-party systems and providers. Whether it is an outsourced payroll provider, software used to manage a subledger, or an expert valuer or actuary, the process of gaining comfort over the reliability and accuracy of that system, process, service organisation or expert by performing testing or reviewing the work of another auditor is well established. The development of AI looks set to make this element of assurance more prominent, as what may be seen as more traditional audit work (casting accounts, reconciliations, manually filling out substantive testing schedules using prime documents supplied) becomes the domain of highly accurate automated tools. As such, the work of assurance providers may increasingly focus on understanding and reviewing the operation of the various models, the integrity and suitability of the data used to train them, and the specific instances employed within in-house tools – this summary of discussions at the ICAEW’s first AI assurance conference provides a useful jumping-off point on the topic.
Staying focussed
As technology develops, auditors (and other stakeholders) need to remain focussed on the underlying concepts of the profession. Regardless of the tools used, the requirement to provide assurance on financial information in a way that is compatible with the relevant standards will remain a constant. The pace of change makes it vital for those competing in the sector to adapt to and understand the new environment, whilst keeping a firm grasp of core principles.