Of more interest perhaps is the nature of the failings and what other auditors can learn from this. The sanctions outlined above were in respect of the 2016 year-end audit of Laura Ashley. Fortunately, there was no indication that there were any errors in these accounts, but the investigation found that there were severe problems with:
- Materiality.
- Testing of revenue.
- Going concern.
The level set for materiality was more than three times the level that the FRC concluded was appropriate. This happened because group materiality was calculated by taking an average of 5% of profit before tax (PBT) and 1.5% of revenue (later increased to 2%). However, using revenue as a benchmark for a profit-oriented entity, especially a high-volume, low-margin business, was not appropriate.
The calculations carried out produced an initial materiality of £3.5m (13.2% of estimated PBT), which was later increased to £4.3m (16.2% of estimated PBT) during the audit. This error was compounded by the key audit matters section of the auditor's report then incorrectly stating both the materiality figure and the way in which it had been calculated, describing it as a percentage of revenue.
Calculation of materiality can sometimes cause an issue in audits, especially where averages are used to arrive at the figure, or where the level is changed during the audit without clear documentation or justification. Note that taking an average of benchmarks such as turnover, profit before tax or assets does not produce an appropriate materiality figure. You need to consider which figure is the important one for the client and how the different levels interact. Here, for instance, the turnover figure is far too large a base for calculating materiality: errors falling below that materiality level could amount to as much as 16.2% of profit, which would clearly be material.
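The arithmetic behind this can be seen in a short sketch. The figures below are assumptions chosen purely to illustrate the mechanics (they are not taken from the firm's working papers, though they are picked so the averaged result lands near the £3.5m / 13.2% of PBT reported above):

```python
# Illustrative sketch: why averaging a revenue benchmark with a PBT
# benchmark inflates materiality for a high-volume, low-margin business.
# The PBT and revenue figures here are assumed for illustration only.

estimated_pbt = 26.5e6   # assumed estimated profit before tax
revenue = 380.0e6        # assumed revenue (low margin: PBT ~7% of revenue)

pbt_benchmark = 0.05 * estimated_pbt   # 5% of PBT, a common benchmark
revenue_benchmark = 0.015 * revenue    # 1.5% of revenue

# Averaging the two benchmarks lets the much larger revenue-based figure
# drag materiality far above any sensible PBT-based level.
averaged = (pbt_benchmark + revenue_benchmark) / 2

print(f"5% of PBT:       £{pbt_benchmark / 1e6:.2f}m")
print(f"1.5% of revenue: £{revenue_benchmark / 1e6:.2f}m")
print(f"Averaged figure: £{averaged / 1e6:.2f}m "
      f"({averaged / estimated_pbt:.1%} of PBT)")
```

With these assumed inputs the averaged figure comes out at roughly 13% of PBT, well above the 5% that the PBT benchmark alone would suggest, which mirrors the problem the FRC identified.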
The issue here was that computer assisted audit techniques (CAATs) were used by the firm with the aim of obtaining the necessary evidence of (primarily) completeness of income. Unfortunately, whilst the planned test would have gathered the required evidence, this is not the test that was performed. Instead of tracing sales from primary records, the CAATs checked two systems internal to the client: the inventory management system and the accounting system.
This highlights a couple of key issues. Firstly, testing revenue the “wrong way”, or checking it against another part of the same internal system, is an audit fault seen rather more often than anyone would like. Care must be taken in the planning, performing and reviewing of such work to ensure this type of mishap does not occur.
Here, as we have discussed, the planning was appropriate. However, the test was then carried out using CAATs, and it appeared that the individual(s) involved did not have sufficient experience and capability; the audit partner was therefore at fault for allowing this. The documentation did show how the test had been performed, and this should have enabled the partner to realise that the work had been done incorrectly. However, I wonder if there is a bit of “black-box” syndrome here, where the partner thinks it is all down to IT specialists and must be right, or that it is all too difficult to understand.
This raises an important point which increasingly has relevance for firms. If you are using CAATs as part of your audit and you need data specialists to run them, you must still ensure that there is appropriate oversight of the work and that the partner is able to understand the work done and conclude on whether or not the audit objectives have been achieved.
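The difference between the planned completeness test and the test actually run can be sketched in a few lines. Everything here is a hypothetical illustration (the record names and data are invented), not the firm's actual CAAT:

```python
# Sketch of directional testing for completeness of income.
# All record names and data below are hypothetical.

# Primary records of goods leaving the business (e.g. despatch notes),
# generated outside the accounting system:
despatch_notes = {"D001", "D002", "D003", "D004"}

# Sales recorded in the accounting system, keyed by despatch reference:
recorded_sales = {"D001", "D002", "D004"}

# The planned test, the "right way" for completeness: start from the
# primary records and trace each into recorded sales. Anything missing
# points to potentially unrecorded income.
unrecorded = despatch_notes - recorded_sales
print("Despatches with no recorded sale:", unrecorded)

# The flawed test effectively agreed the accounting system to another
# internal system instead. If both systems omit the same transaction,
# the omission is never detected and the test gives false comfort.
inventory_system = {"D001", "D002", "D004"}  # shares the same omission
mismatches = inventory_system ^ recorded_sales
print("System-to-system mismatches:", mismatches)  # empty set
```

The point of the sketch is that the second comparison passes cleanly even though a sale is missing, which is exactly why agreeing two internal systems to each other cannot evidence completeness.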
Much has already been said in recent months and years about the quality of audit in relation to going concern, but here is another example of weak work. Laura Ashley had both an overdraft (which was nearly at its limit) and cash balances, with a right of set-off in favour of the bank. The audit team had not considered the impact of this right, or looked at the heavy demands on cash that would arise as Christmas approached and stocking up was required.
In addition to this omission, the audit did not include an appropriate assessment of management's evaluation of the entity's ability to continue as a going concern, and there was no evidence that management were asked to produce any forecasts or other assessment as required by ISA 570. There was also a lack of consideration of the sensitivities in the monthly management forecasts that the auditors looked at in assessing going concern. These forecasts showed that if sales fell by more than 5.5% the business would make a loss. In the period being audited sales fell by 7%, and the auditor commented on the budget that it was “very ambitious”. More scepticism should have been applied in considering the implications of the over-ambitious forecasts.
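The sensitivity point is simple arithmetic that the audit team could have run against the forecasts. The 5.5% break-even sensitivity and the actual 7% fall come from the case; the cost structure below is an assumption chosen so that a 5.5% fall is exactly the break-even point:

```python
# Sketch of the sensitivity check that could have been applied to the
# management forecasts. The 5.5% break-even point and the 7% actual fall
# are from the case; the cost structure is a hypothetical illustration.

forecast_sales = 100.0     # sales indexed to 100 for illustration
contribution_margin = 0.5  # assumed contribution per £1 of sales
fixed_costs = 47.25        # = 0.5 * 94.5, so break-even at a 5.5% fall

def profit(sales_fall_pct):
    """Profit if sales come in sales_fall_pct percent below forecast."""
    sales = forecast_sales * (100 - sales_fall_pct) / 100
    return contribution_margin * sales - fixed_costs

print(f"Profit at forecast:  {profit(0.0):+.2f}")  # small surplus
print(f"Profit at 5.5% fall: {profit(5.5):+.2f}")  # break-even
print(f"Profit at 7% fall:   {profit(7.0):+.2f}")  # a loss
```

With so little headroom between forecast and break-even, and actual sales already 7% down, the forecasts plainly warranted challenge rather than a passing comment that they were “very ambitious”.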
This highlights the need to ensure that the audit of going concern is robust and sceptical, and considers the evidence that management have used to assure themselves that the business is a going concern. In a simple business you might use a conversation with those charged with governance to determine how a going concern assessment has been made. In bigger or more complex businesses, and certainly in a listed company, the auditor should expect and require that those charged with governance have assessed going concern using forecasts and other information that the auditor can in turn assess. The auditor should not just provide their own assessment of going concern, but audit the entity's assessment, ensuring that figures and assumptions are challenged where necessary.
There is much more detail in the decision itself, which expands upon the points I have mentioned above. However, it is worth all auditors being aware of these particular risks of audit failure and that where such failures are found cooperating fully with the regulator results in lighter penalties.