In relation to data analytics work performed as part of individual internal audit projects, how do you gain comfort over the work performed and the outputs from the DA work? How do you include an assessment of the quality of DA work in your IA function's QA program?

Senior Compliance Engineer, 2 years ago

It's important to check for completeness and correctness when assessing the quality of DA work. The source of the data, the scope of the data, and the outputs can also be verified through reconciliation.
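Such a reconciliation can be scripted so it is repeatable and evidenced in the workpapers. Below is a minimal sketch assuming the source extract and the DA output are tabular files; all file and column names are illustrative, not part of any prescribed method:

```python
import pandas as pd

# Illustrative file and column names -- substitute your own.
source = pd.read_csv("source_extract.csv")  # data pulled from the system of record
output = pd.read_csv("da_output.csv")       # result of the analytics work

# Completeness: every source record should be accounted for in the output.
assert len(output) == len(source), (
    f"Record count mismatch: source={len(source)}, output={len(output)}"
)

# Correctness: reconcile a control total (here, a monetary amount) to the source.
source_total = source["amount"].sum()
output_total = output["amount"].sum()
assert abs(source_total - output_total) < 0.01, (
    f"Control total mismatch: source={source_total}, output={output_total}"
)

print("Reconciliation checks passed.")
```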

Oracle ERP System Analyst / Accounting Manager in Retail, 2 years ago

I agree with James's comment and will add that if any of the analytics relies on assumptions open to interpretation to arrive at a conclusion, you will want to ensure the policy or procedure documents the approved or expected assumptions and is signed off by the appropriate individuals in your organization. If you don't already have a memo or an audit program documenting both the standard steps you followed (which only needs to be reviewed for changes periodically) and any assumptions made with the current data, create one. This will help ensure the peer reviewer doesn't miss something because they didn't realize an assumption the analyst made wasn't a hard fact.
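One lightweight way to keep those assumptions visible to a peer reviewer is to declare them explicitly at the top of the analytics script rather than burying them in formulas. The sketch below is hypothetical; every name, threshold, and date is illustrative:

```python
# Assumptions for this analysis, documented in one place so a peer reviewer
# can see at a glance what was judgment rather than hard fact.
# Every value below is hypothetical, not a prescribed standard.
ASSUMPTIONS = {
    "materiality_threshold": 10_000,  # transactions below this are out of scope
    "period_start": "2023-01-01",     # scope agreed in the audit planning memo
    "period_end": "2023-12-31",
    "fx_rate_source": "month-end corporate treasury rates",
}

def in_scope(amount: float, txn_date: str) -> bool:
    """Apply the documented scoping assumptions to a single transaction."""
    return (
        amount >= ASSUMPTIONS["materiality_threshold"]
        and ASSUMPTIONS["period_start"] <= txn_date <= ASSUMPTIONS["period_end"]
    )
```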

AVP, Internal Audit in Insurance (except health), 2 years ago

Even without knowing the specifics of the depth and type of DA work you're performing, I would consider the following questions:

1) Is the DA model or framework unique to this particular audit, or has the model been used before and you are simply adjusting the inputs?

If it's unique, more checking is needed to gain confidence in the quality of the DA work.

2) How complex is the DA work? Are you referencing multiple datasets or importing lots of variables into one spreadsheet/database? 

If, like most office professionals, you have a passing Excel/Access skillset but aren't an expert, and you wind up using formulas you've never been trained on or have to turn to YouTube to make them work, you're probably at the edge of reliability for DA work products. The more complex the input, the more likely you are to introduce errors, especially if you are unfamiliar or untrained in linking/combining data.
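When the linking is done in code rather than in Excel/Access, the join itself can be made to fail loudly instead of silently dropping or duplicating records. A sketch using pandas, with hypothetical dataset and column names:

```python
import pandas as pd

# Hypothetical datasets: a vendor master and a payments extract.
vendors = pd.read_csv("vendor_master.csv")  # expected: one row per vendor_id
payments = pd.read_csv("payments.csv")      # many payments per vendor

merged = payments.merge(
    vendors,
    on="vendor_id",
    how="left",
    validate="many_to_one",  # raises MergeError if vendor_id is duplicated in vendors
    indicator=True,          # adds a _merge column flagging unmatched rows
)

# Surface payments with no vendor master record instead of silently losing them.
unmatched = merged[merged["_merge"] == "left_only"]
if not unmatched.empty:
    print(f"{len(unmatched)} payments have no matching vendor record -- investigate.")
```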

Having trodden this ground myself in the past, I enlisted the help of people in the company I knew were much more experienced with linking and connecting datasets to review my work (peer review). I also sampled a lot of the output to build confidence (just as I would with an audit), using the experience from both to gain overall competency with DA processes.
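That output sampling can also be scripted so the selection is reproducible and can be retained in the workpapers. A minimal sketch; the file name, sample size, and seed are arbitrary illustrations:

```python
import pandas as pd

output = pd.read_csv("da_output.csv")  # illustrative file name

# Draw a reproducible random sample for manual re-performance.
# random_state fixes the seed so the reviewer can re-draw the same sample.
sample = output.sample(n=min(25, len(output)), random_state=42)
sample.to_csv("da_output_sample_for_review.csv", index=False)
print(f"Selected {len(sample)} of {len(output)} output rows for manual verification.")
```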

Recently, I used generative AI products (ChatGPT-type AI) to assess my work as well (the formulas and architecture of the DA model, not the data itself).

Ultimately, I think it's wise to get at least one more set of eyes on your framework/architecture and then heavily sample the output to gain confidence in the overall DA work product. I would include these results in the workpapers.

For me, the assessment would essentially be total confidence that the DA work product is complete and accurate. The work and documentation needed to achieve that confidence would vary with the complexity and uniqueness of the DA work at hand.

I hope this is helpful. Good luck!
