
The Augmented Auditor: How to Partner with AI Without Losing the Human Touch

  • Writer: Henry Hon
  • Dec 3, 2025
  • 4 min read

GRC professionals must adopt AI responsibly. Learn the Human-in-the-Loop golden rule to boost audit efficiency and upskill your risk and compliance team.



There is a joke circulating among auditors and risk professionals that you may have already heard: "The safest way to use AI today is to not use it at all." While this might make us laugh, it highlights a real hesitation in our industry. We are paid to be sceptical. Our job is to identify risks and ensure compliance, so handing work over to a computer program or AI model that "thinks" by predicting the most probable next "token" feels wrong to us.


This hesitation is understandable. We have all seen the negative news headlines. In Australia, for example, a Victorian solicitor was stripped of his ability to practise as a principal lawyer after he submitted court documents containing fake cases invented by AI. In the world of Governance, Risk, and Compliance (GRC), where accuracy is everything, errors like that are unacceptable.


However, viewing AI only as a threat is a strategic mistake. The risk is not just in using AI, but also in failing to adapt. We should not avoid this technology. Instead, we need a disciplined, human-led approach that uses AI to improve our capabilities.


The Golden Rule: The Human Remains the Gatekeeper


For GRC professionals, using GenAI must follow one main rule: AI is a tool for summary and suggestion, not judgment.


Algorithms do not take responsibility for mistakes. Humans do. Therefore, an auditor cannot blame the software if things go wrong. If AI suggests a control is working, and it turns out to be broken, the auditor faces the consequences. This means we need strong governance, specifically "Human-in-the-Loop" (HITL) workflows. We must treat AI outputs as a first draft, never the final result.
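To make the rule concrete, here is a minimal sketch, in Python, of what such a HITL gate might look like. The control IDs, statuses, and names are illustrative assumptions, not taken from any particular GRC tool; the point is simply that nothing leaves draft status without a named human making the call.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    control_id: str
    ai_draft: str                      # the AI's first-draft conclusion
    status: str = "DRAFT"              # stays DRAFT until a human decides
    reviewed_by: Optional[str] = None

def human_review(finding: Finding, reviewer: str, accept: bool) -> Finding:
    # The human, not the algorithm, takes responsibility for the outcome.
    finding.status = "APPROVED" if accept else "REJECTED"
    finding.reviewed_by = reviewer
    return finding

draft = Finding("AC-02", "Quarterly access reviews appear to be operating.")
final = human_review(draft, reviewer="Lead Auditor", accept=False)
print(final.control_id, final.status, "- reviewed by", final.reviewed_by)

However simple, a gate like this creates an audit trail of who accepted what, which is exactly the accountability an algorithm cannot provide.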


A Tale of Two Audits: The Narrative of Efficiency


To understand the real value here, let us walk through a typical internal control review. We can look at how it runs today, and how it could run with AI support.


In a traditional audit, the process often starts with a blank page. The auditor spends significant time manually drafting a Document Request List (DRL) to gather evidence, relying on their own memory or old templates to figure out what to ask for. Once the documents arrive, the "hard work" begins: the auditor reads through hundreds of pages of policy documents, searching for specific clauses about password complexity or vendor access. It is manual, slow, and tiring. Finally, when the team compiles the report, the lead auditor has to stitch together notes from different people, often leading to inconsistent writing styles that take hours to fix.


Now, imagine this same process is supported by a controlled AI environment.


In this AI-augmented scenario, the audit begins differently. The auditor prompts the system with the specific scope of the review. The AI instantly suggests a tailored Document Request List based on relevant baselines, industry standards (such as ISO or NIST), and historical data. The auditor reviews it, accepts the good suggestions, and sends it out in minutes rather than hours.
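As a rough illustration, the request might look like the sketch below. It assumes an OpenAI-compatible client running inside your organisation's controlled environment; the model name and scope text are placeholders, not recommendations.

from openai import OpenAI

# Assumes an OpenAI-compatible endpoint hosted inside the controlled
# environment; "gpt-4o" is a placeholder model name.
client = OpenAI()

scope = "Access management controls for third-party vendors, FY2025."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You draft audit Document Request Lists. "
                    "Base items on ISO 27001 and NIST 800-53, and "
                    "cite the control each requested item supports."},
        {"role": "user", "content": f"Draft a DRL for this scope: {scope}"},
    ],
)

# A first draft only: the auditor still reviews, prunes, and approves it.
print(response.choices[0].message.content)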


When the evidence arrives, the auditor uploads the policy documents into a secure, managed AI environment, ensuring no data leaks to unauthorised third parties. Instead of reading every page line by line, the auditor asks the AI tool: "Summarize the access control requirements for third-party vendors and provide references."


This is where the "Human-in-the-Loop" is critical. The AI provides the answer and links directly to the page where the information sits (a technique known as Retrieval-Augmented Generation, or RAG). The auditor clicks the link to verify the text is real. The heavy lifting of search is done by the machine, but the verification is done by the human.
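A stripped-down sketch of that loop follows. Retrieval here is naive keyword overlap purely for illustration (a real tool would use embeddings and a vector index), and the policy text and page numbers are invented; what matters is that every answer carries a page reference the auditor can open and verify.

# Minimal RAG-style verification loop: every retrieved passage keeps its
# source page, so the auditor can go back to the original and confirm it.
policy_pages = {
    12: "Third-party vendors must use MFA and unique named accounts.",
    13: "Vendor access is reviewed quarterly by the control owner.",
    47: "Passwords must be at least 14 characters.",
}

def retrieve(question: str, pages: dict, top_k: int = 2):
    # Score pages by keyword overlap with the question (illustration only).
    terms = set(question.lower().split())
    scored = sorted(
        pages.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

question = "access control requirements for third-party vendors"
for page, passage in retrieve(question, policy_pages):
    # The page number is the auditor's link back to the source document.
    print(f"p.{page}: {passage}")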


Finally, at the reporting stage, the AI acts as an editor. It scans the notes from the whole team, spots where Auditor A and Auditor B might have contradicted each other, and flags the discrepancy for review or further clarification with the auditee. It also helps smooth out the writing style, ensuring the final report sounds like one unified voice.
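In spirit, the consistency check is simple. Here is a minimal sketch with invented data: group each auditor's conclusion by control ID and flag any control where the team disagrees, so a human can resolve it.

from collections import defaultdict

# (control_id, auditor, conclusion) tuples pulled from the team's notes;
# the data below is invented for illustration.
notes = [
    ("AC-02", "Auditor A", "effective"),
    ("AC-02", "Auditor B", "ineffective"),
    ("CM-01", "Auditor A", "effective"),
]

by_control = defaultdict(set)
for control_id, auditor, conclusion in notes:
    by_control[control_id].add(conclusion)

for control_id, conclusions in by_control.items():
    if len(conclusions) > 1:
        # A human follows up with the auditors and the auditee.
        print(f"FLAG {control_id}: conflicting conclusions {sorted(conclusions)}")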


The Future of the Workforce


Perhaps the most sensitive topic in this change is the impact on people. There is a fear that if AI automates the basic tasks, like drafting lists and summarizing policies, business leaders will stop hiring junior staff. This would be a terrible mistake for our profession.


The leaders of today learned their skills by doing the heavy lifting twenty years ago. If we stop hiring juniors, we destroy our future pipeline of talent. Instead of hiring fewer people, we must change what they do.


We need to train junior professionals to become critical thinkers earlier in their careers. Instead of spending 80% of their time gathering data and 20% analysing it, AI allows them to flip that ratio. They should not just be checking boxes. They should be learning how to prompt the AI, how to verify its outputs, how to understand the business logic behind the controls, and how to communicate with and manage human stakeholders effectively.


Conclusion


Generative AI in GRC is not about replacing the human. It is about freeing the subject matter experts to do what humans do best: exercise judgment, understand context, and build trust.


We should not fear the technology, but we must partner with it responsibly. By using controlled AI environments, maintaining human oversight, and focusing on upskilling our teams, we can turn a potential risk into our greatest advantage.


Figure 1: The infographic is generated by NotebookLM powered by Google


ISC2 Sydney Chapter President

CISSP, CISA, CDPSE, TAISE, CCZT, CCSK, OSCP, CREST CRT


