AAP Roundtable: Strategies for Risk Assessments, Developing Findings, and Follow-up
By Susie Geiger and the AAP Subcommittee
On January 14, 2026, the Auditing and Accounting Principles (AAP) Subcommittee of the Association of College and University Auditors (ACUA) hosted a roundtable to help institutions strengthen conformance with Domain V of the Institute of Internal Auditors' updated Global Internal Audit Standards. The session focused on three standards central to effective engagement execution: Standard 13.2 on engagement risk assessments, Standard 14.2 on analyses and developing potential findings, and Standard 15.2 on confirming the implementation of recommendations or action plans. Nearly 50 ACUA members participated by sharing challenges, comparing practices, and identifying strategies to improve consistency, efficiency, and quality across their audit functions.
This highly interactive roundtable was facilitated by AAP Committee members Hollie Andrus, Patty Davidson, Jennifer Dent, Erin Egan, John McDaniel, and Agnessa Vartanova. After reviewing the requirements of each standard, participants met in breakout rooms to discuss how institutions conduct and document risk assessments, analyze information to identify findings, and perform follow-up activities. These discussions are summarized below.
Standard 13.2 – Engagement Risk Assessments
Standard 13.2 requires internal auditors to understand the activity under review, assess relevant risks, evaluate governance and compliance processes, and identify the significance of risks, including fraud risks. Institutions reported that meeting these expectations consistently remains a significant challenge. In the breakout rooms, participants were asked to discuss the question “How does your institution conduct and document your engagement risk assessments?”
Challenges in Risk Assessment
Several audit shops struggle with training auditors to identify and analyze risks in a consistent manner across diverse engagements. Limited subject‑matter expertise, particularly in IT and fraud, further complicates risk identification. Many offices also lack tools or data analytics capabilities to support more sophisticated assessments.
Organizational dynamics add another layer of difficulty. Risk tolerance varies widely across campus units, and leadership turnover can disrupt expectations. Smaller audit shops, in particular, often lack a centralized risk management function to help define institutional risk tolerance and appetite. Communication barriers also arise when auditors and clients use terminology differently, leading to misunderstandings that hinder risk identification.
Strategies for Improving Risk Identification and Evaluation
Despite these challenges, participants shared a range of practical strategies for effectively identifying and evaluating risks. Many offices have adopted standard templates to document risk assessments and utilize inventories of common risks, including ones specifically related to fraud. Others conduct team brainstorming sessions at the start of each engagement to determine potential risks.
Audit functions are also drawing on diverse information sources such as prior audit reports, peer institution audits, policies, strategic plans, regulatory guidance, and ACUA resources such as the Risk Dictionary. Some offices incorporate frameworks like the Association of Certified Fraud Examiners' (ACFE) Occupational Fraud Framework to strengthen evaluation of fraud and other types of risks. Some shops work directly with their institution's risk management office to understand the institution's risk tolerance and appetite. A few institutions have hired a Certified Fraud Examiner or are encouraging existing auditors to pursue the designation with the support of the office.
To improve consistency, several participants described developing scoring systems for prioritizing risks and creating standard lists of client questions to guide kickoff meetings. Others emphasized the value of pre‑engagement research and of asking clients open‑ended questions such as "What can go wrong?" to uncover contextual risks. Another suggestion was to provide audit clients with a confidential method for communicating their concerns to internal audit.
Standard 14.2 – Analyses and Potential Engagement Findings
Standard 14.2 requires auditors to analyze relevant, reliable, and sufficient information to develop potential findings and to evaluate identified differences between the evaluation criteria and the existing state (condition) to determine which are reportable findings. In the breakout rooms, members shared several challenges they have faced in developing testing observations into reportable findings and worked together to brainstorm strategies for improving conformance with this standard. Participants were asked to discuss the questions "How does your institution analyze information to develop potential findings?" and "How much testing is needed for assurance engagements to develop an issue?"
Challenges in Testing and Analysis
Higher education institutions often have multiple decentralized systems that do not interface, leading to inconsistent datasets that complicate testing. Participants also identified inconsistent testing methodologies, a lack of standard templates, and difficulty balancing over‑ and under‑documentation as common concerns that can undermine quality assurance.
Some offices reported experimenting with AI tools but expressed uncertainty about evaluating the reliability of AI‑generated results. Others noted that some clients may not fully understand internal controls, making it harder to validate observations or explain findings. Several participants also said that their offices have trouble performing root cause analysis, especially for newer auditors who may be tempted to rely on assumptions rather than structured analysis.
Strategies for Enhancing Finding Development
Participants shared several approaches to improve testing quality and consistency. Many offices use verification between data sources to validate accuracy. Others have adopted standard testing templates with required fields but built‑in flexibility to accommodate diverse audit areas. Some shops have created specialized procedures and templates for conducting and documenting the testing of regularly reviewed processes like travel expenses and procurement card transactions.
Training plays a central role, and many offices are emphasizing documentation expectations during onboarding and ongoing professional development. Audit management software, such as TeamMate, helps some teams link risks, controls, and testing more effectively. To strengthen root cause analysis, participants recommended techniques such as the “five whys” and incorporating client input. AI tools may also support testing when procedures are carefully designed and validated.
Determining the Extent of Testing
Participants also discussed how much testing is needed to develop a finding. Resource constraints and inconsistent access to data prevent audit functions from effectively employing population-level analysis at a significant scale, leading many offices to rely on judgmental sampling. Some auditors also struggle to move beyond inquiry and neglect to corroborate client statements with evidence. Several participants said that their offices do not have procedures for assessing, prioritizing, and ranking identified observations.
To address these issues, offices are developing standard definitions of “relevant, reliable, and sufficient” information aligned with Standard 14.1. Others use external frameworks (AICPA, FASB) to guide finding criteria. Risk decision matrices help some teams evaluate materiality and determine whether an observation warrants verbal communication or a formal finding. Some shops are making efforts to transition from testing small samples to population-level testing when feasible. Auditors are also being trained that inquiry is often not sufficient to confirm or dismiss a finding; they should corroborate management statements with observation, examination, or reperformance.
Standard 15.2 – Confirming the Implementation of Recommendations or Action Plans
Standard 15.2 requires internal audit functions to confirm whether management has implemented action plans and, when it has not, to follow the CAE's established guidelines for management acceptance of risk. In the breakout rooms, participants were asked "How does your institution monitor and perform follow-up activities?" and shared the following related challenges and strategies.
Challenges in Follow‑Up
Follow‑up processes are often less structured than planning or testing phases. Many offices rely heavily on e-mail and phone communication and lack standardized tools for tracking status updates. Some participants felt the follow‑up process is treated like an afterthought and receives much less attention and standardization than the planning and testing phases.
Delays in management action plan completion are common, and some clients do not provide explanations or updated timelines. Auditors also struggle to balance accountability with maintaining positive client relationships. Determining what constitutes sufficient verification, especially when deciding between retesting and reviewing client-provided evidence, remains a challenge.
Strategies for Effective Follow‑Up
Participants highlighted several practices that improve follow‑up effectiveness, including transitioning from “trust but don’t verify” cultures to more evidence-based follow-up procedures. Some offices include follow‑up activities directly in the audit plan to signal their importance to leadership and audit committees. Many have also developed standard templates for documenting action plan status updates.
Regular follow‑up cadences, such as every 90-120 days, help maintain momentum. Some offices conduct interim check‑ins rather than waiting for due dates, which has improved implementation rates. Prioritization methodologies also help identify high‑risk findings that require closer monitoring or escalation.
Clear communication is essential. Some offices have the client determine the specific action plan to mitigate each finding, subject to internal audit approval, rather than internal audit prescribing action plans for processes it may not fully understand. During reporting, auditors define what "implemented" will look like for each action plan and explain how non‑responsiveness may be escalated. Some offices require written justifications for non‑implementation or use standard forms for documenting risk acceptance. Others report overdue action plans to leadership using dashboards and visualizations, and a few require clients to present their rationale for non‑implementation directly to the audit committee.
Conclusion
This roundtable generated discussion that was fruitful and varied, allowing participants to find comfort in shared struggles while also coming together to share creative ideas for solutions. Participants reported in a post-event survey that they found the roundtable valuable and would be interested in attending future roundtables focused on the Standards, as well as other topics.
The AAP Subcommittee will host another roundtable focused on conformance with Standards 13.2, 14.2, and 15.2, this time through the lens of conducting advisory work. This event is tentatively scheduled for Wednesday, April 15, 2026, and invitations to register for the session will be e-mailed soon.

