Internal control failures during the admissions process increase the risk of misreporting admissions data.
Admissions Statistics
This is the most important measure of an institution’s selectivity. The admissions rate is the number of admitted students divided by the number of students who applied. This seems simple enough; however, when the goal is to make the denominator (i.e., the applicants) as large as possible and the numerator (i.e., the admitted) as small as possible, there is an opportunity to manipulate both populations. For example, how does your institution define an “applicant”? Completed and actionable applications only, or everything received, including started but incomplete applications? The latter approach makes the denominator significantly larger. How does your institution define “accepted”? Do you waitlist students (counting only those who matriculate) or push admits to the spring semester (excluding them from the fall census)? Either approach makes the numerator smaller. Combined, these approaches can increase both an institution’s apparent selectivity and its ranking.
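The arithmetic above can be sketched in a few lines. This is a hypothetical illustration with invented figures; the population names and counts are assumptions, not data from any institution.

```python
# Hypothetical illustration: the same applicant pool yields very
# different admission rates depending on how "applicant" and
# "admitted" are defined. All figures are invented.

def admissions_rate(admitted: int, applicants: int) -> float:
    """Admissions rate = admitted / applicants."""
    return admitted / applicants

complete_applications = 10_000   # completed, actionable applications
incomplete_applications = 3_000  # started but never finished
fall_admits = 2_000              # admits counted in the fall census
spring_deferrals = 500           # admits pushed to the spring term

# Conservative definitions: completed applications only, all admits counted.
strict = admissions_rate(fall_admits + spring_deferrals,
                         complete_applications)

# Aggressive definitions: every started application inflates the
# denominator, and spring deferrals are dropped from the numerator.
aggressive = admissions_rate(fall_admits,
                             complete_applications + incomplete_applications)

print(f"strict:     {strict:.1%}")      # 25.0%
print(f"aggressive: {aggressive:.1%}")  # 15.4%
```

The underlying applicant pool never changed; only the definitions did, yet the institution appears nearly ten percentage points more selective.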
Test Scores
Test scores are another measure of selectivity. As scores increase, rankings increase; however, which scores do you report? Reporting scores from all students may reduce the average. Are test scores optional for admissions? When scores are optional, the students who submit them are typically those with strong scores, so reporting on this self-selected population may well inflate average scores. What about excluding scores for international students, developmental cases, or deferred admissions? This approach eliminates a population of potentially low scores, inflating the average score.
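The effect of excluding sub-populations is easy to see with a small worked example. The scores and group labels below are invented for illustration only.

```python
# Hypothetical illustration: excluding lower-scoring sub-populations
# inflates the reported average. All scores are invented.

from statistics import mean

scores = {
    "domestic":      [1400, 1350, 1480, 1290],
    "international": [1150, 1200],
    "developmental": [1050],
    "deferred":      [1100, 1180],
}

all_scores = [s for group in scores.values() for s in group]
reported = scores["domestic"]  # the excluded groups never enter the average

print(f"true average:     {mean(all_scores):.0f}")  # 1244
print(f"reported average: {mean(reported):.0f}")    # 1380
```

Dropping five of nine scores moves the reported average up by more than 130 points without a single student scoring differently.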
Graduation Rates
This is another area of focus for rankings agencies. The graduation rate is calculated by dividing the number of graduates by the size of the initial cohort. While the number of graduates is difficult to change, there is more subjectivity around what constitutes the initial cohort. Is it the number of students who show up on day one, those who stay a week, or those who stay the full semester? The further into the semester you push the measurement date, the smaller the cohort becomes, and the more likely it is that the remaining group will go on to graduate, strengthening your graduation rate and increasing your ranking.
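A minimal sketch of the cohort-date effect, again with invented figures:

```python
# Hypothetical illustration: measuring the initial cohort later in the
# semester shrinks the denominator and raises the graduation rate.
# All counts are invented.

graduates = 800

cohort_by_census_date = {
    "day one":         1_100,
    "end of week 1":   1_050,
    "end of semester":   950,
}

for census, cohort in cohort_by_census_date.items():
    print(f"{census:>15}: {graduates / cohort:.1%}")
# day one: 72.7%, end of week 1: 76.2%, end of semester: 84.2%
```

The same 800 graduates produce a rate more than eleven points higher simply by counting the cohort at the end of the semester instead of on day one.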
Challenges in Data Reporting and Approaches for Evaluating Controls
The rationalization, pressure, and opportunity exist to misreport data intentionally, but the consequences are severe. Even well-intentioned institutions that misreported and self-disclosed errors have suffered negative consequences. These include financial, legal, and reputational damage; lawsuits from students; investigations by federal and state agencies and attorneys general; fines; probation by accrediting agencies; and removal from published rankings.
Putting fraud aside, institutional data reporting is a challenging endeavor in its own right. Reports and surveys are often prepared in a decentralized environment, with little or no formal quality control review by senior administrators or data owners (e.g., deans, program directors). Additionally, the following challenges exist:
- Inconsistent roles and responsibilities for those involved in reporting
- Lack of knowledgeable personnel dedicated to reporting tasks
- Inconsistent attention to maintaining accurate data
- Varying interpretations/definitions of survey questions and changing of definitions year-over-year by rankings agencies
- Informal or undefined processes for extracting and compiling data
- Inconsistent time periods used to extract data (e.g., university census, academic year, calendar year, end of semester)
- Different systems or queries used to extract data
- Lack of formal documentation retention policies, resulting in missing or incomplete documentation to support survey responses
Internal auditors have an opportunity to serve as strategic partners in helping their institutions evaluate their processes and controls related to external data reporting, and to prevent the negative consequences other institutions have faced. You can start by performing a risk analysis or developing an audit or advisory plan designed to:
- Evaluate organizational and reporting structure for institutional data reporting. Are surveys and reports prepared from a single centralized location or within many offices across the institution? Is there a central institutional research (IR) office that supports and reviews surveys before they are issued externally?
- Determine mission, authority, and oversight of a central IR office. If a central IR office exists, what specifically is it preparing, analyzing, or reviewing? Has the institution’s senior leadership established the requirement or expectation that the IR must review and approve all materials prior to distribution?
- Gain an understanding of the volume or scale of material reported externally, centrally, and at the school/college/department level. Is there an inventory of all surveys and reports submitted externally? Does the institution know everything going out the door, including who is preparing and reviewing items? Who makes the decision about whether or not the institution will respond to optional surveys? Lastly, is there a process to update the inventory on an annual basis?
- Evaluate whether IR staffing is scalable to meet survey volume and review demands. If there is an expectation that IR reviews all materials before they are submitted externally, does it have enough staff to meet demands effectively? Does staffing allow for appropriate segregation of duties (i.e., the individual preparing a survey is not the same person reviewing it and performing a quality control check)?
- Evaluate whether schools and colleges have appropriate and skilled personnel preparing surveys and reports. If the schools and colleges are responsible for the preparation and reporting of surveys, do they have dedicated and knowledgeable staff to support the function and prepare high quality, accurate materials?
- Determine whether policies and processes are documented around authority, submission, data extraction, and review of materials. Does the institution have formalized policies and procedures around institutional data reporting? Specifically, are roles and responsibilities defined? Are there processes and tools for data extraction? What processes exist for review and approval prior to external submissions?
- Assess whether the institution maintains a data dictionary of commonly used interpretations/definitions of key terms, questions asked, or system queries. Has the institution developed common understandings and interpretations to achieve consistency in responses? Has the institution also defined the timeframe for extracting data (e.g., census)?
- Identify where to maintain completed surveys and relevant extracted data. Has the institution determined where, and for how long, completed surveys and supporting documentation will be stored?
- Identify key systems involved in maintaining data, as well as how to enter and extract data from the systems. Does the institution use queries or reports to access data (e.g., Cognos), or does it access an ERP or data warehouse directly? Are there any data security issues to consider?
- Test a sample of data elements to determine whether key controls are working effectively. If processes are defined, and surveys are supported with underlying documentation, the process should be repeatable and an auditor should be able to reach the same answers.
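The last step, re-performance testing, can be sketched in code. This is a minimal, hypothetical example: the census date, the record layout, and the reported figure are all assumptions invented for illustration, not any institution's actual data or system.

```python
# Minimal sketch of a re-performance test: given the documented
# definition (here, enrollment as of an assumed census date) and the
# underlying records, an auditor should be able to reproduce the
# reported figure. All records and figures are invented.

from datetime import date

CENSUS_DATE = date(2023, 9, 15)  # assumed documented census date

enrollment_records = [
    {"student": "A", "enrolled_on": date(2023, 8, 28), "withdrawn_on": None},
    {"student": "B", "enrolled_on": date(2023, 8, 30), "withdrawn_on": date(2023, 9, 10)},
    {"student": "C", "enrolled_on": date(2023, 9, 14), "withdrawn_on": None},
]

def enrolled_at_census(rec) -> bool:
    """Apply the documented definition: enrolled on or before census,
    and not withdrawn on or before census."""
    started = rec["enrolled_on"] <= CENSUS_DATE
    still_there = rec["withdrawn_on"] is None or rec["withdrawn_on"] > CENSUS_DATE
    return started and still_there

recomputed = sum(enrolled_at_census(r) for r in enrollment_records)
reported = 2  # hypothetical figure taken from the submitted survey

assert recomputed == reported, "survey figure could not be re-performed"
```

If the definitions and extraction process are documented, the recomputation matches the submission; if the auditor cannot reach the same answer from the retained documentation, that gap is itself a finding.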
Conclusion
Institutional data reporting is challenging, but necessary. As the stakes get higher, the incentive for fraud grows, and with it the risk of financial, legal, and reputational damage. Evaluating internal controls and increasing accountability and transparency around institutional data reporting should therefore be a priority for senior leadership. Internal Audit can serve as a trusted advisor and strategic partner in this effort by evaluating institutional data reporting processes and controls and by identifying issues and areas of concern before they cause reputational harm.
Note: Katlyn Andrews, Manager; Kyra Castano, Consultant; and Jennifer Romano, Senior Consultant, of Baker Tilly’s Philadelphia Risk, Internal Audit, and Cybersecurity practice contributed to this article.
Chris Garrity
Chris Garrity, CPA, CIA, CFE, is the Chief Audit Executive at Saint Joseph’s University in Philadelphia, Pennsylvania. Chris has over 20 years of combined experience in the not-for-profit and healthcare industries. His experience includes performing and supervising numerous fraud investigations as well as financial, operational, and compliance audits. Chris has also provided advisory and consulting services to audit committees, boards, counsel, and management on business issues such as endowments, enterprise risk management, process reengineering, and ERP implementations.
Climbing the Ranks: Preventing Fraud and Misreporting in Institutional Data