Climbing the Ranks: Preventing Fraud and Misreporting in Institutional Data

November 9, 2019


The recent Varsity Blues college admissions scandal was shocking, to say the least. We have long known that legacy and donations influence admission decisions at highly ranked institutions. However, this scandal revealed just how easily families with financial means can circumvent admissions processes and controls. Even more disturbing is how long this unethical and immoral behavior persisted and the number of students it affected. In an era in which the value of college degrees is up for debate,[1] the reputational damage from the scandal is widespread, and not just for the named institutions.

One area related to this scandal that deserves additional attention is the impact of fraud in the admissions process on the collection and reporting of institutional data. Internal control failures during the admissions process increase the risk of misreporting admissions data (as the underlying data may be flawed), and admissions data is a significant component of the institutional data reported to governmental, accreditation, and rankings agencies.
 
As internal auditors, we have an opportunity to serve as strategic partners and evaluate the internal control environment over institutional data reporting. This article defines institutional data reporting, highlights how data may be misreported, and explains what we, as internal auditors, can do to evaluate our institution’s processes and controls covering institutional data reporting.

What is Institutional Data Reporting?

Institutional data reporting is the “collection, analysis, interpretation, and communication of data, and the strategic use of information for effective decision making and planning.”[2] Institutions report such data to a number of entities, which use it in various ways. These include federal agencies (e.g., the National Center for Education Statistics, through the Integrated Postsecondary Education Data System), accrediting bodies (e.g., the Middle States Commission on Higher Education), rankings agencies (e.g., U.S. News & World Report), bond rating agencies (e.g., Moody’s), external stakeholders (e.g., prospective students, parents, and alumni), and internal stakeholders (e.g., administration, trustees, and students).

Institutions are required to report data externally for a number of reasons, including:
  • Federal mandates (e.g., Title IV of the Higher Education Act of 1965, which covers the administration of federal student financial aid programs)
  • To maintain accreditation
  • To maintain visibility and recruit students
  • To obtain financing for capital projects
Institutions also use reported data internally for strategic planning and decision-making; for example, it can influence a decision to open a new school, college, or academic program.

Why and How Fraud and Misreporting Occur

In recent years, a number of institutions have intentionally misreported institutional data, particularly to rankings agencies. The misreported data included:
  • Inflated grade point averages and test scores for both undergraduate (e.g., SAT) and graduate (e.g., GMAT, LSAT) students
  • Overstated class percentiles
  • Over-reported six-year graduation rates
  • Understated enrollment spending
Why would an institution intentionally misreport this data to a rankings agency? To increase both interest in the institution and applications. A Harvard Business School study found that improving an institution’s rank by just one spot in the U.S. News & World Report college rankings[3] led to a one percent increase in applications for admission. One percent can be the difference between meeting and missing an admissions target. An increase in demand may also affect tuition pricing: in-demand institutions face less tuition price sensitivity from current and prospective students.

Now that we understand the why, we can move on to the how. The preparation of rankings surveys and other externally reported data is often decentralized, with little or no oversight (or accountability) from institutional leadership or a central institutional research office. There is often no formal review process to evaluate the quality of the reported data. In addition, institutions apply broad interpretations to rankings questions and to the way data is gathered and analyzed, particularly in the absence of explicit directions from the surveyor. As a result, misreporting can occur in the following ways:

Admissions Statistics

This is the most important measure of an institution’s selectivity. The admissions rate is the number of admitted students divided by the number of students who applied. This seems simple enough; however, when the goal is to make the denominator (i.e., the applicants) as large as possible and the numerator (i.e., the admitted students) as small as possible, there is an opportunity to manipulate both populations. For example, how does your institution define an “applicant”? Completed and actionable applications only, or everything received, including started but incomplete applications? The latter approach makes the denominator significantly larger. How does your institution define “accepted”? Do you waitlist applicants (counting only matriculating individuals) or push admits to the spring semester (excluding them from the fall census)? Either approach makes the numerator smaller. Combined, these choices may increase both an institution’s selectivity and its ranking.
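To make the arithmetic concrete, here is a minimal sketch (in Python, with invented figures) of how these definitional choices move the reported rate:

```python
# Hypothetical illustration of how definitional choices move the reported
# admissions rate. All figures are invented for demonstration.

complete_applications = 10_000    # completed, actionable applications
incomplete_applications = 2_000   # started but never finished
fall_admits = 3_000               # admitted for the fall term
spring_deferred_admits = 400      # admits deferred to spring (not in fall census)

# Conservative definitions: completed applications only, all admits counted.
strict_rate = (fall_admits + spring_deferred_admits) / complete_applications

# Aggressive definitions: everything received counts as an application,
# and spring-deferred admits are excluded from the numerator.
aggressive_rate = fall_admits / (complete_applications + incomplete_applications)

print(f"Strict definitions:     {strict_rate:.1%}")      # 34.0%
print(f"Aggressive definitions: {aggressive_rate:.1%}")  # 25.0%
```

Same underlying class, two defensible-sounding definitions, and a nine-point swing in reported selectivity.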

Test Scores

Test scores are another measure of selectivity. As scores increase, rankings increase; however, which scores do you report? Reporting scores from all students may reduce the average. Are test scores optional for admissions? When scores are optional, students with strong scores are typically the ones who submit them, so reporting only on this self-selected population may well inflate the average. What about excluding scores for international students, developmental cases, or deferred admissions? This approach eliminates a population of potentially low scores, inflating the average score.
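A short sketch, again with invented numbers, shows how much such exclusions can move a reported average:

```python
# Hypothetical illustration: excluding low-scoring subpopulations inflates
# the reported average test score. All figures are invented.
from statistics import mean

domestic_scores = [1350] * 800       # 800 domestic admits averaging 1350
international_scores = [1150] * 150  # 150 international admits averaging 1150
deferred_scores = [1100] * 50        # 50 deferred admits averaging 1100

all_scores = domestic_scores + international_scores + deferred_scores

print(f"All admits reported: {mean(all_scores):.0f}")       # ~1308
print(f"With exclusions:     {mean(domestic_scores):.0f}")  # 1350
```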

Graduation Rates

This is another area of focus for rankings agencies. The graduation rate is calculated by dividing the number of graduates by the initial cohort. While the number of graduates is difficult to change, there is more subjectivity around what constitutes the initial cohort. Is it the number of students who show up on day one, those who stay a week, or those who stay the full semester? The further you push the census date into the semester, the smaller the cohort, and the more likely its remaining members are to graduate, making your graduation rate stronger and increasing your ranking.
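The same dynamic in miniature (hypothetical figures):

```python
# Hypothetical illustration: the later the cohort census date, the smaller
# the initial cohort and the higher the resulting graduation rate.
graduates = 820  # members of the entering class who graduate within six years

cohort_day_one = 1_000        # everyone enrolled on the first day of classes
cohort_end_of_semester = 930  # only those still enrolled at the census date

print(f"Day-one cohort:         {graduates / cohort_day_one:.1%}")         # 82.0%
print(f"End-of-semester cohort: {graduates / cohort_end_of_semester:.1%}") # ~88.2%
```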

Challenges in Data Reporting and Approaches for Evaluating Controls

The rationalization, pressure, and opportunity to misreport data intentionally all exist, but the consequences are severe. Even well-intentioned institutions that misreported and self-disclosed errors have suffered negative consequences, including financial, legal, and reputational damage; lawsuits from students; investigations by federal and state agencies and attorneys general; fines; probation by accrediting agencies; and removal from published rankings.
 
Putting fraud aside, institutional data reporting is a challenging endeavor in its own right. As previously noted, reports and surveys are often prepared in a decentralized environment, with little or no formal quality control review by senior administrators or data owners (e.g., deans, program directors). Additionally, the following challenges exist:
  • Inconsistent roles and responsibilities for those involved in reporting
  • Lack of knowledgeable personnel dedicated to reporting tasks
  • Inconsistent attention to maintaining accurate data
  • Varying interpretations/definitions of survey questions and changing of definitions year-over-year by rankings agencies
  • Informal or undefined processes for extracting and compiling data
  • Inconsistent time periods used to extract data (e.g., university census, academic year, calendar year, end of semester)
  • Different systems or queries used to extract data
  • Lack of formal documentation retention policies, resulting in missing or incomplete documentation to support survey responses

Internal auditors have an opportunity to serve as a strategic partner in helping their institutions evaluate their processes and controls related to external data reporting, and help to prevent the negative consequences faced by other institutions. You can start by performing a risk analysis, or developing an audit or advisory plan designed to:
  • Evaluate organizational and reporting structure for institutional data reporting. Are surveys and reports prepared from a single centralized location or within many offices across the institution? Is there a central institutional research (IR) office that supports and reviews surveys before they are issued externally?
  • Determine the mission, authority, and oversight of a central IR office. If a central IR office exists, what specifically is it preparing, analyzing, or reviewing? Has the institution’s senior leadership established the requirement or expectation that the IR office must review and approve all materials prior to distribution?
  • Gain an understanding of the volume or scale of material reported externally, centrally, and at the school/college/department level. Is there an inventory of all surveys and reports submitted externally? Does the institution know everything going out the door, including who is preparing and reviewing items? Who makes the decision about whether or not the institution will respond to optional surveys? Lastly, is there a process to update the inventory on an annual basis?
  • Evaluate whether IR staffing is scalable to meet survey volume and review demands. If there is an expectation that IR reviews all materials before they are submitted externally, does it have enough staff to meet demands effectively? Does staffing allow for appropriate segregation of duties (i.e., the individual preparing a survey is not the same person reviewing it and performing a quality control check)?
  • Evaluate whether schools and colleges have appropriate and skilled personnel preparing surveys and reports. If the schools and colleges are responsible for the preparation and reporting of surveys, do they have dedicated and knowledgeable staff to support the function and prepare high quality, accurate materials?
  • Determine whether policies and processes are documented around authority, submission, data extraction, and review of materials. Does the institution have formalized policies and procedures around institutional data reporting? Specifically, are roles and responsibilities defined? Are there processes and tools for data extraction? What processes exist for review and approval prior to external submissions?
  • Assess whether the institution maintains a data dictionary of commonly used interpretations/definitions of key terms, questions asked, or system queries. Has the institution developed common understandings and interpretations to achieve consistency in responses? Has the institution also defined the timeframe for extracting data (e.g., census)?
  • Identify where to maintain completed surveys and relevant extracted data. Has the institution determined where, and for how long, completed surveys and supporting documentation will be stored?
  • Identify key systems involved in maintaining data, as well as how to enter and extract data from the systems. Does the institution use queries or reports to access data (e.g., Cognos), or does it access an ERP or data warehouse directly? Are there any data security issues to consider?
  • Test a sample of data elements to determine whether key controls are working effectively. If processes are defined and surveys are supported with underlying documentation, the process should be repeatable, and an auditor should be able to reach the same answers; a minimal re-performance sketch follows this list.
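As an illustration of that last step, here is a minimal re-performance sketch in Python. It assumes a hypothetical flat-file export (applicants.csv) with application_status and decision columns; your institution’s systems, field names, and documented definitions will differ.

```python
# Minimal re-performance sketch: recompute a reported admissions rate from
# underlying records and compare it to the externally submitted figure.
# Assumes a hypothetical export, applicants.csv, with "application_status"
# and "decision" columns; adapt to your institution's actual systems.
import csv

REPORTED_RATE = 0.25  # figure the institution reported to the rankings survey

applied = admitted = 0
with open("applicants.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Apply the institution's documented definition of an "applicant."
        if row["application_status"] == "complete":
            applied += 1
            if row["decision"] == "admit":
                admitted += 1

recomputed = admitted / applied
print(f"Recomputed: {recomputed:.1%} | Reported: {REPORTED_RATE:.1%}")
if abs(recomputed - REPORTED_RATE) > 0.005:  # small tolerance for rounding
    print("Variance exceeds tolerance; investigate definitions and extraction.")
```

If the recomputed figure does not tie to the reported one, the variance itself is a finding: either the documented definition was not followed, or the extraction was not repeatable.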

Conclusion

Institutional data reporting is challenging, but necessary. As the stakes get higher, so do the temptation to commit fraud and the risk of financial, legal, and reputational damage. Thus, evaluating internal controls and increasing accountability and transparency related to institutional data reporting should be a priority for senior leadership. Internal Audit can serve as a trusted advisor and strategic partner in this effort by evaluating institutional data reporting processes and controls and identifying issues and areas of concern, which should help to prevent reputational harm.

Note: Katlyn Andrews (Manager), Kyra Castano (Consultant), and Jennifer Romano (Senior Consultant) of Baker Tilly’s Philadelphia Risk, Internal Audit, and Cybersecurity practice contributed to this article.
 
[1] Mitchell, J., and Belkin, D. “Americans Losing Faith in College Degrees, Poll Finds.” The Wall Street Journal, September 7, 2017. https://www.wsj.com/articles/americans-losing-faith-in-college-degrees-poll-finds-1504776601?ns=prod/accounts-wsj.
[3] Luca, Michael, and Jonathan Smith. “Salience in Quality Disclosure: Evidence from the U.S. News College Rankings.” Journal of Economics & Management Strategy 22, no. 1 (Spring 2013): 58–77.

About the Authors


Adrienne Larmett

Adrienne Larmett, MBA, CRA, is a senior manager within Baker Tilly’s Risk, Internal Audit, and Cybersecurity practice in Philadelphia, PA. She has over 15 years of higher education industry and professional services experience. She has performed engagements at 29 higher education institutions in internal audit, research compliance, business process improvement, and enterprise risk management (ERM). She currently provides outsourced and co-sourced internal audit support and risk advisory services to seven institutions in the Philadelphia tri-state region. She may be reached at adrienne.larmett@bakertilly.com.



Chris Garrity

Chris Garrity, CPA, CIA, CFE, is the Chief Audit Executive at Saint Joseph’s University in Philadelphia, Pennsylvania. Chris has over 20 years of combined experience in the not-for-profit and healthcare industries. His experience includes performing and supervising numerous fraud investigations as well as financial, operational, and compliance audits. Chris has also provided advisory and consulting services to audit committees, boards, counsel, and management on business issues such as endowments, enterprise risk management, process reengineering, and ERP implementations.