Risk Management

Reaching a Consensus on Management Review Controls

Management review controls are often subjective, complex to analyze, and harder to audit than other kinds of financial information.
John Fogarty, February 25, 2016

Requirements from the Public Company Accounting Oversight Board are causing auditors to require a level of precision and specificity for management review controls beyond that of prior years. Auditors are also reviewing far more documentation than they used to. At the same time, there is a lack of clarity on what exactly is sufficient in management review controls and how precise they need to be. This is troubling, since MRCs are crucial to the financial reporting process.

Management review controls (MRCs) are the reviews conducted by management of estimates and other kinds of financial information for reasonableness. They require significant judgment, knowledge, and experience. These reviews typically involve comparing recorded amounts with expectations of the reviewers based on their knowledge and experience. The reviewer’s knowledge is, in part, based on history and, in part, may depend upon examining reports and underlying documents. MRCs are an essential aspect of effective internal control.

Examples of MRCs include:

  • Any review of analyses involving an estimate or judgment (examples: estimating a litigation reserve or estimating the percentage of completion for long-term construction projects);
  • Reviews of financial results for components of a group;
  • Comparisons of budget to actual; and
  • Reviews of impairment analyses.
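As a minimal illustration of the budget-to-actual comparison listed above, a reviewer's first pass can be sketched in code. The account names, dollar amounts, and thresholds below are hypothetical, chosen purely for illustration; in practice, management would set thresholds based on materiality and risk.

```python
# Illustrative sketch of a budget-to-actual comparison (hypothetical
# accounts and thresholds; real MRC thresholds are set by management
# based on materiality and risk assessment).

def flag_variances(budget, actual, pct_threshold=0.05, dollar_threshold=10_000):
    """Return accounts whose budget-to-actual variance exceeds BOTH a
    percentage and an absolute-dollar threshold, for reviewer follow-up."""
    flagged = {}
    for account, budgeted in budget.items():
        recorded = actual.get(account, 0.0)
        variance = recorded - budgeted
        pct = abs(variance) / abs(budgeted) if budgeted else float("inf")
        if abs(variance) >= dollar_threshold and pct >= pct_threshold:
            flagged[account] = variance
    return flagged

budget = {"Revenue": 1_200_000, "COGS": 700_000, "SG&A": 250_000}
actual = {"Revenue": 1_150_000, "COGS": 708_000, "SG&A": 290_000}

print(flag_variances(budget, actual))
```

A pass like this only narrows the reviewer's attention; the judgment about whether a flagged variance is reasonable, which is the substance of the MRC, cannot be automated.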

MRCs are significantly different from other kinds of controls, however. In particular, MRCs have a higher-level focus than transaction controls, as they examine aggregated results rather than individual transactions. Also, MRCs are inherently subjective. Unlike transaction controls, which are “yes/no,” MRCs typically involve some level of subjectivity and uncertainty (i.e., shades of grey, not black and white). In addition, they require knowledgeable and experienced reviewers who have an understanding of the business at a level of detail that enables them to identify issues for follow-up. What’s more, MRC reviewers often rely on data from other sources, not data they personally create or have direct control over.

Since MRCs are often subjective, they can be complex to analyze and harder to audit than other kinds of financial information that are more objective and concrete. However, despite their inherent challenges, MRCs are absolutely essential to the financial reporting process and usually cannot be replaced by other types of controls.

Management reviews provide a unique and crucial perspective and reality check that transcends the details of normal transaction controls (i.e., seeing the forest, not just the trees). No amount of detailed scrutiny can ever replace the valuable big-picture insights gleaned from a higher-level management review. Nor can automated or transactional reviews ever take the place of detailed management reviews of estimates or of the results of components.

Effective MRCs

Because MRCs are inherently fuzzier than other kinds of controls, companies must make an extra effort to nail down and effectively describe whatever aspects of the process they can. Factors that companies should contemplate when designing such controls include:

  • Clear accountability for performance. It should be crystal clear who is responsible for performing the management review, and what they are expected to do with the results.
  • Appropriate precision. The level of review precision needs to satisfy materiality criteria and be clearly defined and documented. At many companies today, the level of precision has not been formally determined. Just because a review sometimes catches errors at a particular dollar level does not mean it will always catch such errors.
  • Performance based on a complete and accurate set of facts. People who perform management reviews often must rely on data they did not personally prepare and do not have direct control over. As such, they need a robust way to determine if the data being used is reasonably accurate and complete. A risk assessment can help determine where to focus the controls based on potential risk and impact.
  • Performers with adequate knowledge and experience. Management reviews, by their very nature, require a broad perspective and a relatively high degree of expertise and insight about the business area being reviewed. Many reviewers are confident (perhaps even overconfident) about their ability to spot problems simply because of their extensive experience conducting such reviews. However, a truly robust management review requires deep knowledge of the business area being reviewed, not just deep experience performing the reviews. Experience alone can produce reviews that are superficial or that flag only numbers clearly off-plan, while missing hidden problems in areas that on the surface appear to be on-plan.
  • Clear process for performance. Many management reviews are not formally defined or documented, relying instead on ad hoc processes, gut feel, and the reviewers’ personal experiences. A review process that is not well defined and documented is almost certainly not being applied accurately and consistently across the enterprise — making it hard to determine whether it is being performed effectively.

Improvement Areas

Because of their importance, MRCs need to be as effective as possible. Yet they are often difficult for both internal and external parties to evaluate, since management reviews are inherently subjective and many MRCs lack the key characteristics noted above. Some common real-world problems include:

  • Inadequate definitions. The review process, accountability, and required level of precision have not been adequately defined and documented.
  • Questionable data quality. There are significant issues with the quality and reliability of supporting data.
  • Under-qualified reviewers. Reviewers do not have enough knowledge and experience to make informed judgment calls about the business area they are reviewing.
  • Imprecise reviews. Some reviews are performed at a highly summarized level by senior officers in the organization; while important to do, such reviews likely are not conducted at a level of detail sufficient to identify potential issues. In these situations, the reviewer's knowledge may depend heavily on information provided by reports or other people rather than first-hand knowledge.
  • Personal bias. Bias is a factor in any subjective review and can never be entirely avoided; however, having a diverse team of reviewers with different perspectives and personal motivations can help reduce bias and make the review more thorough and reliable. A common example of bias is overemphasis on confirming information and underemphasis on disconfirming information. In other words, a reviewer can tend to see what the reviewer wants to see.

How Much Is Enough?

Because of their highly qualitative nature, there is a great deal of controversy over what MRCs should look like. Some businesses are concerned about the level of supporting detail that auditors and PCAOB inspectors now require when evaluating MRCs. There are widely varying opinions about the appropriate level of detail at which a review must be conducted, and about the need for testing the accuracy and completeness of data used in the reviews. However, we believe that a few carefully crafted examples could really help clarify things for both companies and their auditors.

Risk assessment is one of the keys to designing effective controls (management’s responsibility) and conducting an effective audit (the outside auditor’s responsibility). Clearly, areas with a high risk of material misstatement need strong internal controls, which often means MRCs with higher precision. But the converse may also be true — when the risk of material misstatement is lower, then a lower level of precision in an MRC might be sufficient. The same principle applies when considering the accuracy and completeness of the data used in a review. This is another area where it would be very helpful to have generally accepted examples of how risk assessment can be applied to the design and auditing of controls.

As examples of both strong and weak management review controls emerge and are documented, leading practices will become widely accepted and will help establish consistency and reliability. The following scenarios are examples to aid in the development and documentation of management review controls.

The Strong and the Weak

To illustrate how improved documentation can provide a more detailed understanding of a management review control, the following examples contrast typical summary control documentation with improved documentation.

Sample of a control overview

Typical description:

  • The CFO reviews the impairment analysis for appropriateness. Monthly, the controller prepares an undiscounted cash flow analysis, which is then reviewed and approved by the CFO. The CFO reviews the various schedules and signs off on the control package.

Problems with the typical description:

  • Insufficient control description (does not describe what the CFO does); unnecessary process description.
  • Inconsistent references to the inputs (e.g., impairment analysis, undiscounted cash flow analysis, schedules, control package).
  • Lack of cross-references to where the information used in the control has been appropriately addressed.

Improved description:

  • Inputs: Undiscounted Cash Flow Analysis (UCFA), including supporting schedules.
  • Specific monthly review activities: CFO (1) discusses the current and forecasted business environment with the CEO, the COO, and the vice president of operations; (2) reviews each of the assumptions and support within the UCFA with a particular focus on the weighting assigned to each outcome; (3) challenges any assumptions or weights that may have a significant impact on the conclusion.
  • Outputs: Any questions are sent to the controller to be addressed and resolved to the satisfaction of the CFO, at which point the CFO signs off on the UCFA.

Sample description of the control purpose

Typical description:

  • The management review control is a “flux analysis” (e.g., an analysis of the change in account balances from month to month or year to year). The review only focuses on the items with variances; therefore, significant items without variances are not considered. The review is high-level and only checks for “reasonableness” (i.e., similar to providing negative assurance). The review does not consider all accounts or information necessary to appropriately detect a misstatement.

Problems with the typical description:

  • Describing a review control only as a flux analysis without sufficient explanation as to why it focuses only on variances is insufficient to address the identified risk of material misstatement.

Improved description:

  • The review addresses all relevant accounts or information — not just those with variances.
  • The review is sufficient to enable a positive assertion by the reviewer that the subject of the review (e.g., the account) is not materially misstated.
  • The review of financial data considers multiple data points (e.g., trend lines, variances, and KPIs), such that it is likely that a misstatement would be detected.
  • The review is performed at a sufficiently detailed level to detect errors that in the aggregate could be significant.

Sample description of reviewer competence

Typical description:

  • The description of the competence of the reviewer addresses the reviewer’s education, certification, and tenure.

Problems with the typical description:

  • It assumes the competence of the reviewer is “implied” due to his or her position and experience within the entity.

Improved description:

  • In addition to considering the reviewer’s education, certification, and tenure, the description of the competence of the reviewer addresses the reviewer’s: (1) knowledge of the specific subject matter and the activities he or she is involved in to maintain and update that knowledge, and (2) basis for being able to develop an independent expectation (similar to substantive analytical procedures), which would then allow him or her to be able to identify an error in the financial information being reviewed.
  • The description also considers and documents observations from prior interactions with the reviewer with respect to the subject matter.

Sample description of threshold or criteria for investigation

Typical description:

  • The review threshold (1) is defined as the greater of $X or Y% of any financial line item, which results in a threshold that is not sufficiently precise, or (2) is not stated at all (i.e., the threshold for investigating items/differences is not defined and thus lacks sufficient basis to conclude on the precision).

Problems with the typical description:

  • Failing to evaluate whether established criteria for investigation exist.
  • Failing to evaluate whether the criteria for investigation are sufficiently precise.

Improved description:

  • The review applies explicit thresholds that are sufficiently precise for the intended purpose.
  • The review regularly identifies items to further investigate based on the reviewer’s expectation of what the balance/account should be, which provides further evidence of the criteria for investigation.
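The "greater of $X or Y%" formulation criticized in the typical description above can be made concrete with a short sketch. The values chosen for X and Y here are hypothetical placeholders; the point is to show how a percentage-based threshold scales up with the size of the line item, which is why such thresholds are often judged insufficiently precise.

```python
# Hypothetical illustration of a "greater of $X or Y%" review threshold.
# X (the dollar floor) and Y (the percentage) are placeholder values;
# the article's point is that this form can lose precision on large balances.

def review_threshold(line_item_amount, floor_dollars=50_000, pct=0.03):
    """Items or differences below this amount would not be investigated."""
    return max(floor_dollars, pct * abs(line_item_amount))

for amount in (500_000, 5_000_000, 50_000_000):
    print(f"line item ${amount:>11,}: threshold ${review_threshold(amount):>12,.0f}")
```

On a $50 million line item, this threshold lets differences of up to $1.5 million go uninvestigated, which illustrates why auditors ask whether the criteria for investigation are precise enough relative to materiality.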

Going Forward

On the surface, there appear to be differing views among the various MRC stakeholders — companies, auditors, and regulators — about what constitutes “enough” with respect to MRCs. However, we believe much of this disagreement is being driven by perception, not reality, and that the positions of all parties are actually much closer than they realize. By establishing a constructive dialogue between companies, auditors, and regulators to clarify and define “how much is enough” — with a focus on creating illustrative examples that bring the definitions to life — we believe it is entirely possible to reach a reasonable and practical consensus that works for everyone.

John Fogarty is a partner in the professional practice network of Deloitte & Touche LLP. He is currently the senior audit consultation partner and an engagement quality control reviewer for the audit of a major company. He is a past chairman of the Auditing Standards Board.