Avoiding Decision Traps

Cognitive biases and mental shortcuts can lead managers into costly errors of judgment.
Edward Teach, June 17, 2004

Can we be counted on to make sound decisions under uncertainty? Are our judgments always rational? Do we invariably make choices in our best interests, based on a full understanding of trade-offs and probabilities? Are we truly, in short, the homo economicus assumed by many economic models?

Or are we instead a more-flawed species — a creature of bounded rationality, driven by emotions and desires? Is our understanding of probabilities incomplete? Are we susceptible to cognitive biases, and do we confront uncertainty with misleading rules of thumb? Do our decisions, in short, sometimes run counter to our interests?

To those who study behavioral finance, the answers to the sets of questions above are no and yes, respectively. Over the past 30 years, beginning with the seminal work of Daniel Kahneman and Amos Tversky, the behaviorists have demonstrated that people routinely employ heuristics — rules of thumb, or mental shortcuts — to simplify and, worse, oversimplify decisions under uncertainty. Moreover, they have shown time and again that our choices are frequently skewed by an array of cognitive biases.

Most work to date in behavioral finance has focused on asset pricing and the behavior of investors. But increasingly, attention is being paid to decision-making in the corporate realm. Because of their training and experience, managers might be presumed to be less likely to use mental shortcuts, and less vulnerable to cognitive biases. True or not, consultants in decision analysis have made a good living by showing managers how they fall into decision traps, and professors have delighted in showing their executive MBA students just how flawed their judgment can be.

Over time, the behaviorists have compiled a long list of biases and heuristics. No one can say with certainty which of these inflict the most harm, but financial managers would do well to watch out for five: anchoring and adjustment, framing, optimism, overconfidence, and self-serving bias.

Anchoring and Adjustment

Kahneman and Tversky contended that people frequently form estimates by starting with a given, easily available reference value — which could be arbitrary — and adjusting from that value. An estimate, therefore, would be “anchored” to that value. (Think of auto salespeople starting negotiations at the manufacturer’s suggested retail price.)

To demonstrate this heuristic, Dan Ariely, professor of management science at MIT Sloan School of Management, conducted a mock auction with his MBA students. He asked students to write down the last two digits of their Social Security numbers, and then submit bids on such items as bottles of wine and chocolate. The half of the group with higher two-digit numbers bid “between 60 percent and 120 percent more” on the items, says Ariely.

“People don’t know how much something is worth to them,” he comments. An anchor helps them decide. Once a value is set, people are good at setting relative values, Ariely explains. But “it’s very hard to figure out what the fundamental value of something is,” he adds, whether it’s an accounting system, a company’s stock, or a CEO.

Consider the work of Paul J.H. Schoemaker, a professor at the University of Pennsylvania’s Wharton School and chairman and CEO of Decision Strategies International, a West Conshohocken, Pennsylvania-based consultancy. Last year, he sought to find out whether anchoring propped up the rate of bad loans at a fast-growing Southern bank. When evaluating a loan’s performance, a bank officer would begin (naturally enough) by reviewing the loan’s current rating. That rating, surmised Schoemaker, would act as an anchor for the new rating — should I upgrade, or downgrade? Because of the anchor, a downgrade would tend to be an incremental adjustment, which meant that by the time a loan was classified as troubled, it could be too late to take remedial action.

Invited to speak to the bank’s top 100 managers, Schoemaker proposed an experiment. Of the next 200 loans they reviewed, he instructed, make 100 of them “blind” — that is, without reference to the previous rating — then compare the two groups and the adjustments they make. “My prediction,” he says, “is that they will make much bigger adjustments with the group that has no anchors.” Schoemaker and the bank’s CEO planned to meet at the end of May to discuss the experiment’s results.
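The logic of Schoemaker's proposed experiment can be sketched in a few lines. All of the numbers below — the rating scale, the evidence value, and the anchor weight — are invented for illustration; higher ratings mean a more troubled loan.

```python
# Toy version of an anchored-versus-blind loan review (illustrative only).
def review(old_rating, evidence, anchored, anchor_weight=0.7):
    if anchored:
        # The reviewer moves only partway from the old rating toward the evidence.
        return anchor_weight * old_rating + (1 - anchor_weight) * evidence
    return evidence  # blind review: rate the current evidence directly

old_rating, evidence = 3.0, 6.0  # hypothetical: loan has deteriorated sharply
anchored_adjustment = abs(review(old_rating, evidence, True) - old_rating)
blind_adjustment = abs(review(old_rating, evidence, False) - old_rating)
print(anchored_adjustment, blind_adjustment)
```

Under these assumed numbers, the anchored reviewer adjusts the rating by only 0.9 points while the blind reviewer adjusts it by the full 3.0 — which is exactly the pattern Schoemaker predicted the bank would see.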

Broadly speaking, anchoring is present whenever one manager or group reviews another’s proposal, says Hersh Shefrin, professor of finance at Santa Clara University’s Leavey School of Business. “The fact that you start with somebody else’s proposal means there’s an anchor being presented to you,” he says. People may be optimistic or want a project to be accepted, and therefore be inclined to inflate cash-flow projections. The challenge for those who sign off on proposals is to adjust sufficiently for the inflation.

Framing

In this heuristic, the way a situation is presented, or framed, greatly influences the action taken. If a frame is poorly constructed, a manager may unwittingly make a money-losing choice.

For example, Shefrin says, managers can stumble by framing costs in the context of gross margin (a financial accounting number) rather than contribution margin (a cost-accounting number). “If they have to decide whether to accept a special order, given their fixed capacity, the criterion they ought to use to make the decision is contribution margin,” notes Shefrin. That measure might indicate that the special order is worth doing — whereas gross margin could show the opposite.
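The arithmetic behind Shefrin's point is simple. In this hypothetical special order (all figures invented), the offered price covers variable costs but not the fully loaded cost that includes allocated fixed overhead — so gross margin frames the order as a loser, while contribution margin correctly frames it as worth taking when capacity would otherwise sit idle.

```python
# Hypothetical special order; all figures are illustrative assumptions.
price_per_unit = 8.00            # price offered for the special order
variable_cost_per_unit = 6.00    # materials, labor, and other variable costs
allocated_fixed_overhead = 3.00  # per-unit share of fixed costs, incurred either way

contribution_margin = price_per_unit - variable_cost_per_unit                          # 2.00
gross_margin = price_per_unit - (variable_cost_per_unit + allocated_fixed_overhead)    # -1.00

# With idle capacity, fixed costs don't change with the decision, so the
# order adds value whenever contribution margin is positive.
accept = contribution_margin > 0
print(contribution_margin, gross_margin, accept)
```

The gross-margin frame says the order loses $1.00 per unit; the contribution-margin frame shows it adds $2.00 per unit toward covering fixed costs that would be incurred anyway.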

Another psychological tendency, called aversion to a sure loss, can combine with framing to produce the “sunk-cost fallacy.” Studies have shown that people are generally reluctant to accept a sure loss, and therefore are willing to make unsound bets in the hopes of breaking even, says Shefrin. If managers dismiss the textbook advice to forget sunk costs, and instead frame those costs as if they were recoverable, then aversion to a sure loss will tempt them to continue funding a failing project. The companies that have thus thrown good money after bad are surely legion.
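The textbook rule can be stated as a one-line decision function. The figures below are hypothetical; the point is that the sunk amount appears as an argument only to show that it never enters the decision.

```python
# Minimal sunk-cost decision rule: only incremental economics matter.
def should_continue(incremental_cost, expected_incremental_revenue, sunk_cost=0.0):
    """sunk_cost is accepted as an argument purely to show it is ignored."""
    return expected_incremental_revenue > incremental_cost

# Hypothetical: 10 already spent; finishing costs 4 more, expected return 3.
print(should_continue(4.0, 3.0, sunk_cost=10.0))  # False: stop funding
```

A manager who frames the 10 as recoverable "sees" a chance to break even on 14 spent and keeps funding; the rule above shows the same project is a losing bet on its incremental economics alone.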

Optimism and Overconfidence

As a rule, leaders are optimistic and confident, but behaviorists say that both qualities can be carried to excess. Overconfidence, for example, may lead a CEO to ignore red flags and make a value-destroying merger or acquisition (see “Watch How You Think”).

As experiments have shown, people in general are optimistic. In a classic study by Neil Weinstein in 1980, undergraduates were asked to rate how likely various life events were to happen to them, relative to their classmates. The result: students systematically thought that good events were more likely to happen to them, while bad events were more likely to happen to other students.

But how do seasoned executives think? Shefrin has replicated Weinstein’s test with his students, from undergrads to executive MBA candidates — “age 35 to 55, all with at least 10 years of management experience. Some are CEOs, CFOs, VPs of marketing.” The outcome? Executive MBA students are just as optimistic as undergrads — “except a little more so,” says Shefrin.

Managerial optimism can result in all kinds of flawed and risky decisions. But it may also have fundamental implications for what a company does with its free cash flow. J.B. Heaton, a partner at Chicago law firm Bartlit Beck Herman Palenchar & Scott LLP who holds a Ph.D. in finance from the University of Chicago, argues that managerial optimism provides a better guide to the problem of free cash flow than rational — but conflicting — notions of agency costs and asymmetric information.

According to the asymmetric-information approach, free cash flow is good. That’s because a company’s securities are typically undervalued, since managers have information that the market doesn’t. If so, then managers assumed to be loyal to shareholders will be reluctant to fund even positive-NPV projects by issuing more undervalued securities. Without free cash flow on hand, they will underinvest.

But this scenario conflicts with the rational agency-cost approach, which holds that free cash flow is bad. Why? Because managers are assumed to place their interests above shareholders’ and will invest in projects that boost their power and compensation, even if those projects have negative NPV. With free cash flow on hand, managers will overinvest.

Heaton’s model assumes that managers are neither loyal nor disloyal, but optimistic. Managerial optimism explains both underinvestment and overinvestment, depending on a company’s situation, and determines what should be done with free cash flow.

“In a company with generally marginal investment opportunities, managers are going to perceive that the opportunities are better than they are, and that means they’re going to think some bad projects are good projects,” says Heaton. At the same time, because they’re optimistic, managers will think the company’s securities are undervalued and won’t want to issue more. “So if they have extra cash lying around, they’re going to [overinvest],” says Heaton. “This suggests that if you have a company that is declining and doesn’t have a good set of investment opportunities, you want to tie their hands” by disgorging cash.

What if a company does have good investment opportunities? Again, optimistic managers will be concerned about issuing undervalued securities. Such companies want to have extra cash on hand, because without it, managers will underinvest.
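The two halves of Heaton's argument can be sketched as a single decision rule. This is a toy illustration, not his formal model; the optimism bias and the manager's perceived dilution cost are invented numbers.

```python
# Toy sketch of the managerial-optimism story (illustrative assumptions only).
def invests(true_npv, optimism_bias, has_free_cash, perceived_dilution_cost=3.0):
    perceived_npv = true_npv + optimism_bias  # optimism inflates perceived project value
    # With free cash, the project is funded internally; without it, the manager
    # must issue securities he believes the market undervalues, which raises
    # his perceived hurdle for investing.
    hurdle = 0.0 if has_free_cash else perceived_dilution_cost
    return perceived_npv > hurdle

print(invests(true_npv=-1.0, optimism_bias=2.0, has_free_cash=True))   # True: overinvestment
print(invests(true_npv=1.0, optimism_bias=2.0, has_free_cash=False))   # False: underinvestment
```

With cash on hand, the optimistic manager funds a project whose true NPV is negative; without cash, the same optimism about the firm's securities makes him pass up a project whose true NPV is positive — the over- and underinvestment cases in the two paragraphs above.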

Self-Serving Bias

Unlike biases that can lead people to act against their interests, the self-serving bias motivates people to reach conclusions that favor them. For that reason, such “motivated” biases may be more powerful, says Max H. Bazerman, Straus Professor at Harvard Business School.

Narrowly defined, self-serving bias leads people to see data in the way they most want to see it, which may prompt them to take credit for successes and shun blame for failures. More broadly, this bias can refer to a person’s inclination to unintentionally select or distort facts to suit his preferences. It frequently rears its head in negotiations, says Bazerman, “where two honest people both believe they deserve 60 percent of the pie, and are not able to reach an agreement.” Self-serving bias can also wreak havoc on a group undertaking, where afflicted people may perceive that others are not doing their fair share of the work.

Auditors anxious to please clients are particularly vulnerable. In a 2002 experiment, Bazerman and three colleagues gave five ambiguous auditing vignettes to 139 auditors in a Big Four accounting firm and asked them to judge the accounting. Half the auditors were asked to pretend they were hired by the company being audited, and the other half that they were hired by a company doing business with the audited company. The result: the auditors working for the audited company were 30 percent more likely, on average, to find that the accounting complied with generally accepted accounting principles.

Like the other biases, says Bazerman, self-serving biases can be mitigated, but are too strong to eliminate completely. They thus require structural fixes such as formal checks and balances for project approval and monitoring.

For this reason, Bazerman believes the Sarbanes-Oxley Act of 2002 is destined to fail. Auditors, he points out, “still have a motivation to get rehired, to sell tax services, to potentially take jobs with the firms they audit.” Without truly independent auditors, investors will still be at the mercy of fallible human judgment.

Edward Teach is articles editor of CFO.