Risk Management

Why Big Projects Go Bad

Two key mistakes: getting time and cost estimates from project advocates, and evaluating big projects like small ones.
Ronald Bisaccia, June 26, 2014

Big projects more often fail because of poor evaluation than poor execution. But many organizations focus on improving only the latter. As a result, they don't identify, until it's too late, the projects that pose the greatest risks of delay, budget overruns, missed market opportunities or, sometimes, irreparable damage to the company and careers.

Organizations can significantly improve the overall ROI of their project portfolio and reduce the risk of project failures if they recognize and respond in time to two insidious threats, discussed below.

Once upon a time, a multinational IT services company made a bet: two years and $94 million. That was the time and expense estimated to build a large and unprecedented e-commerce website, including all its back-end systems. Two years later, the site was up but accessible less than 45 percent of the time. Purchases were 90 percent lower than projected, and much of the planned functionality was, well, not functional. The IT services company was eventually fired, and the total reported expense for Healthcare.gov is currently $319 million.

Unfortunately, the stories of many large, complex projects are similar. The typical Fortune 500 company, for example, spends nearly 6 percent of revenue on capital projects, 40 to 60 percent of which fail to meet their schedules, budgets or both. After years of being called in to rescue projects, we’ve found the gap between projections and performance to be significant. Schedules are missed by an average of 55 percent and budgets by 33 percent.

Extrapolate, and the average multinational loses nearly $230 million a year to budget overruns, along with nearly a third of its projects' expected benefits.

It would be easy to point the finger at poor execution. After all, problems manifest as cost or schedule overruns. But overruns are just symptoms of the real problem: poor estimation of project size and risk. As a result, organizations take on projects they shouldn’t and under-budget those they should. How, then, can we spot and correct troubled projects before they fail? Here are the two drivers of failure and how to avoid them.

1. Scope and Risk Estimates Are Sourced from Project Advocates.

In a project’s earliest stages, very little is known about what it will take to execute it. So most companies seek out expert internal opinions — usually from proponents of the project, since they are the most knowledgeable. The problem is bias. Research is clear that project proponents are prone to optimism bias, favoring best-case outcomes and producing dangerously inaccurate estimates.

One simple way to expose the impact of bias is with something we call the project estimate multiplier (PEM): the ratio of average actual cost to average original estimate. The larger the PEM, the greater the impact bias has on your estimating function.

Using this approach, one top-tier financial services company was shocked to learn that its PEM was 2.37, which meant that every $1 estimated turned into $2.37 in actual cost. A PEM that high impacts the business in three ways:

  • Approximately two-thirds of the capital budget is being consumed by cost overruns.
  • It virtually guarantees that projects won’t deliver on their original business cases.
  • Because estimates were too low, the company green-lighted too many projects, draining valuable time and energy from its teams.
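The PEM calculation itself is trivial; the hard part is collecting honest estimate-vs-actual histories. A minimal sketch, using an invented portfolio of project costs (the figures below are illustrative, not the financial services company's data):

```python
def project_estimate_multiplier(estimates, actuals):
    """PEM = average actual cost / average original estimate."""
    if len(estimates) != len(actuals) or not estimates:
        raise ValueError("need matching, non-empty cost histories")
    return (sum(actuals) / len(actuals)) / (sum(estimates) / len(estimates))

# Hypothetical portfolio: original estimates vs. actual costs, in $M
estimates = [10, 25, 8, 40, 17]
actuals = [18, 55, 12, 110, 42]

pem = project_estimate_multiplier(estimates, actuals)
print(f"PEM = {pem:.2f}")  # every $1 estimated became ${pem:.2f} spent
```

A PEM computed per business unit or per estimator can also reveal where in the organization the bias is concentrated.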

Much of the bias can be eliminated by having an independent team conduct the estimates. The team can be internal to the organization or a contracted third party; either way, it should have no financial or political stake in the project's outcome, and it should use objective techniques, such as reference databases or project simulations, to produce or verify its estimates.

2. “Big Bet” Risks Are Evaluated the Same Way Smaller Projects Are.

When it comes to capital projects, size matters. A lot. The risk of failure is four times greater for large projects, and their overruns against already-large budgets are three times those of smaller projects — enough to cost jobs, damage careers and even sink companies.

This perfect storm of higher failure rates, bigger failures and bigger budgets means that big-bet projects will inflict approximately 95 percent of the damage in capital-project portfolios.

Most companies can accurately estimate small projects that may take, say, three to six months, but they are horrible at estimating the time and cost of big ones. There are three key reasons for that.

First, large projects usually involve many interdependent tasks, which creates complexity that smaller projects do not have. That makes large projects prone to uncertainty and random events, so they can’t be estimated in the traditional way. Risk-adjusted techniques, such as Monte Carlo analysis, are significantly more accurate.
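To make the contrast concrete, here is a minimal Monte Carlo sketch. The four tasks and their (optimistic, most likely, pessimistic) durations in weeks are invented for illustration; the point is that when task uncertainty is right-skewed, the simulated 80th-percentile schedule runs well past the deterministic "add up the most likely durations" plan:

```python
import random

# Hypothetical tasks: (optimistic, most likely, pessimistic) weeks
tasks = [(4, 6, 12), (8, 10, 20), (2, 3, 9), (6, 8, 16)]

def simulate_totals(trials=20000, seed=42):
    """Sample total project duration, drawing each task's duration
    from a triangular distribution."""
    rng = random.Random(seed)
    totals = [
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    ]
    return sorted(totals)

totals = simulate_totals()
deterministic = sum(mode for _, mode, _ in tasks)  # naive plan: 27 weeks
p80 = totals[int(0.8 * len(totals))]               # 80th-percentile outcome
print(f"deterministic plan: {deterministic} wk, simulated P80: {p80:.1f} wk")
```

The deterministic plan is not even the average outcome here; because the pessimistic tails are long, the simulated distribution sits well above it.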

Second, large projects usually involve tasks whose difficulty is unknown at the time of estimation. For example, a task could require a new technology or a new use of an existing technology, or even a relatively common technology that the company has never used. Such tasks typically carry a very high risk of producing time and cost overruns.

Third, the tipping point between low-risk and high-risk projects is sudden, not gradual. Once a project passes the tipping point, the risk curve changes quickly and dramatically. Failure rates skyrocket, as does the resulting amount of damage.

Unfortunately, under the influence of bias, many companies fail to see the curve, much less correct for it, until it’s too late.

When a multibillion-dollar construction company set out to replace its entire portfolio of homegrown applications with one large ERP package, it relied on techniques used to evaluate and plan for construction projects. The team created an excruciatingly detailed and deterministic project plan of 3,000 tasks, including the dependencies and durations of each. Yet not one of those tasks accounted for unforeseen events that caused the project to fall further and further behind.

By mid-project, the detailed plan was abandoned and the company recognized it had not only confused precision with accuracy, it had also been blind to “Black Swans”: unforeseen, relatively rare, but highly impactful events.

To better assess and manage project risk, develop a process to measure projects against your tipping point. Projects that exceed the tipping point need to be estimated and managed differently. We’ve found the best way is to run the project plan through a series of Monte Carlo simulations. That not only accounts for the risk of uncertainty, it also identifies the tasks with the most risk of affecting the outcome.

The analysis output can then be used to develop a plan for mitigating the risk. This can include techniques like breaking the initiative into smaller projects or running tests to reduce uncertainty.
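One simple way a simulation can flag the riskiest tasks is to rank them by how strongly each task's sampled duration correlates with the simulated total. This is a hedged sketch, not a full tornado-chart analysis; the three task names and duration ranges are hypothetical:

```python
import random

# Hypothetical tasks: (optimistic, most likely, pessimistic) weeks
tasks = {
    "data migration": (2, 4, 20),   # wide range -> high uncertainty
    "integration":    (6, 8, 14),
    "training":       (3, 4, 6),
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(7)
samples = {name: [] for name in tasks}
totals = []
for _ in range(10000):
    draws = {name: rng.triangular(lo, hi, mode)
             for name, (lo, mode, hi) in tasks.items()}
    for name, d in draws.items():
        samples[name].append(d)
    totals.append(sum(draws.values()))

# Tasks whose duration most strongly drives the total come first
ranked = sorted(tasks, key=lambda n: pearson(samples[n], totals), reverse=True)
print("tasks by schedule-risk contribution:", ranked)
```

Tasks at the top of that ranking are the natural candidates for the mitigation techniques above: split them into smaller pieces, or run early tests to narrow their duration range.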

Organizational leaders should identify and address these risks by asking the hard questions before and during big projects. At worst, they will discover the risks are too high and the ROI too small, and spend can be redirected to higher-value pursuits. At best, they will verify the project’s value and accelerate its completion.

Ronald Bisaccia is a managing director with Alvarez & Marsal Business Consulting.