Big projects more often fail because of poor evaluation than poor execution. But many organizations focus on improving only the latter. As a result, they don’t identify the projects that pose the greatest risks of delay, budget overruns, missed market opportunities or, sometimes, irreparable damage to the company and careers. Until it’s too late.

Organizations can significantly improve the overall ROI of their project portfolio and reduce the risk of project failures if they recognize and respond in time to two insidious threats, discussed below.

Once upon a time, a multinational IT services company made a bet: two years and $94 million. That was the time and expense estimated to build a large and unprecedented e-commerce website, including all its back-end systems. Two years later, the site was up but accessible less than 45 percent of the time. Purchases were 90 percent lower than projected, and much of the planned functionality was, well, not functional. The IT services company was eventually fired, and the total reported expense for Healthcare.gov is currently $319 million.

Unfortunately, the stories of many large, complex projects are similar. The typical Fortune 500 company, for example, spends nearly 6 percent of revenue on capital projects, 40 to 60 percent of which fail to meet their schedules, budgets or both. After years of being called in to rescue projects, we’ve found the gap between projections and performance to be significant. Schedules are missed by an average of 55 percent and budgets by 33 percent.

Extrapolate, and the average multinational loses nearly $230 million in annual budget overruns along with nearly a third of its projects’ expected benefits.

It would be easy to point the finger at poor execution. After all, problems manifest as cost or schedule overruns. But overruns are just symptoms of the real problem: poor estimation of project size and risk. As a result, organizations take on projects they shouldn’t and under-budget those they should. How, then, can we spot and correct troubled projects before they fail? Here are the two drivers of failure and how to avoid them.

1. Scope and Risk Estimates Are Sourced from Project Advocates.

In a project’s earliest stages, very little is known about what it will take to execute it. So most companies seek out expert internal opinions — usually from proponents of the project, since they are the most knowledgeable. The problem is bias. Research is clear that project proponents are likely to fall under its influence, favor optimistic outcomes and produce dangerously inaccurate estimates.

One simple way to expose the impact of bias is with something we call the project estimate multiplier (PEM). It’s simply the ratio of average actual cost to average original estimate across completed projects. The larger the PEM, the greater the impact bias has on your estimating function.
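
For illustration, here is a minimal Python sketch of that calculation. The dollar figures are made-up placeholders, not data from any client engagement.

    # Project estimate multiplier: average actual cost divided by
    # average original estimate across completed projects.
    original_estimates = [1.2, 4.5, 0.8, 10.0, 2.3]  # $M at approval
    actual_costs = [2.9, 9.8, 1.6, 23.5, 5.4]        # $M at completion

    avg_actual = sum(actual_costs) / len(actual_costs)
    avg_estimate = sum(original_estimates) / len(original_estimates)
    pem = avg_actual / avg_estimate

    print(f"PEM = {pem:.2f}")  # here ~2.30: every $1 estimated became $2.30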

Using this approach, one top-tier financial services company was shocked to learn that its PEM was 2.37, which meant that every $1 estimated turned into $2.37 in actual cost. A PEM that high impacts the business in three ways:

  • Approximately two thirds of the capital budget is being consumed by cost overruns.
  • It virtually guarantees that projects won’t deliver on their original business cases.
  • Because the company underestimated costs, it green-lighted too many projects, sucking valuable time and energy from its teams.

Much of the bias can be eliminated by using an independent team to conduct estimates. The team can be internal to the organization or a contracted third party. Either way, the team should not have a financial or political interest in the project outcome. Note that the estimating team should use objective techniques such as reference databases or project simulations to conduct or verify their estimates.
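
As a simple illustration of the reference-database idea, the sketch below places a proponent’s estimate within the distribution of actual costs of comparable completed projects. The figures are invented for the example.

    import statistics

    # Hypothetical reference database: actual costs ($M) of comparable,
    # completed projects of similar type and size.
    reference_actuals = [3.1, 4.4, 5.0, 5.6, 6.2, 7.5, 9.8]

    proponent_estimate = 3.5  # $M, supplied by the project's advocates

    below = sum(1 for cost in reference_actuals if cost <= proponent_estimate)
    percentile = 100 * below / len(reference_actuals)
    median_actual = statistics.median(reference_actuals)

    print(f"Estimate sits at the {percentile:.0f}th percentile of comparable actuals")
    print(f"Reference median ${median_actual:.1f}M vs. estimate ${proponent_estimate:.1f}M")
    # An estimate far below the reference median is a red flag for optimism bias.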

2. “Big Bet” Risks Are Evaluated the Same Way Smaller Projects Are.

When it comes to capital projects, size matters. A lot. The risk of failure is four times greater for large projects, and they exceed their already-large budgets by three times as much as smaller projects — enough to cost jobs, damage careers and even sink companies.

This perfect storm of higher failure rates, bigger failures and bigger budgets means that big-bet projects will inflict approximately 95 percent of the damage in capital-project portfolios.

Most companies can accurately estimate small projects that may take, say, three to six months, but they are horrible at estimating the time and cost of big ones. There are three key reasons for that.

First, large projects usually involve many interdependent tasks, which creates complexity that smaller projects do not have. That makes large projects prone to uncertainty and random events, so they can’t be estimated in the traditional way. Risk-adjusted techniques, such as Monte Carlo analysis, are significantly more accurate.
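
To make that concrete, here is a minimal Monte Carlo sketch over a hypothetical four-task sequential plan with best/most likely/worst duration estimates; it illustrates the technique, not any production scheduling tool.

    import random

    # Toy plan: sequential tasks with (best, most likely, worst)
    # durations in weeks. Real plans would also model dependencies.
    tasks = [(4, 6, 12), (8, 10, 20), (2, 3, 9), (6, 8, 18)]

    trials = sorted(
        sum(random.triangular(low, high, mode) for low, mode, high in tasks)
        for _ in range(10_000)
    )

    deterministic = sum(mode for _, mode, _ in tasks)  # the single-point plan
    p50 = trials[len(trials) // 2]
    p90 = trials[int(len(trials) * 0.9)]

    print(f"Single-point plan: {deterministic} weeks")
    print(f"Risk-adjusted: P50 = {p50:.1f} weeks, P90 = {p90:.1f} weeks")

Because the worst cases spread far wider than the best cases, the risk-adjusted P50 already exceeds the single-point plan, and the P90 sits well above it.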

Second, large projects usually involve tasks whose difficulty is unknown at the time of estimation. For example, a task could require a new technology or a new use of an existing technology, or even a relatively common technology that the company has never used. Such tasks typically carry a very high risk of producing time and cost overruns.

Third, the tipping point between low-risk and high-risk projects is sudden, not gradual. Once a project passes the tipping point, the risk curve changes quickly and dramatically. Failure rates skyrocket, as does the resulting amount of damage.

Unfortunately, under the influence of bias, many companies fail to see the curve, much less correct for it, until it’s too late.

When a multibillion-dollar construction company set out to replace its entire portfolio of homegrown applications with one large ERP package, it relied on techniques used to evaluate and plan for construction projects. The team created an excruciatingly detailed and deterministic project plan of 3,000 tasks, including the dependencies and durations of each. Yet not one of those tasks accounted for unforeseen events that caused the project to fall further and further behind.

By mid-project, the detailed plan was abandoned and the company recognized it had not only confused precision with accuracy, it had also been blind to “Black Swans”: unforeseen, relatively rare, but highly impactful events.

To better assess and manage project risk, develop a process to measure projects against your tipping point. Projects that exceed the tipping point need to be estimated and managed differently. We’ve found the best way is to run the project plan through a series of Monte Carlo simulations. That not only accounts for uncertainty, it also identifies the tasks most likely to affect the outcome.
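
As a sketch of that second point, the hypothetical simulation below ranks tasks by how strongly each one’s duration correlates with the project total; the task names and estimates are invented for the example.

    import random
    import statistics  # statistics.correlation requires Python 3.10+

    # Hypothetical tasks with (best, most likely, worst) durations in weeks.
    tasks = {
        "data migration": (4, 6, 24),
        "integration": (8, 10, 14),
        "testing": (6, 8, 12),
    }

    samples = {name: [] for name in tasks}
    totals = []
    for _ in range(5_000):
        total = 0.0
        for name, (low, mode, high) in tasks.items():
            duration = random.triangular(low, high, mode)
            samples[name].append(duration)
            total += duration
        totals.append(total)

    # Rank tasks by how strongly their duration drives the project total.
    ranked = sorted(tasks, key=lambda n: -statistics.correlation(samples[n], totals))
    for name in ranked:
        print(f"{name:15s} r = {statistics.correlation(samples[name], totals):.2f}")

Here the widest-spread task (“data migration”) dominates the outcome, which is exactly the kind of task to mitigate first.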

The analysis output can then be used to develop a plan for mitigating the risk. This can include techniques like breaking the initiative into smaller projects or running tests to reduce uncertainty.

Organizational leaders should identify and address these risks by asking the hard questions before and during big projects. At worst, they will discover the risks are too high and the ROI too small, and spend can be redirected to higher-value pursuits. At best, they will verify the project’s value and accelerate its completion.

Ronald Bisaccia is a managing director with Alvarez & Marsal Business Consulting.

3 responses to “Why Big Projects Go Bad”

  1. I am the global controller of a global, decentralized manufacturing organization that grew, and continues to grow, through M&A. In that environment, from the technology perspective, we focus on replacing legacy ERPs with one the company has defined as “core” and that we support.

    Part of my role is maintaining business continuity and being able to close the books. From that perspective, I need to ensure implementations are successful.

    I have led less complex but company-wide systems implementations, and I also have extensive operational experience. Therefore, I inserted myself into the planning phase of the implementation process, not as a subject matter expert but to ask “the” questions that give me confidence we can continue to operate without interruption.

    In my experience, failures were generated by oversights in the planning stages, particularly in activities that prevented users from fully performing during Go-Live and post-Go-Live.

    Local management and users underestimated the impact the new environment would have on their daily activities. First, users neglected to be fully trained before going live and overestimated their knowledge of how to navigate, and process transactions in, the new system. Users forget that reaching their current level of expertise in the existing systems took days, weeks and months of processing many different transactions, not just simple ones. When going live, users face an increased backlog of transactions generated during the data conversion period (when neither the old nor the new system is available, which may last between 3 and 7 working days). Lack of knowledge of how to process a wide range of transactions slows the organization. In addition, organizations face incorrect processing that is identifiable only at a later stage. As a result, rework increases.

    Second, and more important to me, is being able to close the books. Users who have worked within a system for years have created reports and tools to extract, compose, validate and reconcile data to assure the information contained in the systems is accurate. These tools are not available (or usable) in the new systems unless designed, programmed and implemented. In many cases users were given training to navigate and process transactions; they were also given standard reports and, in a few cases, customized ones.

    We found that users failed to communicate how they reconciled their figures in the existing systems, so that the data validation and reconciliation process could be repeated in the new one.

    We also found that users were not trained in resolving problems. In some cases users could identify incorrect transactions: unfinished or incomplete, reversed, etc. However, users lacked the tools, did not know why the transactions were in that state and, worse, did not know how to fix them. They had to go back to developers and implementation teams that had already moved on to a new project.

    Thinking about what could go wrong and how to fix it is very important, for two reasons: one, to design a plan for dealing with it; two, to design a plan to prevent it.

    In my view, and perhaps because of my position, it is important that local management and users demand support, training and tools during the go-live/post-go-live period. However, once those are given, the onus is on them to fully utilize and take advantage of them.

  2. “Why Big Projects Go Bad” resonates with my career in business transformation / IT project management and project rescue. Ron Bisaccia observes that too often projects are estimated by advocates who are biased and are unable to give an objective assessment of the required resources, costs, risks and complexities of the project. In addition, due to the size and complexity of “big bet” projects, typically there are many unknown factors that are not accounted for or adequately addressed in the planning phases.

    Bill Velasco’s response provided real and tangible support for the article’s observations and he extended the conversation from project launch to the all-important transition phase. From his words, it is clear he has the scars to back up his insights.

    Project teams are typically focused on the end-state for the majority of the project. Often, it is late in the project lifecycle when attention finally turns to conversion, migration, training, support, and operating in a “dual state” with the old and new co-existing, sometimes for months. In any phased rollout (by business unit, by customer group, by geography, etc.), there is a period of time wherein the support load includes the old, the new, the temporary, and the learning curve. End users of the new environment are early in the learning curve, and so are the help desk and other support resources. It is a fragile time that must be planned, managed and funded. Bill correctly placed the onus both on the project team and the ultimate end users.

    I am also struck by how late at-risk projects are first identified as such. By the time the looming crisis is escalated, the funding and schedule are often 75+ percent consumed. There are two key contributing factors:

    1. Shift of time and budget to early phases
    In many projects, when early phases run long (requirements, design, development or configuration), the overall project end date doesn’t move. Instead, despite past experience, decisions are made to shift schedule and budget away from testing and training in the belief that the time can be made up later. This simple delusion postpones the crisis for months.

    2. Misuse of the Steering Committee
    Many project managers see the Steering Committee as a bottleneck and a group of naysayers. Consciously or unconsciously, they downplay risks and issues and provide only the most positive status reports to the committee.
    However, the Steering Committee shares the objective of project success. They can make decisions, add resources, and remove roadblocks if made aware of the true project challenges. Bad news never improves with age; the Steering Committee needs to be well informed.

    Project “success” is often defined as “on time, on budget, on spec.” However, as Bill pointed out, ultimate success is defined as “benefits realization” by project sponsors, and as “ease of learning / ease of use” by end users of the system. In addition to budget, schedule and process, the best governance processes drive to outcomes.

  3. Thank you for this article and both comments.
    I agree with independent estimates, more robust planning that explicitly models risk, and planning and commitments that don’t stop at go-live. Break down the silos!
    I also suggest moving toward variable-scope (agile) methods. They let you use scope as a buffer, protecting the capital budget and other projects from inevitable estimation errors. They also give stakeholders, including end users, early and frequent looks at new system functionality. Finally, they tend to surface estimation problems sooner rather than shoving them downstream to later phases like testing and training.
    I believe the tipping point is between Complicated and Complex, as defined in the Cynefin Framework. Once the interactions between systems and people reach a certain point, the assumptions behind plan-your-work, work-your-plan begin to break down. Planning is still essential, but it’s important to have more levers than Try Harder when reality is not cooperating with the plan. Firms also waste a lot of money on flailing, rework, and turnover when their teams are always working under excessive schedule pressure.
