The computer industry has a certain genius for turning its own excesses and errors into new business opportunities. Computer code written with no regard for a new millennium? What an opportunity for Y2K remedial work. The Internet as a playground for hackers and identity thieves? Let us show you our security solutions.

Now, having sold so much hardware, software, middleware, vaporware, services, and sundry other line items to Corporate America — to the point where customers continue to have trouble digesting (not to mention paying for) it all — the industry has responded with a highly touted concept known variously as utility computing, on-demand computing, adaptive computing, pay-as-you-go computing, and software-as-a-service. While it’s not nearly as simple as it sounds, it certainly sounds good: customers find the idea of paying only for the computing resources they actually use irresistible. Before they can get to that metered model, however, they may have to incur some significant costs, primarily in shaping up their business processes. And the promise of lower overall IT costs seems to hinge on spending more money with key vendors, allowing them to enjoy larger slices of a shrinking pie.

That’s why some wonder if utility computing (to use the most generic-sounding option) amounts to what Forrester Research Inc. principal analyst William Martorelli calls “an account-control mechanism.” He sums up the keen vendor interest this way: “If you can sell customers on using your infrastructure, they won’t buy from your competitors.”

Spending more money with fewer vendors is an IT trend that predates utility computing, of course, and is in fact a tried-and-true cost-cutting strategy in virtually all expense categories. The utility-computing concept is appealing to many financial and IT executives, a number of whom have already taken the plunge. A recent study by Saugatuck Technology (in conjunction with our sister company CFO Research Services) found that nearly 20 percent of the more than 300 executives surveyed have already implemented some form of pay-as-you-go IT services. Reduced capital and operating costs rank as the top lures, while the chief inhibitors are security, privacy, and vendor (over)dependency. Despite those qualms, Saugatuck concludes that utility computing will be mainstream by 2006.

Meter Leaders

Sensing that they have a winner, a substantial number of IT vendors have come out with utility-computing sales pitches, and many others are ready to respond if market acceptance warrants. IBM and Hewlett-Packard have been the most aggressive champions of the idea — or ideas, since the companies have different visions of what utility computing actually is — but the noise generated by smaller companies clearly indicates that if utility computing is ultimately about giving more business to fewer companies, the smaller firms don’t plan to disappear without a fight. And while it’s been quiet so far, Microsoft has made a number of moves that could help drive the utility trend.

IBM CEO Sam Palmisano has committed $10 billion over 10 years to develop and market “E-business on demand,” or, more casually, “on demand” services. HP claims it already has thousands of pay-as-you-go contracts signed, but emphasizes that the metered pricing model is but a stepping-stone to its grander vision of the “adaptive enterprise.” Sun Microsystems, EDS, and others all promise to bring savings, efficiencies, and greater business flexibility to corporate customers by allowing them to pay only for the computing capacity they actually use.

Early adopters include such large enterprises as Philips Semiconductors, which tapped HP to help it move to a utility model in order to utilize its internal-computing resources more efficiently. Large companies with complex infrastructures represent the sweet spot for utility computing today, but growth-oriented firms as well as those with seasonal spikes in demand are also likely candidates. Hallmark Cards contracted with IBM to avoid buying extra Website capacity that would be needed only around such busy card-mailing holidays as Christmas and Valentine’s Day. And Paul Mercurio, CIO of Mobil Travel Guide, says metered Web hosting by IBM allows Mobil to pay only for the computing resources it actually uses, saving it 25 to 30 percent annually compared with a traditional fixed-price outsourcing arrangement (see “Leave the Disk-Driving to Us,” at the end of this article).

Just how real is utility computing today? Nick van der Zweep, director of virtualization and utility computing at HP, says 40 to 50 percent of HP’s high-end 128-processor Superdome servers “go out with some form of utility pricing.” IBM, which offers more than 30 flavors of utility services, has financial-services giants American Express Co. and JPMorgan Chase among its early customers.

Some executives are intrigued, but wary. “I love the concept, but it’s going to take some time to evolve,” says Joe Gottron, CIO at Huntington National Bank, a Columbus, Ohio-based regional bank with more than $30 billion in assets. Gottron likes the idea of slashing capital expenditures and operating costs, and he enthuses over the prospect of handing over the time-consuming task of negotiating individual software-licensing agreements. But he is concerned about the risk of signing a utility agreement that does not define charges for every service the bank may need in the future. “The biggest challenge is being all-knowing with the original agreement. When you hand control of your infrastructure to another company, you can really get burned,” he says.

Utility computing can be applied in any number of ways. In its most extreme form, a service provider runs a customer’s entire IT system through its own data centers, delivering services, applications, computing cycles, and storage on a metered basis. But customers can also implement utility models internally, using their own equipment and either managing it themselves or allowing the vendor to do so, paying based on usage versus outright ownership. Or they can apply a hybrid model in which they maintain some infrastructure and contract for externally managed add-on capacity and ancillary functions as needed.

This raises the logical question of how utility computing differs from outsourcing and leasing. As Leif Eriksen, founder and principal of Industry Insights, has noted, the key difference is in focus: utility computing begins by asking what companies need, whereas outsourcing and leasing begin by asking how existing infrastructure can be managed better. It’s a subtle distinction, at least as companies take early steps, but over time the emphasis on functionality should allow utility-service providers to build new architectures and infrastructures from the ground up that deliver savings well beyond what outsourcing currently provides.

In this model, IT really is a service, with companies paying only for what all the gear actually does. The oft-cited example of comes to mind: the company sells CRM functionality via the Web and sweats all the infrastructure details itself. Customers either like what the service enables them to do or they don’t, and subscribe or unsubscribe accordingly.

But all the IT infrastructure that companies have acquired can’t be made to vanish overnight, and this is where the utility concept may gain traction. In fact, since the biggest players in utility computing have built their empires on infrastructure, one may see far more action on this front than with software applications. Whether managed internally or acquired through a service provider, utility computing depends heavily on the ability to leverage shared resources. Individual servers and storage devices, for example, should be allocated to a variety of applications and clients, with capacity that is adjustable as needed. Known as virtualization, this is a capability long found on mainframe computers, which have been too expensive not to maximize. The more-recent extension of this capability to Unix and Linux servers has helped propel the utility concept.
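In spirit, virtualization amounts to carving one physical machine into resizable logical partitions, each serving a different application, with capacity shifted among them as demand changes. A toy sketch follows; the partition names, CPU counts, and resize policy are all invented for illustration, not drawn from any vendor's product:

```python
# Toy model of server virtualization: one physical machine carved into
# logical partitions whose CPU shares can be resized on the fly.
# Partition names and sizes are hypothetical.

TOTAL_CPUS = 16

partitions = {"erp": 8, "web": 4, "batch": 4}

def resize(name: str, cpus: int) -> None:
    """Grow or shrink a partition, refusing to oversubscribe the box."""
    others = sum(v for k, v in partitions.items() if k != name)
    if others + cpus > TOTAL_CPUS:
        raise ValueError("not enough physical capacity")
    partitions[name] = cpus

resize("batch", 0)   # quiesce overnight batch work...
resize("web", 8)     # ...so the Web partition can double for a holiday spike
print(partitions)    # {'erp': 8, 'web': 8, 'batch': 0}
```

The point of the sketch is the constraint in `resize`: capacity moves between workloads, but the physical box is never oversubscribed, which is why idle cycles in one partition can absorb a spike in another.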

For companies that have huge pockets of underused computing capacity, this approach promises greater efficiency. If, for example, a fast-growing business unit needs more hardware capacity, it does not need to go out and buy a new server: it can simply access underutilized assets elsewhere in the enterprise. Or a central IT function can load up on processing power, knowing that it will pay only for what’s used — and be able to track business-unit usage accurately and bill back to those units accordingly.
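The chargeback arithmetic behind that last idea can be sketched in a few lines. Everything below — the rate, the unit of metering, the department names — is hypothetical; real metering systems track far finer-grained resource counters:

```python
# Hypothetical sketch of usage-based chargeback: central IT meters
# CPU-hours per business unit and bills each unit only for what it
# consumed. The rate and department names are invented.

RATE_PER_CPU_HOUR = 0.12  # dollars; illustrative only

usage = {}  # business unit -> CPU-hours consumed

def record_usage(unit: str, cpu_hours: float) -> None:
    """Meter a unit's draw on the shared pool."""
    usage[unit] = usage.get(unit, 0.0) + cpu_hours

def bill() -> dict:
    """Compute each unit's chargeback for the period."""
    return {unit: round(hours * RATE_PER_CPU_HOUR, 2)
            for unit, hours in usage.items()}

record_usage("marketing", 1200)   # steady workload
record_usage("analytics", 4500)   # seasonal spike
record_usage("marketing", 300)

print(bill())  # {'marketing': 180.0, 'analytics': 540.0}
```

The accounting shift is the whole appeal: the cost of shared hardware follows consumption rather than ownership, so a seasonal spike shows up on the spiking unit's bill, not in a permanently larger capital budget.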

If utility computing is ultimately about functionality, it would seem that the vendor you pick means far less than what they deliver. But so far customers don’t see it that way. A study last year by Summit Strategies Inc. found that customers are “much more willing to work with IBM on every stage of a utility computing project than they are with any other leading systems vendor, systems integrator or outsourcer in the industry. . . . Hewlett-Packard is the only other vendor that even comes close.”

Why is IBM the early odds-on favorite? In a word, consulting. IBM claims it has 60,000 consultants available for on-demand engagements. The ability (and need) to leverage this massive and well-paid force helps explain why IBM is throwing its weight behind utility computing.

The CFO Connection

Much of that weight consists of specific industry expertise, as distinct from broad IT expertise, which IBM gained when it acquired the consulting arm of PricewaterhouseCoopers in 2002 for $3.5 billion. It is perhaps the central irony of the IBM-HP showdown over utility computing that HP was the first to bid for PwC, only to see IBM walk away with the prize at a discount price.

Utility computing emerged as a way to placate an increasingly restive customer base and keep that army of consultants busy. As Jim Corgel, general manager of IBM Global Services’s E-business hosting services, tells the story, it was at a meeting in Manhattan in May 2001 that he first got the “inspiration” for on-demand computing. Along with IBM CFO John Joyce, Corgel was hosting the CFOs of eight or nine Fortune 40 customers. During the session, several CFOs groused about all the money they’d wasted on IT in the 1990s and detailed the difficulties of making needed changes to IT infrastructure as their business needs rapidly evolve. Corgel recalls asking the CFOs: What if we eliminated your up-front investment? How about if we reset the metrics on how you get billed? “And boy,” he remembers, “did we get body-language changes.”

IBM’s still-evolving response to that mix of customer clamor and its own needs has been to develop many versions of on-demand computing. The company can manage server, storage, and networking equipment on a metered basis. Or it will manage traditional applications, typically in one of its 32 giant data centers, running the software on servers that are shared among a variety of customers. IBM is also developing new, specialized applications for vertical industries, created, like’s CRM application, from the ground up, to be shared across a wide customer base.

As one example, IBM is now talking to half a dozen retailers about metered hosting of new software (developed by IBM partner DemandTec) that is designed to optimize retail pricing. IBM Global Services now has 35 to 40 software partners developing such industry-focused software, according to Steve Kloeblen, director of finance and operations at IBM Global Services.

But how do those consultants factor in? “Utility service is one part of the on-demand message,” says Kloeblen. “Consulting services and business transformation and business-process outsourcing are other aspects.” In this vision, utility computing gets the house in order, and consultants step in to help companies “become more interconnected across networks of their own customers, suppliers, and employees.”

IBM’s message is that the greatest efficiencies from metered pricing accrue to those customers that examine their business processes to identify inefficiencies and standardize platforms, applications, and processes. This is necessary groundwork before resources can be fully shared across an enterprise.

Less clear at present is how these transformation services differ from business-process reengineering. Some customers, mindful of prior wasteful and harrowing business-transformation campaigns, may resist. “If CFOs and CIOs realize there is a similarity between what is being offered now and what was previously called business-process reengineering, that will bring back nightmares of unending projects that were expensive and in many cases of dubious value,” warns Eamonn Kennedy, a senior analyst with Ovum in London.

IBM says that business processes are often run on separate hardware and software systems, creating duplicate (or worse) expenditures that can only be reined in by integrating these processes to make efficient use of IT resources.

Meanwhile, in Palo Alto

While it does not (yet) have the broad consulting and vertical industry expertise of IBM, HP does have a longer history of providing metered computing services. HP first introduced its Instant Capacity on Demand (ICOD) offering in 1999. “When a customer buys a machine to meet a need satisfied by, say, four CPUs [central processing units], we ship it with eight,” explains van der Zweep. “They don’t pay for the extra CPUs until they’re turned on.” In late 2000, HP refined that approach with its Pay Per Use service, which is based on actual processing throughput versus simply ponying up more money to have additional CPUs activated. HP has already written more than 14,000 sales and lease contracts for ICOD and Pay Per Use, according to van der Zweep.
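The difference between HP's two models can be illustrated with a toy cost calculation. All figures below are invented for illustration; HP's actual rates and formulas were contract-specific. Under a capacity-on-demand scheme the customer pays for each CPU it chooses to activate, busy or not, while under pure pay-per-use it pays for measured utilization:

```python
# Toy comparison of two metered-pricing schemes (all figures invented).
# Capacity-on-demand: a monthly fee per *activated* CPU, busy or idle.
# Pay-per-use: a fee tied to measured utilization of the machine.

FEE_PER_ACTIVE_CPU = 400.0   # dollars/month, hypothetical
FEE_PER_UTIL_POINT = 60.0    # dollars per percentage point of average use

def capacity_on_demand_cost(active_cpus: int) -> float:
    return active_cpus * FEE_PER_ACTIVE_CPU

def pay_per_use_cost(avg_utilization_pct: float) -> float:
    return avg_utilization_pct * FEE_PER_UTIL_POINT

# A machine shipped with 8 CPUs, 4 turned on, averaging 35% utilization:
print(capacity_on_demand_cost(4))   # 1600.0
print(pay_per_use_cost(35.0))       # 2100.0
```

Which scheme is cheaper depends entirely on the workload's shape: steady, well-forecast demand favors paying per activated CPU, while spiky or declining demand favors paying for what the meter actually records.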

“IBM has the internal consulting strength and more vertical strength,” says Summit practice director John Madden. “HP’s message is, ‘We can’t be in all the verticals. We will rely on partners.’” (Major partners include Accenture and Deloitte Consulting.) HP does have strong vertical market capabilities in telecom and electronics manufacturing, and is developing expertise in financial services and the public sector.

As one example of differentiation, HP was among the founding members of the Enterprise Grid Alliance, announced in April, which pledges to bring grid computing to large companies. While that term is sometimes used interchangeably with utility computing, it’s really a subset that focuses on the software needed to link disparate computers into a unified processing engine. Grid computing focuses on the sharing of internal computing resources (see “You Can Always Tell a Harvard CIO…,” at the end of this article), and has attracted the interest of Oracle, Sun, Intel, EMC, and others.

Will utility computing shape up as a battle of the data-center heavyweights versus the consulting armies of IBM? It’s far too early to tell, but while every company will make its utility pitch based on a combination of savings and improved processes, they may differ substantially as to where they place the lion’s share of their efforts. Customers need to know that, and should ask hard questions about both short-term benefits and longer-term vision before signing on. Otherwise, the trend may earn yet another moniker: futility computing.

Norm Alster has worked for Forbes and Business Week and is a contributor to the New York Times.

Leave the Disk-Driving to Us

For 46 years, motorists have relied on the detailed printed maps of the Mobil Travel Guide. But with Web-based travel services among the dot-com survivors, Mobil executives decided to launch a Web service that would complement its printed guide with personalized road maps, hotel listings, and touring suggestions.

In launching the new site, CIO Paul Mercurio had to figure out what sort of a computing infrastructure he wanted. He realized he had one key advantage: “I had the opportunity to start from scratch.”

In May 2002, Mercurio sought bids for outsourced management of the new site. The idea, he says, was for Mobil to buy the requisite hardware and software, and have it hosted externally by a service provider. He soon narrowed the field to IBM and an unnamed rival.

At a meeting with IBM, Mercurio was given a final proposal for a traditional outsourced contract. But IBM then told him there was an alternative: Big Blue was launching its Linux Virtual Services, a metered pricing option. Mercurio was told he could be one of the first customers for the service, in which the Mobil site would be run on an IBM zSeries mainframe and share space with other customers, using less than 1 percent of the huge server’s processing capacity.

Mobil didn’t need to buy anything up front; it just had to agree on forecasted levels of processing power, storage, and connectivity bandwidth. Should the travel site — which expected spikes in demand during the summer — require more, it would be provisioned. For a business with uneven demand patterns, pay-as-you-go made a lot more sense than a traditional fixed-price agreement. Mobil signed on.

Now, nearly two years later, Mercurio is pleased with the arrangement. For one thing, the site was up and running within two months. And he estimates that Mobil has saved 25 to 30 percent annually on a discounted cash-flow basis. Much of the savings are in up-front capital costs and lower charges for months like February, when few drivers are planning leisurely cross-country treks. —N.A.

You Can Always Tell a Harvard CIO…

Both IBM and Hewlett-Packard tried to convince John Halamka, M.D., of the merits of utility computing. Halamka, who is CIO of Harvard Medical School, was impressed. So impressed, in fact, that he decided Harvard would adopt a utility model — but without help from the two giant vendors.

“Harvard was big enough so that we felt we could be our own vendor, building an internal utility-computing service and making it available to the faculty. This is such new technology that we wanted to get in on the ground level and do it on our own,” says Halamka.

Harvard Medical’s shared research cluster allows researchers to access a common resource of Linux-based computing horsepower and storage, as well as a library of biocomputing applications. The arrangement depends in part on clustering software from Platform Computing, which bills itself as a leader in grid computing — yet another near-synonym for utility computing, at least as manifested in the sharing of internal computing power. At Harvard, the cluster replaces unlinked departmental servers, enabling researchers to tap into idle capacity.
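The scheduling idea at the heart of such a cluster — send each job to whichever node has the most idle capacity, so spare cycles anywhere in the pool get used — can be sketched simply. The node names, load figures, and per-job load estimate below are hypothetical; production schedulers such as Platform's weigh far more factors:

```python
# Minimal sketch of idle-capacity scheduling across a shared cluster.
# Each job is placed on the least-loaded node, so no department's
# spare cycles sit idle. Node names and loads are hypothetical.

nodes = {"node-a": 0.8, "node-b": 0.2, "node-c": 0.5}  # fraction busy

def assign(job: str) -> str:
    """Place a job on the node with the most idle capacity."""
    target = min(nodes, key=nodes.get)
    nodes[target] += 0.1  # assume each job adds roughly 10% load
    return target

print(assign("blast-search"))   # node-b (was only 20% busy)
print(assign("gene-align"))     # node-b again (now 30% busy)
print(assign("fold-sim"))       # node-b once more (40% busy)
```

This is the replacement for unlinked departmental servers: instead of each group's machine running hot while a neighbor's sits idle, every job sees the whole pool.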

By allowing researchers to access spare CPU cycles as needed, Harvard hopes to speed publication of research results, an unbending yardstick of academic prestige. The project cost $250,000 to implement, according to Halamka, and will bring significant savings, in part by allowing Harvard to consolidate servers and reduce space requirements, not to mention trim staff. In all, the project should save $500,000 in acquisition costs and $100,000 in annual operating expenses. —N.A.
