For something billed as the hot new model in business computing, it’s got a distinctly unsexy moniker. Maybe that’s why everyone calls utility computing something else.
At IBM, it’s “E-business on demand.” At Microsoft, it’s “the dynamic systems initiative.” At Hewlett-Packard, it’s “the adaptive enterprise.” Sun Microsystems uses the efficient but cryptic label “N1.” And Forrester Research calls it “organic IT.”
While their approaches vary, all refer to the same basic concept: computing will evolve into a shared resource like electricity, water, gas, or telephone service. (In fact, it’s sometimes referred to as the fifth utility.) Essentially, the goal is for users to plug into the utility, using just as much computing capability as they need, for just as long as they need it, and paying only for what they’ve consumed. As with traditional utilities, customers can trade off what they pay against the quality of service they get.
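To make the pay-per-use idea concrete, here is a minimal sketch of metered billing with service tiers; the tier names and rates are invented for illustration and come from no vendor’s actual price list.

```python
# A minimal, purely illustrative sketch of metered, tiered billing.
# The tier names and rates are invented; they are not any vendor's actual prices.

RATES = {"best_effort": 0.05, "standard": 0.12, "premium": 0.30}  # $ per CPU-hour

def monthly_charge(cpu_hours_used: float, tier: str) -> float:
    """Bill only for the capacity actually consumed, at the chosen tier's rate."""
    return cpu_hours_used * RATES[tier]

# The price/quality-of-service trade-off in miniature: the same 1,200 CPU-hours
# cost $144 on the standard tier and $360 on premium.
print(monthly_charge(1200, "standard"))  # 144.0
print(monthly_charge(1200, "premium"))   # 360.0
```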
Like other utilities, this one relies on shared infrastructure—accessed, in this case, over a corporate network or the Internet. But unlike other utilities, it doesn’t hinge solely on outside suppliers. Many emerging initiatives are intended to help companies create their own “power plants” that can dynamically allocate computing supply to meet demand. Among those already in play is HP’s Utility Data Center, a package of hardware, software, and services designed to help companies easily manage and reallocate computing resources. IBM has pledged $10 billion to its effort. Some of that will be spent on customer education, which analysts say is key.
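As a rough sketch of what such an internal “power plant” does, the example below keeps servers in a shared pool, hands them to applications as demand spikes, and takes them back afterward. The class and method names are hypothetical, not drawn from HP’s or IBM’s products.

```python
# Hypothetical sketch of the reallocation an internal "power plant" performs:
# servers live in one shared pool and move between applications as demand shifts.
# Class and method names are invented, not HP's or IBM's actual interfaces.

class ComputePool:
    def __init__(self, total_servers: int):
        self.free = total_servers   # unassigned capacity
        self.assigned = {}          # application name -> servers currently held

    def allocate(self, app: str, servers_needed: int) -> bool:
        """Grant capacity out of the shared pool if enough is free."""
        if servers_needed > self.free:
            return False
        self.free -= servers_needed
        self.assigned[app] = self.assigned.get(app, 0) + servers_needed
        return True

    def release(self, app: str, servers: int) -> None:
        """Hand capacity back to the pool when an application's demand drops."""
        returned = min(servers, self.assigned.get(app, 0))
        self.assigned[app] = self.assigned.get(app, 0) - returned
        self.free += returned

pool = ComputePool(total_servers=20)
pool.allocate("payroll", 4)          # month-end processing spike
pool.allocate("web_storefront", 10)
pool.release("payroll", 4)           # spike over; the servers rejoin the pool
print(pool.free)                     # 10
```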
As with any would-be technology revolution, this one faces plenty of barriers. Right now, there are no industrywide standards, although the much-discussed Web-services protocols designed to help applications communicate over the Internet will play a large role, as will grid computing. Although tools are emerging to meter usage and bill for it, and to determine whose computing needs take priority, enterprise policy development, architecture design, and governance processes haven’t caught up yet. And there’s no simple way to turn a big company’s widely scattered IT systems into a single integrated utility without going through a major consolidation and standardization effort.
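To make the metering-and-priority idea slightly less abstract, here is one hypothetical way such a tool might rank competing requests when capacity is tight; the function and the data in it are invented for illustration.

```python
# Purely illustrative: one way a metering/priority tool might decide whose
# computing needs come first when demand exceeds supply. All names are invented.

import heapq

def grant_capacity(requests, servers_available):
    """requests: list of (priority, app, servers_wanted) tuples, where a lower
    priority number means more important. Returns (app, servers_granted) pairs,
    most important requests first, until the pool runs dry."""
    heap = list(requests)
    heapq.heapify(heap)            # min-heap ordered by the priority number
    granted = []
    while heap and servers_available > 0:
        _, app, wanted = heapq.heappop(heap)
        given = min(wanted, servers_available)
        granted.append((app, given))
        servers_available -= given
    return granted

# With 12 servers free, payroll (priority 1) is served in full before the
# lower-priority test lab gets whatever is left.
print(grant_capacity([(2, "test_lab", 8), (1, "payroll", 10)], 12))
# -> [('payroll', 10), ('test_lab', 2)]
```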
“Most vendors are underplaying the complexity of the changes companies have to go through to do this kind of dynamic utility computing,” says Mary Johnston Turner, vice president and director of the large-enterprise practice at Summit Strategies Inc., in Boston. And those changes aren’t just technological, she says: “To realize all these beautiful benefits—saving money, improving productivity, making IT responsive to the needs of the business—you really have to adopt a policy-based automated services management approach” as well. That will mean, among other things, striking a balance between selective outsourcing and internally run systems.
Despite those challenges, analysts predict that, by any name, utility computing makes so much sense that it will become an everyday business reality—eventually. “There are a lot of moving pieces to it,” says Turner. “I really do believe we will get there, but I believe for most enterprises, it’s at least a five-year journey.” Forrester analyst Frank Gillett goes even further, predicting that “achieving the full vision” will take a decade or more. But at its heart, utility computing will transform IT from an asset-intensive undertaking to a variable cost, a service expense that rises and—in theory—falls.
