IT Infrastructure: Is Grid a Lock?

IBM's "grid" computing architecture harnesses the processing power of many computers connected by a high-speed network.
Scott Leibs | October 1, 2001

Big Iron appears ready to give way to Big Pipe. Over the past few months, IBM has unveiled several iterations of its “grid” computing architecture, a system that tackles complex problems by harnessing the processing power of many computers connected by a high-speed network.

Sound familiar? Arpanet and NSFnet, forebears of today’s Internet, were designed in just that way and for just that purpose. This time around, the computers are more powerful and the bandwidth vastly wider, but the concept remains the same. And, just as those early networks began by serving a small, highly technical audience and evolved to meet the needs of computer users worldwide, grid computing will soon move from a specialized core constituency to a broad base of corporate customers. Such are the hopes at IBM and several of its partners, at any rate, who argue that companies can benefit either by building their own grids or by renting time on massive grids operated by IBM.

In dividing a computing task across a network of machines, grid computing also revives the concept of parallel processing, which was widely viewed as the future of supercomputing in the late 1980s and early ’90s. Deciding just how to parse the workload among many processors has long been a thorny software challenge, one that grid computing addresses with a layer of middleware developed by Globus, a nationwide consortium of academic and research institutions. The Globus tool kit determines the requirements of a computational task and decides where and how to run it. John Patrick, vice president of Internet technology at IBM, says that in many cases, a researcher using the grid won’t even know where the data resides, let alone exactly how a particular job is being processed.

Indeed, who could keep track? The grid being built for the National Science Foundation (at a cost of $53 million) will perform 13.6 trillion calculations per second, making it 1,000 times faster than IBM’s chess-playing Deep Blue. Such power will no doubt come in handy for genetic research, weather modeling, and other traditional supercomputing tasks, but where does that leave, say, a bumper manufacturer? Wes Kaplow, the chief technology officer for the government services arm of Qwest Communications International Inc., says that companies in the pharmaceutical, energy, and auto industries have an immediate need for the grid’s capabilities.

Most companies, however, do not. But IBM is positioning the grid not just as a supercomputer but as a computing power plant that could operate like a utility, dispensing its services as needed. Some companies, Patrick says, will rearchitect their own computers into a grid, maximizing investments in storage, software, and processors. That is likely to be expensive. A more common scenario may be that companies will buy a range of computing services from massive grids operated by IBM. In fact, the company has already linked its own research centers into a grid, and plans to extend that effort to virtually all of its internal computing resources. As an outsourcing powerhouse, IBM’s Global Services unit is a logical candidate to rearchitect itself around a grid and sell every computing function imaginable as a service (so-called E-sourcing).

Patrick says he’s confident that grid computing will become a part of corporate computing within two years, winning business not only through economies of scale but because it is designed to tackle any workload, thus sparing companies the unpredictable performance of the Internet. Although e-business has lost some momentum, there is no doubt that companies have become extremely dependent on Internet connections to transmit data and run applications, and many have already learned that such reliance makes them vulnerable to traffic jams. Grid computing, with its massive pipelines and self-managing software, can, in theory, easily meet any demands placed on it.

Today, Qwest’s Kaplow is quick to admit that “you don’t need a 10-gigabit pipe to connect two Pentium PCs.” And early adopters will face the steep price tags that always greet those on the leading edge. But IBM vows to take grid computing from the esoteric ranks of supercomputing to the mainstream of corporate IT as fast as an $88.4 billion company can.