A number of surveys, including our own, have revealed the unsettling fact that many companies have bought more technology than they need. To grasp the magnitude of this waste, look no further than the powerful computer on your desk. What does it do most of the day? It waits…and waits…and waits. Even in those rare moments when you actually give it something to do—spell-check a document or recalculate a spreadsheet—you’re hardly pushing your PC to its limits. Now multiply that by the millions of computers around the world, many of them, like yours, sitting around doing mostly nothing.
Now imagine that all the unused processing power in these woefully underworked PCs could somehow be harnessed and shared over the Internet to run really big, taxing applications. That’s the idea behind a concept in IT known as grid computing.
Grid computing began as a way to build massively parallel processors from clusters of smaller computers, which was seen as a more-economical way to tackle compute-intensive problems than the handcrafted supercomputers used by researchers, academia, and a few large corporations. One well-known, if somewhat unconventional, grid-computing project is SETI@home, which started in 1996. SETI stands for Search for Extraterrestrial Intelligence, and the project relies on hundreds of thousands of PC users worldwide who donate a portion of their machines’ computing capacity to help search for signs of life in outer space.
Early grid-computing systems were custom-built to handle specific, compute-intensive tasks. But recently the development of a technical specification—called the Open Grid Services Architecture—has allowed software vendors to create business applications that can harness grid resources. Many of these applications are designed for specific vertical industries, including oil exploration, financial services, and semiconductor research.
A separate but related commercial application of grid computing involves distributing business applications across a company’s internal network. Essentially, a central software program scans the network for unused computing capacity. When it finds an idle machine, the software makes a copy of the application and the user data, then sends the copies to that machine.
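The scan-and-dispatch step described above can be sketched in a few lines of Python. Everything here, from the machine names to the 20 percent idle threshold and the dispatch() helper, is an illustrative assumption rather than any vendor’s actual product:

```python
# Hypothetical sketch of a central dispatcher scanning for idle machines.
# Names, thresholds, and the dispatch() helper are illustrative only.

IDLE_THRESHOLD = 0.20  # treat machines under 20% CPU load as idle

def find_idle(machines):
    """Return the machines whose reported CPU load marks them as idle."""
    return [m for m in machines if m["cpu_load"] < IDLE_THRESHOLD]

def dispatch(job, machines):
    """Assign a copy of the application and user data to each idle machine."""
    assignments = []
    for m in find_idle(machines):
        # A real grid would ship binaries and data over the network;
        # here we simply record the assignment.
        assignments.append({"host": m["name"],
                            "app": job["app"],
                            "data": job["data"]})
    return assignments

machines = [
    {"name": "desk-01", "cpu_load": 0.05},
    {"name": "desk-02", "cpu_load": 0.85},  # busy: skipped
    {"name": "desk-03", "cpu_load": 0.10},
]
print(dispatch({"app": "actuarial-model", "data": "q3-policies"}, machines))
```

The busy machine (desk-02) is passed over; only the two idle desktops receive copies of the job.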
These new applications stretch the definition of grid computing and cause some confusion, to the point where “you could say grid computing is in the process of being reinvented,” says Forrester Research analyst Galen Schreck.
How It Works
Grid computing is akin to peer-to-peer networking, the same technology at the heart of the music-sharing controversy. A network of computers behaves as if it were a giant, parallel-processing supercomputer, allowing a single software application to dynamically harness the unused processing power of every computer on the network. Special grid software “looks” around the network to see which computers are free, sends each free computer a piece of the job at hand, then takes the products of these various computations and assembles them into a coherent result. Creating a grid requires this dedicated software, and the application itself must be tuned to run on it. And when a grid uses the Internet as its network backbone, additional security software is needed to block unauthorized users.
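The split-compute-assemble pattern described above can be illustrated with a short Python sketch. This is a toy stand-in, not real grid middleware: a thread pool substitutes for networked PCs, and the partial_sum() workload is an arbitrary example:

```python
# Illustrative scatter-gather sketch: split a big job into pieces,
# farm each piece out to a "free machine" (simulated by a thread pool),
# then assemble the partial results into one coherent answer.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """The piece of work one grid node would perform (example workload)."""
    return sum(x * x for x in chunk)

def run_on_grid(data, n_pieces=4):
    # Scatter: carve the job into roughly equal pieces.
    size = len(data) // n_pieces
    chunks = [data[i * size:(i + 1) * size] for i in range(n_pieces - 1)]
    chunks.append(data[(n_pieces - 1) * size:])  # last piece takes the remainder
    # Each chunk runs on a separate worker, standing in for an idle PC.
    with ThreadPoolExecutor(max_workers=n_pieces) as pool:
        partials = list(pool.map(partial_sum, chunks))
    # Gather: assemble the partial results.
    return sum(partials)

data = list(range(1000))
assert run_on_grid(data) == sum(x * x for x in data)
```

A real grid adds what the toy version omits: shipping code and data over a network, handling nodes that fail or disappear mid-job, and securing the whole exchange.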
Why You Should Care
Done right, grid computing can increase a company’s computing power while reducing its computing costs. Royal Bank Insurance, for example, has used grid systems from Platform Computing to shorten the time needed for certain actuarial calculations from 18 hours to just 32 minutes. Agricultural giant Monsanto has used the same technology with an eye toward cost reduction, accessing untapped compute power at 10 percent of the cost of building out a more-traditional architecture, in effect cutting its net hardware purchases by 90 percent.
Cost savings like this are the result of grid technology’s efficient use of computing resources. Traditionally, companies do what’s known as “peak provisioning,” meaning they buy enough computing power to handle their very biggest jobs—even if those big jobs run only occasionally. As a result, at any given moment as much as an estimated 75 percent of most companies’ computers are either idle or underused.
Where It Stands
Grid computing has captured the computer industry’s attention. Several old-school IT vendors are now working on grid-computing solutions, including IBM, Oracle, and Sun Microsystems. Oracle, for example, says it will call the next version of its flagship database software “10g” to emphasize a new focus on grid computing, and will back an industry consortium to create commercial standards. A crop of new grid-computing specialty vendors has also sprung up, including Avaki, Platform, and United Devices. Users of grid computing include such well-known names as Boeing, Bristol-Myers Squibb, Ford, and Pratt & Whitney.
On the scientific/technical front, grid computing is being used for financial pricing and valuation, crash-test simulation, microchip design, genomics research, drug testing, and actuarial modeling. Pratt & Whitney engineers now simulate airflows through jet-engine designs using a computational grid made up of some 5,000 workstations in three cities. Similarly, Royal Dutch/Shell uses grid computing to process seismic data.
There are some dissenters. Carly Fiorina, CEO of Hewlett-Packard, recently told attendees of an industry conference that grid computing “has been more hype than reality.” Broad acceptance in Corporate America seems hamstrung not by the newness of the technology per se, but by force of habit. Nearly 90 percent of IT managers recently surveyed by Platform cited organizational issues as one of the largest barriers to adopting grid systems.
“People are effectively server-hugging,” says Ian Baird, chief business architect and vice president of marketing and sales operations at Platform. “They’re afraid to share their resources.” Broad adoption, says Baird, will require a combination of education, business-process reengineering, and incremental, department-by-department rollouts.
What 2004 Will Bring
Software vendors such as VMware and Microsoft are working on a concept known as “virtual machines,” which—while not technically grid computing—will help firms consolidate underutilized servers. The technology permits a single computer to run multiple operating systems, and their attendant applications, at the same time. For example, one server could run Windows NT, Windows XP, and Linux concurrently. As with more “conventional” grid computing, the advantage comes from getting the maximum use of standardized, fairly commoditized computers. That, in turn, should reduce the total number of servers a company needs.
Forrester’s Schreck says what’s needed next is intelligent workload management, essentially software tools that monitor networks for idle computers and redistribute applications. “You’ll need intelligent control software that can reallocate resources on the fly,” he says. “It needs to look through your data center, see what’s underutilized, and move those functions to a consolidation server.”
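The control loop Schreck describes can be sketched as follows. All of the names, the 25 percent utilization threshold, and the consolidation-server convention are illustrative assumptions, not any product’s actual behavior:

```python
# Toy sketch of intelligent workload management: scan servers, flag the
# underutilized ones, and plan moving their workloads to a consolidation
# server. Thresholds and names are illustrative only.

UNDERUSED = 0.25  # servers below 25% utilization are consolidation candidates

def rebalance(servers, consolidation_host="consolidate-01"):
    """Return a migration plan moving workloads off underutilized servers."""
    plan = []
    for s in servers:
        if s["utilization"] < UNDERUSED and s["name"] != consolidation_host:
            for workload in s["workloads"]:
                plan.append({"workload": workload,
                             "from": s["name"],
                             "to": consolidation_host})
    return plan

servers = [
    {"name": "web-01", "utilization": 0.10, "workloads": ["intranet"]},
    {"name": "db-01",  "utilization": 0.70, "workloads": ["orders"]},  # busy: left alone
]
print(rebalance(servers))
```

In practice the hard part is not the scan but the move itself, reallocating a running application “on the fly” without disrupting its users.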
The growth of grid computing could also give a boost to the ailing telecom industry. Insight Research Corp., a consulting firm in Boonton, New Jersey, says that while current grid-computing work focuses on getting the most from hardware, the next step for grids will be to maximize the efficiency of network services. In fact, Insight’s president, Robert Rosenberg, claims that telecom networks will be “essential to the success of grid computing.” He predicts worldwide grid-computing sales will grow by more than 80 percent a year through 2008 to reach $4.9 billion, up from just $250 million this year.
If customers are to spend nearly $5 billion in the name of savings, other sectors will have to feel the pain. To date, precious little hardware has been made obsolete by grid computing. But if efficiency remains the rallying cry in IT, grid computing may have what it takes to overcome corporate inertia.