Centers of Attention

Smaller, more-powerful servers are just one addition to the ever-evolving data center.
Doug Bartholomew, March 17, 2003

Last month, Sun Microsystems Inc. announced a raft of new technologies intended not only to give the company a boost in sales but also to refocus attention on an area of IT that hasn’t been buzz-worthy in years and possibly decades: the corporate data center.

Wasn’t it supposed to have gone “lights out” by now? Or be obsolete altogether, a victim of shoebox-size servers tucked into every nook and cranny at headquarters, or of outsourcing?

In fact, those trends are very much alive, but the data center lives on, continuing to command a sizable percentage of a company’s overall IT budget. Strategies for designing and operating the data center, however, can take into account more options than ever before: networking, storage, operating systems, server technologies, and many other facets of the “glass house” are evolving, as are techniques for managing it all.

Some drivers behind the changes have remained consistent: reduce cost and complexity.

Both were at issue, for example, when Brady Corp. embarked on a mission to centralize and consolidate its entire IT infrastructure. The $526 million manufacturer of identification, labeling, and data-collection products wanted to align its worldwide operations more closely, and to do that, “we needed common technology at the heart of everything,” says Keith Kaczanowski, vice president of IT and process improvement at the Milwaukee-based firm.

The company is moving operations from multiple locations around the globe into two data centers in Milwaukee. The goal is partly to save money and partly to provide a platform on which portal technology can give all employees a better, more-timely window into operations.

Having key financial and operational data accessible to people throughout the company has not traditionally been a prime mission behind data-center redesigns, but these days companies are being more careful to ensure that decisions about data centers mesh with higher business goals.

The aim? Not so much to store data and run applications cheaply as to create a platform that “puts information into the hands of the business users who can do something about it,” says Kaczanowski.

Brady’s data-center consolidation is being repeated at hundreds of corporations. Nearly three out of five IT organizations in the world’s 3,500 largest companies are centralized, according to a survey by Forrester Research Inc. “For a large corporation, it’s fairly easy to get to millions of dollars in savings through recentralization of servers and expensive IT support,” says Peter Kastner, chief research officer at Aberdeen Group Inc., an IT research firm in Boston.

Adds Martin Piszczalski, president of Sextant Research, an IT research firm in Ann Arbor, Michigan, “the number-one way to cut IT costs is by consolidating data centers.”

Many corporations have ridden the consolidation wave to savings by slimming both hardware and staffing needs. “Overall, IT staffing has been reduced anywhere from 5 to 25 percent over the past several years,” says Kastner.

The typical data-center consolidation “invariably yields a double-digit head-count reduction, usually in the 10 to 20 percent range,” he estimates. For a global corporation with an IT budget of $100 million, the annual savings from unifying the computer shop under one roof can easily tally $10 million or more.

Certainly, newer server technologies have been designed to require less manpower — Sun’s recently announced “N1” family of products and services offers a range of automated deployment and monitoring functions, and rivals IBM and Hewlett-Packard are also pushing the “do more with less” theme.

But even the venerable mainframe has been tweaked to keep pace with the times. Without doubt, companies feed and house fewer mainframes than they did half a dozen years ago. But the death of Big Iron has been greatly exaggerated for years. In fact, Mike Chuba, an analyst at Gartner in Stamford, Connecticut, says, “every company in the Fortune 100 has at least one mainframe or has its systems outsourced and running on someone else’s mainframe.”

Iron Giant

IBM, the biggest maker of mainframes, with nearly 90 percent of the worldwide market, sold an estimated 2,000 units last year, and has labored mightily to adapt them to current data-center needs. Data storage? Distributed computing? The Internet? Mainframes tackle it all, and remain a viable data-center workhorse.

National Gypsum Co. saves money by leveraging its IBM zSeries mainframe to support Web-based inquiries of its various mainframe-based applications. “Our Web site is bridged to our mainframe,” says Mike Brannon, senior manager of Internet technology at the sheetrock manufacturer, headquartered in Charlotte, North Carolina.

By integrating its legacy applications with the company’s Web site, National Gypsum was able to avoid having to buy new computers. “It actually takes fewer people to manage the mainframe environment,” says Peter McCaffrey, director of product marketing for the eServer zSeries mainframe business at IBM.

In keeping with the craze to save via centralization, some companies have been able to get rid of a dozen, or 50, or even 100 smaller servers by rehosting all their systems on a mainframe.

Similarly, Winnebago Industries Inc. continues to roll along with its big box. “It’s typical for companies to have unused capacity on the mainframe,” says Dave Ennen, technical support manager at the Forest City, Iowa, maker of motor homes.

The company has been able to run a number of newer Web-type applications on its IBM mainframe without having to add servers. “We’ve had mainframes here since the early 1970s, and we still have a lot of homegrown applications running on there that are critical to our business.”

But for companies without an existing investment in mainframes, the continuing surge of new server technologies tends to drive data-center design. Many are investing in powerful new machines called “blade” servers that essentially contain the guts of several PCs packed onto a single card (as opposed to the more “traditional” — if one can use that word in IT — rack-mounted server unit).

Ultimately, these machines offer the power of a supercomputer at far less cost. “Blade servers use very standard elements to optimize space, power, and cooling,” says Anil Vasudeva, president and principal analyst at Imex Research in San Jose, California. The modular servers’ flexibility means they can be configured into various telecom, networking, or storage modules.

Blade servers can be harnessed to run any number of applications hungry for computing power, including online transaction processing with its high input/output requirements, decision-support systems running online analytical processing, and on-demand video-streaming applications that require high bandwidth. The impact of blade servers “will be truly profound,” adds Vasudeva.

Blades are very much in keeping with the continuing commoditization of the server market. Low-end machines have plunged in price to the point where some observers refer to them as “disposable.” Once expensive and bulky, today’s servers are priced so low that if one fails, it’s more economical to replace it than to fix it.

In some cases, the spares are already installed in standby mode, waiting to be switched on. Dell Computer Corp.’s lowest-priced server sells for $349, including rebates. As recently as the late 1990s, most servers typically were priced in the $12,000 to $100,000 range.

Regardless of what hardware one finds in the data center, the odds are increasingly good that it will be running some version of Linux, the legendarily “free” operating system that seems to win new converts — even among vendors — every day. Linux isn’t totally free, of course; when companies adopt it, they typically spring for maintenance and support services in order to have a vendor on call. But there is no license charge, and for that reason alone most large companies are giving Linux at least a chance.

Some have found that its associated costs — such as consultants to install it and hiring new employees or training current staff to use it — can swallow up much of the savings. “We asked ourselves if this could possibly be a replacement strategy to drive our costs down,” says Ann Turnbull, first vice president and IS manager at Mellon Financial Corp., in Pittsburgh. “We figured that the cost of conversion would outweigh any savings over a two-to-three-year period.”

Despite a surfeit of media attention, Linux has not in fact taken over the world; it accounts for just 5 percent of the server market. But its momentum is unmistakable, and even Sun, which has long touted its Solaris operating system, now offers Linux-based systems on the low end.

Linux is also making gains on the high end, as that endlessly adaptable mainframe proves a viable platform for the software. Winnebago jumped on the bus nearly three years ago, using mainframe-based Linux for such applications as Web serving, file sharing, and other Internet-related functions, and other mainframe shops are following suit.

More typical, however, is Cooper Tire & Rubber Co. “Our systems administrators are trying to learn and experiment with a version of Linux,” says John Mitchell, vice president of IT at the Findlay, Ohio, tire manufacturer. “We don’t know if we want to bet the company on Linux yet.” But many analysts expect that following a suitable period of tire-kicking, most companies will increase their reliance on Linux, perhaps not on the desktop, where Windows reigns supreme, but certainly within the data center.

Centralization entails software decisions beyond the operating system, of course. Oracle Corp., for one, is promoting “single-instance” software, whereby customers can run all their operations using one copy of its enterprise applications.

Always happy to hold itself up as an example to follow, Oracle got single-instance religion a couple of years ago when it found it was running some 64 databases worldwide. “We had essentially one per country,” says Steve Miranda, vice president for applications development at the Redwood Shores, California, company.

No more. Today Oracle has reduced 64 sites to just 3, and will soon be reliant on a single data center in Austin, Texas, with a redundant site at the ready in the event of disaster. Fifteen of Oracle’s large multinational customers are following suit. “The majority of our customers are in a similar boat, with their ERP systems installed multiple times throughout their enterprise,” adds Miranda.

Besides getting rid of redundant hardware and systems, companies find that this kind of centralized approach makes it easier to perform such reporting functions as global audits. The advent of Web-based technologies does, in theory, make a single virtual data center out of widely distributed computing resources, but proponents of the single instance argue that a single installation is rigid enough to enforce standard processes yet flexible enough to accommodate local financial practices.

Copy That?

Even as companies centralize hardware and software, they are choosing to distribute data storage. The idea is to maintain identical sets of data in different locations to protect against service interruptions.

“If data center A goes away, we can bring all the same data up instantly on data center B,” says Allan Woods, vice chairman and CIO at Mellon Financial. This is a trend all CFOs — not just those at banks — should be aware of, he believes. “It will be on most CFOs’ radar soon if it isn’t already — the acceptance that disaster planning and business recovery are investments that companies will have to make,” he adds.

Costs per gigabyte of storage continue to plunge — bad news for vendors, since a concomitant rise in demand has not materialized, but good for customers. When it comes to storage, at least, “absolute dollar expense is not something that drives most CFOs crazy,” says Woods.

Helping to drive costs lower: the trend away from traditional direct-attached storage and toward various forms of networked storage. According to a study from McKinsey & Co. and Merrill Lynch & Co., a company with five terabytes of data storage would save $2.5 million during a three-year period by opting for networked-storage technology.

Glass House or Greenhouse?

The data center has long been renowned for its specialized needs in terms of the care and feeding of a mainframe: raised floors, special cooling systems, and plenty of backup power. But with so much processing power moving to small, low-cost servers, shouldn’t any room do for a data center?

Not necessarily. It turns out that newer computers often have gargantuan appetites for energy and require plenty of cooling capacity in order to counter the prodigious amounts of heat thrown off by multiple CPUs.

“The heat output and power requirements of the new generation of rack and blade servers are 6 to 10 times what many data centers are designed to handle,” says Carl Claunch, vice president and research director at Gartner. The typical data center was designed to provide 40 watts per square foot of space, versus the 300 watts or more demanded by the new machines. “For many companies,” predicts Claunch, “this will mean a fairly extensive retrofitting, or else they will have to find emergency or secondary centers to host their new systems.”

The problem is anything but trivial: building a new data center can easily run into the tens of millions of dollars. Whether building a new site or modifying an existing one, Claunch says that “the time to plan for the power and cooling needs is at the beginning, because it’s disruptive and expensive to build in the additional power and cooling later in the process.”

Some buildings can be retrofitted by raising the floor to run additional cooling equipment or ducting to change the airflow patterns. In other cases, underfloor space can be freed up for the installation of cooling gear by rerouting signal and network cables to an overhead tray. A better route, says Claunch, may be to check the local corporate real-estate market for a data center that was part of a failed dot-com or some other fast-shrinking business that no longer needs a 21st-century glass house. “There is a lot of space on the market, including improved data-center space that’s just sitting there.” Who would have thought that the bursting of the Internet bubble would have benefited, of all things, the good ol’ corporate data center?