If the thought of a little downtime sounds good to you, keep it to yourself. In today’s wired world, “downtime” doesn’t connote a day at the beach, but rather a technological headache that will most likely have your IT staff racing toward whichever server is on the blink.
Odds are good they’ll have plenty of machines from which to choose. Servers became the workhorses of corporate computing in the wake of the client/server revolution of the mid-1990s, and now figure so prominently in E-business infrastructure that they have become part of the common vernacular. Server fiascoes made national headlines during each of the last two Christmas seasons, as the systems of online retailers wilted under enormous customer demand. Among the more Net-literate, the Web’s common “Server not responding” error message is used to describe someone who is a bit slow on the uptake.
A Click Away
When that message does flash across a computer screen, it may well mean that a server problem is about to prove costly. “When customers see your site is too busy, too slow, or unreliable, they may assume that your business is too busy, too slow, or unreliable,” says technology analyst Peter Firstbrook of Meta Group Inc., in Stamford, Connecticut. “In E-commerce, the competition is a click away.”
That means that choices about servers — which ones to buy, how to incorporate them into an IT infrastructure, how to acquire them most cost-effectively — are now a major part of a company’s technology strategy. Reliability is crucial, but context is everything. Knowing just what a server is expected to do, now and in the future, often determines which choices are the most sensible.
Firstbrook is quick to point out that hardware does not act alone — applications and networks are equally important elements of a successful E-commerce platform. Even hardware vendors agree. “We’ve gotten away from quoting 9s,” says Mike Maas, director of Web server products at IBM in White Plains, New York, referring to the common vendor practice of one-upping competitors by issuing server-reliability quotes such as 99.999 percent or 99.9999 percent. IBM opted out of these decimal duels, he says, because “reliability is so dependent on the application and the infrastructure.”
If the relationship among software, hardware, and other elements of corporate operations is complex, the situation becomes even more difficult when companies move their businesses online. “Because of the nature of E-commerce, the scale is often unknown,” notes Firstbrook. “The infrastructure has to be designed so it can scale incrementally,” meaning it can add capacity as needs dictate. This is where the enormous variety of servers can be a blessing or a curse. A bad decision can mean that a powerful — and expensive — server will go underused, he says. Alternatively, companies that underestimate demand can find themselves desperately adding servers every month.
But capacity alone is not the issue. In techno-speak, the primary issue for servers in mission-critical roles is “high availability,” or HA. Like the word “server,” HA varies in meaning — and cost. “Availability needs to be measured in terms of the degree to which the server directly supports the business,” says Peter Burris, another analyst at Meta Group. In other words, HA is one area of IT where the cost of each “nine” can — and must — be weighed against the costs, tangible and intangible, of downtime. For some companies, a 10-minute outage may be acceptable; for others, it can be disastrous.
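The economics of each “nine” come down to simple arithmetic: every additional nine cuts the allowed downtime by a factor of ten. A quick calculation makes the trade-off concrete (illustrative only; real service-level agreements define their measurement windows differently):

```python
# Translate "nines" of availability into allowed downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label} ({availability:.5f}): "
          f"{downtime_minutes_per_year(availability):,.1f} min/year")
```

Five nines allows barely five minutes of downtime a year, which is why each extra decimal place gets dramatically more expensive to deliver.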
Mirror, Mirror
And availability isn’t just about the size of a server, but about its design and its reliance on supporting technologies. Last year, for example, the Bank of New York spent more than $10 million to beef up the reliability of its online Inform system, says Richard A. Pace, executive vice president and chief technologist. Inform offers clients access to most of the bank’s products and services, and over the past five years “has become a primary way for us to deal with business customers,” he says. Today, more than 2,000 of the bank’s business clients use the system to get information, compile customized reports, and enact high-value transactions, including instructions to receive or deliver securities or cash, or to open or confirm trade credits. “These are high-value transactions in a deadline-sensitive environment,” notes Pace.
Inform’s customers, who include finance executives, treasury managers, investment managers, and compliance officers, use standard Web browsers to navigate the system, and more than half access the system through the Internet, says Pace. Large corporate clients (typically those with more than 100 users) tend to use private-line access for speed and reliability of connection.
Like most online transaction environments, Inform requires a wide range of servers on which to run. Most of the back-end transaction processing takes place on IBM mainframes. To make it easier to generate reports, mainframe data is replicated to a Sybase database on Sun servers. Inform’s Web applications are also handled by Sun servers. Inform relies on a large bank of 59 Compaq NT servers, about half of which are used to compile specialized reports from the data held in the Sun server database.
Inform is a model of high-availability best practices: failure is not an option. To guard against it, there are primary and secondary servers in both of Bank of New York’s data centers — one located in New York, the other in New Jersey. Over the past two years, says Pace, the Sun and Compaq servers have been “mirrored,” using special software from Veritas. If something goes wrong with the primary server, the transactions it is handling automatically fall over to a secondary server. At the macro level, the data centers, like the servers, mirror each other’s information.
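The primary/secondary arrangement Pace describes can be sketched in miniature. The sketch below is purely hypothetical and omits most of what real HA software such as Veritas’s provides (state mirroring, heartbeats, split-brain protection); it shows only the core idea, that a failed transaction “falls over” to a standby machine:

```python
# Minimal, hypothetical sketch of primary/secondary failover.

class Server:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def handle(self, transaction: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"{self.name} not responding")
        return f"{self.name} processed {transaction}"

class FailoverPair:
    def __init__(self, primary: Server, secondary: Server):
        self.primary = primary
        self.secondary = secondary

    def submit(self, transaction: str) -> str:
        try:
            return self.primary.handle(transaction)
        except ConnectionError:
            # Primary is down: the transaction falls over to the secondary.
            return self.secondary.handle(transaction)

pair = FailoverPair(Server("primary-ny"), Server("secondary-nj"))
print(pair.submit("wire-001"))   # handled by the primary
pair.primary.healthy = False
print(pair.submit("wire-002"))   # falls over to the secondary
```

Bank of New York applies the same pattern at two levels: server pairs within each data center, and the two data centers mirroring each other.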
The outlay for all this is “not an insignificant amount of money,” says Pace. Last year’s $10 million price tag, he says, was driven in large measure by an insistence on high availability “because this is so mission-critical for us and our customers.” Indeed, he claims, many customers are abandoning some of their own internal systems and relying on Inform’s online applications — a compliance system, for example, that allows managers to check whether their investment decisions comply with their own preset guidelines.
But the actual price tag of the servers that support Inform, says Pace, is small compared with the total cost of ownership. Indeed, he says, the enormous cost pressure on vendors has made hardware virtually a commodity. Vendors now compete by offering extended warranties, and by proving that their boxes are more reliable, scalable, serviceable, or manageable than those of their competitors. Ease of management has become the new battleground. As Meta Group’s Burris says, “It doesn’t make any sense to save half a percentage point on the out-of-pocket expense if you engender sizable support costs as a consequence of an unnecessarily unique implementation.” For example, the biggest factor in Bank of New York’s selection of NT servers from Compaq was not price, says Pace, but Compaq’s remote management capabilities. “It is very expensive to roll a truck to a remote location,” he notes.
Modest Expectations
For many companies testing the E-business waters, however, systems like those at Bank of New York would represent massive overkill. At the other end of the E-business scale, for example, is Boston-based Nstar, the parent company of Boston Edison and several other electric and gas utility companies in Massachusetts. Nstar’s executives know all about the need for high availability, and maintain highly redundant IT systems designed to keep power flowing under all conditions.
Nstar’s E-business efforts, however, don’t need to be as sophisticated as its mission-critical operational systems. Two years ago, the company rolled out an electronic bill-payment system as part of a push to improve customer service. To date, the system is used by fewer than 120,000 customers, according to chief information officer David Samuel. “We still have only early adopters involved. We don’t have the mainstream of our customer base doing this yet.” But he expects as many as 60 percent of the company’s 1.3 million customers to ultimately pay their bills online.
That level of adoption is at least a few years off, he says, and hasn’t been helped by such uncontrollable factors as the recent Love Bug virus; any reports of hacking or related computer problems tend to make the public skittish about transacting personal business on the ‘Net. Nstar’s application also is not as sexy as other Internet services, such as online stock trading or electronic toy stores, that have had to deal with sudden explosions of customer interest. “Remember, we’re talking about paying utility bills,” Samuel points out. “It is still not top-of-mind for most consumers.”
That means that unlike Bank of New York, which handles huge transaction volumes and also must be prepared for sudden spikes, the challenge for Nstar is to build a system that is likely to grow slowly. Reliability is still important, though, not only because it helps hook customers who do try the system, but also because the same system is used by customer service representatives at three Nstar call centers, who answer the majority of customer inquiries.
The heavy lifting of Nstar’s electronic billing system is performed by just two Compaq 6000 NT servers — machines that cost about $15,000 each, says Samuel. One server holds a database of billing information, while the other runs the applications used by both Web customers and the company’s customer service reps. Two Compaq 1600 NT servers — small machines that cost only about $5,000 each — sit outside the firewall and provide the interface that customers see. A third server, a Compaq 3000, is used to test the data before it is released to the production system.
All told, says Samuel, the billing application represents a total investment in servers of about $50,000 to $60,000, a fraction of the company’s $60 million annual IT budget. “This is the value of smaller servers,” he notes: they allow companies to deploy certain systems quickly and inexpensively. While most E-business strategies revolve around high-end servers with plenty of excess capacity, “we believe ours is the more prudent approach,” Samuel says. As demand grows, Nstar can respond first by clustering additional NT servers around the application, and defer moving to a big-ticket Unix server until absolutely necessary.
Finance Figures In
For companies insisting on high availability, how much to pay may not be an issue, but how to pay certainly is. The buy-versus-lease decision should not be made hastily. Bank of New York buys all of its NT servers, says Pace, because it believes that leasing at the Wintel platform level — the smaller servers that become obsolete fairly quickly — doesn’t make sense. At the Unix platform level, he says, there is a viable aftermarket for used equipment, and vendors are more prepared to “take the speculative residual of the technology curve.” Bank of New York purchased all of its Sun servers, but has leased some high-end Alpha servers from Compaq. “We run a lease/purchase analysis anytime we are making a purchase at that level,” Pace says, “based on the cost of financing, the technology outlook, and where we are in the technology development cycle.”
New York-based brokerage firm Spear, Leeds & Kellogg (SLK), by contrast, leases about 95 percent of the Unix servers that support its online systems, says CIO Larry Cohen. About 10,000 professional traders, hedge funds, institutions, and broker-dealers use SLK’s REDI system to access the NYSE, the Amex, Nasdaq, and options exchanges. Some 17 percent of the NYSE volume passes through REDI each day, says Cohen. SLK also built a trading system to support about 400 traders in the firm’s over-the-counter market-making division, which does more than 400,000 trades a day.
Backup hardware is essential, says Cohen. “Pretty much every time we need one server, we get two.” All told, SLK replaces as many as 80 to 100 servers a year. With that kind of turnover, and with the price/performance ratio of computer hardware continually on the rise, Cohen says he prefers to lease almost all of his machines. “Over the course of three to four years, we find we can get new hardware that is much faster and cheaper,” he says.
One of the key financial issues when evaluating servers, he notes, is “assessing how long you will use a machine and what the residual value will be at the end.” Sometimes, he admits, leasing isn’t the right move. “If you feel that the process that is running on that machine will never need anything more from a power point of view for the next five years, you’re better off buying it.” But while that may be the case for systems running relatively static applications like payroll, he says, it is rarely the case in E-business, where functionality requirements change regularly, and volume can spike dramatically.
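A back-of-the-envelope version of the lease/purchase analysis Pace and Cohen describe can be run in present-value terms. Every figure below — the price, the residual value, the lease payment, the discount rate — is invented for illustration; a real analysis would also weigh tax treatment and the technology outlook:

```python
# Toy lease-versus-buy comparison in present-value terms.
# All dollar amounts and rates are hypothetical.

def present_value(cash_flows, rate):
    """PV of a list of annual cash flows, with the first flow at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

rate = 0.08   # assumed annual cost of financing

# Buy: pay up front, then recover an assumed residual value at year 3.
purchase_price = 100_000
residual_value = 15_000   # what the used-equipment aftermarket might pay
buy_cost = present_value([purchase_price, 0, 0, -residual_value], rate)

# Lease: three equal annual payments; the lessor takes the residual-value risk.
annual_lease_payment = 32_000
lease_cost = present_value([annual_lease_payment] * 3, rate)

print(f"PV of buying:  ${buy_cost:,.0f}")
print(f"PV of leasing: ${lease_cost:,.0f}")
```

In this made-up scenario the two options land within about $1,000 of each other, which is exactly why the residual-value assumption — how fast the box loses value on the technology curve — tends to decide the question.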
In fact, adds Cohen, “we always expect to be able to double our business. We never like to run our machines at more than 50 percent of capacity.” That’s a conservative figure for most applications, he says, “but in the brokerage industry, volumes can explode very quickly.” At 50 percent capacity, SLK’s servers can also shoulder the load if the company should somehow lose an entire data center.
Meta Group’s Firstbrook says it is not unusual for companies to run servers well below capacity. “We have clients that build out for 4 to 10 times their average daily peak load,” he says. There’s no question that both Bank of New York and SLK need such gold-plated systems. “Banks and financial institutions are extremely good at building high-availability systems,” he says. “You really have to be prepared for the storm.”
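The headroom rules of thumb quoted above — never run above 50 percent of capacity, build out for 4 to 10 times the average daily peak — amount to simple arithmetic. The traffic figure here is invented; only the two rules come from the practices described above:

```python
# Headroom arithmetic for capacity planning.

def required_capacity(avg_peak_load: float, peak_multiplier: float,
                      max_utilization: float) -> float:
    """Capacity needed so the worst expected spike stays under the
    utilization ceiling."""
    return avg_peak_load * peak_multiplier / max_utilization

# e.g. an assumed 2,000 transactions/sec average daily peak, planning for
# a 4x spike, never running above 50% utilization:
print(required_capacity(2_000, 4, 0.50))   # 16,000 tx/sec of capacity
```

In other words, a brokerage following both rules buys eight times the capacity its average day actually uses — which also leaves each data center able to absorb the other’s load.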
Smart Shopping
While competition has helped drive down costs, it’s also vital to buy the right server for the job. “If a server is not well suited to an application,” says Meta Group’s Burris, “then the IT staff will have to modify it, or it will present integration and scalability problems, or require more support personnel. In the U.S. alone there are 850,000 IT positions that are unfilled. No CFO wants to have to overinvest in the most critical resource — people — when they could spend a few more dollars for the proper server product. The trend has been clear for a number of years: people cost more than hardware.”
Vendor support is also critical, notes SLK’s Cohen, whose servers are all from IBM. “You don’t pick the vendor for financial reasons alone, but for service, reliability, and reputation.” Perhaps for that reason, companies tend to use a single vendor, at least within a given product tier. Bank of New York, for example, uses Sun for its database servers and Compaq almost exclusively for smaller, Intel-chip-based servers, says Pace.
Nstar is even more consistent. While keeping an open mind about other platforms, the company currently uses Compaq exclusively at both the Wintel and Unix platform levels. Standardization, says Samuel, is an important element in life-cycle planning. So is knowing when to replace machines: Since repair costs can be high and downtime deadly, Nstar swaps out old servers in favor of new ones “whether there is more life in the box or not,” he says. “Preventive maintenance is one-third the cost of corrective maintenance.”
Despite the simplicity of his one-vendor approach, Samuel says that what matters most is partnering with a leading vendor, or a combination of vendors, that have the resources to constantly incorporate new technologies in their products. Increasingly, as servers go, so goes the business. “We try to create a standard, enterprisewide architecture that facilitates growth, reduces total cost of ownership, and provides stability and performance,” says Samuel. “That’s how IT contributes to earnings per share.”
Tim Reason is a staff writer at CFO.
A Wide World of Servers
Considering the degree to which businesses now depend on servers, it’s remarkable that senior executives know so little about them. Servers range in price from $5,000 to $500,000 and fall into three distinct tiers. Some look like desktop PCs on steroids; others are slim boxes stacked in racks like an audiophile’s stereo system. Re-label mainframes “enterprise servers,” as IBM has done, and you’ve got massive machines the size of industrial refrigerators.
It’s not really size that matters, of course, but functionality. There are Web servers, application servers, database servers, E-mail servers, firewall servers, and many others, each usually playing a highly specialized role.
At the bottom of the scale are the Intel-based servers that typically run Windows or Windows NT (hence the term “Wintel”) and support the networked desktops used by employees. In E-business, NT servers, often clustered together, increasingly serve up the presentation level of applications; that is, they deliver the information the customer sees on the screen. Appliance servers, designed to be installed quickly and at low cost, and to provide power for specific applications, are a recent addition to this category.
Mid-tier servers — usually based on some flavor of the Unix operating system — are used for operational applications and are generally perceived as offering a higher level of reliability and processing power (although Microsoft hopes its newest server software will enable Wintel servers to compete at this level). E-business applications that allow customers to perform some sort of transaction over the ‘Net typically rely on this type of server at the back-end. Unix servers can be clustered together to provide significant, “mission-critical” processing power.
At the top of the scale are mainframe computers, which offer unparalleled reliability and transaction-processing power, but may have limited flexibility. The term “mainframe” is so often associated with the legacy systems at the heart of Y2K anxieties that it is out of favor among many technology providers. In fact, the distinction between mainframes and high-end servers is increasingly difficult to make, as new forms of “middleware” allow mainframes to take a more active role in the business — supporting thousands of users in real time, for example, versus smaller numbers accessing it via batch transactions. —T.R.
The Cost of Cool
If Bill Gates made it socially acceptable to be a geek, Linus Torvalds made it cool. Torvalds is the inventor of Linux (pronounced “Lynn-ix”), the Unix-inspired operating system kernel that can be downloaded for free from the Internet. Because its source code has been debugged and refined by scores of programmers around the world, Linux is widely touted as a stable — and cheap — operating system option for the smaller server markets dominated by Microsoft Windows and Windows NT.
Linux has a passionate following among the tech-savvy, and mainstream support from such hardware and application vendors as IBM, Compaq, Hewlett-Packard, Dell, SAP, and others. But not everyone is a fan. Meta Group Inc. analyst Peter Firstbrook goes so far as to say that “Linux should be shunned. It should not be a part of the business process.” Firstbrook objects to the very feature that most tout as Linux’s number one asset — the fact that anyone can tweak the code — because it creates a situation in which an IT staffer may make changes that no one else knows about, and that probably go undocumented.
Firstbrook also takes issue with Linux’s most famous feature — the fact that it is free. “Our analysis says that the cost of the operating system is only 3 percent of the total cost of ownership of the server,” he says. Labor is a far more significant proportion of IT costs, and the very cost that is likely to be affected if employees spend time tinkering with Linux.
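Firstbrook’s point is easy to check with arithmetic: if the operating system is only 3 percent of a server’s total cost of ownership, a free OS saves little, and even a modest rise in labor cost erases the saving. The dollar figures and the labor share below are invented for illustration; only the 3 percent OS share comes from the analysis quoted above:

```python
# Illustrative TCO arithmetic behind Firstbrook's argument.
# Dollar amounts and the labor share are hypothetical.

tco = 100_000        # assumed 3-year total cost of ownership for one server
os_share = 0.03      # OS is ~3% of TCO, per Meta Group
labor_share = 0.60   # assumed: labor dominates TCO

os_savings = tco * os_share                 # best case for a free OS
labor_increase = tco * labor_share * 0.10   # a 10% rise in labor cost

print(f"Savings from a free OS:  ${os_savings:,.0f}")
print(f"Cost of 10% more labor:  ${labor_increase:,.0f}")
```

Under these assumptions, a 10 percent increase in labor cost — say, from staff time spent tinkering — costs twice what the free OS saves.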
“Linux is out there and people are using it, but it is mostly because of the cool factor,” he says. “Having somebody who can screw around with my operating system would make me very, very nervous.” —T.R.