With all the movements afoot in information technology, and with the Year 2000 problem casting its ominous shadow, the picture of corporate computing seems more out of focus than usual. But here’s comforting news: many of the new technologies and trends in IT only appear to pull in separate directions. Whether it’s thin clients, Windows terminals, NCs, Java computing, manageability, or Web hosting, all point to the same welcome conclusion: the tyranny of personal computers is rapidly ending.
That doesn’t mean they’re going away. PCs will remain indispensable appliances for the desks, laps, or palms of workers. But in the data-crunching hierarchy, they will be demoted from captains to foot soldiers. In the process, a lot of the headaches–and costs–of managing them will vanish.
Throughout Corporate America, PCs are already plugged into networks–LANs (local-area networks), WANs (wide-area networks), and intranets. But conventional client/server computing, still the big play in IT, divides application programming into scattered pieces, with at least one piece on each PC that uses the application. Thus, IT staffs never stop sprinting to keep machines in synch with one another, and in compliance with applications.
Reinventing the PC as a network node and putting centrally operated servers in charge doesn’t solve all the problems associated with PC management, but it makes the ones that remain more, well, manageable. It will let companies keep close tabs on both their data and the applications that parse the data, whether enterprise resource planning (ERP) suites or sales-force automation systems.
“That way there is no software out there except what a company decides its workers should have,” summarizes Amy Wohl, editor of TrendsLetter, an IT newsletter. “There are not different versions in use, and there is no other software interfering with important applications”–two of the prickliest thorns afflicting PC management.
The TCO Challenge
The problems of PC management coalesce in the concept of total cost of ownership, or TCO, coined nearly a decade ago by Gartner Group Inc. Although there is disagreement over how it should be calculated, TCO represents a commonly accepted fact: the purchase investment in computers, software, and related goods–the traditional focus of financial accounting–is nothing compared with the accumulating costs of getting everything running effectively and keeping it that way. “Labor is typically 70 percent of the total cost of [PC] ownership,” states Vaughn Frick, Gartner’s research director for IT business management.
Derived from activity-based costing techniques, TCO varies widely from company to company. For purposes of illustration, Gartner uses a hypothetical company that runs 2,500 workstations on Windows 95, with 25 servers, about 1,600 printers, and so on. In the worst case, with virtually no management efforts, such a company spends $10,000 annually supporting each PC, says Frick.
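To put the arithmetic in one place, here is a minimal, purely illustrative sketch (in Python) using the figures cited above–2,500 workstations, $10,000 per machine in the worst case, and Frick’s 70 percent labor share.

    # Back-of-the-envelope TCO arithmetic from the Gartner figures cited above.
    workstations = 2500          # desktops in the hypothetical company
    annual_cost_per_pc = 10_000  # worst-case annual support cost per PC, in dollars
    labor_share = 0.70           # Frick's rule of thumb: labor is ~70% of TCO

    total_tco = workstations * annual_cost_per_pc
    labor_cost = total_tco * labor_share
    print(f"Annual TCO:  ${total_tco:,.0f}")     # $25,000,000
    print(f"Labor share: ${labor_cost:,.0f}")    # $17,500,000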
The money goes in a lot of directions. Gartner puts out a 21-page “Distributed Computing Chart of Accounts” that catalogs more than 60 cost contributors. They include time spent in meetings with vendors, productivity lost when equipment fails, labor involved in disposing of retired machines, and many more.
A lot of the labor costs aren’t easily measured or even identified. When Eastman Chemical Co., in Kingsport, Tennessee, took stock of its IT operations in early 1997, it uncovered nearly 150 unofficial PC gurus–chemical engineers, chemists, and other non-IT personnel who spent more than half their time troubleshooting PCs for co-workers. About 200 more employees spent lesser but still significant shares of their office hours fixing PCs, bringing the total of unofficial PC experts to roughly 2 percent of the company’s then-17,000-employee workforce. The productivity losses involved may be obvious, but they’re tough to tally.
“Corporations are awakening to the costs of all their PCs and LANs,” declares Tom Austin, vice president and research fellow at Gartner’s electronic workplace technologies unit. “There are massive, ongoing costs associated with these technologies that people just didn’t anticipate.”
Distributed Headaches
As with any other asset, computer infrastructures will always require careful management. But some characteristics of today’s dominant computing model, client/server, make management particularly tough.
In conventional client/server setups, PCs have a good deal of power–especially in “fat client” configurations, where the data processing is handled on the desktop. In recent years, the fat-client model has been largely superseded by so-called three-tier or multitier configurations, with the data processing migrating to servers and clients becoming correspondingly “thin.” Still, a lot of companies have a lot of separate software pieces floating around their client/server networks.
Worse, those widely dispersed PCs have to work together as a coordinated whole. Accordingly, each computer has to meet the same requirements the application imposes, including the arcane particulars of operating systems, internal software configurations, and what techies call “hardware dependencies”–physical requirements like memory size and hard-drive space. It’s not easy to impose PC uniformity to support corporatewide applications like ERP, which span many operations and departments. Throw in the fact that many PCs run several such applications–each of which can demand its own desktop requirements–and the level of complexity skyrockets.
Costs shoot up with it. Maintaining uniformity demands constant vigilance from network managers, because PCs can spin out of spec pretty easily. Every time a worker loads new software, or tries to adjust the stuff already in his PC, he can upset precariously balanced and configured programming. That accounts for a lot of help-desk calls. Even trivial tasks, like personalizing a display, can scramble a PC’s innards. “Something as simple as a screensaver can be a technical nightmare,” laments Lena Zoghbi, vice president, enterprise system management, at People’s Bank, in Bridgeport, Connecticut.
In extreme cases, the quest for conformity can drive companies to a computer housecleaning, or as Dell Computer Corp. puts it, a “bulldozing.” For example, Eastman Chemical standardized its computer operations in 1997 by leasing 10,000 Dell computers. That meant tossing out its accumulated hodgepodge of 10,000 machines.
In the computer trade, that is touted as a success story.
NT Pain Relief
Acquiring PCs en masse from a single manufacturer is a good way to manage PCs, but controlling applications from centrally operated servers is an even better way. Not surprisingly, vendors are scrambling to develop solutions for achieving such control.
Microsoft Corp.’s solution is an operating system–Windows NT Server 4.0, Terminal Server Edition. The system lets PCs operate in a time-sharing mode, running as mere terminals attached to a server that does the heavy lifting. What this version of Windows NT has that others don’t is MultiWin technology, from Citrix Systems Inc. MultiWin enables NT servers to perform data processing for 30 to 100 users per processor simultaneously, depending on the hardware and the application. (A system can be scaled up by uniting machines in what’s called a server farm.) Ordinary desktop clients basically become computer terminals: they send keystroke and mouse signals over the Internet or another network connection, and receive screen images from the server doing their bidding.
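To make that division of labor concrete, here is a hypothetical sketch (in Python, simulating both sides in one process)–not the actual Citrix or Microsoft protocol, just the pattern it follows: raw input events travel to the server, the server runs the application, and only rendered screen updates travel back.

    # Conceptual sketch of thin-client computing: the desktop forwards input
    # events; the server keeps the application state and returns screen images.
    def server_process(app_state, event):
        """The server does the heavy lifting: apply the event, rerender the view."""
        if event["type"] == "keystroke":
            app_state["document"] += event["key"]
        screen_image = f"[rendered view of {app_state['document']!r}]"
        return app_state, screen_image

    def thin_client(events):
        """The client only captures input and displays what the server sends back."""
        app_state = {"document": ""}     # lives on the server, never on the desktop
        for event in events:
            app_state, screen = server_process(app_state, event)
            print("client displays:", screen)

    thin_client([{"type": "keystroke", "key": c} for c in "memo"])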
At the same time that Microsoft released Terminal Server last June, Citrix began selling MetaFrame, which puts a Windows NT server to work for clients running operating systems other than NT. The software starts at $5,000 for a 15-user package, available through software resellers and systems integrators.
Citrix’s products enable Windows-based thin-client computing, which at present is the most enthusiastically received approach to centralizing PC management. The products let companies retain their familiar Windows computing interface, while gaining the advantages of centralized processing. They also let companies keep their PCs.
“People can have a single desktop that’s a PC, for things like word processing and spreadsheets,” comments Laurie McCabe, research service director at Summit Strategies Inc., a market research firm in Boston. “But they’ll access more and more applications through the network to the server.”
Windows Terminals & NCs
On the other hand, the chance to consolidate big applications on host computers also creates opportunities to depower desktops, especially when workers need little more than a gateway to the server. That explains the sudden popularity of Windows terminals, essentially a new product category. Models like the $700-and-up ThinSTAR, from Network Computing Devices Inc. (NCD), in Mountain View, California, and Winterm, from San Jose-based Wyse Technology Inc., starting at $369, plug into servers supporting thin clients.
What about the loudly heralded network computer (NC)? It has found customers, though not as many as its sponsors hoped for. (See “Ding-Dong, PCs Are Dead,” CFO, July 1997.) NCs like IBM Corp.’s Network Station and Sun Microsystems Inc.’s JavaStation are more versatile than Windows terminals, with more onboard processing power, but they omit disk drives and other passages through which workers can locally load or extract programs.
NC advocates tout the management advantages of this tamper-proof design. For one, users can’t pile in software that may foul up the system. For another, the absence of on-the-desktop data storage eliminates losses from local catastrophes like hard-drive failures. In fact, because there are no moving parts in NCs, mechanical failures are all but eliminated.
Still, a fair measure of skepticism circulates around NCs. Doubts grow strongest where NCs depart farthest from the Windows/Intel environment, into the floating world of Java.
“The Java computer has failed,” asserts Gartner’s Austin. Its shortcoming, he maintains, is irrelevance: a JavaStation simply isn’t necessary to gain the advantages of a recentralized, thin-client configuration. Although a JavaStation can handle Windows applications, it favors programming based in Java, Sun’s system-independent programming language. Trouble is, Java-based applications are emerging only slowly, says Austin. Plus, Java programming already runs on conventional PCs that are loaded with software to decipher the language.
What’s more, earlier efforts to sell NCs based on lower purchase prices have been largely rebuffed. “That’s nice hype, but by the time you purchase the thin client, purchase the warranty, purchase the software to run the thin client, and purchase the server to serve the thin client, you have as much money invested as you do in standard client/server,” says Duane Stanley, managing director of information systems for American Eagle Airlines Inc., a Fort Worth-based regional air carrier that uses IBM’s Network Station.
At American Eagle, the NC’s advantages show up in reductions in software maintenance and management. Stanley says he’s getting about 200 percent savings over what a conventional client/server setup would cost. He oversees a network of more than 350 Network Stations scattered across 16 maintenance centers, communicating with servers in Fort Worth over the Internet. Employees use them to send E-mail, requisition replacement parts from online catalogs, and maintain inventory.
Not only does this setup consolidate all application servicing and repair, but when a thin client does screw up, technicians can diagnose and, often, fix the problem by using the control computer to look inside the client from afar. “We can have a smaller group of people to solve problems, which lowers our cost,” says Stanley.
American Eagle set up its maintenance and repair system from scratch, automating what had been largely paper-based transactions. The move to tightly centralized network computing isn’t as easy where employees are accustomed to controlling their own, autonomous PCs. The remedy is a gradual transition–an approach the airline expects will take about 6 to 12 months for its PC-savvy office workers.
At Solectron Corp., a $5 billion provider of electronic manufacturing services in Milpitas, California, CIO Ken Ouchi says getting “more psychological separation” between workers and their PCs is key to centralizing software administration. Solectron is moving more data processing and storage onto servers and giving workers password-protected access to them from any PC on the network. “You can walk up to anybody’s PC and use it, like a phone,” he says. “You shouldn’t feel that the one on your desk is necessarily yours.”
Web Hosting
Still, Solectron plans to let its workers retain their PCs. So far that seems to be the most popular approach. Of the 2 million terminal connections via Citrix software, nearly 75 percent are made with conventional, fat-client PCs, reports Dave Manks, senior product manager at Citrix. “Our message is, Don’t throw away your hardware.”
Certainly the same message applies to Web hosting (also called Internet computing), which works with any but the most antiquated PC. Nothing more than a Web browser, which functions as a universal client, is required at the desktop. Server-based applications must be configured to take commands and dish up data via the Internet. After that, Web hosting provides the same benefits as other server-centered approaches–plus it works everywhere.
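A minimal sketch of the idea, using nothing but Python’s standard library (hypothetical–no particular vendor’s product is implied): the application and its data live on the server, and any machine with a browser can reach them over HTTP.

    # Sketch of a server-hosted application reachable from any browser.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    INVENTORY = {"hydraulic pump": 12, "brake assembly": 4}   # data stays on the server

    class AppHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The browser (the "universal client") just issues requests and
            # renders whatever HTML the server returns.
            rows = "".join(f"<li>{part}: {qty} in stock</li>"
                           for part, qty in INVENTORY.items())
            body = f"<html><body><h1>Parts inventory</h1><ul>{rows}</ul></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("", 8080), AppHandler).serve_forever()   # point a browser at port 8080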
“In the old client/server paradigm, you had to optimize the client for every possible application,” notes Marty Gruhn, vice president at Summit Strategies. “That’s not true with universal clients. You need a reasonably fast PC, because you don’t want one that’s going to crawl along. But you certainly don’t need a heat-seeker, either.”
Predicts Gartner’s Austin: “By 2001, 50 percent of new business applications are going to be built and deployed and managed using this Web style.”
Thanks to an emerging service called Internet application hosting, companies now have the option of leasing Web-hosted applications, which employees access via browser. The service is currently available from third-party hosts, such as USinternetworking (USI), in Annapolis, Maryland. USI implements and runs PeopleSoft financial and human resources applications for companies. The rate ranges from $15,000 to $200,000 a month, depending on the application. USI takes care of all server hardware and software, setting up the system and running it. It guarantees rapid implementation (30 to 90 days) and 99 percent uptime. A typical lease runs for three years.
But Web hosting can serve up its own set of problems. For instance, American Eagle ran into Internet communication delays that stretched out its NC-system installation from two months to six. Response time was too slow, because the system relied on standard Internet communication–Internet traffic ferrets out random pathways as it travels over the public telecommunications grid. American Eagle eventually fixed the problem with a connection method called priority peering.
Security may be the biggest concern, but tech watchers appear confident that current security measures are already adequate. Better ones are soon to follow. “If people are in a cold sweat about security, they can work with companies that provide virtual private networks, which are bulletproof,” says Gruhn.
Meanwhile, consensus is building that the management advantages of Web hosting outweigh such concerns. Austin cites a large consumer-goods company with more than 50,000 PCs that is, he says, a paragon of PC management. Nevertheless, he says, the company will no longer build client/server applications, but instead will build Web-based applications. Why? “It’s so much simpler.”
Jeffrey Zygmont writes on business and technology from Salem, New Hampshire.
7 BEST PRACTICES
Server-centric computing doesn’t solve all PC management problems.
There’s more to PC management than putting applications on host computers. Sure, that cuts out the need for so much conformity among hundreds or thousands of desktops. But collectively, those desktop computers still represent an expensive, mission-critical asset, and there’s lots that can go awry.
It turns out that centralized control, in various guises, remains the master key to PC management. The following best practices work from the center to spread PC sanity throughout an organization.
(1) Buy goods with built-in manageability features. Awakened to the horrors they’ve wrought, many hardware and software suppliers now pack their products with accessories designed to help companies tighten control. On the hardware side, the names Compaq, Dell, Hewlett-Packard, and IBM earn consistent praise for such efforts. On the software side, Microsoft Corp.’s autumn 1998 update of Windows NT 4.0 supports a standard called Web-based Enterprise Management, designed to ease communication among systems controlling desktop, network, and data-center management. Even chip-maker Intel Corp. is aboard with its Wired for Management program, an attempt to make management features on all Intel-equipped machines work together.
Generally, these manageability features come built in, at no extra cost. They provide asset tracking and management capabilities, helping companies keep tabs on machines’ whereabouts and raising alarms when lids are lifted, deterring theft. They can also include PC self-monitoring capabilities that report problems and imminent failures to a network watchdog.
(2) Use the built-in manageability features. “Many companies are buying manageability features in their PCs, but few of them, maybe only 25 percent, have actually implemented manageability,” observes Roger Kay, senior analyst for IT market researcher International Data Corp. The main reason: it requires extra learning and effort for IT departments already stretched thin. It also doesn’t help that, in a typical mix-and-match corporate PC collection, not all machines include built-in manageability features–and among those that do, features from different suppliers may not work together.
Early planning is key to getting the most out of such features, says Doug White, a partner in the enterprise infrastructure practice of KPMG Peat Marwick LLP in Chicago. That could include an acquisition strategy that puts a premium on manageability.
(3) Buy from limited sources. Every PC purchased by Frank Russell Co., the Tacoma, Washington-based investment management firm, comes from Compaq. And every laptop is either a Compaq or a Toshiba. That’s because Frank Russell recognizes that PCs from different companies have subtle differences in their internal architectures that can affect the ways software runs on them. Frank Russell wants software to work identically on each of its 1,500 desktops. “Now, a support person always knows how to start the problem-solving process,” notes John Stingl, the firm’s manager of roll-out services. “Before, it took 15 minutes just to figure out what each person’s desktop looked like.”
If the number of different products can be kept small, TCO is reduced, attests Stingl. The paybacks can come in subtle ways. It’s simpler to integrate service operations with the help desk of just one PC maker, for instance. And even the administrative burden of equipment ordering is eased, adds Laurie McCabe, research service director at Summit Strategies Inc. in Boston. Notes McCabe, “A lot of companies are saying, This is just crazy–we’re not going to continue to buy PCs from three or four vendors.”
(4) Adopt a common software image, and certify additions. Standardization at Frank Russell extends to software as well. The company’s look-alike machines run on the Windows 95 operating system. They also get the same versions of applications, such as Microsoft’s Office 97 suite for basics like word processing.
Still, to maintain consistency even with identical software pieces operating on identical hardware platforms, Frank Russell installs software in what’s called a standardized image: everything goes into the machine as one monster file, set up in advance to tune all the software elements to the particulars of the company’s standard PC configuration.
Why? The established method of loading software sequentially, in layers starting with the operating system, introduces inconsistencies among PCs, explains Lena Zoghbi, vice president, enterprise system management, at People’s Bank, in Bridgeport, Connecticut. That’s because the installer is asked to make decisions along the way, such as in which directory to place a particular piece. “Chances are, you’re going to have a lot of different flavors of the same application, because people are going to install it differently,” Zoghbi says. With a preestablished “image,” the decisions are already made.
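The contrast is easy to picture. A hypothetical sketch (in Python; not Frank Russell’s or People’s Bank’s actual tooling): layered installs leave room for per-machine decisions, while an image fixes every decision once, in advance.

    # Why layered installs drift and a standard image doesn't: the image
    # settles every install-time decision before it ever reaches a desktop.
    STANDARD_IMAGE = {
        "os": "Windows 95",
        "office_suite": ("Office 97", r"C:\Program Files\Office97"),
        "email_client": ("Exchange client", r"C:\Program Files\Exchange"),
    }

    def install_from_image(machine):
        """Every machine gets exactly the same configuration."""
        return dict(STANDARD_IMAGE)

    def install_in_layers(machine, choices):
        """Sequential installs invite decisions, such as where to put each piece."""
        return {
            "os": "Windows 95",
            "office_suite": ("Office 97", choices["office_dir"]),
            "email_client": ("Exchange client", choices["mail_dir"]),
        }

    a = install_in_layers("desk-001", {"office_dir": r"C:\Office97", "mail_dir": r"C:\Mail"})
    b = install_in_layers("desk-002", {"office_dir": r"C:\Apps\Office", "mail_dir": r"C:\Exchange"})
    print(a == b)                                                            # False: two "flavors"
    print(install_from_image("desk-001") == install_from_image("desk-002"))  # True: identical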
What if workers want to add their own software? Frank Russell, for one, tries not to discourage that. Its compromise is to require personal software to be submitted for certification, so Stingl’s staff can ensure it won’t foul up a box’s standard configuration.
(5) Centrally monitor and distribute desktop software. When it upgraded operating systems to Windows 95, People’s Bank didn’t send out legions of technicians. Instead, it made the upgrades from headquarters, using Enterprise Desktop Manager software from Novadigm Inc., in Emeryville, California. (Competing software comes from such vendors as IBM Corp. and Microsoft.)
Novadigm claims this centralized approach can save 80 percent of the overhead covering the support of distributed computing applications. At People’s Bank, Zoghbi says the software saves “millions of dollars in people resources.”
When Frank Russell installed its Microsoft Exchange E-mail system in the fall of 1997, six technicians spent four months visiting desktops between 5 p.m. and 10 p.m. Now the firm is installing Novadigm’s distribution software, sinking $500,000 into hardware, software, and services. “We will recoup the cost within 12 to 24 months, simply by reducing those visits to workstations,” predicts Stingl.
(6) Consider leasing. A number of the nearly two-dozen manufacturing sites operated by Solectron Corp. lease PCs. CIO Ken Ouchi prefers total lease-outsourcing packages for corporate locations that lack the support capabilities to run networks on their own. The lease providers preload all the software, configured to specifications provided by Solectron. Some outsourcers even provide a call-in help desk. “We just purchase the service, and every two years new PCs show up,” says Ouchi.
“The value of the PC depreciates so quickly these days that we’re seeing more and more companies leasing the machines,” points out KPMG’s White. He puts the technology life cycle of a contemporary PC at a mere 18 months. “If you go to a traditional, three-year depreciation, your asset is worth nothing halfway through its life cycle,” says White. An average three-year lease can stipulate that the lessor refresh the equipment with new technology once during that period. And when the period ends, the company isn’t burdened with PC disposal.
That also eliminates the temptation to propagate hand-me-down PCs–a bad habit, contends analyst Vaughn Frick of Gartner Group. “Retire hardware at the end of the amortization period, so that the spread of technology throughout your organization is relatively narrow,” he advises. Otherwise, the variety of boxes, with their own idiosyncrasies, overwhelms IT service personnel. “[Having] a lot of old stuff raises TCO,” warns Frick.
(7) Enforce personnel policies that discourage tampering. Thanks to such products as Microsoft’s Systems Management Server (SMS), Solectron can view the software on employees’ PCs every time a machine boots up on its network. “We scan the directory,” explains Ouchi. “We know what should be there, and anything beyond that shows up as an exception.”
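The mechanics behind that exception report are simple to picture. A hypothetical sketch (in Python)–not SMS itself, just the scan-and-compare idea Ouchi describes:

    # Compare what a scan finds on a PC against the sanctioned software list;
    # anything extra surfaces as an exception for IS to follow up on.
    APPROVED = {"Windows 95", "Office 97", "Exchange client", "ERP client"}

    def find_exceptions(installed):
        """Return whatever is installed but not on the approved list."""
        return sorted(set(installed) - APPROVED)

    scanned = ["Windows 95", "Office 97", "Doom", "Screensaver Pack", "Exchange client"]
    for item in find_exceptions(scanned):
        print("exception:", item)    # flags Doom and Screensaver Pack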
It’s up to Ouchi’s IS police to scrub away the illicit programming. But more than that, the group notifies the offender’s supervisors. “If we find something, we go to the management, because it’s a management issue,” says Ouchi. “You need to have management step up to the fact that they own all this stuff.”–J.Z.
