Miklos Bajzath was in a bind. Staff at Volvo Information Technology, where he’s vice president of infrastructure and operations, were busier than ever providing managed-storage services for their main customer, Volvo Car Corp. While that was good for Volvo IT in one sense, in another it was a never-ending nightmare. As at so many other companies, the Swedish car maker was increasingly digitized and the electronic data that needed to be stored and managed was growing phenomenally — according to the latest estimate, its data has been doubling every 18 months. Keeping up with that data deluge was putting a strain on Bajzath and his team.
Sure, he could have splashed out on more bodies and hardware to help him cope, but with IT budgets being squeezed, that was an investment Bajzath wasn’t keen to make. So he began to search for a more efficient — not to mention, more cost-effective — way to store all the bits and bytes.
Last Year’s Model
If Bajzath’s predicament doesn’t sound familiar, then it soon will. According to IDC, an IT analyst firm, corporate demand for storage services is rising nearly 80 percent a year worldwide. With storage costs making up, on average, between 30 percent and 50 percent of a company’s IT budget, finding a home for data will start taking up more and more senior-management time.
The good news is that the raw cost of physical storage — disk drives, tape systems, and so on — is falling; by up to 50 percent a year, according to some estimates. The bad news is that, according to various estimates, storing and managing data costs almost three times as much as acquiring it. And with data volumes doubling every 18 months, companies must improve storage-management efficiency by roughly 60 percent a year just to keep from drowning in data. The trouble is, says IDC, a shortage of skilled staff and skimpy IT budgets have left companies without the manpower to cope with the rising workload.
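That 60 percent figure falls out of the growth rate itself. A back-of-the-envelope calculation (a sketch for illustration, not from the article):

```python
# If data doubles every 18 months, how fast does it grow per year,
# and how fast must per-administrator efficiency improve to hold
# headcount flat?
annual_growth = 2 ** (12 / 18) - 1  # doubling every 18 months

print(f"Annual data growth: {annual_growth:.0%}")
# To manage that much more data each year with the same staff, the
# volume each administrator handles must rise by the same rate --
# just under 60 percent a year.
```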
It may seem like a mission impossible. But, as Volvo IT and others are showing, there is a way through the great storage challenge.
This means breaking with tradition. In the past, when a company needed more storage capacity, the IT department simply bought more computers. After all, using what’s known as the direct attached storage (DAS) model made a lot of sense — storage devices, like hard drives, came directly attached to a PC or server.
But it wasn’t long before IT departments began noticing DAS’s shortcomings. First, with DAS, an application sitting on a “full” server has no ready access to other machines with half-empty disk drives. Second, companies end up with several pools of data, rather than a single repository. With the arrival of E-commerce — which requires faster and wider access to data — that’s a big drawback.
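The first shortcoming — stranded capacity — can be seen in a toy model (the server names and gigabyte figures below are hypothetical, purely for illustration): under DAS each server can only use its own attached disks, while networked storage lets any server draw on the combined free space.

```python
# Toy model of stranded capacity: three servers with directly attached disks.
das_free = {"web": 5, "db": 60, "mail": 55}  # free GB attached to each server

request = 40  # the "web" server needs 40GB more

# Under DAS, "web" can only use its own attached disks -- allocation fails,
# even though plenty of space sits idle on its neighbors.
das_ok = das_free["web"] >= request
print(f"DAS allocation succeeds: {das_ok}")

# Under networked storage, the same disks form one shared pool.
pool_free = sum(das_free.values())  # 120GB free in total
san_ok = pool_free >= request
print(f"Shared-pool allocation succeeds: {san_ok}")
```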
Fortunately, technology vendors have two new models — and acronyms — ready to replace the old one. A NAS (network attached storage) consolidates file storage in single-purpose “appliances,” which literally plug into a company’s local area network (LAN). This model is intended for file-sharing or simple applications, such as E-mail.
A SAN (storage area network) brings together bits of data — as opposed to files — allowing multiple servers to share a single pool of storage devices arranged in a network. Such networks tend to sit behind applications, like sales or financial systems, serving up data in response to queries from those applications.
The impetus behind both approaches is the same: to substitute a one-to-one link between a computer and a storage unit with a many-to-many arrangement. In fact, once depicted as rivals, SAN and NAS are now considered complementary strategies. “Just about every large customer we deal with has a requirement for both,” says Tony Reid, European infrastructure solutions director at Hitachi Data Systems (HDS), a U.S.-based storage vendor.
For its part, Volvo IT — an HDS customer — certainly could use the two types of networks. But to get the ball rolling, Bajzath wanted first to focus on implementing just a SAN.
Granted, a NAS would have given Volvo IT a cheaper way of consolidating numerous servers. Andy Walsky, European marketing director for Snap Appliances, a NAS vendor, estimates a total first-year cost — covering installation, licenses, and the like — of $3,250 (€3,700) for a 160-gigabyte NAS appliance. (160 gigabytes, or 160GB, equals approximately 160 billion bytes.) He reckons that an equivalent DAS setup with a Windows NT server would cost over $10,000.
But Bajzath’s immediate challenge was to think beyond the file-sharing that NAS appliances help with. After all, because of its networked architecture, only a SAN could scale up to consolidate the growing number of bytes of customer, product, and financial data flowing around Volvo.
But there are cost implications. “While NAS is cheaper than the DAS equivalent, with SAN you pay between 40 percent and 60 percent more,” notes Bajzath. A SAN costs more than other storage models because it uses Fibre Channel (FC), a networking technology that connects storage devices. Faster than rival technologies, FC is the SAN standard. However, because it’s a niche product, it’s more expensive than, say, the technologies that a NAS appliance plugs into, such as the ubiquitous SCSI or Gigabit Ethernet.
Another thing to bear in mind is that, until recently, FC hasn’t had a neutral body overseeing it as other protocols have. So pioneering SAN users have been left on their own to address, among other issues, incompatibility between vendors’ storage systems. This is changing — albeit belatedly — through initiatives like the Supported Solutions Forum, which was launched in June this year under the auspices of the Storage Networking Industry Association. Even so, observes Chris Ober, a storage analyst at Australia-based Ideas International, the specter of vendor lock-in is a key reason why fewer than 20 percent of European companies have wanted to commit to a SAN.
Still, if finance managers are willing to help their colleagues in IT to get through the pain barrier, the potential return on investment of SAN is dazzling. For example, Volvo IT, which began implementing its SAN last year, now boasts a storage utilization rate — that is, the percentage of overall storage capacity occupied by data — of around 75 percent, up from less than 50 percent before. “And in terms of efficiency,” says Bajzath, “we used to need at least one administrator per [terabyte] of managed data under a traditional storage model. Now we manage over 4TB per administrator, and our target is 8TB.” (A terabyte, or TB, is approximately 1 trillion bytes.)
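Volvo IT’s ratios translate into a sizable head-count saving. A quick sketch using the figures Bajzath quotes (the 32TB total below is a hypothetical illustration, not a Volvo figure):

```python
import math

managed_data_tb = 32  # hypothetical total; the per-admin ratios are from the article

# Before the SAN: roughly 1TB of managed data per administrator.
admins_before = math.ceil(managed_data_tb / 1)

# After: over 4TB per administrator, with a target of 8TB.
admins_after = math.ceil(managed_data_tb / 4)
admins_target = math.ceil(managed_data_tb / 8)

print(admins_before, admins_after, admins_target)  # 32 8 4

# Utilization works the other way round: at 50% utilization a company must
# buy 2GB of raw capacity per GB of data; at 75%, only about 1.33GB.
raw_per_gb_before = 1 / 0.50
raw_per_gb_after = 1 / 0.75
```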
The benefits, however, don’t stop at improved efficiency. Innovative companies like Paris-based Atempo, one of the few non-U.S. vendors in the storage market, are using SANs to cast routine storage-related activities in a new light.
For its part, French pay-TV operator TPS (La Télévision Par Satellite) is using Atempo’s Time Navigator software to protect 3TB of valuable subscriber data. Because TPS’s customers increasingly want to make changes to their accounts at any time of the day, the window when their accounts could be backed up to tape overnight was closing fast. So Atempo has come up with a way to back up the changes made to the data — incrementally, throughout the day — without slowing down customers and TPS employees trying to update accounts.
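The idea behind such round-the-clock incremental backup can be sketched in a few lines (a simplified illustration of the general technique, not Atempo’s actual implementation): each pass copies only the records modified since the previous pass, so accounts stay online while the changes stream to tape.

```python
def incremental_pass(records, last_backup_time):
    """Return the records modified since the previous backup pass.

    `records` maps a record ID to (modified_time, data); only entries
    touched after `last_backup_time` need to be written out.
    """
    return {
        rec_id: data
        for rec_id, (modified, data) in records.items()
        if modified > last_backup_time
    }

# Hypothetical subscriber records: ID -> (last-modified timestamp, payload).
accounts = {
    "sub-001": (1000, "plan=basic"),
    "sub-002": (2500, "plan=premium"),  # changed since the last pass
    "sub-003": (2600, "plan=basic"),    # changed since the last pass
}

changed = incremental_pass(accounts, last_backup_time=2000)
print(sorted(changed))  # ['sub-002', 'sub-003']
```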
Just as important, says Philippe Boyon, marketing vice president of Atempo, “we have implemented a new way of writing to tapes that improves restore times in the event of an incident, by a factor of between 5 and 10.”
Indeed, disaster-recovery concerns have been a big driver of SAN adoption. But will this be enough, along with the steady trickle of promising case studies, to drive mass adoption of networked storage? Industry analysts certainly think so. IDC predicts that by 2004, 64 percent of the disk-based storage sold in Europe will be either NAS or SAN.
Vendors, of course, agree that adoption is inevitable. Nonetheless, reckons Haralambos Hatzakis, it won’t happen overnight. Hatzakis, who sells medical-image management products to European hospitals for U.S.-based StorCOMM, says his customers are big data generators and in theory should be ideal candidates for networked storage. Considering that a basic computed axial tomography (CAT) scan of a patient creates about 1GB of data, a medium-size customer might easily produce 2TB of data a year.
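Those figures imply a steady scanning workload. A quick calculation from the numbers quoted (the 250 working days per year is an assumption for illustration):

```python
scan_size_gb = 1     # one basic CAT scan, per the article
annual_data_tb = 2   # a medium-size hospital's yearly output

scans_per_year = annual_data_tb * 1000 / scan_size_gb  # ~2,000 scans
scans_per_day = scans_per_year / 250                   # assuming 250 working days

print(f"{scans_per_year:.0f} scans a year, about {scans_per_day:.0f} per working day")
```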
But Hatzakis predicts it will be at least another year before a significant number will want to make the move to a SAN. “Hospitals are generally very skeptical about implementing the latest technologies,” he reflects.
One barrier, he says, is that many of his customers think it is an all-or-nothing proposition. However, that needn’t be the case. “Most European hospitals want to increase their storage capacity incrementally,” he says. “This should make a SAN a very attractive solution.”
HDS’s Reid goes so far as to say that an incremental rollout is not only possible, but advisable. “If you have 1,000 application servers that you could consolidate,” he explains, “you will probably find some will be worth rationalizing and others won’t be, given the higher up-front cost of SAN versus DAS.” And as with any new technology investment, he recommends that companies focus on getting a few quick wins before committing any money to new equipment and software.
The irony, of course, is that like all new technologies, networked storage promised to simplify things, not make them more complex. This may still happen, but HDS’s Reid says the last thing a company should do is sit on the fence waiting for the perfect product. “By delaying the decision [to move to networked storage],” he contends, “you only increase the amount of time your IT people spend firefighting instead of planning for the future.” No doubt managers like Volvo IT’s Bajzath will agree.
Dealing with Data
Direct attached storage (DAS). The traditional model for arranging storage whereby disk and tape drives are attached directly to a server or PC. While technological enhancements have dramatically increased the model’s capacity, it is slowly being superseded by more flexible and scalable network-based storage models.
Fibre Channel. The current industry standard for transferring data between storage units in a SAN at speeds of up to two gigabits per second (2Gbps).
Gigabit Ethernet. A souped-up version of the ubiquitous local area network (LAN) technology that shifts data at 1Gbps. Like Fibre Channel, it addresses one of the main sticking points of IP-based storage — speed. Expect a 10Gbps version by the end of 2003.
Incremental backup. An operation that backs up all data that has been modified or added since a given date. Unlike full backups, it does not require taking a database offline, a big attraction in today’s 24×7 economy.
InfiniBand. A networking technology that promises data transfer speeds of up to 6Gbps. It’s still under development, however, and many wonder whether next-generation designs of rivals Fibre Channel and Gigabit Ethernet will overtake it in the meantime.
iSCSI. The small computer system interface (SCSI) has long been used to connect storage devices to computer systems, but lost out to the speedier Fibre Channel inside SANs. iSCSI — an emerging protocol for transmitting SCSI commands over an IP network — may provide a cheaper alternative in the future.
Network attached storage (NAS). A special-purpose device that serves files to other computers over a LAN. By adhering to standard file protocols such as NFS and CIFS, it allows users to share files across different platforms such as Windows and Unix.
Storage area network (SAN). A storage-sharing architecture that allows many-to-many relationships between storage units and computer servers. This makes it more scalable and efficient than traditional DAS.
Openness isn’t a word that technology vendors are quick to embrace. Hence the considerable frustration companies experience trying to get products from two or more vendors to work together. The storage market has been no exception. But that’s changing.
One reason: virtualization. As far as servers are concerned, there is a single, albeit virtual, pool of storage from which they request data. But in fact, virtualization software is working behind the scenes to manage the individual storage units that make up the SAN and ensure every vendor’s products can be accommodated.
That can translate into a huge saving for companies, says Rodolphe Favrel, an IT manager at Alcatel Space, a $1.26 billion French space systems contractor. At the moment, the firm needs around three administrators to manage its 3TB of data. With DataCore’s SANsymphony virtualization software, Favrel says, Alcatel can increase capacity to 11TB, which will need just one administrator to run.
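Alcatel’s figures imply an elevenfold jump in data managed per administrator. The arithmetic, from the numbers quoted:

```python
# Before virtualization: 3 administrators for 3TB of data.
tb_per_admin_before = 3 / 3   # 1TB per administrator

# With SANsymphony: one administrator for 11TB.
tb_per_admin_after = 11 / 1

improvement = tb_per_admin_after / tb_per_admin_before
print(f"{improvement:.0f}x more data per administrator")  # 11x
```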
Another thing to look out for is IP-based storage. With jazzed-up IP-based network protocols, including Gigabit Ethernet and iSCSI, companies can get the speed of Fibre Channel (FC) technology without needing the specialist skills to manage it.
That day isn’t here yet, however. In the interim, IP-based storage may find a role connecting two geographically separate FC-based SANs across a company’s wide area network. Already, a new protocol that bundles FC data streams into IP-based packets has emerged to do this.
Anthony Sibillin is the technology editor of CFO Europe.