The good news: analysts expect storage prices to continue their double-digit declines, thanks to commoditization and competition. And given that storage is an area of the IT budget that has felt keen pressure lately, that should please most CFOs. The not-so-good news: storage requirements are increasing, thanks to new regulations, and new kinds of storage technologies are coming down the pike, making buying decisions more difficult.
There are two basic kinds of storage: primary systems that store data in random access memory and are closely coupled to the computing processors that use the data, and secondary systems that store data on hard disks, tapes, and other external devices, sometimes many miles away from where it will eventually be crunched. Within each of these categories is a mind-numbing array of competing solutions and technologies, with enough acronyms—ATA, DMX, RAID, SNAP, and SCSI (pronounced “scuzzy”)—to merit their own dictionary. Above the technologies sit new storage architectures or procedures, such as “information life-cycle management” and “continuous data protection.”
As a result, IT departments that once bought storage based on a simple judgment—how many gigabytes do I need?—now face an array of strategic issues: How fast should I consolidate my storage to reduce management complexity and improve total cost of ownership? How can I provide different levels of quality of service for different data to match the cost of data with its value? How can I use storage to meet my compliance needs? And what’s the right timing for all of this? “Deciding when to make a move is daunting, because the technology always gets better,” says David G. Hill, vice president of storage research at Boston-based consulting firm Aberdeen Group. “Do you choose storage solutions from large reputable vendors, paying a higher price and risking a lack of flexibility to take advantage of new developments as quickly as possible, or do you try to find best-of-breed components that add up to a better overall solution at lower cost, but with higher technical risk and less support? There is a balance between getting in too early and bearing the additional costs of not making a transition soon enough.”
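The tiering question raised above—matching the cost of storage to the value of the data it holds—can be sketched as a simple policy. The tier names, prices, and thresholds below are illustrative assumptions for the sake of the example, not any vendor’s actual offerings:

```python
# Hypothetical sketch of a storage-tiering policy. All tier names,
# per-gigabyte costs, and threshold values are assumptions made for
# illustration, not real product figures.

TIERS = {
    "primary":  {"cost_per_gb": 40.0, "media": "high-end disk array"},
    "midrange": {"cost_per_gb": 12.0, "media": "serial ATA disk"},
    "archive":  {"cost_per_gb": 2.0,  "media": "tape library"},
}

def assign_tier(access_freq_per_day: float, retention_years: float) -> str:
    """Pick a storage tier using simple, illustrative policy rules."""
    if access_freq_per_day >= 1.0:
        return "primary"    # hot data stays close to the processors
    if retention_years >= 7.0:
        return "archive"    # long-retention compliance data goes to tape
    return "midrange"       # everything else lands on cheaper disk

# Example: a seldom-read record kept seven years for compliance
print(assign_tier(access_freq_per_day=0.01, retention_years=7))
```

The point of such a policy is that not every gigabyte deserves $40-per-gigabyte treatment; compliance data that is rarely read can sit on far cheaper media.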
Some CFOs will ponder these conundrums; others will simply smile at the plummeting costs. “On a dollar-per-gigabyte basis, storage prices are declining 35 to 40 percent, and we expect that to continue in 2004,” says Stanley Zaffos, vice president and research director at Gartner.
Zaffos says that thanks to a range of improvements in technology and manufacturing—from the development of disks with fewer platters and fewer heads to less-expensive supporting components such as power supplies, frames, and even sheet metal—“some would argue that in the storage space, it’s Moore’s Law times two.”
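To see what a 35 to 40 percent annual decline compounds to, a quick back-of-the-envelope calculation helps. The $5-per-gigabyte starting price below is an assumed figure for illustration, not one from Gartner:

```python
# Back-of-the-envelope check of a 35-40 percent annual decline in
# dollar-per-gigabyte storage prices. The $5/GB starting point is an
# illustrative assumption.

start_price = 5.00          # dollars per gigabyte, assumed

for decline in (0.35, 0.40):
    price = start_price
    for year in range(3):   # compound the decline over three years
        price *= (1 - decline)
    print(f"{decline:.0%} annual decline -> ${price:.2f}/GB after 3 years")
```

At those rates, a given capacity costs roughly a quarter of its price three years earlier, which is why the same budget keeps buying dramatically more storage.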
Technical innovation abounds in the storage market, despite commoditization. For example, new serial ATA disk drives and iSCSI (Internet Small Computer System Interface) systems should give users “the ability to do more with a lot less money,” says Robert Gray, research vice president of storage systems at IDC. “These two technologies will begin to have a significant impact in the second half of 2004 as they become available from a range of suppliers,” he says. “The challenge, from a cost perspective, will be the cost of the professional services and support involved. That may take up whatever slack there is in hardware cost savings.”
Storage giants such as EMC, IBM, and HP will also continue a strong push into software. Indeed, EMC’s $1.45 billion acquisition of Documentum, announced last month, will transform it from what Aberdeen’s Hill says was “a hardware company with a lot of software into a true hardware and software company. It’s a sign that content management and storage will be much more closely linked going forward.”
These developments further drive what The Yankee Group describes as a continuously morphing storage environment. With more, smaller systems now connected by various networking schemes—some dedicated to specific roles, such as archiving to meet regulatory requirements, others designed to provide widespread corporate access—storage is being redefined. Many kinds of devices can now be hooked together via Internet Protocol, providing new flexibility. And new roles for storage are emerging. For example, The Yankee Group sees substantial growth ahead for fixed-content storage, which stores large objects that should not be altered or broken into pieces, such as software code, user manuals, or medical images. In the near term, some of these effects will be hard to detect. Customers won’t migrate en masse to new storage architectures but will adopt them piecemeal, knowing they can be connected over time.
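The core idea behind fixed-content storage—objects that must never be altered—is often implemented by addressing each object with a hash of its contents, so any change yields a different address and the original can never be silently overwritten. The sketch below illustrates that concept only; it is not any vendor’s actual implementation:

```python
# Minimal sketch of content-addressed, fixed-content storage: each
# object's address is a hash of its bytes, so a modified copy gets a
# new address and the stored original is never overwritten.
# Illustration of the concept, not a real product's design.

import hashlib

class FixedContentStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        """Store an immutable object; return its content address."""
        address = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(address, data)  # never overwrite
        return address

    def get(self, address: str) -> bytes:
        return self._objects[address]

store = FixedContentStore()
addr = store.put(b"radiology image, exam 1047")
assert store.get(addr) == b"radiology image, exam 1047"
# A modified copy gets a new address; the original is untouched.
assert store.put(b"radiology image, exam 1047 (edited)") != addr
```

Because identical content always hashes to the same address, such a store also deduplicates for free, which matters when regulations force the same documents to be retained for years.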
Phil Goodwin, senior program director at Meta Group, predicts that today’s storage-management software will become significantly more capable and less fragmented. “Today, storage-management software is in its adolescence, so in order to manage storage across a network you have to buy switches from one vendor, a storage-area network (SAN) management tool from another, backup and recovery capabilities from a third, and then integrate the whole thing yourself,” explains Goodwin.
As valuable business data becomes more dispersed across global enterprises, storage networks offer a way for all employees to gain access to whatever data or applications they need. All these trends will converge around a single theme: storage is no longer about the box, but about the policies those boxes are marshaled to serve.