Converged storage solutions come in all shapes and sizes, but they share a common goal: bridging the disparate technology islands that once demanded a lot of specialized connectivity.
The problems that plague many IT storage environments had their roots 20 years ago, back when mainframes still roamed the Earth and very clear boundaries existed between servers, networks, and storage resources. Bridging those three technology islands required a lot of specialized connectivity, resulting in much of the “secret sauce” touted by storage solutions providers and their proprietary equipment stacks.
Additionally, each of these stacks was tailored to address a given application. The configuration and fine-tuning that went into a stack for Exchange Server could be quite different from that of another stack dedicated to an Oracle database. A stack was expected to deliver a five- or six-year lifespan.
By the middle of the last decade, that lifespan had shrunk to only two or three years, and virtualization was starting to make serious inroads into data centers. Now, virtual server stacks can go from planning stages to full load in only days and may be torn down after just weeks. Workload unpredictability and on-demand resources are today's watchwords.
“In the old days, you’d have an application that was well-behaved and might do a lot of sequential work or a certain mix of reads and writes in certain block sizes,” says Russ Fellows, a senior partner at Evaluator Group. “On a one by one basis, the storage system could figure that out and optimize for it. But with virtualized servers, you’ve taken all these different workloads with all these different block sizes and read/write mixes, tossed them all together, and you send them down one pipe to the storage sub-system. Now you have this random mess, so you need a system that can handle these different characteristics. You’ve got to have good tiering, writable clones, wide striping—a whole list of features.”
Adapting to these new loads in capable virtualized environments hasn’t happened overnight, though, and sub-optimal storage architectures remain. Even into 2009, leading storage vendors such as HP fully backed a “unified storage” approach: a common fabric able to conjoin file- and block-based access across Fibre Channel, iSCSI, and NAS resources. Unified storage was a tremendous advance over the silo-type storage approach of the 1990s. It was easier to manage, more reliable, and easier to scale.
However, HP estimates that soon five billion people and two trillion devices will be on the Internet. The massive and disparate data loads generated by this global usage base, including those described by Fellows, will demand the utmost efficiency and agility from IT-driven businesses of all sizes. IDC predicts that storage spending will reach $22.5 billion by 2014 just for 67 exabytes of expected file-based capacity. And rather than IT wondering whether it has deployed too much capacity, IDC argues, management will be wondering whether it has deployed enough—and whether that capacity is being used effectively.
William Van Winkle has been a full-time tech writer and author since 1998. He specializes in a wide range of coverage areas, including unified communications, virtualization, cloud computing, storage solutions, and more. William lives in Hillsboro, Oregon with his wife and 2.4 kids, and—when not scrambling to meet article deadlines—he enjoys reading, travel, and writing fiction.