When Virtual Servers and Old Storage Collide

By William Van Winkle

According to David Floyer, CTO and co-founder of Wikibon, one of the biggest mistakes organizations made in 2011 was deploying virtual servers without investing in modern storage architectures.

In effect, the problem boils down to trying to build a modern SUV out of an old Studebaker. Companies now understand the benefits of virtualization: cost containment, energy savings, consolidated management, and so on. However, making the move to virtualized servers isn’t cheap, so it’s common for organizations with existing investments in storage to adapt their old but perfectly functional storage infrastructures into their new virtualized environments ... or at least to try.

In practice, organizations often deploy their virtual servers, tie in their storage leftovers, then wonder months later why their ROI is so unsatisfactory. Those legacy storage assets are, in effect, holding back the benefits of server virtualization. The problem is going virtual through the ‘server door’ while treating storage as an afterthought. That approach was fine in the past, when environments were predictable, but it no longer holds up against unpredictable workloads, which drive down the utilization of traditional storage architectures.

When deploying virtual servers, a common best practice is to provision a large chunk of storage to the new virtualized server. This provides plenty of swap space for virtualized memory that gets paged to disk. Server admins want enough headroom to create a VM on the fly without having to bother the storage admin every time a new VM is needed. Thus the advent of virtualization has created a strong push to pre-provision loads of unused storage capacity.

This pre-provisioning works, but it’s expensive. According to HP Storage’s Craig Nunes, vice president of worldwide storage marketing, the rule of thumb with VMware is that “for every dollar of vSphere license out there, you might spend $3 to $5 on storage.” On top of regular data capacity needs, such additional provisioning can end up making virtualization both storage-intensive and quite expensive.
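To put that rule of thumb in concrete terms, here is a minimal Python sketch; the $100,000 license figure and the estimated_storage_spend helper are illustrative assumptions, not HP or VMware pricing data.

```python
# A minimal sketch, assuming the "$3 to $5 of storage per $1 of vSphere
# license" rule of thumb quoted above. The license figure and helper name
# are illustrative placeholders, not pricing data.

def estimated_storage_spend(license_spend, low_ratio=3.0, high_ratio=5.0):
    """Return the (low, high) storage spend implied by the rule of thumb."""
    return license_spend * low_ratio, license_spend * high_ratio

license_spend = 100_000  # assumed vSphere licensing budget, in dollars
low, high = estimated_storage_spend(license_spend)
print(f"License spend:         ${license_spend:,.0f}")
print(f"Implied storage spend: ${low:,.0f} to ${high:,.0f}")
```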

Wikibon’s Floyer notes four specific areas in which duct taping old storage to new virtualized server systems causes significant issues:

1. Additional I/O overheads caused by I/O going through the hypervisor.

2. A higher percentage of random I/O caused by mixed I/O coming from multiple virtual machines on the same physical machine. Random I/O creates more stress on the I/O infrastructure.

3. Increased variability of I/O demand, caused by mixed workloads from the virtual servers.

4. Increased complexity of I/O management and performance tuning, caused by an additional layer of virtual infrastructure management software.

“The net effect of these is to add a storage virtualization ‘tax’ of at least 25% on both the storage subsystem and the storage management resources,” says Floyer. “Modern virtualized storage subsystems, such as HP 3PAR and HP LeftHand, can reduce this tax by providing functionality to tightly integrate with the hypervisors and virtual management software.”

Thin Provisioning

Pre-provisioning is the old-school, less efficient approach to blending traditional storage with virtualized servers. An admin might provision 1TB of capacity for a server and then only end up using 100GB, leaving 900GB locked away from use elsewhere and effectively gathering dust. In contrast, thin provisioning presents a virtual server with all of the storage it needs, but the array doesn’t actually allocate capacity until data is written. This allows admins to keep storage utilization very high while dishing out virtual capacity to servers on an as-needed basis.

“On average, customers use about 20% of a traditional storage volume,” says HP’s Nunes. “With thin provisioning, even though the server thinks it has a terabyte, for example, the storage array only allocates disk capacity as you write to it. If you write 200GB, there will only be 200GB in your array used for that application. It’s like what VMware does for CPU utilization, only here it’s for storage utilization. Thin provisioning will get your storage utilization to 70% or 80%.”
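The difference Nunes describes can be modeled in a few lines of Python. This is a minimal sketch under simplified assumptions: the ThickVolume and ThinVolume classes and the 1TB/200GB figures are illustrative, not the behavior of any particular array.

```python
# A minimal sketch of thick (pre-provisioned) versus thin volumes.
# Classes and sizes are illustrative assumptions, not a product's behavior.

class ThickVolume:
    """Pre-provisioned: physical capacity is reserved up front."""
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb
        self.written_gb = 0

    def write(self, gb):
        self.written_gb = min(self.provisioned_gb, self.written_gb + gb)

    @property
    def allocated_gb(self):
        # Array capacity consumed: all of it, from day one.
        return self.provisioned_gb


class ThinVolume(ThickVolume):
    """Thin-provisioned: the server sees the full size, but the array
    only allocates capacity as data is actually written."""
    @property
    def allocated_gb(self):
        return self.written_gb


for vol in (ThickVolume(1000), ThinVolume(1000)):
    vol.write(200)  # the application writes 200 GB of its 1 TB volume
    used_pct = 100 * vol.written_gb / vol.allocated_gb
    print(f"{type(vol).__name__}: {vol.allocated_gb} GB allocated on the array, "
          f"{used_pct:.0f}% of that allocation holding real data")
```

In this toy model the thin volume shows 100% utilization of allocated capacity; real arrays allocate in coarser chunks and carry metadata overhead, which is why utilization in practice lands closer to the 70% to 80% Nunes cites.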

Leveraging thin provisioning in a storage-heavy environment can erase one of the biggest cost drivers in new virtualization projects. Without so much capital outlay for unneeded storage capacity, less capital is needed, time to recoup costs goes down, and ROI goes up. Operational expenses can drop because there’s less equipment to manage. When pre-provisioning remains the accepted model instead, the opposite is the case.

An adjunct to thin provisioning is thin reclamation, also sometimes called thin persistence. This relates to storage capacity being unnecessarily trapped. Under the conventional provisioning model, admins will stand up a virtual server one day, provision storage for it, and tear the server down some time later. However, there is no easy method for reclaiming the storage that had been allocated to that server. Thin reclamation provides a simple way to send the capacity from that erased VM back into the general unallocated storage pool.
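As a rough illustration of how reclaimed capacity flows back into the shared pool, here is a minimal Python sketch; the ThinPool class and the VM names and sizes are hypothetical and don't model any specific array's reclamation mechanism.

```python
# A minimal sketch of thin reclamation over a hypothetical shared pool
# that tracks per-volume allocations. All names and sizes are illustrative.

class ThinPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated = {}  # volume name -> GB physically allocated

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.allocated.values())

    def write(self, volume, gb):
        # Allocate on write, as under thin provisioning.
        self.allocated[volume] = self.allocated.get(volume, 0) + gb

    def reclaim(self, volume):
        # Thin reclamation: a torn-down VM's capacity returns to the pool.
        return self.allocated.pop(volume, 0)


pool = ThinPool(capacity_gb=2000)
pool.write("vm-retired", 300)
pool.write("vm-production", 500)
print(f"Free before reclaim: {pool.free_gb} GB")   # 1200 GB free

freed = pool.reclaim("vm-retired")                 # the VM is torn down
print(f"Reclaimed {freed} GB; free after: {pool.free_gb} GB")  # 1500 GB free
```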

HP estimates that thin provisioning combined with thin reclamation will result in at least 50% less total capacity usage compared with conventional provisioning. In fact, the company is willing to bet on it. “If you’re refreshing a platform with HP’s 3PAR family, we will guarantee that we’ll replace that storage system with just half as much capacity as you’ve got configured there and get the job done,” says Nunes. “If the refresh requires more than 50%, that software, capacity, and support is on us.”

William Van Winkle has been a full-time tech writer and author since 1998. He specializes in a wide range of coverage areas, including unified communications, virtualization, Cloud Computing, storage solutions and more. William lives in Hillsboro, Oregon with his wife and 2.4 kids, and—when not scrambling to meet article deadlines—he enjoys reading, travel, and writing fiction.

(Shutterstock cover image credit: Binary Hard Drive)

