You’ve heard a lot about server virtualization over the past few years, and many enterprises have adopted virtual servers to better manage runtime workloads, raise utilization rates, and cut total cost.
But as a sibling to server virtualization, storage virtualization has some strong benefits of its own, not the least of which is the ability to better support server virtualization and make it more successful.
We’ll look at how storage virtualization works, where it fits in, and why it makes a lot of sense. The cost savings metrics alone caught me by surprise, making me question why we haven’t been talking about storage and server virtualization efforts in the same breath over these past several years.
Here to help understand how to better take advantage of storage virtualization, we’re joined by Mike Koponen, HP’s StorageWorks Worldwide Solutions marketing manager. The discussion is moderated by BriefingsDirect’s Dana Gardner, principal analyst at Interarbor Solutions.
Listen to the podcast (23:05 minutes).
Here are some excerpts:
Mike Koponen: Storage demands aren’t letting up, driven by regulatory requirements, expansion, 24×7 business environments, and the explosion of multimedia. Storage growth is certainly not stopping due to a slowed-down economy.
So enterprises need to boost efficiencies from their existing assets as well as the future assets they’re going to acquire and then to look for ways to cut capital and operating expenditures. That’s really where storage virtualization fits in.
We found that many businesses may have as little as 20 percent utilization of their storage capacity. By moving to storage virtualization, they can see as much as a 300 percent increase in existing storage asset utilization, depending upon how it’s implemented.
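The arithmetic behind those figures is worth spelling out: a "300 percent increase" means utilization roughly quadruples. A quick sketch with hypothetical capacity numbers (the 20 percent starting point is from the discussion; the terabyte figures are illustrative only):

```python
def utilization(used_tb: float, provisioned_tb: float) -> float:
    """Fraction of provisioned capacity actually holding data."""
    return used_tb / provisioned_tb

# Siloed direct-attached storage: each server over-provisioned for its own peak.
siloed = utilization(used_tb=20, provisioned_tb=100)   # 0.20

# Pooled (virtualized) storage: the same 20 TB of data needs far less
# headroom, because free space is shared across all servers.
pooled = utilization(used_tb=20, provisioned_tb=25)    # 0.80

increase = (pooled - siloed) / siloed                  # 3.0, i.e. +300%
print(f"{siloed:.0%} -> {pooled:.0%}, a {increase:.0%} increase")
```

The point is not the exact numbers but the mechanism: pooling lets every application draw on one shared reserve of free capacity instead of each silo keeping its own.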
So storage virtualization is a way to increase asset utilization. It’s also a way to save on administrative cost and to improve operational efficiencies, as businesses deal with their increasing storage requirements. In fact, if businesses don’t reevaluate their storage infrastructures at the same time as their server infrastructures, they won’t realize the full potential of server virtualization.
In the past, customers would just continue to deploy servers with direct-attached storage (DAS). All of a sudden, they ended up with silos or islands of storage that were more complex to manage and didn’t have the agility that you would need to shift storage resources around from application to application.
Then people moved to network storage or shared storage — storage area networks (SANs) or network-attached storage (NAS) systems — and realized a gain in efficiency from that. But the same problem can recur: you can end up with islands of SAN or NAS systems. Then, to bump things up to the next level of asset utilization, network storage virtualization comes into play.
Now you can pool all those heterogeneous systems under one common management environment to make it easy to manage and provision these islands of storage that you wound up with.
Studies Show Swift Payback
A recent white paper from IDC focuses on the business value of storage virtualization. It looked at a number of factors — reduced IT labor, reduced hardware and software cost, reduced infrastructure cost, and user productivity improvements. Virtualized storage showed payback in anywhere from four to six months, depending on the type of virtualized storage being deployed.
There are different needs or requirements that drive the use of storage virtualization and also different benefits. It may be flexible allocation of tiered storage, so you can move data to different tiers of storage based upon its importance and upon how fast you want to access it. You can take less business-critical information that you need to access less frequently and put it on lower-cost storage.
The other might be that you just need more efficient snapshotting and replication to provide the right degree of data protection for your business. It’s a function of understanding what the top business needs are and then finding the right type of storage virtualization that matches them.
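The tiered-storage idea above can be sketched as a simple placement policy. This is an illustration only — the tier names, thresholds, and `Volume` type are hypothetical, not an HP StorageWorks API:

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    accesses_per_day: int
    business_critical: bool

def choose_tier(vol: Volume) -> str:
    """Place hot or critical data on fast storage, cold data on cheap storage."""
    if vol.business_critical or vol.accesses_per_day > 1000:
        return "tier-1-fast"      # fast, expensive arrays
    if vol.accesses_per_day > 10:
        return "tier-2-midrange"  # mid-range shared storage
    return "tier-3-lowcost"       # low-cost storage for infrequent access

volumes = [
    Volume("oltp-db", 50_000, True),
    Volume("file-share", 200, False),
    Volume("archive-2006", 1, False),
]
for v in volumes:
    print(f"{v.name}: {choose_tier(v)}")
```

A real virtualized storage layer applies a policy like this continuously, migrating data between tiers as its access pattern changes, which is what makes "flexible allocation of tiered storage" more than a one-time provisioning decision.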
In order to take advantage of the advanced capabilities of server virtualization, such as live migration of virtual machines and high-availability infrastructures, advanced server virtualization requires some form of shared storage.
So in some sense, it’s a base requirement that you need shared storage. But what we’ve experienced is that when you do server virtualization, it places some unique requirements on your storage infrastructure in terms of high availability and performance loads.
Server virtualization drives the creation of more data from the standpoint of more snapshots, more replicas, and things like that. So you can quickly consume a lot of storage if you don’t have an efficient storage management scheme in place.
And there’s manageability too. Virtual server environments are extremely flexible, and it’s much easier to deploy new applications. You need a storage infrastructure that is just as easy to manage, so that you can provision new storage as quickly as you can provision new servers.
When you do server virtualization, you’re reducing the number of physical servers and running more virtual machines on top of that reduced number, so you may be trying to move the same number of backups through fewer physical servers. With the right storage infrastructure, you still get an increased degree of data protection, because a virtualized server storage environment lets you achieve the volume of backups you need in a shorter window, without compromising the amount of information you back up.
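The backup-window point is easy to check with back-of-the-envelope arithmetic. All the numbers here are hypothetical, chosen only to show that fewer physical servers can still meet the window when the shared storage behind them delivers higher per-stream throughput:

```python
def backup_hours(total_tb: float, servers: int, mb_per_s_each: float) -> float:
    """Hours needed to back up total_tb through `servers` parallel streams."""
    total_mb = total_tb * 1024 * 1024
    return total_mb / (servers * mb_per_s_each) / 3600

# Before consolidation: 20 physical servers, slow per-server streams
# from direct-attached disk.
before = backup_hours(total_tb=10, servers=20, mb_per_s_each=50)

# After: 5 virtualization hosts, each backed by faster shared SAN storage.
after = backup_hours(total_tb=10, servers=5, mb_per_s_each=300)

print(f"before: {before:.1f} h, after: {after:.1f} h")
```

If the aggregate throughput of the shared storage does not scale up as the server count shrinks, the same arithmetic shows the window getting longer instead — which is exactly why the storage infrastructure has to be reevaluated alongside the servers.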
From an HP portfolio standpoint, we have some innovative products like the HP LeftHand SAN system that’s based on a clustered storage architecture, where data is striped across the arrays and the cluster. If a single array goes down in the cluster, the volume is still online and available to your virtual server environment, so that high degree of application availability is maintained.
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: HP sponsored this podcast.