If it sounds too good to be true, it can still be true – PernixData FVP

 

If you said your mission was to “bring scale-out storage to every virtualized datacenter”, you can bet that your Twitter follower count would drop immediately, and your peers would think you had gone off the deep end.  That is, unless your name was Satyam Vaghani, and you had already invented VMFS, brought VAAI to fruition, and helped introduce the concept of VVols at VMware.  If you were Satyam, and you said that, people would just throw money at you.

 

Fast-forward to today, and that money has been poured into a startup called PernixData, which is about to unleash its Flash Virtualization Platform (FVP) onto the world.  I hear you groaning over there.  “Oh, not another flash startup…”  Aren’t there tons of flash startups these days, all promising to revolutionize storage, and handle flash “like no one else”?  Indeed.  But you’re going to want to pay attention to this one.

 

There is no debate anymore about whether flash should be implemented in the datacenter.  The more interesting debate happens when we talk about HOW to implement that flash.  You have several choices when it comes to using flash to bring storage performance to your applications.  You can buy some flash from your traditional storage vendor at breathtaking markups, and watch them shoehorn it into an existing, legacy array.  But then you have to live with paying a huge premium for performance you’ll never see.  Legacy arrays weren’t built with flash in mind, and they will all very quickly reach their limits when you start adding flash.  While you will no doubt see a performance boost, it doesn’t scale.  A few months later, after adding more hosts and more VMs, you will inevitably hit a wall again.  Then what?

 

You can go to one of the recent startups, like Pure Storage, and get a nice array of SSDs with great features, and replace your current storage array for close to the same price as spinning disks.  You can go to XtremIO (now owned by EMC), or NetApp, and buy one of their flash arrays, and either may suit your needs just fine.  Violin or Nimbus will gladly sell you an all-flash array.  But as you will see, there are some drawbacks to these approaches.

 

Vaghani believes that SAN-attached storage is too far from the application.  Consider that a piece of flash can process an I/O in less than 100 microseconds.  Would any rational person want to add 400-800% latency on top of that, just to traverse a network?  At 100 microseconds per I/O, that penalty puts the round trip somewhere between 500 and 900 microseconds.  There is a valid reason for doing so, and that is so you don’t have to change your existing data storage strategy.

 

If you’re using EMC already, and you want speed, buying some XtremIO and tiering it with your VMAX is not a bad decision.  You use all the same tools, and the same methodology for storing and protecting your data, that you already use.  No need to learn another storage operating platform, or change your data protection strategy, just to get a speed boost.  But with that particular product, you’re only accelerating your reads.

 

Per Vaghani’s theory, and just basic physics, to get the best performance, flash should be as close to the application as possible.  So you could just go out and buy some SSDs, or PCIe flash cards, and pop them right into the server.  That way, you are certain to get every technologically possible IOPS out of your new, expensive flash.  Then there is that pesky problem of trying to figure out how to use this new storage without having to re-architect the way data gets protected and stored.  Often, this means making changes to your applications, which should be loads of fun, and super-easy.
So what do you do?  Create a new VMDK on the new flash?  How do you protect it?  If it’s local, how do you vMotion?  Ugh…

 

If only there were a way to change the performance of the existing storage infrastructure, without changing… the existing storage infrastructure.  Introducing PernixData FVP.

 


FVP takes whatever existing flash you have inside your VMware hosts, and uses it to accelerate I/O to and from your back-end storage arrays.  It integrates seamlessly into the VMware kernel, and aggregates flash storage from all hosts in a cluster into a single pool of flash resources.  FVP then hijacks the I/O from a VM before it goes out to the storage, and decides whether that I/O should be served from the pool of flash, or by the storage array behind the flash.  So it’s native to the hypervisor, and can be used to accelerate VMs on a per-VM or per-VMDK basis.
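
To make that read path concrete, here is a minimal sketch of the general idea, written as Python pseudocode.  This is purely illustrative; the class and method names are my own invention, not PernixData’s implementation:

```python
# Illustrative sketch only -- not PernixData code.  Shows the general
# shape of a read-path intercept: serve the I/O from host-side flash on
# a hit, otherwise fetch from the backing array and warm the flash.

class FlashAccelerator:
    def __init__(self, flash_pool, backing_array):
        self.flash = flash_pool       # aggregated host flash (hypothetical object)
        self.array = backing_array    # the existing SAN/NAS datastore

    def read(self, vmdk_id, block):
        data = self.flash.get((vmdk_id, block))   # sub-100-microsecond hit
        if data is not None:
            return data                           # served from local flash
        data = self.array.read(vmdk_id, block)    # slow path: SAN latency
        self.flash.put((vmdk_id, block), data)    # warm the cache for next time
        return data
```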

 

Although FVP has a clustered file system, it does not have any centralized management or metadata functions.  All nodes are autonomous, and do not need to communicate with other nodes to declare ownership of a block, or for any other reason.  This means clustering has no impact on the level of performance you will see from your flash devices.  Other clustered flash solutions on the market keep some piece of their management functions centralized, so all hosts must communicate with the central authority, over that slow network we were talking about earlier, resulting in the same type of latency we would see using SAN-based flash.  Essentially what we have is what PernixData calls a Data-In-Motion tier, or as Enrico Signoretti says, a CAN (cache area network).
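
The difference is easy to see in pseudocode.  This is a hypothetical contrast of the two designs, not drawn from any vendor’s actual code:

```python
# Illustrative contrast (hypothetical names).  A decentralized design
# answers metadata questions from host-local state; a centralized one
# pays a network round trip on every lookup.

def lookup_decentralized(local_metadata, block):
    # Host-local dictionary lookup: microseconds, no network involved.
    return local_metadata.get(block)

def lookup_centralized(metadata_service, block):
    # Remote query to a central authority: adds network latency per I/O.
    return metadata_service.query(block)
```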

 

FVP offers both write-through and write-back modes.  Write-through means FVP will not intercept or accelerate write operations; it passes those on to the storage array where the VMDK is housed.  In write-back mode, writes are accelerated, and distributed to other nodes in the pool for data protection.  Reads are accelerated in both modes, and all of this is completely configurable per VM.  You can select up to 2 additional replicas elsewhere in the cluster, or no replicas at all to completely maximize performance, on a per-VM basis.  The amount of capacity used in the flash pool is configurable per VM as well.  This level of flexibility is unmatched anywhere in the market.
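
Here is a hedged sketch of what those two write paths might look like, again with invented names rather than PernixData’s actual API:

```python
# Illustrative sketch only -- hypothetical names, not PernixData's API.

WRITE_THROUGH = "write-through"
WRITE_BACK = "write-back"

def write(policy, replicas, vmdk_id, block, data, flash, peers, array):
    if policy == WRITE_THROUGH:
        # Writes go straight to the array; only reads are accelerated.
        array.write(vmdk_id, block, data)
    elif policy == WRITE_BACK:
        # Acknowledge from flash, replicate to 0-2 peer hosts for
        # protection, and destage to the array asynchronously.
        flash.put((vmdk_id, block), data)
        for peer in peers[:replicas]:              # replicas is 0, 1, or 2
            peer.replicate((vmdk_id, block), data)
        array.enqueue_destage(vmdk_id, block, data)
```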

 

While there are one or two products out there offering write-back capability, they seem to ignore the fact that the virtual environment is highly dynamic, and VMs frequently relocate to different hosts.  You didn’t think a former VMware guy would ignore vMotion, did you?

 

Once a VM vMotions to another host, the warm data residing in cache on the old host is compared to the data in cache on the VM’s new host.  Data that does not exist on the new host is transferred over the vMotion network to the new host.  There may be a slight performance impact while the data is transferred, but the impact is minimal compared to warming the cache all over again.  Also, the severity of the impact is limited to the original, pre-FVP response time of your back-end storage array.  So for a few seconds, you can reminisce about how slow things were back in the day, before you got FVP.  In the demo video below from Storage Field Day 3, performance started ramping almost immediately, as the data was being copied over.
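
Conceptually, that migration amounts to a set difference between the two hosts’ caches.  A hypothetical sketch, with invented helper names:

```python
# Illustrative sketch only (hypothetical helpers).  After a vMotion,
# copy just the warm blocks the destination host is missing, rather
# than re-warming the whole cache from a cold start.

def migrate_cache(vm, src_host, dst_host):
    src_blocks = src_host.cached_blocks(vm)   # warm working set on the old host
    dst_blocks = dst_host.cached_blocks(vm)   # whatever is already local
    missing = src_blocks - dst_blocks         # set difference of block IDs
    for block in missing:
        data = src_host.read_cached(vm, block)
        dst_host.populate(vm, block, data)    # sent over the vMotion network
```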

 

In addition to vMotion, FVP also supports Storage DRS, HA, SIOC, VADP, snapshots, and VAAI.  It doesn’t get in the way of anything you are currently doing.  In fact, it is so transparent, you can actually yank the SSDs out of a host WHILE a VM is running, and nothing will happen.  Try that with a LUN.

 

FVP is a truly comprehensive solution for VMware customers, and can be deployed in minutes, with no changes whatsoever to your infrastructure, or the way you currently handle data.  Simply add any flash, run VMware Update Manager, and in minutes you are reaping the benefits.  While PernixData does plan to support more than just VMware in the future, it was an obvious decision to start there.  Without exception, all vendors presenting at SFD3 mentioned VMware as being best-in-class, and I am inclined to agree.

 

Check out the vids below for some performance numbers, and a demo.  In addition to being a passionate, technically brilliant advocate for his product, Satyam could also moonlight as a stand-up comedian, so even though I am not in the videos, you will not be bored.  Apparently at dinner the night before, the Dutch Storage Syndicate (Arjan, Ilja, Marco, and Roy) poisoned me in retaliation for Justin Bieber’s disrespect of Anne Frank, so I had to watch from upstairs.  And I’m not even Canadian!

 

 

Although Gestalt IT covers delegates’ expenses for Field Days, delegates are not obligated to write, review, or produce content on any of the products or vendors we see at these events.
