In the blog post Part 1: My take on PernixData FVP I mentioned that the release of FVP version 2.0 would be very soon. Well… PernixData went GA with FVP 2.0 on the 1st of October.
I liked the announcement e-mail from Jeff Aaron (VP of Marketing at PernixData), in which he first looks back at the release of FVP 1.0 before mentioning the new features in FVP 2.0:
FVP version 1.0 took the world by storm a year ago with the following unique features:
- Read and write acceleration with fault tolerance
- Clustered platform, whereby any VM can remotely access data on any host
- 100% seamless deployment inside the hypervisor using public APIs certified by VMware.
Now FVP version 2.0 raises the bar even higher with the following groundbreaking capabilities:
- Distributed Fault Tolerant Memory (DFTM) – Listen to PernixData Co-founder and CTO, Satyam Vaghani, describe how we turn RAM into an enterprise class medium for storage acceleration in this recent VMUG webcast
- Optimize any storage device (file, block or direct attached)
- User defined fault domains
- Adaptive network compression
We will take a look at PernixData FVP 2.0: how to upgrade from version 1.5, and the newly introduced features…
Upgrade from FVP 1.5
In order to test FVP 2.0, we first have to upgrade the existing install base. The base components haven’t changed and still consist of the management software and the vSphere host extension.
We are using the FVP host extension version 2.0.0 (duh) build 31699 and management server version 2.0.0 build 6701.0.
1. Before you upgrade!
Note that before upgrading, you must change the write policy of your accelerated VMs to Write Through (if they are running in Write Back). This change is not(!) instant. Monitor the ‘Requested Write Policy‘ column on the ‘Usage‘ tab to verify that all your accelerated VMs have transitioned to Write Through mode!!
After that, PernixData states that when upgrading from 1.5, you should first upgrade the management server before upgrading the host extension on your vSphere hosts.
2. Upgrading management server
Upgrading the management server is dead easy, so I won’t bother you with the next > next > finish windows. 🙂 During/after the upgrade, while viewing your PernixData tab, you can get an error in your vSphere (web) client like:
Don’t worry… your vSphere client plugin is simply outdated because of the upgraded management server. Since the FVP host extension is still active, acceleration should continue during the management server upgrade! Restart your client (or browser) and, when using the thick client, upgrade to the new FVP 2.0 plugin. This isn’t necessary when using the web client (which you should probably be using on vSphere 5.5 anyway) of course.
3. Upgrading host extension
In order to install the FVP 2.0 host extension, you must first uninstall the FVP 1.5 host extension. Therefore, using VUM is not supported for upgrading the FVP extension. Clean installs, of course, are perfectly fine to do with VUM.
After upgrading, a reboot is not necessary.
PernixData provides instructions in their upgrade guide. Follow these instructions per host (a combined shell sketch of the same steps follows the list):
1. Put the host in maintenance mode.
2. Login to the host ESXi shell or via SSH as a root user.
3. Using the command below, copy and then execute the uninstall script to remove the existing FVP host extension module: cp /opt/pernixdata/bin/prnxuninstall.sh /tmp/ && /tmp/prnxuninstall.sh
The uninstall process may take a few minutes.
4. Using the esxcli command below, install the PernixData FVP Host Extension Module for version 2.0. Example: if you copied the host extension file to the /tmp directory on your ESXi host, you would execute: esxcli software vib install -d /tmp/PernixData-host-extension-vSphere5.5.0_2.0.0.0-31699.zip
5. Using the command below, back up the ESXi configuration to the boot device: /sbin/auto-backup.sh
6. Remove the host from maintenance mode.
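If you have more than a handful of hosts, you may not want to click and type your way through this per host. Below is a minimal shell sketch of the same sequence, run from an SSH/ESXi shell session on each host; it only uses the commands from the steps above plus standard esxcli calls, and the bundle path/filename are taken from the example above, so adjust them to your own environment.

# 1. Enter maintenance mode (typically done from vCenter so DRS can evacuate VMs first)
esxcli system maintenanceMode set --enable true
# 2. Copy and run the FVP 1.5 uninstall script (may take a few minutes)
cp /opt/pernixdata/bin/prnxuninstall.sh /tmp/ && /tmp/prnxuninstall.sh
# 3. Install the FVP 2.0 host extension from the offline bundle in /tmp
esxcli software vib install -d /tmp/PernixData-host-extension-vSphere5.5.0_2.0.0.0-31699.zip
# 4. Verify the new extension is present
esxcli software vib list | grep -i pernix
# 5. Persist the ESXi configuration to the boot device
/sbin/auto-backup.sh
# 6. Exit maintenance mode
esxcli system maintenanceMode set --enable false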
Aaaannnd… we’re done! Moving on to do some testing!
DFTM
Or Distributed Fault Tolerant Memory… a key feature in the 2.0 release. Previous FVP versions made it possible to do fault-tolerant write acceleration on supported SSD or PCIe flash devices. Now we can use RAM as a fault-tolerant repository for cached blocks!! RAM!! That should be fast!
RAM is added with a minimum of 4GB per host and scales up to 1TB per host in increments of 4GB. 1TB should be sufficient in most cases 🙂
So, in our nano lab, let’s select 8GB of RAM on one of our hosts:
Do note that RAM and a flash device on the same host cannot be selected together! You can have either your flash device or RAM. Or… can you?!? You can configure multiple FVP clusters, one containing your flash devices and one containing RAM. Frank Denneman did a nice write-up on such a scenario >> link.
We already tested FVP 1.5 using VMware I/O Analyzer in blog post part 1. Although this test (a max write IOPS workload) isn’t really representative of a real-life workload, it does show the performance gain of accelerated VMs compared to non-accelerated VMs. To keep a clear overview, we will run the same test on an FVP 2.0-accelerated VM backed by SSD and on one backed by RAM.
These screenshots were taken during the individual tests on our nano lab:
FVP1.5 – SSD – latency
FVP2.0 – SSD – IOPS
FVP2.0 – SSD – Latency
FVP2.0 – RAM – IOPS*
FVP2.0 – RAM – Latency (note that network acceleration is still handled by a SSD)
To summarize some of the numbers:
| | max IOPS | Latency |
|---|---|---|
| FVP 1.5 SSD acceleration | ~56,000 | 0.22ms |
| FVP 2.0 SSD acceleration | ~34,000 | 0.12ms |
| FVP 2.0 RAM acceleration* | ~150,000* / ~35,000* | 0.04ms |
* Note: I did notice some strange numbers here. Testing began at roughly 200,000 IOPS(!) before dropping to ~35,000. Considering we’re using a nano lab, with not the fastest 1.35V DDR3-1600 RAM in an Intel NUC, this platform may not be ideal for testing FVP on RAM. We will configure and test another FVP cluster on other machinery!
Conclusion:
Needless to say, the VM accelerated by RAM is clearly the winner on latency!! During the tests we never encountered a latency higher than 0.04ms!! It looks like overall latency on FVP 2.0 is lower than on FVP 1.5. Having said that, I couldn’t get the number of IOPS on SSD to the same level it used to be on FVP 1.5. Same test, same host, 100% hit rate. A changed algorithm to crunch the numbers maybe? More focus on latency? Who knows… but it was something that caught the eye.
Due to our testing platform we’re not convinced FVP 2.0 on RAM has shown its full potential. We will therefore test on another platform to get a clear view of what FVP 2.0 is capable of when the cache resides in RAM.
Any storage device
With iSCSI, FC and FCoE already supported in previous versions, the only missing protocol was NFS. With FVP 2.0 also supporting NFS, there shouldn’t be any boundaries on which datastores you can accelerate. A quick look shows it is now possible to select NFS datastores alongside my existing (iSCSI) datastores.
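If you want to double-check from the host shell which datastores are available to add to the FVP cluster, the standard esxcli commands below will do; this is just a quick sketch, nothing FVP-specific.

# List the NFS mounts known to this host
esxcli storage nfs list
# List all mounted filesystems (VMFS, NFS, vfat, …)
esxcli storage filesystem list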
I could not spot significant performance differences between file- and block-based storage when accelerated by FVP using Write Back, which is pretty logical… 🙂
User defined fault domains
Fault domains allow us to control where cache data is replicated when using Write Back with peers. The options to choose from are up to two peers in the same fault domain, or peers in (multiple) different fault domains.
Think of a stretched cluster environment, where it would seem logical to keep the fault-tolerant cache on hosts within the same site because of the lower latency. Or maybe a peer in the same fault domain as well as one in a different fault domain is the way to go, if higher latency on your peers isn’t a big deal…
Adaptive network compression
This new FVP compression technique is only used when 1Gbit network interfaces carry your FVP acceleration traffic (the vMotion network by default). It won’t even activate on a 10Gbit network, because the gain would be close to nothing.
I could go into detail, but what better way to show the ins and outs of Adaptive Network Compression than insider Frank Denneman’s article, found here: http://frankdenneman.nl/2014/10/03/whats-new-pernixdata-fvp-2-0-adaptive-network-compression/
Licensing
FVP is available in five license types. Note that user-defined fault domains for Write Back are only available in the Enterprise and Subscription versions. The overview below comes from the PernixData website:
- FVP Enterprise: FVP Enterprise is designed for the most demanding applications in the data center. Deployment can be on flash, RAM or a combination of the two. FVP Enterprise also introduces topology aware Write Back acceleration via Fault Domains that allows enterprises to align FVP with their data center design best practices. In addition, FVP Enterprise comes with sophisticated, built-in resource management that makes the best possible use of available server resources. With FVP Enterprise, there is no limit placed on the number of hosts or VMs supported in an FVP Cluster™.
- FVP Subscription: A version of FVP Enterprise that is purchased using a subscription model, making it ideal for service provider environments.
- FVP Standard: FVP Standard is designed for the most common virtualized applications within the data center. It supports deployments via all flash or all RAM. No limit is placed on the number of hosts or VMs in an FVP cluster. FVP Standard is purchased as a perpetual license only.
- FVP VDI: A version of FVP exclusively for virtual desktop infrastructures (priced on a per desktop basis.)
- FVP Essentials Plus: A bundled version of FVP Standard that supports 3 hosts and accelerates up to 100 VMs (in alignment with vSphere Essentials Plus). This product replaces the FVP SMB Edition.
What’s next?
Well, FVP 2.0 is a major step for PernixData, and it should now be a highly usable addition to any type of storage. But what’s next for PernixData? I know, version 2.0 has only just been released, but I keep wondering what direction development will take in the future. I discussed VMware’s VAIO in part 1; will this be something PernixData hooks into? What more is there to gain in flash virtualization??
I’m sure the clever minds at Pernix will have an answer to that. Time will tell. For now, let’s enjoy this product!!!
Nice article Niels!
Like you said, it will be interesting to see what VAIO will bring to the battlefield. I can imagine it opens doors for new players and current ‘competitors’ with non-kernel (virtual appliance) solutions.
It’s a bit like Tintri and VVols; Tintri distinguished itself with VM-aware storage, and with the introduction of VVols a lot of other vendors jumped on the VM-aware bandwagon.
A little competition isn’t going to hurt ‘us customers’ and will probably ensure even more cool features for the future.