Part 3: Testing PernixData FVP 2.0

A while ago I did a write-up about PernixData FVP and its new 2.0 release. In the blog post “Part 2: My take on PernixData FVP 2.0” I ran a couple of tests based on a Max IOPS load using I/O Analyzer.

This time ’round, I wanted to run some more ‘real-life’ workload tests in order to show the difference between a non-accelerated VM, an FVP-accelerated VM using SSD, and an FVP-accelerated VM using RAM. So I’m not per se in search of mega-high IOPS numbers, but looking to give a more realistic view of what PernixData FVP can do for your daily workloads. While testing, I proved to myself that it’s still pretty hard to simulate a real-life workload, but I had a go at it nonetheless… 🙂
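As an aside: one way to approximate a mixed daily workload (rather than a pure Max IOPS run) is a fio job with a read-heavy random I/O mix. The profile below is an assumption for illustration, not the exact workload I used in I/O Analyzer:

```ini
# Hypothetical fio job approximating a 'daily' OLTP-like workload.
# Save as daily-workload.fio and run with: fio daily-workload.fio
[daily-workload]
# 70% random reads / 30% random writes
rw=randrw
rwmixread=70
# 8 KB blocks, closer to transactional I/O than large sequential streams
bs=8k
# moderate number of outstanding I/Os
iodepth=16
# working set and duration
size=4g
runtime=120
time_based=1
# Linux async I/O, bypassing the guest page cache
ioengine=libaio
direct=1
```

Tweaking `rwmixread`, `bs`, and `iodepth` lets you lean the profile toward whichever application mix you want to mimic.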

Equipment

As stated in previous posts, it is important to understand that I ran these tests on a home lab, which is not representative of decent enterprise server hardware. That said, it should still be able to show the performance gain of FVP acceleration. Our so-called ‘nano-lab’ consists of:

vSphere Flash Read Cache (vFRC)

Recently I started studying for my second VMware VCAP exam. It made sense to plan for the updated VCAP550-DCA exam, since vSphere 5.5 has been among us for quite some time now.

vFRC

While doing so, you automatically run into vSphere Flash Read Cache (vFRC), a feature available since vSphere 5.5 (Enterprise Plus licensing only) that I never got around to configuring or testing. Hence this little write-up…

Running through the blueprint for VCAP550-DCA, you’ll see it listed under section 1, objective 1.1:

Section 1 – Implement and Manage Storage
Objective 1.1 – Implement Complex Storage Solutions

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS re-signaturing
  • Understand and apply LUN masking using PSA-related commands
  • Configure Software iSCSI port binding
  • Configure and manage vSphere Flash Read Cache
  • Configure Datastore Clusters
  • Upgrade VMware storage infrastructure

Let’s check it out!!
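Before diving in: the host-side Virtual Flash resource is normally set up from the vSphere Web Client, but you can inspect it from the ESXi shell as well. A quick sketch of the commands I’d reach for (the `esxcli storage vflash` namespace exists as of ESXi 5.5; exact output fields may vary by build):

```shell
# List SSD devices eligible for (or already backing) the Virtual Flash resource
esxcli storage vflash device list

# Show the loaded vFlash cache modules ('vfc' is the vFRC module)
esxcli storage vflash module list

# List the per-VMDK vFRC caches currently active on this host
esxcli storage vflash cache list
```

Handy for verifying that a VM’s cache was actually created after you assign a vFRC reservation to its VMDK.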

My NetApp Flashpool implementation

The other day I was designing and implementing an all-new NetApp FAS3250 setup running Clustered ONTAP 8.2, supporting a vSphere environment. This setup contains a bunch of 10K SAS disks and a DS2246 shelf filled with 24x 200GB SSDs.

Because of the requirements stated by the customer, most of the SSDs are used for an SSD-only aggregate. But to accelerate the SAS disks, we opted to use 6 SSDs to create a Flashpool. I guess Flashpool doesn’t need any further detailed introduction: it is a mechanism used by NetApp to utilize SSDs to automatically cache random reads and random *overwritten* writes in a dedicated Flashpool aggregate. Note the ‘overwritten’! This cached data remains available during a takeover or giveback.
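For reference, converting an existing SAS aggregate into a Flashpool on clustered ONTAP 8.2 comes down to two commands from the cluster shell. The aggregate name `aggr_sas01` below is a placeholder for your own:

```shell
# Mark the SAS aggregate as hybrid-capable, i.e. a Flashpool candidate
storage aggregate modify -aggregate aggr_sas01 -hybrid-enabled true

# Add the 6 SSDs as the caching tier; they form their own RAID group
# inside the aggregate
storage aggregate add-disks -aggregate aggr_sas01 -disktype SSD -diskcount 6
```

Once the SSDs are added, ONTAP handles the read/overwrite caching automatically; there is nothing to tune on a per-volume basis by default.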

[Image: netapp-flashpool]

Although the implementation of a Flashpool is pretty straightforward, there are a few things I would like to point out, things I encountered during the implementation: