Part 2: My take on PernixData FVP 2.0

In the blog post Part 1: My take on PernixData FVP I mentioned that FVP version 2.0 would be released very soon. Well… PernixData went GA with FVP 2.0 on the 1st of October.


I liked the announcement e-mail from Jeff Aaron (VP Marketing at PernixData), in which he first looks back at the release of FVP 1.0 before mentioning the new features in FVP 2.0:


FVP version 1.0 took the world by storm a year ago with the following unique features:

  • Read and write acceleration with fault tolerance
  • Clustered platform, whereby any VM can remotely access data on any host
  • 100% seamless deployment inside the hypervisor using public APIs certified by VMware.

Now FVP version 2.0 raises the bar even higher with the following groundbreaking capabilities:

  • Distributed Fault Tolerant Memory (DFTM) – Listen to PernixData Co-founder and CTO, Satyam Vaghani, describe how we turn RAM into an enterprise class medium for storage acceleration in this recent VMUG webcast
  • Optimize any storage device (file, block or direct attached)
  • User defined fault domains
  • Adaptive network compression


We will take a look at PernixData FVP 2.0, see how to upgrade from version 1.5, and explore the newly introduced features…



Part 1: My take on PernixData FVP

Having posted an article on Software Defined Storage a short while ago, I want to follow up with some posts on the vendors/products I mentioned.


First of all we’ll have a closer look at PernixData. Their product FVP, short for Flash Virtualization Platform, is a flash virtualization layer that enables read and write caching using server-side SSD or PCIe flash device(s). That almost sounds like the other caching products out there, doesn’t it… Well, PernixData FVP has features that are real distinctive advantages over other vendors/products. With a new (2.0) version of FVP coming up, I decided to do a dual post. Version 2.0 should be released very soon.

What will FVP do for you? PernixData states:

Decouple storage performance from capacity

So what does that mean? It means we no longer have to fulfill storage performance requirements by adding more spindles just to reach the demanded IOPS. At the same time we want to keep latency as low as possible, and what better place for flash to reside than in the server itself? Keeping the I/O path as short as possible is key!
When storage performance is no longer an issue, capacity requirements are easily met.
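To make that concrete, here is a quick back-of-the-envelope calculation with my own illustrative numbers (not PernixData’s): a 10K SAS spindle is good for roughly 140 random IOPS, so in a spindle-only design it is performance, not capacity, that dictates how many disks you buy.

    # Back-of-the-envelope spindle math (illustrative, assumed figures).
    required_iops = 20000          # what the workload demands
    iops_per_10k_sas = 140         # rough rule of thumb per 10K SAS spindle
    required_capacity_gb = 10000   # 10 TB usable
    disk_size_gb = 900

    spindles_for_performance = required_iops / iops_per_10k_sas   # ~143 disks
    spindles_for_capacity = required_capacity_gb / disk_size_gb   # ~11 disks

    print(round(spindles_for_performance), round(spindles_for_capacity))
    # -> 143 11: roughly 13x the capacity you need, bought purely for IOPS.
    # A server-side flash tier absorbs the IOPS instead, so the array can be
    # sized for capacity alone.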



vSphere Flash Read Cache (vFRC)

Recently I started studying for my second VMware VCAP exam. It made sense to go for the updated VCAP550-DCA exam, since vSphere 5.5 has been among us for quite some time now.


While doing so, you automatically run into vSphere Flash Read Cache (vFRC), a feature available since vSphere 5.5 (Enterprise Plus licensing only) that I never got around to configuring or testing. Hence this little write-up…

Running through the blueprint for VCAP550-DCA, you’ll see it listed under section 1, objective 1.1:

Section 1 – Implement and Manage Storage
Objective 1.1 – Implement Complex Storage Solutions

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS re-signaturing
  • Understand and apply LUN masking using PSA-related commands
  • Configure Software iSCSI port binding
  • Configure and manage vSphere Flash Read Cache
  • Configure Datastore Clusters
  • Upgrade VMware storage infrastructure

Let’s check it out!!
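As a preview of what the configuration amounts to, here is a minimal pyVmomi sketch (my own, not from the blueprint) that enables vFRC on a VM’s first virtual disk. It assumes a vSphere 5.5 environment where the host already has a virtual flash resource configured; the vCenter address, credentials, VM name and cache sizing are made up for illustration.

    # Minimal pyVmomi sketch: enable vFRC on a VM's first virtual disk.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.lab.local",
                      user="administrator@vsphere.local", pwd="secret",
                      sslContext=ssl._create_unverified_context())

    # Find the VM by name (hypothetical name).
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "testvm01")

    # Take the first virtual disk and give it a 10 GB read cache, 8 KB blocks.
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
        reservationInMB=10240, blockSizeInKB=8)

    # Push the change back to the VM as a reconfigure task.
    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)])
    vm.ReconfigVM_Task(spec=spec)
    Disconnect(si)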



My NetApp Flashpool implementation

The other day I was designing and implementing an all-new NetApp FAS3250 setup running clustered ONTAP 8.2, supporting a vSphere environment. The setup contains a bunch of 10K SAS disks and a DS2246 shelf filled with 24x 200GB SSDs.

Because of the requirements stated by the customer, most of the SSDs are used for an SSD-only aggregate. But to accelerate the SAS disks, we opted to use 6 SSDs to create a Flashpool. I guess Flashpool doesn’t need any further detailed introduction: it is a mechanism NetApp uses to let SSDs automatically cache random reads and random overwritten writes in a dedicated Flashpool aggregate. Note the emphasis on ‘overwritten’! The cached data remains available during a takeover or giveback.
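For reference, creating the Flashpool itself comes down to two cluster-shell commands: mark the aggregate hybrid-enabled, then add the SSDs. Below is a small sketch of mine that pushes those commands over SSH with paramiko; the cluster management address, credentials and aggregate name are hypothetical.

    # Sketch: create a Flashpool on an existing SAS aggregate over SSH.
    # The two commands are standard clustered ONTAP 8.2 cluster-shell syntax.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("cluster-mgmt.lab.local", username="admin", password="secret")

    commands = [
        # Step 1: allow the SAS aggregate to hold a flash cache tier.
        "storage aggregate modify -aggregate aggr_sas01 -hybrid-enabled true",
        # Step 2: add the six SSDs as the Flashpool cache.
        "storage aggregate add-disks -aggregate aggr_sas01 -disktype SSD -diskcount 6",
    ]
    for cmd in commands:
        stdin, stdout, stderr = client.exec_command(cmd)
        print(stdout.read().decode())

    client.close()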


Although the implementation of a Flashpool is pretty straightforward, there are a few things I encountered during the implementation that I would like to point out.

