VMware Virtual SAN 6.0 benchmark

Last week I was going through ‘What’s New: VMware Virtual SAN 6.0’; it seems VSAN 6.0 is bigger, better and faster. The latest installment of VMware’s distributed storage platform promises a significant IOPS boost: up to twice the performance in hybrid mode. The new VirstoFS on-disk format is capable of high-performance snapshots and clones. Time to put it to the test.


Disclaimer: this benchmark was performed on a home lab setup; the components used are not listed in the VSAN HCL. My goal is to confirm the overall IOPS and snapshot performance increase by comparing VSAN 5.5 with 6.0. I did so by running a synthetic IOmeter workload.

VMware has a really nice blogpost on more advanced VSAN performance testing utilizing IOmeter.
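As a rough illustration of how I compared the two runs, here is a minimal Python sketch. It assumes the IOmeter results have been condensed into two simple CSV files with an “IOps” column per test iteration; the file names and that layout are hypothetical, not the native IOmeter export format.

```python
# Minimal sketch: compare average IOPS between the VSAN 5.5 and 6.0 runs.
# Assumes the IOmeter results were condensed into simple CSV files
# (hypothetical names) with one "IOps" column per test iteration.
import csv

def average_iops(path: str) -> float:
    """Mean of the IOps column in a condensed results CSV."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return sum(float(row["IOps"]) for row in rows) / len(rows)

baseline = average_iops("vsan55_results.csv")   # VSAN 5.5 run
candidate = average_iops("vsan60_results.csv")  # VSAN 6.0 run

print(f"VSAN 5.5 : {baseline:,.0f} IOPS")
print(f"VSAN 6.0 : {candidate:,.0f} IOPS")
print(f"Delta    : {(candidate / baseline - 1) * 100:+.1f}%")
```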


Hardware

My lab consists of 3 Shuttle SZ87R6 nodes, connected by a Cisco SG300.

  • Chipset: Z87
  • Processor: Intel Core i5-4590S
  • Memory: 32 GB
  • NIC 1: 1 GbE (management)
  • NIC 2: 1 GbE (VSAN)
  • SSD: Samsung 840 Evo (120 GB)
  • HDD: HGST Travelstar 7K1000 (1 TB)


ESXi/VSAN versions

  • ESXi 5.5 Update 2 (build 2068190)
  • ESXi 6.0 (build 2494585)

Stretched Cluster on IBM SVC (Part 3)

This is part 3 of the VMware Stretched Cluster on IBM SVC blogpost series.

PART 1    (intro, SVC cluster, I/O group, nodes)
PART 2    (split I/O group, deployment, quorum, config node)
PART 3    (HA, PDL, APD)


In part 2 I explained how an SVC Split Cluster reacts to certain failure conditions. Now that we know how the storage layer behaves, let’s take a closer look at how this all ties in with the VMware layer. This is by no means a complete guide to every setting/configuration option involved, rather a selection of the ones I consider most important. This post is based on vSphere 5.5.

VMware Stretched Cluster isn’t a feature you enable by ticking some boxes; it’s a design built around the workings of HA, DRS and a couple of other mechanisms.

First, I would like to briefly explain the concepts APD (All Paths Down) and PDL (Permanent Device Loss).


APD

In an All Paths Down scenario, the ESXi host loses all paths to the storage device. The host is unable to communicate with the storage array. Examples of failures that can trigger APD are a failing HBA or a failing SAN.

figure 1. APD (All Paths Down)
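To make the APD/PDL distinction a bit more tangible, here is a purely conceptual Python sketch (a toy model, not any real ESXi or vSphere API): PDL applies when the array explicitly reports the device as permanently gone, APD when every path is dead but the device may still return.

```python
# Conceptual sketch only: models how a host might classify device state from
# its path states (not an actual ESXi/vSphere API). In an All Paths Down (APD)
# condition every path is down but the device is expected to return, so the
# host keeps retrying I/O (ESXi fast-fails new non-VM I/O once the APD
# timeout, Misc.APDTimeout, expires -- 140 seconds by default).
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    state: str  # "active", "dead", ...

def classify_device(paths: list[Path], sense_is_pdl: bool) -> str:
    """Toy classification: PDL if the array reports the device permanently
    gone, APD if all paths are dead without such a report."""
    if sense_is_pdl:
        return "PDL"   # array answered with a 'device permanently lost' sense code
    if all(p.state == "dead" for p in paths):
        return "APD"   # no paths left, but the device may come back
    return "accessible"

paths = [Path("vmhba2:C0:T0:L1", "dead"), Path("vmhba3:C0:T0:L1", "dead")]
print(classify_device(paths, sense_is_pdl=False))  # -> APD
```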

Part 2: My take on PernixData FVP 2.0

In the blog post Part 1: My take on PernixData FVP I mentioned that FVP version 2.0 would be released very soon. Well… FVP 2.0 went GA on the 1st of October.


I liked the announcement e-mail from Jeff Aaron (VP Marketing at PernixData), in which he first looks back at the release of FVP version 1.0 before mentioning the new features in FVP 2.0:


FVP version 1.0 took the world by storm a year ago with the following unique features:

  • Read and write acceleration with fault tolerance
  • Clustered platform, whereby any VM can remotely access data on any host
  • 100% seamless deployment inside the hypervisor using public APIs certified by VMware.

Now FVP version 2.0 raises the bar even higher with the following groundbreaking capabilities:

  • Distributed Fault Tolerant Memory (DFTM) – Listen to PernixData Co-founder and CTO, Satyam Vaghani, describe how we turn RAM into an enterprise class medium for storage acceleration in this recent VMUG webcast
  • Optimize any storage device (file, block or direct attached)
  • User defined fault domains
  • Adaptive network compression


We will take a look at PernixData FVP 2.0, see how to upgrade from version 1.5, and explore the newly introduced features…

Part 1: My take on PernixData FVP

Having posted an article on Software Defined Storage a short while ago, I want to follow up with some posts on vendors/products I mentioned.


First of all we’ll have a closer look at PernixData. Their product FVP, which stands for Flash Virtualization Platform, is a flash virtualization layer that enables read and write caching using server-side SSDs or PCIe flash devices. That almost sounds like the other caching products out there, doesn’t it… Well, PernixData FVP has features that are real, distinctive advantages over other vendors/products. With a new (2.0) version of FVP coming up, I decided to do a dual post. Version 2.0 should be released very soon.

What will FVP do for you? PernixData states:

Decouple storage performance from capacity

So what does that mean? Well, it means we no longer have to meet storage performance requirements by adding spindles just to reach the demanded IOPS, all while trying to keep latency as low as possible. What better place for flash to reside than in the server itself? Keeping the I/O path as short as possible is key!
When storage performance is no longer an issue, capacity requirements are easily met.
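To illustrate the idea (a toy model, not how FVP is actually implemented), the sketch below shows why a server-side flash tier shortens the I/O path: repeat reads are served from local flash instead of traversing the SAN to the array. The latency numbers are made-up placeholders.

```python
# Illustrative toy model (not PernixData code): a server-side flash tier that
# answers repeat reads locally so they never traverse the SAN, which is the
# essence of decoupling performance from capacity.
FLASH_READ_MS = 0.1   # assumed local flash latency (placeholder)
ARRAY_READ_MS = 5.0   # assumed backend array latency (placeholder)

class FlashReadCache:
    def __init__(self):
        self.cache = {}  # block -> data held on local flash

    def read(self, block: int) -> float:
        """Return the simulated latency for reading one block."""
        if block in self.cache:              # cache hit: short I/O path
            return FLASH_READ_MS
        self.cache[block] = f"data-{block}"  # populate on miss
        return ARRAY_READ_MS                 # miss: full trip to the array

tier = FlashReadCache()
latencies = [tier.read(b) for b in [1, 2, 1, 1, 2]]
print(f"average read latency: {sum(latencies) / len(latencies):.2f} ms")
```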

SDDC: VMware announces EVO:RAIL

With VMworld 2014 just kicking off, VMware already made an awesome, and somewhat unexpected, announcement: VMware EVO:RAIL!
EVO:RAIL is the first product in the EVO family, so it looks like VMware has more surprises up its sleeve in the near future.

Let’s take a quick look at this new product range and what its first family member RAIL is all about…


What is EVO:RAIL?