Exploring Hedvig

We had the chance to sit down for a chat with the Hedvig EMEA guys last week. They gave us a very good presentation on what Hedvig brings to the table and what they are working on. Having only recently gotten to know Hedvig and their software-defined storage solution, we were pretty amazed by their take on SDS and their long list of supported platforms and enterprise storage features and services. Although it is hard to cover everything Hedvig offers in one post, we will give it a try! 🙂

 

Not too long ago, Hedvig Inc. came out of stealth after developing since June 2012. They are taking a slightly different approach to general availability (GA) compared to other SDS start-ups: when their software reaches GA with version 1.0, it will be a fully developed, full-featured solution that is already running in production at several enterprise early-adopter customers! Version 1.0 is likely to be released next week (week 23)!

Okay, so let us focus on what makes Hedvig unique. They introduce themselves using the quote below.

Put simply: Hedvig gets better and smarter as it scales. Hedvig defies conventional wisdom, transforming commodity hardware into the most advanced storage solution available today. Hedvig accelerates data to value by collapsing disparate storage systems into a single platform, creating a virtualized storage pool that provisions storage with a few clicks, scales to petabytes, and runs seamlessly in both private and public clouds.

VMTurbo? VMTurbo!

I have to admit I have never used VMTurbo before. I've heard a lot about them and read a fair deal, but never actually deployed their solution myself. Let's change that! And while I'm at it, let me write down my two cents on VMTurbo in this post…

So, VMTurbo first launched its offering in August 2010 and has kept growing steadily since. Their solution provides the tools you need to utilize your virtual infrastructure far more efficiently than the resource monitoring and management your hypervisor offers by default.

It brings great advantages: your administrators need less effort to manually monitor and configure resources for the virtual infrastructure running your applications, and application performance becomes more predictable! You will also benefit from being able to serve more workloads on your virtual infrastructure with less compute resources.

 

Quick deployment

Needless to say, I’m testing VMTurbo on a vSphere cluster. My homelab to be exact.
Using the OVA template, you should be up and running within minutes. I had to tone down the resources used by the appliance a notch: by default it claims 4 vCPUs and 16 GB of memory. No worries in a typical cluster, but in my lab environment memory is scarce. 🙂

VMware Virtual SAN 6.0 benchmark

Last week I was going through 'What's New: VMware Virtual SAN 6.0', and it seems VSAN 6.0 is bigger, better, and faster. The latest installment of VMware's distributed storage platform promises a significant IOPS boost: up to twice the performance in hybrid mode. The new VirstoFS on-disk format is capable of high-performance snapshots and clones. Time to put it to the test.

 

Disclaimer: this benchmark was performed on a home lab setup; the components used are not listed on the VSAN HCL. My goal is to confirm an overall IOPS and snapshot performance increase by comparing VSAN 5.5 with 6.0. I did so by running a synthetic IOmeter workload.

VMware has a really nice blog post on more advanced VSAN performance testing using IOmeter.
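
To compare the two runs, the relative IOPS gain per workload can be computed with a trivial sketch like this (the numbers below are hypothetical placeholders, not my actual benchmark results):

```python
# Hypothetical IOmeter-style results (IOPS per workload); placeholder data only.
vsan55 = {"4k_random_read": 18000, "4k_random_write": 9500}
vsan60 = {"4k_random_read": 34000, "4k_random_write": 17000}

def improvement(old: dict, new: dict) -> dict:
    """Return the relative IOPS gain per workload as a factor (e.g. 1.89 = +89%)."""
    return {wl: round(new[wl] / old[wl], 2) for wl in old}

print(improvement(vsan55, vsan60))
# {'4k_random_read': 1.89, '4k_random_write': 1.79}
```

With real IOmeter CSV exports you would pull the IOPS column from each result file instead of hard-coding the dictionaries.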

 

Hardware

My lab consists of three Shuttle SZ87R6 nodes, connected by a Cisco SG300 switch.

  • Chipset: Z87
  • Processor: Intel Core i5-4590S
  • Memory: 32 GB
  • NIC 1: 1 GbE (management)
  • NIC 2: 1 GbE (VSAN)
  • HDD 1: Samsung 840 Evo (120 GB)
  • HDD 2: HGST Travelstar 7K1000 (1 TB)

 
 

ESXi/VSAN versions

  • ESXi 5.5 Update 2 (build 2068190)
  • ESXi 6.0 (build 2494585)

Stretched Cluster on IBM SVC (Part 3)

This is part 3 of the VMware Stretched Cluster on IBM SVC blog post series.

PART 1     (intro, SVC cluster, I/O group, nodes)
PART 2     (split I/O group, deployment, quorum, config node)
PART 3    (HA, PDL, APD)

 

I explained how an SVC Split Cluster reacts to certain failure conditions in part 2. Now that we know how the storage layer behaves, let's take a closer look at how this all ties in with the VMware layer. This is by no means a complete guide to every setting and configuration option involved, but rather an excerpt of the ones I consider important. This post is based on vSphere 5.5.

VMware Stretched Cluster isn't a feature you enable by ticking some boxes; it's a design built around the workings of HA, DRS, and a couple of other mechanisms.

First, I would like to briefly explain the concepts APD (All Paths Down) and PDL (Permanent Device Loss).

 

APD

In an All Paths Down scenario, the ESXi host loses all paths to the storage device. The host is unable to communicate with the storage array. Examples of failures that can trigger APD are a failing HBA or a failing SAN.

figure 1. APD

Blog growth and VMware vExpert 2015!

Today we achieved VMware vExpert 2015 status! All three of us! We are very happy to be recognized as contributors to the VMware community.

It is funny to see how enthusiastic about blogging we have become! We used to make fun of bloggers… not sure why exactly; probably jealousy speaking back then. 😉
However, since starting Cloudfix in April 2014, we rapidly became more and more involved with each other and with the VMware community. Our Hangouts chat with the three of us is used constantly! We discuss tech and plenty of other things. Next to that, we track each other's professional progress in an unhealthily competitive way. I think we complement each other. 🙂

So, setting up Cloudfix was a great idea and it's really fun to do. It is time-consuming, but worth it. We saw growth in visitor numbers that we did not expect at the start. Heck, it got even better at the beginning of 2015! Pretty awesome for a young blog like ours…

vSphere 6: vMotion enhancements

Don't we all remember witnessing our first vMotion and realizing the awesome things it made possible in virtualization?! In vSphere 6, vMotion got even better! vMotion version history…