Jumbo frames and the risks involved

Even though the discussion about jumbo frames and their possible gain/risk trade-offs is not new, we found ourselves debating it yet again. Because we had differing opinions, it seemed like a good idea to elaborate on the topic.

Let’s have a quick recap of what jumbo frames actually are. The default MTU (Maximum Transmission Unit) for an Ethernet frame is 1500 bytes. A frame with an MTU of 9000 bytes is referred to as a jumbo frame.

Jumbo frames, i.e. frames with a 9000-byte payload, have the potential to reduce per-packet overhead and CPU cycles.
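As a rough back-of-envelope example (assuming TCP over IPv4 with 20-byte IP and 20-byte TCP headers, no options): a 1500-byte MTU leaves 1460 bytes of payload per frame, while a 9000-byte MTU leaves 8960 bytes. Transferring 1 GB of data then takes roughly 120,000 frames instead of roughly 735,000, so the per-packet header, interrupt and checksum work drops by about a factor of six.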

Typically, jumbo frames are considered for IP storage networks or vMotion networks. A lot of performance benchmarking has already been published on the web, and it is interesting to see the variety of opinions on whether or not to adopt jumbo frames. Check this blogpost and this blogpost comparing jumbo frame performance against a standard MTU size. The question whether ‘jumbo frames provide a significant performance advantage’ is still up in the air.

Besides jumbo frames, there are other techniques to improve network throughput and lower CPU utilization. A modern NIC will support the Large Segment Offload (LSO) and Large Receive Offload (LRO) offloading mechanisms. Note: LSO is also referred to as TSO (TCP Segmentation Offload). Both are configurable. LSO/TSO is enabled by default if the NIC hardware supports it; LRO is enabled by default when using VMXNET virtual machine adapters.
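If you want to check these offloads on an ESXi host, the advanced settings below are a starting point. This is only a sketch; the option names are the ones documented for ESXi 5.x/6.x, so verify them against your own build:

  # Hardware TSO/LSO support (1 = enabled)
  esxcli system settings advanced list -o /Net/UseHwTSO
  # Software LRO for the default TCP/IP stack (1 = enabled)
  esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
  # Hardware LRO for VMXNET3 adapters (1 = enabled)
  esxcli system settings advanced list -o /Net/Vmxnet3HwLRO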

Risks?

Let’s put the performance aspects aside and look into the possible risks involved when implementing jumbo frames. The thing is, in order to be effective, jumbo frames must be enabled end to end in the network path. The main risk when adopting jumbo frames is that if one component in the network path is not properly configured for jumbo frames, an MTU mismatch occurs.
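A quick way to test the end-to-end path is a ping with the don’t-fragment bit set and a payload of 8972 bytes (9000 minus 28 bytes of IP and ICMP headers). The addresses and the vmkernel interface below are placeholders for your own environment:

  # From an ESXi host, over a specific vmkernel interface (e.g. the IP storage or vMotion vmk)
  vmkping -d -s 8972 -I vmk1 192.168.10.20
  # From a Linux host
  ping -M do -s 8972 192.168.10.20

If any device in the path is still running a standard MTU, the ping fails or reports that fragmentation is needed, which points you straight to the misconfigured component.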

Read More

Synology DSM 6.0 VLAN support

I’ve noticed some distress on the web because, with the release of Synology DSM version 6.0, it is no longer possible to use the vconfig command. This command was used to configure VLAN tagging on your interfaces.

It is, however, still perfectly possible to create multiple sub-interfaces on a physical interface or bond without using the vconfig command. All you need to do is create an additional config-file for each of your sub-interfaces. Each sub-interface represents a VLAN ID. The config-files are found in /etc/sysconfig/network-scripts/.

Note: shell access to your Synology is required, so enable SSH, for instance.

In the example below, you can see my Synology has a bond using eth0 and eth1. My setup required some additional VLAN-tagged sub-interfaces on top of the physical bond interface.

figure: Synology VLAN sub-interfaces
As you can see, I have sub-interfaces for VLANs 100, 120, 130 and 20. You only need to copy a config-file using the naming format ifcfg-<phy int>.<vlan-id> and adjust it to your needs. A (copied) config-file looks like this:
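What follows is a minimal sketch rather than the exact file from this setup; the device name, VLAN ID and addressing are assumptions you would replace with your own values:

  # /etc/sysconfig/network-scripts/ifcfg-bond0.100 (illustrative values)
  DEVICE=bond0.100
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.100.10
  NETMASK=255.255.255.0
  # Depending on the DSM version the file may also carry VLAN-specific keys
  # (for example a parent-device and VLAN ID field); copy an existing file and keep its structure.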


Read More

Exploring Hedvig

We had the chance to sit down and have a chat with the Hedvig EMEA team last week. They gave us a very good presentation on what Hedvig can bring and what they are working on. Having only recently gotten to know Hedvig and their software-defined storage solution, we were pretty amazed by their view on SDS and their long list of supported platforms and enterprise storage features and services. Although it is pretty hard to explain all the goodness Hedvig brings in one post, we will give it a try! 🙂

 

Not too long ago, Hedvig Inc. came out of stealth after developing their product since June 2012. They are opting for a slightly different approach to general availability (GA) compared to other SDS start-ups: when their software goes GA with version 1.0, it will be a fully developed, full-featured solution that is already running in production at several enterprise early-adopter customers! Version 1.0 is likely to be released next week (week 23)!

Okay, so let us focus on what makes Hedvig unique. They introduce themselves using the quote below.

Put simply: Hedvig gets better and smarter as it scales. Hedvig defies conventional wisdom, transforming commodity hardware into the most advanced storage solution available today. Hedvig accelerates data to value by collapsing disparate storage systems into a single platform, creating a virtualized storage pool that provisions storage with a few clicks, scales to petabytes, and runs seamlessly in both private and public clouds.


Read More

VMware Virtual SAN 6.0 benchmark

Last week I was going through ‘What’s New: VMware Virtual SAN 6.0‘, and it seems like VSAN 6.0 is bigger, better and faster. The latest installment of VMware’s distributed storage platform provides a significant IOPS boost, up to twice the performance in hybrid mode. The new VirstoFS on-disk format is capable of high-performance snapshots and clones. Time to put it to the test.

 

Disclaimer: this benchmark was performed on a home lab setup; the components used are not listed in the VSAN HCL. My goal is to confirm an overall IOPS and snapshot performance increase by comparing VSAN 5.5 with 6.0. I did so by running a synthetic IOmeter workload.

VMware has a really nice blogpost on more advanced VSAN performance testing utilizing IOmeter.

 

Hardware

My lab consists of 3 Shuttle SZ87R6 nodes, connected by a Cisco SG300.

  • Chipset: Z87
  • Processor: Intel Core i5-4590S
  • Memory: 32 GB
  • NIC 1: 1 GbE (management)
  • NIC 2: 1 GbE (VSAN)
  • HDD 1: Samsung 840 Evo (120 GB)
  • HDD 2: HGST Travelstar 7K1000 (1 TB)

 
 

ESXi/VSAN versions

  • ESXi 5.5 Update 2 (build 2068190)
  • ESXi 6.0 (build 2494585)
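Before and after upgrading it is worth double-checking the build number and the VSAN cluster state from the shell; a quick sketch using standard ESXi commands (output omitted):

  # Report the ESXi version and build number
  vmware -vl
  # Show VSAN cluster membership and state for this host
  esxcli vsan cluster get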


Read More

Stretched Cluster on IBM SVC (Part 3)

This is part 3 of the VMware Stretched Cluster on IBM SVC blogpost series.

PART 1     (intro, SVC cluster, I/O group, nodes)
PART 2     (split I/O group, deployment, quorum, config node)
PART 3    (HA, PDL, APD)

 

In part 2, I explained how an SVC Split Cluster reacts to certain failure conditions. Now that we know how the storage layer behaves, let’s take a closer look at how this all ties in with the VMware layer. This is by no means a complete guide to every setting or configuration option involved, but rather an excerpt of the ones I consider important. This post is based on vSphere 5.5.

VMware Stretched Cluster isn’t a feature you enable by ticking some boxes; it’s a design built around the workings of HA, DRS and a couple of other mechanisms.

First, I would like to briefly explain the concepts of APD (All Paths Down) and PDL (Permanent Device Loss).

 

APD

In an All Paths Down scenario, the ESXi host loses all paths to the storage device. The host is unable to communicate with the storage array. Examples of failures that can trigger APD are a failing HBA or a failing SAN.

figure 1. APD
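On the vSphere 5.5 host side, APD behavior is governed by a couple of advanced settings. The commands below are a sketch for inspecting them with esxcli; defaults may differ per build:

  # Is the APD handling mechanism enabled? (1 = yes)
  esxcli system settings advanced list -o /Misc/APDHandlingEnable
  # Number of seconds the host keeps retrying non-VM I/O before declaring the device unreachable
  esxcli system settings advanced list -o /Misc/APDTimeout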


Read More

Part 3: Testing PernixData FVP 2.0

A while ago I did a write-up about PernixData FVP and their new 2.0 release. In the blog post “Part 2: My take on PernixData FVP 2.0” I ran a couple of tests based on a max-IOPS workload using I/O Analyzer.

This time ’round, I wanted to run some more ‘real-life’ workload tests to show the difference between a non-accelerated VM, an FVP-accelerated VM using SSD and an FVP-accelerated VM using RAM. So I’m not necessarily in search of mega-high IOPS numbers, but rather looking to give a more realistic view of what PernixData FVP can do for your daily workloads. While testing, I proved to myself that it’s still pretty hard to simulate a real-life workload, but I had a go at it nonetheless…  🙂

Equipment

As stated in previous posts, it is important to understand that I ran these tests on a home lab, which is not representative of decent enterprise server hardware. That said, it should still be able to show the differences in performance gain when using FVP acceleration. Our so-called ‘nano-lab’ consists of:


Read More

Part 2: My take on PernixData FVP 2.0

In the blog post Part 1: My take on PernixData FVP I mentioned that the release of FVP version 2.0 was expected very soon. Well… PernixData went GA with FVP 2.0 on the 1st of October.


I liked the announcement e-mail from Jeff Aaron (VP Marketing at PernixData), in which he first looks back at the release of FVP version 1.0 before mentioning the new features within FVP 2.0:


FVP version 1.0 took the world by storm a year ago with the following unique features:

  • Read and write acceleration with fault tolerance
  • Clustered platform, whereby any VM can remotely access data on any host
  • 100% seamless deployment inside the hypervisor using public APIs certified by VMware.

Now FVP version 2.0 raises the bar even higher with the following groundbreaking capabilities:

  • Distributed Fault Tolerant Memory (DFTM) – Listen to PernixData Co-founder and CTO, Satyam Vaghani, describe how we turn RAM into an enterprise class medium for storage acceleration in this recent VMUG webcast
  • Optimize any storage device (file, block or direct attached)
  • User defined fault domains
  • Adaptive network compression

 

We will take a look at PernixData FVP 2.0, show how to upgrade from version 1.5, and explore the newly introduced features…


Read More

Part 1: My take on PernixData FVP

Having posted an article on Software Defined Storage a short while ago, I want to follow up with some posts on vendors/products I mentioned.


First of all, we’ll have a closer look at PernixData. Their product FVP, which stands for Flash Virtualization Platform, is a flash virtualization layer that enables read and write caching using server-side SSDs or PCIe flash devices. That almost sounds like the other caching products out there, doesn’t it… Well, PernixData FVP has features that are real differentiators compared to other vendors and products. With a new (2.0) version of FVP coming up, I decided to do a dual post. Version 2.0 should be released very soon.

What will FVP do for you? PernixData states:

Decouple storage performance from capacity

So what does that mean? It means we no longer have to fulfill storage performance requirements by throwing more spindles at the array just to reach the demanded IOPS, while also trying to keep latency as low as possible. With that in mind, what better place for flash to reside than in the server itself! Keeping the I/O path as short as possible is key!
When storage performance is no longer an issue, capacity requirements are easily met.


Read More

SDDC: VMware announces EVO:RAIL

With VMworld 2014 just kicking off, VMware already made an awesome, and somewhat unexpected, announcement: VMware EVO:RAIL!
EVO:RAIL appears to be the first product in the EVO family, so it looks like VMware probably has more surprises up its sleeve in the near future.

Let’s take a quick look at this new product range and what its first family member, RAIL, is all about…


What is EVO:RAIL


Read More

SDS: Software Defined Storage solutions


Software Defined everything: still a very hot item in the rapidly developing IT landscape.

Back in 2012, ‘Software Defined’ looked like just another buzzword, but the market is moving more and more towards software-defined solutions. The Software Defined Data Center (SDDC) is nowadays supported by Software Defined Storage (SDS), Software Defined Networking (SDN) and security solutions. All very cool stuff! I want to take a closer look at SDS in this blog post.

So… what is Software Defined Storage? Everybody has an idea of what it should be. My idea of SDS matches the quote below:

Software-defined storage (SDS) is the process of using software-based techniques to create, deploy and manage storage resources and infrastructure. It enables abstracting or separating storage services from hardware devices by using software or programmatic access to extract and manipulate storage resources


Read More