Distributed Storage Network Topology

This is a short write-up on why you should consider a certain network topology when adopting scale-out storage technologies in a multi-rack environment. Without going into too much detail, I want to emphasize the need to design your Ethernet storage network around the scalable distributed storage model. To be honest, it is probably the other way around: the networking experts of this world have been building scalable network architectures with consistent and predictable latency for a long time now. The storage world is just catching up.

Today, we have the ability to create highly scalable distributed storage infrastructures, following Hyper-Converged Infrastructure (HCI) innovations. Because the storage layer is distributed across ESXi hosts, a lot of point-to-point Ethernet connections between ESXi hosts will be utilized for storage I/O. Typically, when a distributed storage solution (like VMware vSAN) is adopted, we tend to create a pretty basic layer-2 network, preferably using 10GbE or faster NICs and line-rate capable components in a non-blocking network architecture, with enough ports to support our current hosts. But once we scale to a large number of ESXi hosts and racks, we face the challenge of how to provide the required network interfaces for our ESXi hosts and how to connect the multiple Top of Rack (ToR) switches to each other. That is where the so-called spine-leaf network architecture comes into play.
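To get a feel for where the basic layer-2 approach starts to hurt, here is a minimal back-of-the-envelope sketch in Python. All port counts and speeds are made-up example numbers, not recommendations; it simply shows how quickly host-facing bandwidth outgrows the uplinks of a single ToR switch as a rack fills up.

```python
# Back-of-the-envelope sketch (hypothetical numbers): how host-facing bandwidth
# compares to uplink capacity on a single leaf/ToR switch as a rack fills up.

def leaf_oversubscription(hosts_per_rack, host_nic_gbps, uplinks_per_leaf, uplink_gbps):
    """Ratio of host-facing bandwidth to uplink bandwidth on one leaf switch.

    A ratio of 1.0 would be fully non-blocking; higher means oversubscribed.
    """
    downlink_capacity = hosts_per_rack * host_nic_gbps
    uplink_capacity = uplinks_per_leaf * uplink_gbps
    return downlink_capacity / uplink_capacity

# Example: 16 ESXi hosts per rack, each with 2 x 10GbE used for storage traffic,
# and a leaf switch with 4 x 40GbE uplinks towards the rest of the fabric.
ratio = leaf_oversubscription(hosts_per_rack=16, host_nic_gbps=2 * 10,
                              uplinks_per_leaf=4, uplink_gbps=40)
print(f"Leaf oversubscription ratio: {ratio:.1f}:1")  # -> 2.0:1
```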

Spine-Leaf

Each leaf switch, in a spine-leaf network architecture, connects to every spine switch in the fabric. Using this topology, the connection between two ESXi hosts always traverses the same number of network hops, even when the hosts are distributed across multiple racks. Such a network topology provides predictable latency, and thus consistent performance, even as you keep scaling out your virtual datacenter. It is this consistency in performance that makes the spine-leaf network architecture so suitable for distributed storage solutions.
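To make the "same number of hops" point concrete, here is a tiny Python model of the path length between two ESXi hosts in a leaf-spine fabric. It is my own simplification, not tied to any particular switch vendor or product.

```python
# Minimal model: why leaf-spine gives a predictable path length. Any two hosts
# in different racks always take leaf -> (some) spine -> leaf.

def switch_hops(rack_a, rack_b):
    """Number of switches traversed between two ESXi hosts in a leaf-spine fabric."""
    if rack_a == rack_b:
        return 1          # both hosts sit behind the same leaf/ToR switch
    return 3              # leaf -> spine -> leaf, regardless of which racks

# Whether the racks are adjacent or at opposite ends of the row, the hop count
# (and therefore the latency profile) stays the same.
print(switch_hops(rack_a=1, rack_b=2))   # 3
print(switch_hops(rack_a=1, rack_b=12))  # 3
```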

An exemplary logical spine-leaf network architecture is shown in the following diagram:

[Diagram: exemplary logical spine-leaf network architecture]


Lab test: vSphere Fault Tolerance performance impact

Triggered by some feedback on the VMware Reddit channel, I was wondering what is holding us back from adopting the vSphere Fault Tolerance (FT) feature. Comments on Reddit stated that although the increased availability is desirable, the performance impact is holding them back from actually using it in production environments.

Use cases for FT could be, according to the vSphere 6 documentation center:

  • Applications that need to be available at all times, especially those that have long-lasting client connections that users want to maintain during hardware failure.
  • Custom applications that have no other way of doing clustering or other forms of application resiliency.
  • Cases where high availability might be provided through custom clustering solutions, which are too complicated to configure and maintain.

However, the stated use cases only focus on availability and do not seem to take into account the performance impact of enabling FT. Is there a sweet spot for applications that do need high resiliency, but do not require immense performance and could cope with the latency impact caused by FT? It really depends on the application workload. A SQL server typically generates more FT traffic than, for instance, a webserver that primarily transmits. So enabling FT will impact some workloads more than others.
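To illustrate why the workload profile matters, here is a small Python helper based on the sizing formula that is commonly quoted for FT logging bandwidth (originally published for legacy, single-vCPU FT). Treat it as a rough estimate only; the example numbers are made up.

```python
# Rough illustration of why workload profile matters for FT: the commonly
# quoted sizing formula for FT logging bandwidth is driven by disk reads and
# incoming network traffic, plus ~20% headroom.

def ft_logging_bandwidth_mbps(avg_disk_reads_mbytes_s, avg_network_input_mbps):
    """Estimated FT logging traffic in Mbit/s."""
    return (avg_disk_reads_mbytes_s * 8 + avg_network_input_mbps) * 1.2

# A read-heavy SQL server generates far more logging traffic than a webserver
# that mostly transmits (transmitted traffic does not have to be replayed).
print(ft_logging_bandwidth_mbps(avg_disk_reads_mbytes_s=40, avg_network_input_mbps=20))  # ~408 Mbit/s
print(ft_logging_bandwidth_mbps(avg_disk_reads_mbytes_s=2,  avg_network_input_mbps=5))   # ~25 Mbit/s
```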

Requirements

Since the introduction of Multi-Processor Fault Tolerance (SMP-FT) in vSphere 6, the requirements for FT are a bit more flexible. The compute maximums for an FT-enabled VM are 4 vCPUs and 64GB of memory. The use of eager zeroed thick disks is no longer a requirement, so thin, lazy zeroed thick, and eager zeroed thick provisioned disks are all supported with SMP-FT!
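As a trivial illustration of those limits, a quick Python check against the SMP-FT maximums mentioned above could look like this. The function name and disk-type labels are my own, not a VMware API.

```python
# Quick sanity check against the SMP-FT maximums mentioned above
# (vSphere 6: at most 4 vCPUs and 64 GB of memory per FT-enabled VM).

SMP_FT_MAX_VCPUS = 4
SMP_FT_MAX_MEMORY_GB = 64

def eligible_for_smp_ft(vcpus, memory_gb, disk_type):
    """Rough eligibility check; the disk provisioning type no longer matters for SMP-FT."""
    if disk_type not in ("thin", "lazy-zeroed-thick", "eager-zeroed-thick"):
        raise ValueError(f"unknown disk type: {disk_type}")
    return vcpus <= SMP_FT_MAX_VCPUS and memory_gb <= SMP_FT_MAX_MEMORY_GB

print(eligible_for_smp_ft(4, 64, "thin"))                # True
print(eligible_for_smp_ft(8, 32, "eager-zeroed-thick"))  # False: too many vCPUs
```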


vSphere and NFV tuning considerations

In my current role, I am involved in a lot of discussions around Network Functions Virtualization, a.k.a. NFV. When talking about NFV in this post, I mean telco applications: applications specifically designed for and used by Communications Service Providers (CSPs) as core applications that, for instance, enable your (mobile) phone to actually call another phone. 🙂

NFV for telco applications is not that mainstream yet, it seems. The old-school native way, with telco-specific hardware running line cards, payload servers, etc., is obviously not sustainable given the current way we like to do ICT. On the other hand, telco application vendors still seem to be finding their way in properly adopting virtualization as a technology. As a result, the level of virtualization adoption for network functions appears to be a few years behind IT application server virtualization.

But development is rapid, and that goes for NFV too. There already is an NFV Architecture Framework created by ETSI, which was selected in November 2012 to be the home of the Industry Specification Group for NFV. The framework is a high-level functional architecture and design philosophy for virtualized network functions and the underlying virtualization infrastructure, as shown in the following diagram:

[Diagram: ETSI NFV Architecture Framework]

Although word has it that NFV is mostly deployed using a KVM hypervisor in close conjunction with OpenStack as the API framework for NFV, VMware is looking to address the needs of communications service providers who want to properly 'do' NFV using VMware solutions. Hence the vCloud NFV suite.

VMware vCloud NFV is a Network Functions Virtualization (NFV) services delivery, operations and management platform, developed for Communications Service Providers (CSPs) who want to reduce infrastructure CapEx and OpEx costs, improve operational agility and monetize new services with rapid time to market.

[Diagram: VMware vCloud NFV platform overview]

Let’s have a closer look at tuning considerations for vSphere to properly run NFV workloads!
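As a small teaser, and purely as a hedged sketch rather than an official recipe: one of the usual tuning knobs for latency-critical VNF workloads is the per-VM latency sensitivity setting. Using pyVmomi, VMware's open-source Python SDK for the vSphere API, setting it to high could look roughly like this. The vCenter address, credentials, and VM name are made-up placeholders.

```python
# Hedged sketch: raise a VM's latency sensitivity to "high" via pyVmomi.
# Note that vSphere expects full memory (and typically CPU) reservations
# for VMs running with high latency sensitivity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "my-vnf-vm")  # hypothetical VNF VM
    view.Destroy()

    # Reconfigure the VM with latency sensitivity set to 'high'.
    spec = vim.vm.ConfigSpec(
        latencySensitivity=vim.LatencySensitivity(level="high"))
    task = vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```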
