Distributed Storage Network Topology

This is a short write-up on why network topology matters when you adopt scale-out storage technologies in a multi-rack environment. Without going into too much detail, I want to stress that your Ethernet storage network should follow the same scalable model as your distributed storage layer. To be honest, it is probably the other way around: networking experts have been building scalable network architectures with consistent, predictable latency for a long time now. The storage world is just catching up.

Today, we have the ability to create highly scalable distributed storage infrastructures, driven by Hyper-Converged Infrastructure (HCI) innovations. Because the storage layer is distributed across ESXi hosts, a large number of point-to-point Ethernet connections between ESXi hosts is used for storage I/O. Typically, when a distributed storage solution like VMware vSAN is adopted, we tend to build a fairly basic layer-2 network, preferably using 10GbE or faster NICs and line-rate capable components in a non-blocking network architecture with enough ports to support our current hosts. But once we scale to a large number of ESXi hosts and racks, we face the challenge of providing enough network interfaces for our ESXi hosts and of interconnecting the multiple Top of Rack (ToR) switches. That is where the so-called spine-leaf network architecture comes into play.

Spine-Leaf

In a spine-leaf network architecture, each leaf switch connects to every spine switch in the fabric. With this topology, the connection between two ESXi hosts always traverses the same number of network hops, even when the hosts are distributed across multiple racks. Such a network topology provides predictable latency, and thus consistent performance, even as you keep scaling out your virtual datacenter. It is this consistency in performance that makes the spine-leaf network architecture so suitable for distributed storage solutions.
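
To get a feel for why the uplink design matters, here is a minimal sketch of the bandwidth math for a hypothetical leaf switch with 48 x 10GbE host-facing ports and 4 x 40GbE uplinks to the spine. The port counts are purely illustrative, not a sizing recommendation.

# Illustrative only: rough oversubscription math for a hypothetical leaf switch
downlink_gbps=$((48 * 10))   # 480 Gbps towards the ESXi hosts in the rack
uplink_gbps=$((4 * 40))      # 160 Gbps towards the spine switches
echo "Leaf oversubscription: ${downlink_gbps}:${uplink_gbps} Gbps (3:1)"

The closer that ratio gets to 1:1, the less likely it is that east-west storage traffic between racks gets constrained by the leaf uplinks.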

The following diagram shows an example of a logical spine-leaf network architecture:

VXLAN Offloading Support

Modern physical NICs (pNICs) have several offloading capabilities. If you are running VMware NSX, which uses VXLAN, you could benefit from the VXLAN offloading feature. VXLAN offloading allows you to use TCP offloading mechanisms such as TCP Segmentation Offload (TSO) and Checksum Offload (CSO), because the pNIC is able to ‘look into’ encapsulated VXLAN packets. That results in lower CPU utilization and a possible performance gain. But how do you determine what is actually supported by your pNIC and the driver used in ESXi?
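
If you want a quick look at what is currently enabled on a host, ESXi can report the TSO and checksum offload state per vmnic from the command line. This is a minimal sketch, assuming a reasonably recent ESXi release; the exact command namespaces may differ per version.

# List the pNICs and the driver bound to each of them
esxcli network nic list

# Show the TSO and checksum offload state per vmnic
esxcli network nic tso get
esxcli network nic cso get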

Follow these three steps to verify whether the VXLAN offload feature you are looking for is supported and enabled:

Step 1: Check the support of the pNIC chipset
Step 2: Check the support of the driver module
Step 3: Check if the driver module needs configuration

The first step is to check the vendor information about the features supported by the pNIC. Let’s take the combination of a 10GbE Broadcom QLogic 57810 NIC and the VXLAN offload feature as an example. The datasheet of the QLogic 57810 NIC clearly states that VXLAN offloading is supported.
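
For steps 2 and 3, the sketch below shows the kind of checks you can run from the ESXi shell. It assumes the pNIC shows up as vmnic0 and is bound to the bnx2x driver module (commonly used for the 578xx series; newer ESXi releases may use the qfle3 native driver instead), so substitute the names reported on your own host.

# Step 2: identify the driver module, driver version and firmware for the pNIC
esxcli network nic get -n vmnic0

# Step 3: list the module parameters to see whether an offload-related setting
# needs to be configured explicitly for your driver version
esxcli system module parameters list -m bnx2x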
