Understanding the ESXi Network IOChain

In this blog post, we go into the trenches of the (Distributed) vSwitch with a focus on the vSphere ESXi network IOChain. It is important to understand the core constructs of the vSphere networking layers, for instance when troubleshooting connectivity issues. In a second blog post on this topic, we will take a closer look at virtual network troubleshooting tooling.

IOChain

The vSphere ESXi network IOChain is a framework that provides the capability to insert functions into the network data path, regardless of whether a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS) is used. The IOChain is a group of functions that provides connectivity between ports and the vSwitch. A port has two IOChains, based on the direction to and from the vSwitch; each port in a port set is associated with both an input and an output IOChain. This allows for a modular approach by only including optional elements in an IOChain as configured by the user.

Examples of optional elements in an IOChain are VLAN support, NIC teaming, and traffic shaping. Looking at the high-level components in an ESXi network IOChain, we differentiate between the port group, the vSwitch (VSS or VDS) and the uplink level.
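To relate these levels to a live host, here is a hedged example: the ESXi shell can list the standard vSwitch configuration, showing the port groups, the vSwitch itself, and its uplinks in a single view (vSwitch names and output format vary per environment).

# List standard vSwitches, including their uplinks and attached port groups
esxcli network vswitch standard list

# List VMware distributed vSwitches known to this host
esxcli network vswitch dvs vmware list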

Port group level

This is where an optionally configured VLAN is interpreted by the VLAN filter, applying 802.1Q (dot1q) VLAN tags for your port group. The security settings Promiscuous mode, MAC address changes, and Forged transmits are also set at the port group level. The user can optionally configure traffic shaping as well: egress only when using a VSS, or bi-directional traffic shaping when using a VDS.
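As a hedged illustration of these port group level settings, the VLAN ID, security policy, and shaping policy of a standard vSwitch port group can be inspected and changed from the ESXi shell; the port group name "VM Network" and VLAN ID 100 are example values, and the distributed switch equivalents are configured through vCenter instead.

# Show each standard port group with its VLAN ID and parent vSwitch
esxcli network vswitch standard portgroup list

# Show the security policy (Promiscuous mode, MAC address changes, Forged transmits)
esxcli network vswitch standard portgroup policy security get -p "VM Network"

# Show the traffic shaping policy of the port group
esxcli network vswitch standard portgroup policy shaping get -p "VM Network"

# Assign an 802.1Q VLAN tag to the port group
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 100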

vSwitch (VSS or VDS) level

Incoming packets at the vSwitch level are forwarded to their destination using the forwarding engine. The forwarding engine contains port information paired with MAC address information; its job is to send the traffic to its proper destination. That can be either a VM residing on the same ESXi host or an external host.

The teaming engine is responsible for balancing network packets over the uplink interfaces. The way it does so depends on the teaming configuration chosen by the user. The traffic shaper module is added to the IOChain if it is enabled at the port group level.
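As a hedged example of where the teaming configuration lives, the active teaming and shaping policy of a standard vSwitch can be queried from the ESXi shell (vSwitch0 is the default example name); for a VDS, the teaming policy is set on the distributed port group in vCenter.

# Show the load balancing policy and the active/standby uplinks
esxcli network vswitch standard policy failover get -v vSwitch0

# Show whether the traffic shaper is enabled at the vSwitch level
esxcli network vswitch standard policy shaping get -v vSwitch0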

Uplink level

At this level, the traffic sent from the vSwitch to an external host finds its way to the driver module. This is where all the hardware offloading takes place. The supported hardware offloading features depend strongly on the physical NIC in combination with a specific driver module. Hardware offloading functions typically supported by NICs are TCP Segmentation Offload (TSO), Large Receive Offload (LRO), and Checksum Offload (CSO). Network overlay protocol offloading for VXLAN and Geneve, as used in NSX-v and NSX-T respectively, is also widely supported on modern NICs.
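Which offloads are actually available depends on the NIC and driver combination in use. As a hedged example, the driver in use and the current TSO and checksum offload state per vmnic can be queried from the ESXi shell (vmnic0 is an example name).

# Show driver, firmware, and link details for an uplink
esxcli network nic get -n vmnic0

# Show the per-vmnic TCP Segmentation Offload state
esxcli network nic tso get

# Show the per-vmnic Checksum Offload state
esxcli network nic cso get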

Next to hardware offloading, buffer mechanisms come into play at the uplink level; ring buffers, for example, absorb bursts of network packets. Finally, the data is handed off via the DMA controller to the physical NIC and onwards onto the Ethernet fabric.
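On recent ESXi releases, the ring buffer sizes and packet statistics of an uplink can be inspected as well; a hedged example from the ESXi shell, with vmnic0 as an example uplink name.

# Current RX/TX ring sizes configured for the uplink
esxcli network nic ring current get -n vmnic0

# Maximum ring sizes the NIC/driver supports
esxcli network nic ring preset get -n vmnic0

# Packet and drop statistics, useful to spot buffer exhaustion during bursts
esxcli network nic stats get -n vmnic0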

Standard vSwitch

The following diagram puts all components together to form the IOChain for vSphere networking using a standard vSwitch:


vSphere Networking: Bandwidth Reservations

To enforce bandwidth availability, it is possible to reserve a portion of the available uplink bandwidth using Network I/O Control (NIOC). It may be necessary to configure bandwidth reservations to meet business requirements with regard to network resource availability. In the system traffic overview, under the resource allocation option in the Distributed vSwitch settings, you can configure reservations. Reservations are set per system traffic type or per VM.

Depending strongly on your IT architecture, it could make sense to reserve bandwidth for specific business-critical workloads, the vSAN network, or an IP storage network backend. However, be aware that network bandwidth allocated in a reservation cannot be consumed by other network traffic types. Even when a reservation is not used to the fullest, NIOC does not redistribute the unused capacity to the bandwidth pool that is accessible to other network traffic types or network resource pools.

Since you cannot overcommit bandwidth reservations by default, you should be careful when applying reservations to ensure no bandwidth goes to waste. Thoroughly think through the minimum amount of bandwidth that you are required to guarantee for each network traffic type.

For NIOC to be able to guarantee bandwidth for all system traffic types, you can only reserve up to 75% of the bandwidth relative to the minimum link speed of the uplink interfaces.
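For example, on a host with two 10 GbE uplinks and one 25 GbE uplink, the minimum link speed is 10 Gbit/s, so the combined reservations for all system traffic types cannot exceed 0.75 × 10 Gbit/s = 7.5 Gbit/s.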

Configuring a reservation guarantees network bandwidth for that network traffic type or VM; it is the minimum amount of bandwidth that is available. Unlike a limit, a reservation does not state a maximum consumable amount of bandwidth, so a network resource can burst beyond the configured value.

You cannot exceed the maximum reservation allowed: NIOC always keeps 25% of the bandwidth per physical uplink aside to ensure basic ESXi network necessities, like management traffic, are covered. As a result, a 10 GbE network adapter can only be configured with reservations up to 7.5 Gbit/s.

Bandwidth Reservation Example



vSphere Networking: Traffic Marking

vSphere network quality control features like Network I/O Control (NIOC) focus on the virtual networking layer within a VMware virtual data center. But what about the physical network layer, and how can the two cooperate?

In converged infrastructures or enterprise networking environments, Quality of Service (QoS) is commonly configured in the physical network layers. QoS is the ability to provide different priorities to network flows, or to guarantee a certain level of performance to a network flow by using tags. In vSphere 6.7, you have the ability to create flow-based traffic marking policies to mark network flows for QoS.

Quality of Service

vSphere 6.7 supports Class of Service (CoS) and Differentiated Services Code Point (DSCP). Both are QoS mechanisms used to differentiate traffic types to allow for policing network traffic flows.

As related to network technology, CoS is a 3-bit field that is present in an Ethernet frame header when 802.1Q VLAN tagging is present. The field specifies a priority value between 0 and 7, more commonly known as CS0 through CS7, that can be used by quality of service (QoS) disciplines to differentiate and shape/police network traffic. Source: https://en.wikipedia.org/wiki/Class_of_service

One of the main differentiators is that CoS operates at the data link layer in an Ethernet-based network (layer 2), while DSCP operates at the IP network layer (layer 3).

Differentiated services or DiffServ is a computer networking architecture that specifies a simple and scalable mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ uses a 6-bit differentiated services code point (DSCP) in the 8-bit differentiated services field (DS field) in the IP header for packet classification purposes. Source: https://en.wikipedia.org/wiki/Differentiated_services

When a traffic marking policy is configured for CoS or DSCP, its value is advertised towards the physical layer to create an end-to-end QoS path.
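For example, a traffic marking policy could tag latency-sensitive VoIP VM traffic with DSCP 46 (Expedited Forwarding) while tagging backup traffic with a low-priority value; the exact values here are only illustrative and must match the QoS classes configured on the physical switches for the end-to-end path to work.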

Traffic marking policies are configurable on distributed port groups or on the DvUplinks. To match certain traffic flows, a traffic qualifier needs to be set. This can be realized using very specific traffic flows, with specific IP addresses and TCP/UDP ports, or by using a selected traffic type. The qualifier options are extensive.


TCP Segmentation Offload in ESXi explained

TCP Segmentation Offload (TSO) is the equivalent of the TCP/IP Offload Engine (TOE), but is modeled more towards virtual environments, where TOE is the actual NIC vendor hardware enhancement. It is also known as Large Segment Offload (LSO). But what does it do?

When an ESXi host or a VM needs to transmit a large data packet to the network, the packet must be broken down into smaller segments that can pass through all the physical switches and possible routers in the network along the way to the packet’s destination. TSO allows a TCP/IP stack to emit larger frames, up to 64 KB, even when the Maximum Transmission Unit (MTU) of the interface is configured for smaller frames. The NIC then divides the large frame into MTU-sized frames and prepends an adjusted copy of the initial TCP/IP headers to each of them. This process is referred to as segmentation.
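As a worked example: with a standard 1500-byte MTU, the TCP maximum segment size is typically 1460 bytes (1500 minus 20 bytes of IP and 20 bytes of TCP header), so a single 64 KB TSO transmit is cut by the NIC into roughly 45 wire-sized frames, work that would otherwise be done by the host CPU.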

When the NIC supports TSO, it handles the segmentation instead of the host OS itself. The advantage is that the CPU can present up to 64 KB of data to the NIC in a single transmit request, resulting in fewer cycles being burned to segment the network packet using the host CPU. To fully benefit from the performance enhancement, you must enable TSO along the complete data path on an ESXi host. If TSO is supported on the NIC, it is enabled by default.

The same goes for TSO in the VMkernel layer and for the VMXNET3 VM adapter, but not necessarily for the TSO configuration within the guest OS. To verify whether your pNIC supports TSO and whether it is enabled on your ESXi host, use the following command: esxcli network nic tso get. The output will look similar to the following screenshot, where TSO is enabled for all available pNICs or vmnics.
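As a hedged complement to the host-side check, the offload state of the VMXNET3 adapter can also be verified inside a Linux guest with ethtool; the interface name eth0 is an example and the exact output differs per distribution.

# Inside the Linux guest: show segmentation offload settings of the virtual NIC
ethtool -k eth0 | grep -i segmentation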



Virtual Networking: Poll-mode vs Interrupt

The VMkernel relies on the physical device, the pNIC in this case, to generate interrupts to process network I/O. This traditional style of I/O processing incurs additional delays on the entire data path, from the pNIC all the way up into the guest OS. Processing I/Os using interrupt-based mechanisms saves CPU because multiple I/Os are combined in one interrupt. Using poll mode, the driver and the application running in the guest OS constantly spin, waiting for an I/O to be available. This way, an application can process the I/O almost instantly instead of waiting for an interrupt to occur, allowing for lower latency and a higher Packets Per Second (PPS) rate.

An interesting fact is that the world is moving towards poll-mode drivers. A clear example of this is the NVMe driver stack.

The main drawback is that the poll-mode approach consumes much more CPU time because of the constant polling for I/O and the immediate processing. Basically, it consumes all the CPU you offer to the vCPUs used for polling. Therefore, it is primarily useful when the workloads running on your VMs are extremely latency sensitive. It is a perfect fit for data plane telecom applications, like a Packet GateWay (PGW) node as part of an Evolved Packet Core (EPC) in an NFV environment, or other real-time latency-sensitive workloads.

Using the poll-mode approach, you need a poll-mode driver in your application, which polls a specific device queue for I/O. From a networking perspective, Intel’s Data Plane Development Kit (DPDK) delivers just that. You could say that the DPDK framework is a set of libraries and drivers that allow for fast network packet processing.
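As a rough, hedged sketch of such a setup (the PCI address, core numbers, and paths are examples and differ per DPDK version and environment), a DPDK poll-mode configuration typically unbinds the NIC from its kernel driver, binds it to a userspace-capable driver, and then runs a poll-mode application such as testpmd.

# Load the userspace I/O driver and check which driver currently owns the NIC
modprobe vfio-pci
./usertools/dpdk-devbind.py --status

# Bind the example NIC at PCI address 0000:0b:00.0 to vfio-pci for userspace polling
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:0b:00.0

# Run the example forwarding application on two cores in interactive mode
./build/app/testpmd -l 0-1 -n 4 -- -i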

Data Plane Development Kit (DPDK) greatly boosts packet processing performance and throughput, allowing more time for data plane applications. DPDK can improve packet processing performance by up to ten times. DPDK software running on current generation Intel® Xeon® Processor E5-2658 v4, achieves 233 Gbps (347 Mpps) of LLC forwarding at 64-byte packet sizes. Source: http://www.intel.com/content/www/us/en/communications/data-planedevelopment-kit.html

DPDK in a VM

Using a VM with a VMXNET3 network adapter, you already have the default paravirtual network connectivity in place. The following diagram shows the default logical paravirtual device connectivity.



Distributed Storage Network Topology

This is a short write-up about why you should consider a certain network topology when adopting scale-out storage technologies in a multi-rack environment. Without going into too much detail, I want to accentuate the need to follow the scalable distributed storage model when it comes to designing your Ethernet storage network. To be honest, it is probably the other way around: the networking experts in this world have been delivering scalable network architectures, while maintaining consistent and predictable latency, for a long time now. The storage world is just catching up.

Today, we have the ability to create highly scalable distributed storage infrastructures, following Hyper-Converged Infrastructure (HCI) innovations. Because the storage layer is distributed across ESXi hosts, a lot of point-to-point Ethernet connections between ESXi hosts will be utilized for storage I/O. Typically, when a distributed storage solution (like VMware vSAN) is adopted, we tend to create a pretty basic layer-2 network, preferably using 10GbE or faster NICs and line-rate capable components in a non-blocking network architecture with enough ports to support our current hosts. But once we scale to a large number of ESXi hosts and racks, we face challenges in how to provide the required network interfaces for our ESXi hosts and how to connect the multiple Top of Rack (ToR) switches to each other. That is where the so-called spine and leaf network architecture comes into play.

Spine-Leaf

In a spine-leaf network architecture, each leaf switch connects to every spine switch in the fabric. Using this topology, the connection between two ESXi hosts always traverses the same number of network hops, even when the hosts are distributed across multiple racks. Such a network topology provides predictable latency, and thus consistent performance, even as you keep scaling out your virtual datacenter. It is this consistency in performance that makes the spine-leaf network architecture so suitable for distributed storage solutions.
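For example, traffic between hosts in different racks always travels host, leaf, spine, leaf, host: exactly three switch hops, regardless of how many racks or spine switches the fabric contains, while hosts within the same rack only cross their local leaf switch.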

An exemplary logical spine-leaf network architecture is shown in the following diagram:

