The vMotion Process Under the Hood

The VMware vSphere vMotion feature is one of the most important capabilities in today’s virtual infrastructures. Since its inception in 2002 and its release in 2003, it has allowed us to migrate the active state of a virtual machine from one physical ESXi host to another. Today, the ability to seamlessly migrate virtual machines is an integral part of nearly every virtualization deployment. The portability of workloads is the foundation for a true hybrid cloud experience, as they can be moved between on-premises and public clouds using VMware Hybrid Cloud Extension (HCX). vSphere vMotion still is, and always will be, one of the most momentous game-changers in the IT industry.

The vMotion internals have been developed further over the years to support new technologies.

This blog post focuses on compute-only migrations: the standard vMotion that migrates the active compute state from a source to a destination ESXi host. We also have the option to perform a Storage vMotion which, when combined with a compute vMotion, is referred to as an XvMotion. Other flavors are Long Distance vMotion and Cross vCenter vMotion, both of which are primarily vCenter Server operations on top of the ESXi side of the vMotion process.

Reading this article will give you a better understanding of the ‘magic’ that happens under the hood when you initiate a virtual machine migration.

vMotion Process

When a virtual machine migration is started, the vCenter Server instance executes a so-called long-running migration task to process the migration. The first step is to perform a compatibility check: is it possible to run the virtual machine on the destination host? Think about possible constraints that could prevent a live migration. The next step is to tell the source and destination ESXi hosts what is happening. A migration specification is created that contains the following information (a simplified sketch follows the list):

  • The virtual machine that is being live-migrated
  • Configuration of that virtual machine (virtual hardware, VM options, etc.)
  • Source ESXi host
  • Destination ESXi host
  • vMotion network details
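
To make this more tangible, here is a minimal, hypothetical sketch of what such a migration specification could look like as a data structure. The field names and values are purely illustrative and are not the actual vCenter Server or VMkernel format.

from dataclasses import dataclass, field

@dataclass
class MigrationSpec:
    """Illustrative stand-in for the spec vCenter Server shares with both hosts."""
    vm_name: str                 # the virtual machine that is being live-migrated
    vm_config: dict              # virtual hardware, VM options, etc.
    source_host: str             # source ESXi host
    destination_host: str        # destination ESXi host
    vmotion_network: dict = field(default_factory=dict)  # vMotion network details

spec = MigrationSpec(
    vm_name="app01",
    vm_config={"vcpus": 4, "memory_gb": 24},
    source_host="esxi-01.lab.local",
    destination_host="esxi-02.lab.local",
    vmotion_network={"vmkernel_interface": "vmk1"},
)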

The migration specification is shared with the source and destination ESXi hosts by the vCenter Server instance, making sure that all necessary information is exchanged to start the migration process. The vCenter Server communicates with the ESXi hosts using the Virtual Provisioning X Daemon (VPXD), which calls out to the Virtual Provisioning X Agent (VPXA) running on the ESXi hosts. VPXA listens for messages from VPXD; it receives the migration spec and passes it on to the VMX process via hostd. The Host Daemon (hostd) maintains host-specific information and access for management, including virtual machine telemetry like the VMstate. When a migration is started, hostd puts the virtual machine in an intermediate state so that its configuration cannot be changed during the migration.

The Virtual Machine Monitor (VMM) process is in charge of managing the virtual machine memory and transfers virtual machine storage and network I/O requests to the VMkernel. All other I/O requests, which are not performance-critical, are forwarded by the VMM to VMX. The Virtual Machine Extension (VMX) process runs in the VMkernel and is responsible for handling I/O to devices that are not critical to performance. Note that the VMM is only used at the source ESXi host during the migration, because that is where the active memory of the virtual machine resides.

After this is done, the VMkernel migrate module on the source ESXi host opens sockets on the vMotion-enabled network to set up communication with the destination ESXi host.

Prepare Phase to Pre-Copy Phase

By now, all processes and communication paths are ready for the live migration to start. The prepare phase is all about making sure that the destination ESXi host pre-allocates the compute resources for the to-be-migrated virtual machine. The virtual machine is also created on the destination ESXi host, but it is masked. All the information about the virtual machine configuration is already known, as it is included in the migration spec.

With the prepare phase done, the process moves to the pre-copy phase, where the memory is transferred from the source to the destination ESXi host. All of the virtual machine memory pages on the source ESXi host need to be traced. By doing that, the vMotion process knows which memory pages are overwritten during the migration, referred to as dirty pages, as it needs to re-send these memory pages to the destination host.
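
Conceptually, page tracing boils down to keeping track of which pages were written to after they were copied. The following toy model is purely illustrative and is not the actual VMM page-table mechanism; it only shows the idea of collecting and resetting a dirty set per pre-copy pass.

class PageTracer:
    """Toy model of dirty-page tracking during pre-copy (illustrative only)."""

    def __init__(self, num_pages):
        self.dirty = set(range(num_pages))  # before the first pass, every page still has to be sent

    def guest_write(self, page_number):
        # The write trace fires when the guest OS touches a page: mark it dirty again.
        self.dirty.add(page_number)

    def take_work_list(self):
        # Hand the current dirty set to the transmit code and start a fresh trace.
        work, self.dirty = self.dirty, set()
        return work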

Page Tracing

During the pre-copy phase, the vCPUs in use by the virtual machine are briefly stunned to install the page tracers. The VMkernel migration module then asks the VMM to start the page tracing, as the VMM owns the memory page table state of the virtual machine. The following diagram shows what happens when the guest OS writes data to memory during a vMotion:

Iterative Memory Pre-Copy

Page tracing is a continuous cycle. It works towards memory pre-copy convergence by using multiple iterations. The first iteration (pre-copy phase -1) copies the virtual machine memory. The following iterations (pre-copy phase 0 to n) copy the dirty memory pages. To give you an example, this is what the iterations could look like as we live-migrate a virtual machine with 24GB of memory (a small simulation sketch follows the example):

Phase -1:  Copy the 24GB of virtual machine memory and trace pages. As we send the memory, it dirties 8GB.
Phase 0:  Re-transmit the dirtied 8GB. In the process, the memory dirties another 3GB.
Phase 1:  Send the 3GB. While that transfer is happening, the virtual machine dirties 1GB.
Phase 2:  Send the remaining 1GB.
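
The shrinking amounts in the example follow directly from how much memory the guest dirties while each pass is being sent. A heavily simplified simulation, with made-up numbers and a made-up function name rather than the actual vMotion algorithm, could look like this:

def simulate_precopy(memory_gb, dirty_ratio_per_pass, stop_below_gb=0.5, max_passes=10):
    """Simplified pre-copy model: each pass sends what is left while the guest
    dirties a fraction of it. Purely illustrative, not the VMkernel algorithm."""
    to_send = memory_gb
    for phase in range(-1, max_passes):        # phase -1 is the full memory copy
        dirtied = to_send * dirty_ratio_per_pass
        print(f"Phase {phase}: send {to_send:.1f} GB, guest dirties {dirtied:.1f} GB")
        to_send = dirtied
        if to_send < stop_below_gb:            # small enough for the final copy
            break

# Roughly reproduces the 24GB example above (about a third of each pass is dirtied again).
simulate_precopy(memory_gb=24, dirty_ratio_per_pass=1/3)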

As the memory pages are copied from the source to the destination ESXi host, we need to determine when all memory has been copied to its destination. After each iteration, the VMM asks the VMkernel if the pre-copy process can be terminated. This is only possible when all memory changes (dirty pages) have been copied to the destination host. Part of the iterative memory pre-copy algorithm is to match all destination memory pages to their source. Starting at page zero all the way to the maximum or last memory page number, all memory pages are sequentially checked to see if the destination pages are in sync with the source pages.

To determine whether we can terminate pre-copy, we need to verify if we can complete the last memory page copy in a window of less than 500ms. We can calculate this using the following information:

  • The migration transmit rate: at what speed (GbE) are we copying memory data between the hosts?
  • The dirty page rate (GB/s): how many memory pages are being overwritten by the guest OS?
  • How many pages do we have left to transmit to the destination host?

If the outcome is no, the next iteration happens. If the outcome is yes, the VMkernel migrate module terminates the pre-copy process.
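
A simplified way to express that decision in code, using only the inputs listed above (remaining pages, transmit rate, dirty page rate) and assuming 4KB pages, might look like the sketch below. It is an illustration of the reasoning, not the actual VMkernel heuristic.

PAGE_SIZE_BYTES = 4096            # assuming 4 KB memory pages
SWITCHOVER_BUDGET_S = 0.5         # the < 500 ms window mentioned above

def can_terminate_precopy(pages_left, transmit_bytes_per_s, dirty_pages_per_s):
    """Simplified illustration of the termination decision, not the VMkernel logic."""
    dirty_bytes_per_s = dirty_pages_per_s * PAGE_SIZE_BYTES
    # If the guest dirties memory faster than we can send it, another pass will not
    # converge on its own; this is where SDPS (described below) steps in.
    if dirty_bytes_per_s >= transmit_bytes_per_s:
        return False
    # Can the remaining pages be pushed out within the switchover budget?
    return (pages_left * PAGE_SIZE_BYTES) / transmit_bytes_per_s < SWITCHOVER_BUDGET_S

# Example: 50,000 pages left on a 10 GbE vMotion network (~1.25 GB/s usable).
print(can_terminate_precopy(pages_left=50_000,
                            transmit_bytes_per_s=1.25e9,
                            dirty_pages_per_s=20_000))   # True: roughly 0.16 s of data left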

Now, what happens if the dirty page rate is higher than the migration transmit rate? In that case, there is no point in doing another iteration, because we can never reach memory pre-copy convergence and the migration would come to a halt. This is why we introduced Stun During Page Send (SDPS) in vSphere 5.0. Basically, SDPS is a way for the VMkernel to tell the VMM not to run the scheduled instructions, but to introduce a very short ‘sleep’ instead. This may sound like an impact on workload performance, but it happens at a fine-grained level. We are talking microseconds here, and it is because of these very small timeframes that we can converge and the vMotion process will succeed.

SDPS is executed with each iteration if the dirty page rate exceeds the transmit rate. Subsequent iterations only copy the dirty memory pages that were modified during the previous iteration. A shorter iteration gives the guest OS less opportunity to modify or dirty its memory pages, thereby shortening the next pre-copy iteration. Although there is some performance cost involved, SDPS is typically not noticeable to the workload. The goal is always to leave the guest OS unaware of the migration happening.
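
In pseudo-terms, SDPS injects a microsecond-scale sleep into vCPU execution whenever the guest outruns the vMotion network. The sketch below is a toy illustration only; the function and the sleep value are made up and bear no relation to the real VMkernel implementation.

import time

def maybe_stun_vcpu(dirty_bytes_per_s, transmit_bytes_per_s, sleep_us=100):
    """Toy illustration of Stun During Page Send: when the guest dirties memory
    faster than vMotion can copy it, pause the vCPU for a microsecond-scale
    interval so the copy can catch up. Structure and values are illustrative only."""
    if dirty_bytes_per_s > transmit_bytes_per_s:
        time.sleep(sleep_us / 1_000_000)   # a very short 'sleep' instead of running guest instructions
        return True
    return False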

Switchover

With the memory pre-copy process terminated by the VMM, all memory pages reside on the destination ESXi host. The VMM now sends a remote procedure call (RPC) to VMX to tell it that it can suspend the source virtual machine. VMX enters the checkpoint phase, in which it suspends the virtual machine and sends the checkpoint data to the destination ESXi host.

In the process, the virtual machine on the destination ESXi host is de-masked, and its state is restored using the checkpoint data. What basically happens is that the virtual machine on the destination is powered on, but the boot process is interrupted and pointed to the memory pages that were migrated from the source ESXi host. All this typically happens in 100-200ms. That is the stun time, during which the virtual machine is not in a running state. The duration of the stun time depends on a variety of factors, like host hardware and dynamic guest workloads.

The virtual machine is now live-migrated!

To Conclude…

Although I’ve tried to explain the vMotion process in depth, there are far more details to share about what happens in the background. I hope you appreciate this blog post. A big thank you goes out to the VMware vMotion engineering team for providing invaluable information.

If you are attending VMworld, be sure to visit the HBI2333BU – ‘How to Get the Most Out of vSphere vMotion’ session! With workloads growing ever larger, what can be done to increase vMotion performance? In this session, you will learn even more detailed information about the vMotion process, and get best practices on how to lower migration times and how to debug migrations. Find out how to tune vMotion to get line-rate performance using 100GbE NICs.

Other resources to learn

Enhanced vMotion Compatibility (EVC) Explained
VMotion, the story and confessions


ESXi Network Troubleshooting Tools

In the previous post about the ESXi network IOChain, we explored the various constructs that belong to the network path. This blog post builds on top of that and focuses on the tools for advanced network troubleshooting and verification. Today, vSphere ESXi is packaged with an extensive toolset that helps you to check connectivity or verify bandwidth availability. Some tools are not only applicable inside your ESXi box, but are also very usable for the physical network components involved in the network paths.

Access to the ESXi shell is a necessity, as the commands are executed there. A good starting point for connectivity troubleshooting is the esxtop network view. The esxcli network set of commands also provides a lot of information, and we have (vmk)ping and traceroute at our disposal. However, if you are required to dig deeper into a network issue, the following list of tools might help you out:

  • net-stats
  • pktcap-uw
  • nc
  • iperf

Net-stats

We’ll start off with one of my favorites: net-stats. This command can give you a lot of deep-dive insight into what is happening under the covers of networking on an ESXi host, as it can collect port stats. The command is quite extensive, as it allows for a lot of options. The net-stats -h command displays all flags, the most common one being the list option. Use net-stats -l to determine the switchport numbers and MAC addresses for all VMkernel interfaces, vmnic uplinks and vNIC ports. This information is also used as input for the other tools described in this blog post.

To give some more examples, net-stats can also provide in-depth details on which worldlets (or CPU threads, listed as “sys”) are spun up for handling network I/O, by issuing net-stats with the following flags: net-stats -A -t vW. The output provided by these options helps in verifying whether NetQueue or Receive Side Scaling (RSS) is active for vmnics, by mapping the “sys” output to the worldlet name using, for instance, the vsi shell (vsish -e cat /world/<world id>/name).
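
If you want to script that mapping, a small, hypothetical helper could look like the snippet below. It assumes you run it in the ESXi shell where vsish is available, and that you already collected the world IDs from the net-stats “sys” output; the example IDs are made up.

import subprocess

def worldlet_name(world_id):
    # Resolve a world ID (taken from the net-stats "sys" output) to its name
    # via the vsi shell, as described above. Hypothetical helper, ESXi shell only.
    out = subprocess.check_output(["vsish", "-e", "cat", "/world/%s/name" % world_id])
    return out.decode().strip()

for wid in (2098745, 2098746):    # example world IDs; use the ones net-stats reported
    print(wid, worldlet_name(wid))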

Using different options, net-stats provides great insights into network behaviour.

Pktcap-uw



Understanding the ESXi Network IOChain

In this blog post, we go into the trenches of the (Distributed) vSwitch with a focus on the vSphere ESXi network IOChain. It is important to understand the core constructs of the vSphere networking layers for, for example, troubleshooting connectivity issues. In a second blog post on this topic, we will look closer at virtual network troubleshooting tooling.

IOChain

The vSphere ESXi network IOChain is a framework that provides the capability to insert functions into the network data path, regardless of whether a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS) is used. The IOChain is a group of functions that provides connectivity between ports and the vSwitch. A port has two IOChains, based on the direction to and from the vSwitch; each port in a port set is associated with an input and an output IOChain. This allows for a modular approach by only including optional elements in an IOChain as configured by the user.

Examples of optional elements in an IOChain are VLAN support, NIC teaming, and traffic shaping. Looking at the high-level components in an ESXi network IOChain, we differentiate between the port group, the vSwitch (VSS or VDS) and the uplink level.
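
To illustrate the modular idea, you can think of each input or output IOChain as an ordered list of functions that a frame passes through, where only the configured, optional elements are present. The following toy model is in no way ESXi code; the filter functions are invented purely to show the concept.

class IOChain:
    """Toy model of an input or output IOChain: an ordered list of frame filters."""

    def __init__(self):
        self.filters = []          # only the configured, optional elements end up here

    def add(self, filter_fn):
        self.filters.append(filter_fn)
        return self

    def process(self, frame):
        for filter_fn in self.filters:
            frame = filter_fn(frame)
            if frame is None:      # a filter may drop the frame (e.g. a VLAN mismatch)
                return None
        return frame

# Hypothetical optional elements, mirroring the examples in the text.
def vlan_filter(frame):
    return frame if frame.get("vlan") == 10 else None

def traffic_shaper(frame):
    frame["shaped"] = True
    return frame

inbound = IOChain().add(vlan_filter).add(traffic_shaper)
print(inbound.process({"vlan": 10, "payload": "..."}))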

Port group level

This is where an optionally configured VLAN is interpreted by the VLAN filter, allowing for VLAN dot1q tags on your port group. The security settings Promiscuous mode, MAC address changes, and Forged transmits are also set at the port group level. The user can also optionally configure traffic shaping: egress only when using a VSS, or bi-directional traffic shaping when using a VDS.

vSwitch (VSS or VDS) level

Incoming packets at the vSwitch level are forwarded to their destination by the forwarding engine, which contains port information paired with MAC address information. Its job is to send the traffic to its proper destination, which can be either a VM residing on the same ESXi host or an external host.

The teaming engine is responsible for balancing network packets over the uplink interfaces. The way it does so depends on the teaming configuration chosen by the user. The traffic shaper module is added to the IOChain if it is enabled at the port group level.

Uplink level

At this level, the traffic sent from the vSwitch to an external host finds its way to the driver module. This is where all the hardware offloading takes place. The supported hardware offloading features depend strongly on the physical NIC in combination with a specific driver module. Typical hardware offloading functions found in NICs are TCP Segmentation Offload (TSO), Large Receive Offload (LRO) and Checksum Offload (CSO). Network overlay protocol offloading, as with VXLAN and Geneve, used in NSX-v and NSX-T respectively, is widely supported on modern NICs.

Next to hardware offloading, buffer mechanisms come into play at the uplink level; when processing a burst of network packets, for example, ring buffers are used. Finally, the bits are transferred via the DMA controller, handled by the CPU and physical NIC, and sent onwards to the Ethernet fabric.

Standard vSwitch

The following diagram puts all components together to form the IO chain for vSphere networking using a standard vSwitch:


vSphere Networking : Bandwidth Reservations

To enforce bandwidth availability, it is possible to reserve a portion of the available uplink bandwidth using Network I/O Control (NIOC). It may be necessary to configure bandwidth reservations to meet business requirements with regard to network resource availability. In the system traffic overview, under the resource allocation option in the Distributed vSwitch settings, you can configure reservations. Reservations are set per system traffic type or per VM.

Depending strongly on your IT architecture, it could make sense to reserve bandwidth for specific business-critical workloads, the vSAN network, or the IP storage network backend. However, be aware that network bandwidth allocated in a reservation cannot be consumed by other network traffic types. Even when a reservation is not used to the fullest, NIOC does not redistribute the capacity to the bandwidth pool that is accessible to other network traffic types or network resource pools.

Since you cannot overcommit bandwidth reservations by default, you should be careful when applying reservations to ensure no bandwidth goes to waste. Thoroughly think through the minimum amount of bandwidth that you are required to guarantee for each network traffic type.

For NIOC to be able to guarantee bandwidth for all system traffic types, you can only reserve up to 75% of the bandwidth relative to the minimum link speed of the uplink interfaces.

Configuring a reservation guarantees network bandwidth for that network traffic type or VM; it is the minimum amount of bandwidth that is accessible. Unlike limits, a network resource can burst beyond the configured value of its bandwidth reservation, as a reservation does not state a maximum consumable amount of bandwidth.

You cannot exceed the value of the maximum reservation allowed. NIOC always keeps 25% of the bandwidth per physical uplink aside to ensure basic ESXi network necessities like Management traffic are served. As seen in the screenshot above, a 10GbE network adapter can only be configured with reservations up to 7.5 Gbit/s.
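
The arithmetic behind that cap is straightforward. The sketch below simply applies the 75% rule to the 10GbE example from the text; the helper function is illustrative, not a vSphere API.

def max_reservable_gbps(min_uplink_speed_gbps, reserved_fraction=0.75):
    """NIOC lets you reserve up to 75% of the minimum uplink link speed;
    the remaining 25% stays available for basic ESXi traffic such as Management."""
    return min_uplink_speed_gbps * reserved_fraction

print(max_reservable_gbps(10))   # 10 GbE uplink -> 7.5 Gbit/s maximum reservation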

Bandwidth Reservation Example



vSphere Networking : Traffic Marking

vSphere network quality control features like Network I/O Control (NIOC) are focused on the virtual networking layer within a VMware virtual data center. But what about the physical network layer, and how can the two cooperate?

In converged infrastructures or enterprise networking environments, Quality of Service (QoS) is commonly configured in the physical network layers. QoS is the ability to provide different priorities to network flows, or to guarantee a certain level of performance to a network flow by using tags. In vSphere 6.7, you have the ability to create flow-based traffic marking policies to mark network flows for QoS.

Quality of Service

vSphere 6.7 supports Class of Service (CoS) and Differentiated Services Code Point (DSCP). Both are QoS mechanisms used to differentiate traffic types to allow for policing network traffic flows.

As related to network technology, CoS is a 3-bit field that is present in an Ethernet frame header when 802.1Q VLAN tagging is present. The field specifies a priority value between 0 and 7, more commonly known as CS0 through CS7, that can be used by quality of service (QoS) disciplines to differentiate and shape/police network traffic. Source: https://en.wikipedia.org/wiki/Class_of_service

One of the main differentiators is that CoS operates at the data link layer in an Ethernet-based network (layer 2), while DSCP operates at the IP network layer (layer 3).

Differentiated services or DiffServ is a computer networking architecture that specifies a simple and scalable mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ uses a 6-bit differentiated services code point (DSCP) in the 8-bit differentiated services field (DS field) in the IP header for packet classification purposes. Source: https://en.wikipedia.org/wiki/Differentiated_services

When a traffic marking policy is configured for CoS or DSCP, its value is advertised towards the physical layer to create an end-to-end QoS path.
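
To see where these markings physically live, the sketch below packs a 3-bit CoS value into the priority bits of the 802.1Q Tag Control Information field, and a 6-bit DSCP value into the upper bits of the IP Differentiated Services field, following the bit layouts described in the quotes above. The helper functions are made up for illustration.

def build_8021q_tci(cos, vlan_id, dei=0):
    """Pack the 3-bit CoS/priority value, the DEI bit, and the 12-bit VLAN ID
    into the 16-bit 802.1Q Tag Control Information field (layer-2 marking)."""
    assert 0 <= cos <= 7 and 0 <= vlan_id <= 4095
    return (cos << 13) | (dei << 12) | vlan_id

def build_ds_field(dscp, ecn=0):
    """Pack the 6-bit DSCP value into the upper bits of the 8-bit
    Differentiated Services field in the IP header (layer-3 marking)."""
    assert 0 <= dscp <= 63
    return (dscp << 2) | ecn

print(hex(build_8021q_tci(cos=5, vlan_id=100)))   # e.g. CoS 5 on VLAN 100
print(build_ds_field(dscp=46))                    # e.g. DSCP 46 (Expedited Forwarding)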

Traffic marking policies are configurable on Distributed port groups or on the DvUplinks. To match certain traffic flows, a traffic qualifier needs to be set. This can be done using very specific traffic flows with specific IP addresses and TCP/UDP ports, or by using a selected traffic type. The qualifier options are extensive.


vSphere Host Resources Deep Dive: Part 3 – VMworld

In my previous post, I mentioned part 3 of the Host Deep Dive session at VMworld 2018. It is part ‘3’ because we ran part 1 and part 2 at VMworld 2016 and 2017. We had the chance to try a new way of discussing host resource settings by creating levels of performance tuning. The feedback we received while test-driving this session at the London and Indianapolis VMUGs was really positive.

As we have always stated, the out-of-the-box experience of VMware vSphere is good enough for 80-90% of common virtual infrastructures. We like to show how you can gradually increase performance and reduce latency with advanced vSphere tuning. That is why we came up with the Pyramid of Performance Optimization. Delivering the content this way allows for a better understanding of when to apply certain optimizations. We will start with the basics and work our way up to settings that squeeze out the maximum performance of vSphere ESXi hosts.

Due to session time constraints, we will focus on compute (CPU and memory) and virtual networking. The following pyramids contain the subjects we will discuss in our session.

Pyramid of Performance Optimization – Compute:


Pyramid of Performance Optimization – Networking:

We will go through all these levels in detail. We very much look forward to VMworld and hope to see you there! Be sure to reserve your seat for this session!

The following VMworld sessions are all based on the Deep Dive series:

  • vSphere Clustering Deep Dive, Part 1: vSphere HA and DRS [VIN1249BU]
  • vSphere Clustering Deep Dive, Part 2: Quality Control with DRS and Network I/O Control [VIN1735BU]
  • vSphere Host Resources Deep Dive: Part 3 [VIN1738BU]
