vSphere 8 was announced at VMware Explore 2022 in San Francisco. Part of this major release is a set of vMotion updates. vMotion is extensively developed to support new workloads and is one of the key enablers for VMware’s multi-cloud approach. Whenever a workload is live-migrated between vSphere and/or VMware Cloud infrastructures, vMotion logic is involved.
vMotion In-App Notifications
With the release of vSphere 7, the vMotion logic was updated to greatly reduce the performance impact on applications during live migrations. Significant work was done to minimize the stun time, also known as the switch-over time, during which the last memory pages and checkpoint information are transferred from source to destination.
However, some workloads would still benefit from knowing that they are being live-migrated. Think of latency-sensitive applications like Telco data-plane apps, high-frequency trading apps, etc. This is where vMotion In-App Notifications come into play.
vMotion Notifications allow for scripted start and end notifications to applications in the guest OS of a virtual machine. This gives an application a way to prepare for potential vMotion impact, for example by quiescing. It also enables log entries, so ops teams can relate a slight drop in application performance to a vMotion operation. Another example is allowing the app to delay the vMotion until it is ready. This new capability is enabled on a per-VM basis, by setting a VM configuration option.
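As an illustration, the per-VM switch is an advanced configuration option. The option name below reflects VMware’s vSphere 8 documentation for this feature; verify it against your build before relying on it:

```
# VM advanced configuration (.vmx) -- enables vMotion In-App
# Notifications for this VM. Option name per vSphere 8 docs;
# verify for your environment.
vmOpNotificationToApp.enabled = "TRUE"
```

The same option can be set from the vSphere Client under the VM’s advanced configuration parameters, without editing the .vmx file directly.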
Check out this resource and the demo below for more detailed information:
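To make the notify/ack handshake concrete, here is a minimal toy sketch of the pattern an application follows: register for notifications, quiesce when the start notification arrives, acknowledge so the migration can proceed, and resume on the end notification. This is NOT the VMware Tools API — real guests interact via VMware Tools — and the `MigrationNotifier` class and its method names are hypothetical, purely illustrative:

```python
# Toy simulation of the notify/ack pattern behind vMotion In-App
# Notifications. All names here are hypothetical illustrations.
import threading

class MigrationNotifier:
    """Simulates the platform side: signals registered apps before
    switch-over and waits (up to a timeout) for acknowledgement."""
    def __init__(self, ack_timeout=5.0):
        self.ack_timeout = ack_timeout
        self._handlers = []

    def register(self, on_start, on_end):
        self._handlers.append((on_start, on_end))

    def migrate(self):
        acked = threading.Event()
        for on_start, _ in self._handlers:
            on_start(acked.set)       # app calls ack() when it is ready
        acked.wait(self.ack_timeout)  # delay switch-over until ack or timeout
        # ... switch-over would happen here ...
        for _, on_end in self._handlers:
            on_end()

events = []

def on_start(ack):
    events.append("quiesce")  # e.g. pause work, flush buffers
    ack()                     # tell the platform we are ready

def on_end():
    events.append("resume")   # migration finished, resume work

notifier = MigrationNotifier()
notifier.register(on_start, on_end)
notifier.migrate()
print(events)
```

Note the timeout on the acknowledgement: the application can delay the switch-over, but not indefinitely, which matches the behavior described above.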
vMotion Unified Data Transport
vMotion Unified Data Transport (UDT) solves a specific problem for powered-off virtual machine migrations. Migrating virtual machine storage for a powered-on VM (Storage vMotion) is typically a lot faster than a cold storage migration. This is because the cold migration scenario uses the Network File Copy (NFC) protocol, whereas powered-on storage migrations use the vMotion logic.
NFC is single-threaded and runs as a user-level process. The vMotion logic, however, is highly optimized, multithreaded, and runs at the kernel level. Both are disk transfers, but the performance differs a lot. UDT solves this problem for powered-off virtual machines: it keeps using the NFC control channel, but offloads the data transfer to the vMotion process.
Check out this resource and the demo below for more detailed information:
We have 3 VMkernel NICs:
mgmt
vMotion
vSAN
Which NIC is best to enable Provisioning on in order to use UDT? I saw one site say vMotion, and another used the mgmt VMkernel. We don’t have the luxury of a spare NIC to use!