VXLAN and Multiple Receive Threads explained

The (Dynamic) NetQueue feature in ESXi, which is enabled by default if the physical NIC (pNIC) supports it, allows incoming network packets to be distributed over multiple queues. Each queue gets its own ESXi thread for packet processing, and each of those threads can consume up to one CPU core.

However, (Dynamic) NetQueue and VXLAN are not the best of friends when it comes to distributing network I/O over multiple queues. That is because of the way Virtual Tunnel End Points (VTEP) are set up. Within a VMware NSX implementation, each ESXi host in the cluster contains at least one VTEP, depending on the NIC load balancing mode chosen. The VTEP is the component that provides the encapsulation and decapsulation for the VXLAN packets. That means all VXLAN network traffic, from a VM perspective, traverses the local VTEP and the receiving VTEP on the other ESXi host.

Therein lies the problem when it comes to NetQueue and its ability to distribute network I/O streams over multiple queues: a VTEP always has the same MAC address, and the VTEP network has a fixed VLAN tag. MAC address and VLAN tag are the filters most commonly supported by pNICs with VMDq and NetQueue enabled. That seriously restricts the ability to use multiple queues and can therefore restrict the network performance of your VXLAN networks. VMware NSX now supports multiple VTEPs per ESXi host. This helps slightly, because the additional VTEPs bring additional MAC addresses, giving NetQueue more combinations to filter on. Still, it is far from perfect when it comes to the desired parallelism of network I/O handling over multiple queues and CPU cores. To overcome that challenge, some pNICs support distributing traffic over queues by filtering on the inner (encapsulated) MAC addresses. RSS can do that for you.
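Whether your pNIC can filter beyond MAC/VLAN, and whether RSS can be enabled, depends entirely on the driver. As a rough sketch (the module name ixgbe and the RSS parameter string are examples, not universal; check your own driver and its documentation first), you could inspect this from the ESXi shell:

```shell
# List the queue filter classes each pNIC driver supports.
# If you only see MAC/VLAN classes, you are in the situation described above.
esxcli network nic queue filterclass list

# Inspect the driver module parameters for an RSS knob.
# The module name "ixgbe" is an example; find yours via "esxcli network nic list".
esxcli system module parameters list -m ixgbe

# Hypothetical example: enable RSS on an ixgbe-based NIC.
# The exact parameter name and value format are driver- and version-specific!
esxcli system module parameters set -m ixgbe -p "RSS=4"

# A host reboot (or driver reload) is required before the setting takes effect.
```

These commands only run on an ESXi host, and the RSS parameter shown is illustrative; treat it as a starting point for checking your own NIC's capabilities, not as a recipe.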


Using CPU limits in vCloud for NFV

Looking at the VMware vCloud for NFV proposition, you will notice that vCloud Director (vCD) is one of the options for the Virtualized Infrastructure Manager (VIM) layer based on the ETSI framework.

VMware vCloud NFV supports two integrated virtualized infrastructure managers (VIMs): native VMware vCloud Director or VMware Integrated OpenStack, a full OpenStack implementation that is completely tested and integrated. Both VIMs support templated service descriptions as well as multi-tenancy and robust networking, enabling the automation of on-boarding VNFs with the acceleration of configuring and allocating compute, storage and networking resources.

As mentioned, vCD is used for multi-tenancy and provides a management layer for the tenants to spin up new workloads within their own set of resources. Now, how compute resources are provided to the workloads is very interesting for some telco workloads. vCD provides three different ways to deliver compute resources to the tenant vApps:

  • Allocation Pool
    A percentage of the resources you allocate are committed to the Organization virtual DataCenter (OvDC). You can specify the percentage, which allows you to overcommit resources.
  • Pay-As-You-Go
    Allocated resources are only committed when users create vApps in the OvDC. You can specify the maximum amount of CPU and memory resources to commit to the OvDC.
  • Reservation Pool
    All of the resources you allocate are committed to the OvDC.

More information on allocation models in vCD can be found in Duncan’s blog post; it’s one of his older posts but still accurate. The Pay-As-You-Go allocation model seems to be a popular choice because it enforces the entitlement to specific resources per Virtual Machine (VM). It does so by setting a reservation and a limit on each VM in the vApp / OvDC, using a configurable vCPU speed. That means a VM can only consume CPU cycles as configured in the OvDC. See the following example to get a better feeling for what is configurable within a Pay-As-You-Go OvDC.
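To make the arithmetic concrete, here is a minimal sketch of how a Pay-As-You-Go OvDC translates its settings into a per-VM limit and reservation. The numbers are hypothetical examples, not vCD defaults:

```shell
# Hypothetical Pay-As-You-Go OvDC settings (example values):
vcpu_speed_mhz=1000     # configured vCPU speed
cpu_guarantee_pct=20    # "CPU resources guaranteed" percentage
vcpus=4                 # number of vCPUs of the deployed VM

# vCD places a limit and a reservation on each VM in the OvDC:
cpu_limit_mhz=$((vcpus * vcpu_speed_mhz))
cpu_reservation_mhz=$((cpu_limit_mhz * cpu_guarantee_pct / 100))

echo "CPU limit:       ${cpu_limit_mhz} MHz"
echo "CPU reservation: ${cpu_reservation_mhz} MHz"
```

In this sketch, a 4-vCPU VM can never consume more than 4000 MHz, regardless of how much CPU time its network processing demands on top of the guest workload.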

 

Now, combine the fact that limits and reservations are placed on the VM with the fact that network I/O, and the CPU time required to process that network I/O, is accounted to the VM. Check the Virtual Machine Tx threads explained post to get a better understanding of how the VMkernel behaves when VMs transmit network packets. Because the CPU cycles used by the Tx threads are accounted to the VM, you should be very careful with applying CPU limits to the VM. CPU limits can seriously impact the network I/O performance and packet rate capability!

A severe performance impact can be expected when you apply CPU limits to NFV telco workloads that are known for high network utilization. A clear example is a virtual Evolved Packet Core (vEPC) node, for instance the Packet Gateway (PGW). VMs used in these nodes are known to have a large appetite for network I/O and may be using Intel DPDK to drive high network packet rates.

Several vCPUs in these VMs will be configured in DPDK to poll a queue. Those vCPUs will claim all the cycles they can get their hands on! Combine that with the additional CPU time required to process the transmitted network I/O to fully understand the behaviour of such a VM and its need for CPU time. Only then can you make the correct decisions about allocation models.

So be aware of the possible implications that CPU limits may introduce on network performance. Maybe it is better for certain telco NFV workloads to opt for another allocation model.

More information on how the VMware vCloud for NFV proposition is helping the telco industry and the LTE/4G and 5G innovations can be found here: https://vmware.regalixdigital.com/nfv-ebook/automation.html. Be sure to check it out!


VMworld 2017 session picks

VMworld is upon us. The schedule builder went live and boy am I excited about VMworld 2017!

SER1872BU

This year Frank and I are presenting the successor to last year’s session at both VMworlds. We are listed in the schedule builder as SER1872BU – vSphere 6.5 Host Resources Deep Dive: Part 2. We are planning to bring even more ESXi epicness with a slight touch of vSAN and networking information that allows you to prep ESXi to run NFV workloads that drive IoT innovations. Last year we were lucky to have packed rooms in Vegas and Barcelona.

The enthusiasm about our book witnessed so far shows us there is still a lot of love out there for the ESXi hypervisor and ‘under-the-hood’ tech! We are working hard on having an awesome session ready for you!

VMworld Session Picks

This year I want to learn more about NFV, IoT and Edge as I find innovation in these areas intriguing. I found some sessions that look to be very interesting. I supplemented these with talks held by industry titans about various topics. If my schedule lets me, I want to see the following sessions:

  • Leading the 5G and IoT Revolution Through NFV [FUT3215BU] by Constantine Polychronopoulos
  • vSAN at the Edge: HCI for Distributed Applications Spanning Retail to IoT [STO2839GU] by Kristopher Groh
  • VMware Cloud Foundation Futures [PBO2797BU] by Raj Yavatkar
  • Machine Learning and Deep Learning on VMware vSphere: GPUs Are Invading the Software-Defined Data Center [VIRT1997BU] by Uday Kurkure and Ziv Kalmanovich
  • Managing Your Hybrid Cloud with VMware Cloud on AWS [LHC2971BU] by Frank Denneman and Emad Younis
  • The Top 10 Things to Know About vSAN [STO1264BU] by Duncan Epping and Cormac Hogan

There are way more interesting sessions going on! Be sure to find and schedule your favorite ones as rooms tend to fill up quickly!

See you in Vegas and Barcelona!!

 


Virtual Machine Tx threads explained

Looking at the ESXi VMkernel network path you will notice it consists of Netpoll threads and Tx threads. Netpoll threads receive traffic from an ESXi host perspective, whereas Tx threads transmit data from a VM to another VM or physical component.

By default, each VM is armed with only one Tx thread. As network packets are transmitted from the VM towards the pNIC layer via the VMkernel, ESXi consumes CPU cycles. These cycles, or CPU time, are accounted to the VM itself. Tx threads are identified in esxtop in the CPU view as NetWorld-VM-XXX. This ensures that you have a clear picture of the cost of transmitting large numbers of network packets from that specific VM. It gives you a better understanding of whether a VM is constrained by the amount of CPU time spent on the transmission of data.

Again, only one Tx thread is spun up by default, which correlates with one CPU core. This is why the NetWorld will not exceed roughly 100% %USED.

In the screenshot above, the VM in question was running the transmit side of the packet-generator test. The NetWorld-VM-69999 world was constantly running up to 100%. This is a clear example of a VM being constrained by only one Tx thread. A relatively quick solution is to add an additional Tx thread. You can add more as needs require. Looking at the network view in esxtop, you will be able to see what vNIC is processing the largest amount of network I/O. In this specific case, we knew exactly what vNIC was in extra need of network processing power.

Additional Tx threads

You can add an additional Tx thread per vNIC. This is configured as an advanced parameter in the VM configuration. The ethernetX.ctxPerDev = 1 advanced setting is used for this. The ‘X’ stands for the vNIC for which the parameter is set. You can configure each vNIC with a separate Tx thread. However, that will create unnecessary Tx threads in your VM and potentially consume CPU time in an inefficient way, because not every vNIC is likely to require its own Tx thread. It really is a setting that is driven by demand. If your workload running in the VMs has a large appetite for network I/O, take a closer look at what vNIC could benefit from additional Tx threads.
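As a sketch, a VM with three vNICs where ethernet0 and ethernet2 carry the heavy traffic could be configured like this (the vNIC numbers are hypothetical; set the parameters via the VM’s advanced configuration options, or edit the .vmx while the VM is powered off):

```
ethernet0.ctxPerDev = "1"    # dedicated Tx thread for vNIC ethernet0
ethernet2.ctxPerDev = "1"    # dedicated Tx thread for vNIC ethernet2
                             # ethernet1 keeps the default shared Tx thread
```

Only the vNICs that actually process large amounts of network I/O get their own Tx thread; the rest stay on the default, keeping CPU overhead down.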

Once the additional Tx thread(s) are configured, you want to verify that they are active. Additional Tx threads will appear in esxtop in the CPU view as NetWorld-Dev-<id>-Tx. By being added as a separate world, a clear overview can be generated of which NetWorld is processing the majority of network I/O, based on the CPU usage associated with that thread.

In this screenshot, you will notice that the additional Tx thread is active and processing network I/O. This is one way to determine if your advanced setting is working correctly. You can also use a net-stats command to do so.
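A sketch of such a net-stats check from the ESXi shell (run on the host while the VM is transmitting; the flag combination shown here is the one I am assuming lists the world associations):

```shell
# List all ports with their associated worlds, including any
# NetWorld-Dev-<id>-Tx worlds spawned by the ctxPerDev setting:
net-stats -A -t vW
```

If the extra Tx world shows up against the expected vNIC port, the advanced setting is in effect.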

More information…

…can be found in the vSphere 6.5 Host Resources Deep Dive book that is available on Amazon!


Host Resources Deep Dive Released!

The time has finally come: we have published our (e)book. It is available via Amazon using the following links:

  • Paperback: Amazon US, Amazon DE, Amazon Mexico, Amazon UK, Amazon Japan, Amazon India
  • Kindle (ebook): Amazon US, Amazon DE, Amazon NL, Amazon UK, Amazon Japan, Amazon India

 

Countless hours, weeks and months have gone into this project. It all began in early 2016. Frank and I had a lot of discussions about consistent performance and how to optimize for it while keeping consolidation ratios in mind. At the time, I was working on an NFV platform built on a virtual datacenter, and Frank was working on his epic NUMA content. It all led to the idea of writing down our findings in a book.

It is almost like going against the current; we were looking into ESXi host behaviour while the world is advancing to higher-level services running on top of vSphere. Why even bother with the hypervisor? It is all commodity after all, right? Well… not necessarily. It is all about understanding what you are doing at each level, not just throwing hardware at a performance/capacity requirement. We began to talk about our ideas with our peers to see if they were interesting and feasible to include in a book.

We quickly came to the conclusion that there are a lot of unknowns when it comes to tuning your virtual environment to accommodate distributed services like vSAN and NSX, latency-sensitive workloads and right-sizing your VMs in general. We hope our book will help vSphere administrators, architects, consultants, aspiring VCDX-es and people eager to learn more about the elements that control the behavior of CPU, memory, storage and network resources.

I am extremely grateful that I was part of realising this book. It is really inspiring to work closely with Frank. He has a tremendous way of expressing his train of thought and has a lot of experience in creating tech books. Our discussions about tech or general topics are always a blast.

All our effort led to our (and my first) publishing which contains:

  • 122,543 words
  • 5,217 paragraphs
  • 23 chapters
  • 569 pages
  • 311 screenshots and diagrams

The main challenge was to dig deep into the world of host resources while working a very busy day job at my beloved customers. The effort required was immense. Think a VCDX path times ten. At least, that is how it felt to me. But it was so much fun. I remember, a few weeks ago, we were working on the last storage content until deep into the night, like we did almost every day for the last few months, and we were still so psyched at 3.30 AM. So even after a very intense period, we still got a kick out of talking about the content and the book in the middle of the night!

We tried to keep the price as low as possible so that everybody interested is able to buy it. We feel we managed to do that, even though the book contains 569 pages. The digital book will follow after some PTO and both VMworlds.

As discussed in a previous post, we attended several VMUGs to talk about content included in the book. We are now also confirmed to have a ‘part deux’ of last year’s VMworld top 10 session this year in Las Vegas and in Barcelona (session 1872). We really look forward to seeing you there! Please let us know your thoughts on our book, and write a review on Amazon if you will. It is very rewarding to hear it helped you in some way. Thanks!


Datanauts: Diving Deep Into vSphere Host Resources

Last week Frank and I had the pleasure and honor to join Chris Wahl and Ethan Banks in their awesome Datanauts podcast!

We discussed our upcoming book and the rationale behind it. After that we thoroughly discussed some topics on CPU architecture and Non-Uniform Memory Access (NUMA).

Don’t miss out and listen in on this and other amazing Datanauts podcasts.

Be sure to follow these accounts to get the latest updates about the book.

Twitter: @HostDeepDive
Facebook: HostDeepDive

 


VMUGs and VMworlds

While extremely busy completing our upcoming book, Frank and I are planning to deliver our vSphere 6.5: Host Resources Deep Dive sessions at several VMUGs. Just last week we had a blast presenting some Compute and Networking content at one of the largest UserCon VMUGs in the world, the NLVMUG. It was one of the best VMUG UserCons I have ever visited. I love these days, as you really get to connect with your peers and share knowledge and experiences! You can’t afford to miss your local VMUG.

It was very good to see that our room was packed! We got loads of positive feedback which is always nice. It is pretty rewarding to see that the VMware community is taking great interest in our content which is basically about the VMware vSphere layer and how host resources are consumed in ESXi 6.5. To give you some impressions of our session last week, check the pictures below:



After thoroughly enjoying the NLVMUG day, we were on the lookout to attend more VMUGs. So, after having contact with several VMUG leaders, it looks like we are presenting at the following VMUGs:

  • Belgium VMUG in Mechelen, 12th of May  (Didn’t work out planning-wise)
  • German VMUG in Frankfurt, 14th of June
  • UK VMUG in London, 22nd of June

If everything follows through, it will be a very good way to meet up with even more people. Be sure to mark these dates in your agendas; the listed VMUGs are going to be epic!

VMworld 2017

In other news, we also submitted a session for both VMworld US and EMEA 2017. It will be a sequel to our VMworld 2016 top 10 session. Keep an eye out for session ID 1872 – vSphere 6.5 Host Resources Deep Dive: Part 2.

We will get the book done long before VMworld, so we are really excited to see the reactions to it. We are working tirelessly, as are our reviewers, to create a book that can help everybody working with virtual datacenters in their daily job!

Hope to see you at a VMUG or at a VMworld conference!
