Looking at the VMware vCloud for NFV proposition, you will notice that vCloud Director (vCD) is one of the options for the Virtualized Infrastructure Manager (VIM) layer in the ETSI NFV framework.
VMware vCloud NFV supports two integrated Virtualized Infrastructure Managers (VIMs): the native VMware vCloud Director or VMware Integrated OpenStack, a fully tested and integrated OpenStack implementation. Both VIMs support templated service descriptions, multi-tenancy and robust networking, enabling automated VNF on-boarding and faster configuration and allocation of compute, storage and networking resources.
As mentioned, vCD provides multi-tenancy and a management layer that lets tenants spin up new workloads within their own set of resources. How compute resources are delivered to those workloads is particularly interesting for some telco use cases. vCD provides three different ways to deliver compute resources to tenant vApps:
- Allocation Pool
A percentage of the resources you allocate is committed to the Organization virtual DataCenter (OvDC). You can specify the percentage, which allows you to overcommit resources.
Allocated resources are only committed when users create vApps in the OvDC. You can specify the maximum amount of CPU and memory resources to commit to the OvDC.
- Reservation Pool
All of the resources you allocate are committed to the OvDC.
- Pay-As-You-Go
Resources are committed only when vApps are created in the OvDC. The entitlement per VM is enforced by setting a reservation and a limit on each VM, based on a configurable vCPU speed.
More information on allocation models in vCD can be found in Duncan’s blog post; it’s one of his older posts but still accurate. The Pay-As-You-Go allocation model seems to be a popular choice because it enforces the entitlement to specific resources per Virtual Machine (VM). It does so by setting a reservation and a limit on each VM in the vApp / OvDC, using a configurable vCPU speed. That means a VM can only consume CPU cycles as configured in the OvDC. See the following example to get a better feel for what is configurable within a Pay-As-You-Go OvDC.
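To make the arithmetic concrete, the per-VM reservation and limit under Pay-As-You-Go can be sketched as follows. This is a hypothetical helper, not a vCloud Director API call; the vCPU speed and the CPU resource guarantee percentage stand in for the values you would configure on the OvDC.

```python
def pay_as_you_go_cpu(num_vcpus, vcpu_speed_mhz, cpu_guarantee_pct):
    """Approximate the per-VM CPU reservation and limit that a
    Pay-As-You-Go OvDC applies (illustrative sketch, not the vCD API).

    The limit is the hard cap: vCPUs times the configured vCPU speed.
    The reservation is the guaranteed share of that cap.
    """
    limit_mhz = num_vcpus * vcpu_speed_mhz            # hard per-VM cap
    reservation_mhz = limit_mhz * cpu_guarantee_pct   # guaranteed share
    return reservation_mhz, limit_mhz

# A 4-vCPU VM with a 1000 MHz vCPU speed and a 20% CPU guarantee:
print(pay_as_you_go_cpu(4, 1000, 0.20))  # -> (800.0, 4000)
```

The takeaway: every VM in the OvDC gets a limit derived from the configured vCPU speed, regardless of how busy the host actually is.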
Now combine the fact that limits and reservations are placed on each VM with the fact that network I/O, and the CPU time required to process that network I/O, is accounted to the VM. Check the Virtual Machine Tx threads explained post to get a better understanding of how the VMkernel behaves when VMs transmit network packets. Because the CPU cycles used by the Tx threads are charged to the VM, you should be very careful when applying CPU limits to the VM. CPU limits can seriously impact network I/O performance and packet rate capability!
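A back-of-envelope calculation shows why: if the Tx-thread cycles are charged against the VM's CPU limit, the limit also caps the achievable packet rate. The cycles-per-packet figure below is an assumed, workload-specific number, not a measured VMkernel value.

```python
def max_packet_rate(cpu_limit_mhz, cycles_per_packet):
    """Rough ceiling on packet rate when all packet-processing cycles
    count against the VM's CPU limit (assumed cost model, for
    illustration only)."""
    cycles_per_second = cpu_limit_mhz * 1e6
    return cycles_per_second / cycles_per_packet

# Assuming ~2000 CPU cycles to process one packet, a 2000 MHz CPU
# limit caps the VM at roughly 1 Mpps:
print(int(max_packet_rate(2000, 2000)))  # -> 1000000
```

Even if the guest application itself is idle enough to stay under the limit, the packet-processing overhead accounted to the VM eats into the same budget.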
A severe performance impact can be expected when you apply CPU limits to NFV telco workloads that are known for high network utilization. A clear example is a virtual Evolved Packet Core (vEPC) node, for instance the Packet Gateway (PGW). The VMs in these nodes are known to have a large appetite for network I/O and may use Intel DPDK to drive high network packet rates.
Several vCPUs in such VMs will be configured by DPDK to poll a queue, and those vCPUs will claim all the cycles they can get their hands on! Combine that with the additional CPU time required to process the transmitted network I/O to fully understand the behaviour of such a VM and its need for CPU time. Only then can you make the correct decisions about allocation models.
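The busy-poll behaviour can be sketched as follows. Real DPDK poll-mode drivers are written in C; this simplified Python illustration only shows the control flow: the loop body runs every iteration whether or not packets arrived, which is why a vCPU dedicated to polling sits near 100% utilization even at low packet rates.

```python
import collections

def poll_mode_rx(rx_queue, iterations):
    """Simplified poll-mode receive loop (illustration, not DPDK).
    The vCPU polls on every iteration, empty queue or not, so it
    burns cycles continuously regardless of traffic."""
    polls = received = 0
    for _ in range(iterations):
        polls += 1                # one poll per iteration, always
        while rx_queue:           # drain whatever arrived
            rx_queue.popleft()
            received += 1
    return polls, received

# Only ten packets are pending, yet the vCPU still runs all
# 1000 poll iterations:
q = collections.deque(range(10))
print(poll_mode_rx(q, 1000))  # -> (1000, 10)
```

Under a CPU limit, this polling competes with the packet-processing work for the same capped cycle budget, which is exactly the conflict described above.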
So be aware of the implications that CPU limits may have on network performance. For certain telco NFV workloads, it may be better to opt for another allocation model.
More information on how the VMware vCloud for NFV proposition helps the telco industry with LTE/4G and 5G innovations can be found here: https://vmware.regalixdigital.com/nfv-ebook/automation.html. Be sure to check it out!