Top vBlog & VMworld session voting

It is that time of the year again! The annual Top vBlog 2016 voting is open, so you can show your appreciation for all the virtualization bloggers out there. I hope we at Cloudfix have created enough useful and diverse content to earn your vote. We sure love what we do, and we will strive to make this another successful year of interesting blog posts.

So click here to start the voting survey, which only takes a few moments. A big shout-out goes to vSphere-land.com / Eric Siebert and sponsor VMTurbo for making it all happen!

If you are willing to vote for us: we are listed in the main voting on the left side (blogs are listed in alphabetical order) and in the independent blogger category.

[Image: Top vBlog 2016 voting]


In other news, the content catalog for the upcoming VMworld 2016 in Las Vegas is live! Make sure to check it out here. While you are at it, you can also cast your vote for your favorite sessions.

Our session (I will be speaking alongside Frank Denneman) is included in the catalog:

[Image: VMworld 2016 session voting]

Thank you in advance for considering us!

Read More

Jumbo frames and the risks involved

Even though the discussion about jumbo frames and their possible gain/risk trade-offs is not new, we found ourselves having it yet again. Because we had differing opinions, it seemed like a good idea to elaborate on the topic.

Let’s have a quick recap of what jumbo frames actually are. The default MTU (Maximum Transmission Unit) for an Ethernet frame is 1500 bytes. An MTU of 9000 bytes is referred to as a jumbo frame.

Jumbo frames, i.e. frames with a 9000-byte payload, have the potential to reduce overhead and save CPU cycles.

Typically, jumbo frames are considered for IP storage networks or vMotion networks. A lot of performance benchmarking is already described on the web, and it is funny to see the variety of opinions on whether or not to adopt jumbo frames. Check this blogpost and this blogpost on jumbo frame performance compared to a standard MTU size. Whether jumbo frames provide a significant performance advantage is still up in the air.

Besides jumbo frames, there are other techniques to improve network throughput and lower CPU utilization. A modern NIC will support the Large Segment Offload (LSO) and Large Receive Offload (LRO) offloading mechanisms. Note: LSO is also referred to as TSO (TCP Segmentation Offload). Both are configurable. LSO/TSO is enabled by default if the NIC hardware supports it; LRO is enabled by default when using VMXNET virtual machine adapters.
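If you want to check where a host stands, these toggles are exposed as advanced host settings. A quick PowerCLI sketch (the host name is an example, and the setting names below are the commonly used ESXi ones; verify them against your ESXi version):

    # Inspect the TSO/LRO related advanced settings on a host.
    # Host name and setting list are examples, not an exhaustive set.
    $esx = Get-VMHost -Name "esx01.lab.local"
    Get-AdvancedSetting -Entity $esx -Name "Net.UseHwTSO","Net.TcpipDefLROEnabled","Net.Vmxnet3HwLRO" |
        Select-Object Name, Value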

Risks?

Let’s put the performance aspects aside and look into the possible risks involved when implementing jumbo frames. The thing is, to be effective, jumbo frames must be enabled end to end in the network path. The main risk when adopting jumbo frames is that if one component in the network path is not properly configured for jumbo frames, an MTU mismatch occurs.
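To reduce that risk, configure the MTU at every hop and verify it end to end afterwards. A hedged PowerCLI sketch for the vSphere side (vSwitch, VMkernel and host names are examples):

    # Raise the MTU to 9000 on a standard vSwitch and a VMkernel port.
    $esx = Get-VMHost -Name "esx01.lab.local"
    Get-VirtualSwitch -VMHost $esx -Name "vSwitch1" |
        Set-VirtualSwitch -Mtu 9000 -Confirm:$false
    Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name "vmk1" |
        Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false

    # From the ESXi shell, prove the whole path with an unfragmentable
    # 8972-byte payload (9000 minus 28 bytes of IP/ICMP headers):
    #   vmkping -d -s 8972 <peer-vmkernel-ip>

If the vmkping fails while a normal ping succeeds, you have most likely found the component with the MTU mismatch.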
(more…)

Read More

Stretched cluster with NSX

At the last NLVMUG I talked about stretched clusters. My presentation elaborated on how VMware NSX can help you deal with the challenges that arise when deploying a stretched cluster solution. In this blogpost I want to take a closer look at this specific topic.

First, a quick recap of what a stretched cluster solution actually is: a vSphere cluster, configured in one vCenter instance, containing an equal number of hosts from both sites. This allows for disaster avoidance (vMotion) and disaster recovery (vSphere HA) between two geographically separated sites. From the backend infrastructure perspective, your (synchronously replicated) storage and network solutions must span both sites.

Looking into network designs used for stretched clusters, you will typically face challenges like:

  • How do you design for VM mobility across two sites, which requires Layer-2 networks to span both sites?
  • Stretched Layer-2 networks (VLANs) introduce a higher risk of failure (think Layer-2 loops).
  • How do you properly segment applications and/or tenants (customers/business units)?
  • Network flows: what about your egress and ingress connections?

Let’s begin with what a VMware NSX deployment could look like within a stretched cluster infrastructure.

Stretched cluster with NSX architecture

A stretched cluster with VMware NSX could look like the following logical overview.
(more…)

Read More

Synology DSM6.0 VLAN support

I’ve noticed some distress on the web because, with the release of Synology DSM version 6.0, it is no longer possible to use the vconfig command. This command was used to configure VLAN tagging on your interfaces.

It is, however, still perfectly possible to create multiple sub-interfaces on a physical interface or bond without the vconfig command. All you need to do is create an additional config-file for each of your sub-interfaces; each sub-interface represents a VLAN ID. The config-files are found in /etc/sysconfig/network-scripts/.

Note: shell access to your Synology is required, so enable SSH, for instance.

In the example below, you can see my Synology has a bond using eth0 and eth1. My setup required some additional VLAN-tagged sub-interfaces on top of the physical bond interface.

[Image: Synology interface listing with VLAN sub-interfaces]
As you can see, I have sub-interfaces for VLANs 100, 120, 130 and 20. You only need to copy a config-file using the naming format ifcfg-<phy int>.<vlan-id> and adjust it to your needs. A (copied) config-file looks like this:
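(The following is a hypothetical sketch with example values, not my actual file; DSM uses Red Hat-style ifcfg syntax, but the device name, VLAN ID, addressing and exact fields will differ per setup and DSM version, so copy an existing file rather than typing this verbatim.)

    # /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- hypothetical example
    DEVICE=bond0.100
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.100.10
    NETMASK=255.255.255.0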

(more…)

Read More

Stretched cluster VM & datastore affinity

When using a vSphere stretched cluster solution, it is important to keep a VM and its VMDK(s) affined to the same datacenter. So if the storage controller in datacenter 1 serves the read/write copy of the datastore, you want the VM to run on a vSphere host in the same datacenter. This avoids storage read I/Os traversing the inter-datacenter connections, which would have an obvious impact on performance. With the VM-to-datastore affinity in place, you also mitigate the risk of a potential VM outage if a datacenter partition (a.k.a. a split-brain scenario) occurs.

Let me show you what I mean using a simple logical overview of a stretched cluster infrastructure. The following example is based on a uniform storage backend. More information on uniform and non-uniform metro storage solutions can be found here.

What you don’t want:

[Image: VM and datastore affinity misaligned]

What you do want:

[Image: VM and datastore affinity aligned]

It is perfectly possible to automate the alignment upon VM creation, for example. Needless to say, this requires DRS to run, preferably in fully automated mode.
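As a hedged sketch of how such automation could look using the PowerCLI DRS group cmdlets (all names below are hypothetical; adjust to your environment):

    # Group the datacenter-1 hosts and the VMs that live on a
    # datacenter-1 datastore, then bind them with a should-run rule.
    $cluster  = Get-Cluster -Name "StretchedCluster01"
    $dc1Hosts = Get-VMHost -Name "esx-dc1-*"
    $dc1VMs   = Get-Datastore -Name "DS-DC1-01" | Get-VM

    New-DrsClusterGroup -Name "DC1-Hosts" -Cluster $cluster -VMHost $dc1Hosts
    New-DrsClusterGroup -Name "DC1-VMs" -Cluster $cluster -VM $dc1VMs
    New-DrsVMHostRule -Name "DC1-VMs-to-DC1-Hosts" -Cluster $cluster `
        -VMGroup "DC1-VMs" -VMHostGroup "DC1-Hosts" -Type "ShouldRunOn"

A ‘should run’ rule keeps DRS honoring the affinity during normal operations, while still allowing vSphere HA to restart the VMs in the other datacenter when a site actually fails.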

(more…)

Read More

vSphere and NFV tuning considerations

In my current role, I am involved in a lot of discussions around network functions virtualization, a.k.a. NFV. When talking about NFV in this post, I mean telco applications: applications specifically designed for and used by Communications Service Providers (CSPs) as core applications that, for instance, enable your (mobile) phone to actually call another phone. 🙂

NFV for telco applications is not that mainstream yet, so it seems. The old-school native way, with telco-specific hardware running line cards, payload servers, etc., is obviously not sustainable given the current way we like to do ICT. On the other hand, telco application vendors still seem to be finding their way in properly adopting virtualization as a technology. As a result, the level of virtualization adoption for network functions appears to be a few years behind IT application server virtualization.

But development is rapid, and that goes for NFV too. There already is an NFV Architecture Framework created by ETSI, which was selected in November 2012 to be the home of the Industry Specification Group for NFV. The framework is a high-level functional architecture and design philosophy for virtualized network functions and the underlying virtualization infrastructure, as shown in the following diagram:

[Image: ETSI NFV Architecture Framework]

Although word has it that NFV is mostly deployed using a KVM hypervisor working closely with OpenStack as the API framework, VMware is looking to hook into the needs of communications service providers to properly ‘do’ NFV using VMware solutions. Hence the vCloud for NFV suite.

VMware vCloud NFV is a Network Functions Virtualization (NFV) services delivery, operations and management platform, developed for Communications Service Providers (CSPs) who want to reduce infrastructure CapEx and OpEx costs, improve operational agility and monetize new services with rapid time to market.

[Image: VMware vCloud NFV suite]

Let’s have a closer look at tuning considerations for vSphere to properly run NFV workloads!
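To give a flavor of what is involved: one knob that always comes up for NFV-style workloads is the per-VM latency sensitivity setting, which can be flipped through the vSphere API. A sketch only (the VM name is hypothetical, and the ‘high’ level expects full CPU and memory reservations on the VM):

    # Set latency sensitivity to 'high' on a VM via the vSphere API.
    $vm   = Get-VM -Name "vnf-payload-01"
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.LatencySensitivity = New-Object VMware.Vim.LatencySensitivity
    $spec.LatencySensitivity.Level = "high"
    $vm.ExtensionData.ReconfigVM_Task($spec)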
(more…)

Read More

Containers, VMs and unikernels

Last week I had an interesting discussion with a colleague about containers (Docker mostly), VMs, and a more recent development in this space called unikernels. Regular geek speak. I’ve mashed up the most interesting parts of the discussion, together with some background information.

Containerization

Containerization is lightweight OS virtualization that groups and isolates certain processes and resources from the host operating system and other containers. Containers share the operating system kernel and may share binaries and libraries.
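That shared kernel is easy to demonstrate: a container reports the host’s kernel release, because it does not bring a kernel of its own (assuming a Linux host with Docker installed):

    # Prints the *host* kernel release from inside an Alpine container.
    docker run --rm alpine uname -r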

The following image depicts the difference between VMs and containers.
[Image: VMs versus containers]

(more…)

Read More

NLVMUG 2016 sessions

On March 17th, the annual NLVMUG UserCon will be held in ’s-Hertogenbosch. Last year an amazing turnout of 900+ attendees was reached, making it one of the largest VMUGs worldwide. Let’s top that this year!

Make sure you don’t miss out and register here!

[Image: NLVMUG]

It’s gonna be a full agenda, with around 23 sessions divided between VMware, sponsor and community speakers. Next to that, there are also the boot-camps and two keynotes. Speaking at keynote 1 will be VMware’s own Kit Colbert! You can use the breaks to have a look at all the stands. Be sure to talk to as many interesting vendors as possible and hear them out on what they could bring to your IT challenges.

It will be an action-packed day. One could say the NLVMUG is starting to look like a sort of mini VMworld.

Rutger and I will be there, presenting our own VMUG community sessions. Check out our time slots:
(more…)

Read More

Host disconnect after ESXi 5.5 U3b (SSLv3 POODLE)

Today I was preparing a new blade chassis in an existing vCenter environment. After applying the predefined Critical Host Patches baseline (a default task for new hosts), the hosts would not reconnect to vCenter.

It turns out VMware decided to disable SSLv3 in ESXi 5.5 Update 3b and later because of the POODLE vulnerability. The dependency is clearly stated in the release notes and in the VMware Product Interoperability Matrix below. (more…)
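If you want to check for yourself whether an endpoint still accepts SSLv3, a quick probe works (the host name is an example, and this assumes your openssl build still supports the -ssl3 option):

    # Handshake fails against ESXi 5.5 U3b and later; succeeds on older builds.
    openssl s_client -connect esx01.lab.local:443 -ssl3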

Read More

vROps: Beware of the filters

The other day we were messing around with VMware vRealize Operations Manager, a.k.a. vROps. My customer wanted a clear overview of over- and undersized virtual machines.

I like to use the default views within vROps and adjust them to my needs. The same goes for this example, using the predefined Virtual Machine Rightsizing CPU, Memory, and Disk Space view. As I was tuning this view to our liking, it only showed the virtual machines that were oversized.

I am just as curious about undersized virtual machines, but those were missing. I was expecting to see 4 virtual machines instead of 2…

[Image: vROps rightsizing view showing only oversized VMs]

Maybe it’s just me, or me still being in ramp-up mode after a short vacation, but I missed the filters that applied to this view. Even though the description says it all… 🙂

List of Virtual Machine Rightsizing CPU, Memory, and Disk Space. This list is filtered to only VMs that are oversized and are currently powered on.

(more…)

Read More