My VMworld 2019

VMworld 2019 will mark my first edition as a VMware employee. That means I will be working more and have less time to attend sessions myself. However, there is always plenty to look forward to! As many know, it’s not only about the breakout sessions. I love visiting the Solution Expo and the bloggers area, and most of all, meeting old and new friends!

Come meet me at the Meet-the-Expert tables, at the TAM customer day or during one of my break-out sessions. I will be presenting the following sessions:

Make sure to reserve your seat as soon as possible! All three sessions differ from each other, but in each one I get to co-present with an awesome peer. The session about vMotion will be together with one of the lead engineers on vMotion. We’ll be discussing vMotion on a deep level and talk about how to tune vMotion to saturate NICs up to 100GbE. The session will include lots of hidden gems on vMotion!

The talk about the latest server technologies is something dear to me: discussing hardware accelerations together with one of the product managers! We will go into how workloads can consume all the latest and greatest hardware innovations.

Last but not least, I will get to co-present with Johan van Amersfoort in a session that is all about a real-life medical use case. The demos in this session will be really interesting: we will show you how cancer cells can be detected at an early stage, effectively helping the medical staff to start treatments earlier.

To Conclude

Lots of other cool sessions are going on during the VMworld week. Most of them will be recorded, but nothing beats the live interaction! Make sure you reserve your seats or join the waiting lists or queues, as there is a good chance you will still make it into your favorite sessions. I would encourage you to see sessions live, but to leave enough time open to mingle with your peers and to be open to meeting new people!

See you at VMworld!! Can’t wait.

Read More

Exploring the GPU Architecture

A Graphics Processing Unit (GPU) is best known as the hardware device used to run applications that are heavy on graphics, such as 3D modeling software or VDI infrastructures. In the consumer market, a GPU is mostly used to accelerate gaming graphics. Today, GPGPUs (General Purpose GPUs) are the hardware of choice to accelerate computational workloads in modern High Performance Computing (HPC) landscapes.

HPC in itself is the platform serving workloads like Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI). Using a GPGPU is no longer only about ML computations that require image recognition. Calculations on tabular data are also a common exercise, in for instance the healthcare, insurance, and financial industry verticals. But why do we need a GPU for these types of workloads? This blog post will go into the GPU architecture and why GPUs are a good fit for HPC workloads running on vSphere ESXi.

Latency vs Throughput

Let’s first take a look at the main differences between a Central Processing Unit (CPU) and a GPU. A common CPU is optimized to finish a task as quickly as possible, at as low a latency as possible, while keeping the ability to quickly switch between operations. Its nature is all about processing tasks in a serialized way. A GPU is all about throughput optimization, pushing as many tasks as possible through its internals at once. It does so by processing a task in parallel. The following exemplary diagram shows the ‘core’ count of a CPU and a GPU. It emphasizes that the main contrast between the two is that a GPU has many more cores to process a task.
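The contrast can be sketched in a few lines of code. This is a conceptual illustration only (the `work` function and task list are made up for the example, and threads stand in for hardware cores); a real GPU parallelizes at the hardware level:

```python
# Conceptual sketch: a CPU-style core handles tasks one after another
# (latency-optimized), while a GPU-style device dispatches the whole
# batch across many cores at once (throughput-optimized).
from concurrent.futures import ThreadPoolExecutor

tasks = list(range(8))

def work(x):
    # Stand-in for a compute kernel
    return x * x

# Serialized processing, one task at a time
serial_results = [work(t) for t in tasks]

# Batch processing, all tasks dispatched in parallel
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    parallel_results = list(pool.map(work, tasks))

# Same results, very different execution model
assert serial_results == parallel_results == [0, 1, 4, 9, 16, 25, 36, 49]
```

The point is not the result but the dispatch model: the serial list comprehension finishes each task before starting the next, while the pool hands the whole batch out at once.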

Differences and Similarities

However, it is not only about the number of cores. And when we speak of cores in an NVIDIA GPU, we refer to CUDA cores, which consist of ALUs (Arithmetic Logic Units). Terminology may vary between vendors.

Looking at the overall architecture of a CPU and a GPU, we can see a lot of similarities between the two. Both use the memory constructs of cache layers, a memory controller, and global memory. A high-level overview of modern CPU architectures indicates it is all about low-latency memory access, using significant cache memory layers. Let’s first take a look at a diagram that shows a generic, memory-focused, modern CPU package (note: the precise layout strongly depends on vendor/model).

A single CPU package consists of cores that contain separate layer-1 data and instruction caches, supported by the layer-2 cache. The layer-3 cache, or last-level cache, is shared across multiple cores. If data is not residing in the cache layers, the CPU will fetch it from the global DDR4 memory. The number of cores per CPU can go up to 28 or 32, running at up to 2.5 GHz or 3.8 GHz with Turbo mode, depending on make and model. Cache sizes range up to 2 MB of L2 cache per core.
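To make the lookup order concrete, here is a toy model of the hierarchy described above. The cycle counts are illustrative placeholders, not measured values for any specific CPU:

```python
# Toy model of a cache lookup walking L1 -> L2 -> L3 -> DRAM.
# Sizes and cycle counts are assumed example values.
cache_hierarchy = [
    ("L1", "32 KB per core, split data/instruction", 4),
    ("L2", "2 MB per core", 12),
    ("L3", "shared last-level cache", 40),
    ("DRAM", "global DDR4 memory", 200),
]

def lookup_cost(hit_level):
    """Sum the cycles spent probing each level until the data is found."""
    cost = 0
    for name, _, cycles in cache_hierarchy:
        cost += cycles
        if name == hit_level:
            return cost
    raise ValueError(f"unknown level: {hit_level}")

print(lookup_cost("L1"))    # 4
print(lookup_cost("DRAM"))  # 256: a miss all the way down costs ~64x an L1 hit
```

Even with made-up numbers, the model shows why a latency-optimized CPU invests so many transistors in caches: every level that hits saves an order of magnitude over going to DRAM.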

Exploring the GPU Architecture

If we inspect the high-level architecture overview of a GPU (again, strongly dependent on make/model), it looks like the nature of a GPU is all about putting the available cores to work; it is less focused on low-latency cache memory access.

A single GPU device consists of multiple Processor Clusters (PCs) that contain multiple Streaming Multiprocessors (SMs). Each SM accommodates a layer-1 instruction cache with its associated cores. Typically, one SM uses a dedicated layer-1 cache and a shared layer-2 cache before pulling data from global GDDR5 memory. Its architecture is tolerant of memory latency.

Compared to a CPU, a GPU works with fewer, and relatively small, memory cache layers. The reason is that a GPU has more transistors dedicated to computation, meaning it cares less how long it takes to retrieve data from memory. The potential memory access ‘latency’ is masked as long as the GPU has enough computations at hand to keep it busy.

A GPU is optimized for data parallel throughput computations.

Looking at the number of cores, it quickly becomes clear how much parallelism a GPU is capable of. When examining the current NVIDIA flagship offering, the Tesla V100, one device contains 80 SMs, each containing 64 cores, making a total of 5120 cores! Tasks aren’t scheduled to individual cores, but to processor clusters and SMs. That is how a GPU is able to process in parallel. Now combine this powerful hardware device with a programming framework so applications can fully utilize the computing power of a GPU.
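The arithmetic is simple enough to show in code, together with a CUDA-style grid sizing calculation (the block size of 256 threads is an assumption for the example, not something mandated by the hardware):

```python
# Tesla V100 figures cited above
sms_per_device = 80
cores_per_sm = 64
total_cores = sms_per_device * cores_per_sm
print(total_cores)  # 5120

# CUDA-style scheduling: work is launched as blocks of threads that are
# assigned to SMs, not to individual cores.
n = 1_000_000                # elements to process, example workload size
threads_per_block = 256      # assumed block size
blocks = (n + threads_per_block - 1) // threads_per_block  # ceiling division
print(blocks)                # 3907 blocks spread across the 80 SMs
```

The ceiling division is the standard way to make sure the last partial block is still launched when `n` is not a multiple of the block size.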

ESXi support for GPU

VMware vSphere ESXi supports the usage of GPUs. You are able to dedicate a GPU device to a VM using DirectPath I/O, or assign a partitioned vGPU to a VM using the co-developed NVIDIA GRID technology or third-party tooling like BitFusion. To fully understand how GPUs are supported in vSphere ESXi and how to configure them, please review the following blog series:

To conclude

High Performance Computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably and quickly.

This is exactly why GPUs are a perfect fit for HPC workloads. Workloads can greatly benefit from using GPUs, as they enable massive increases in throughput. An HPC platform using GPUs becomes much more versatile, flexible, and efficient when run on top of the VMware vSphere ESXi hypervisor, which allows GPU-based workloads to allocate GPU resources in a very flexible and dynamic way.

More resources to learn

Machine Learning with GPUs on vSphere

Why the Data Scientist and Data Engineer Need to Understand Virtualization in the Cloud

Running common Machine Learning Use Cases on vSphere leveraging NVIDIA GPU

Machine Learning with H2O – the Benefits of VMware

Read More

I am joining VMware!

I am incredibly excited to announce that I will be joining VMware! Even more thrilled that I will be a member of the Cloud Platform Business Unit, in the R&D organization, as a Technical Marketing Architect.

Very grateful for the opportunity to be part of a team full of all-stars! My main focus will be my longtime love, VMware vSphere, and everything that comes with it. There is a lot going on with vSphere: version 6.7 U1 has been released, we are moving towards next-gen performance enhancements like PMEM and vRDMA, not to mention new (hardware-based) security improvements and the recently announced ESXi on ARM support. The list goes on and on…

It all starts with VMware vSphere!!

I thought a lot about joining VMware and kept an eye out for the right opportunity. Even though I was having a good time working as a freelance IT architect with fun projects, the time was right for a new challenge. It all just worked out. Perfect timing.

A big shout-out to all the people who pushed for me, you know who you are! A special thank-you goes out to Emad (Younis) and my fellow Dutchies Frank (Denneman) and Duncan (Epping). Thanks for recommending me. Can’t wait to get started at the end of this November!

See you at VMworld in Barcelona next week!

 

Read More

My public speaking experience

It can be a bit of a hassle: working on cool projects as a contractor while finding the time to take days off to visit and/or present at VMworld, a VMUG, or other industry events. And that is on top of putting in the hours to co-write some pretty awesome books, if I may say so myself.

Busy but fun times! However, it is really important to me to keep attending industry events. I genuinely love to visit and present, as I really see it as an investment in myself and our community. Meeting new people is always fun and interesting, and listening in on sessions is very educational. While doing so, I get the chance to practice public speaking.

The article by Duncan Epping (http://www.yellow-bricks.com/2018/03/08/confessions-of-a-vmug-speaker-the-prequel-speakerfail/) made me recap and think about my experiences with public speaking so far.

The First Time

It’s only been two years since I started to (co-)host presentations. The first one was at the NLVMUG in 2016. I can relate to the excess of rehearsing Duncan described in his article. I was pretty psyched, but mostly nervous, to present at a VMUG. I talked in front of customers a lot, but public speaking felt like a totally different ballgame.

But there I was, presenting in front of peers. Looking back at the recordings, there are so many points I want to improve on. And the best way to learn and improve is to do it more often. Around that time, Frank and I started writing our Host Deep Dive book. We got the chance to present some of the content at both VMworlds in 2016.

I remember that VMworld US 2016 was the second or third time I was on stage, but the first time not speaking my native tongue. We had a big room and something in the range of 800 registrants! Once we walked to our room, Oceanside D, we witnessed a large number of people waiting to get in. I managed to film the moment they opened the doors:

Very cool to witness, but a little bit scary as well. I kicked off the introductions only to stumble over the very first sentences and had trouble pronouncing our own session title. Talk about #speakerfail. Once I was up to do my part, the nerves settled and I had a lot of fun presenting. Afterwards, people who attended the session were positive, and we scored a top-10 session, which was awesome!

Since then, I have had the chance to do several VMUGs with Frank, next to the NLVMUG and both VMworlds again in 2017. We presented at the German, London, Italian, and Nordic VMUGs.

Moral of this short write-up:

If I can do it, you most certainly can!

I talked to a lot of people who want to present at a VMUG or other events, but it looks like they worry far too much to go through with it. It’s okay to be nervous, but don’t let it get to you. It’s that thing about stepping outside your comfort zone and into the zone where the magic happens… Presenting is a great opportunity to share your knowledge and experience, and in the process, put your name on the charts.

Upcoming Schedule

Hopefully, I’ll get the chance to present at VMworld 2018. Two sessions have been submitted, session IDs 1738 and 1735. Pretty cool, as one session will be together with a lead engineer who works on Network DRS within VMware!

With the support of the VMUG organization and our friends at Rubrik, we will attend the Indianapolis VMUG on the 10th of July. Attending and presenting at one of the largest US-based VMUGs will be a very good experience.

Also, the VMUG in Prague on the 24th of May is on the schedule; really looking forward to that one!

There are some other opportunities that are work in progress.

To Conclude

I honestly hope that my experience can encourage others to take the leap of faith and contribute at an upcoming VMUG. Think about that project you’re working on and the design choices you made for it to succeed. Or what about that issue you ran into and solved. All very good content to convey to our VMware community.

Like the Nike campaign launched in 1988 stated: Just do it!!

Read More

VMworld 2017 session picks

VMworld is upon us. The schedule builder went live and boy am I excited about VMworld 2017!

SER1872BU

This year Frank and I are presenting the successor to last year’s session at both VMworlds. We are listed in the schedule builder as SER1872BU – vSphere 6.5 Host Resources Deep Dive: Part 2. We are planning to bring even more ESXi epicness, with a slight touch of vSAN and networking information that allows you to prep ESXi to run NFV workloads that drive IoT innovations. Last year we were lucky to have packed rooms in Vegas and Barcelona.

The enthusiasm about our book witnessed so far shows us there is still a lot of love out there for the ESXi hypervisor and ‘under-the-hood’ tech! We are working hard on having an awesome session ready for you!

VMworld Session Picks

This year I want to learn more about NFV, IoT, and Edge, as I find innovation in these areas intriguing. I found some sessions that look to be very interesting and supplemented these with talks held by industry titans on various topics. If my schedule lets me, I want to see the following sessions:

  • Leading the 5G and IoT Revolution Through NFV [FUT3215BU] by Constantine Polychronopoulos
  • vSAN at the Edge: HCI for Distributed Applications Spanning Retail to IoT [STO2839GU] by Kristopher Groh
  • VMware Cloud Foundation Futures [PBO2797BU] by Raj Yavatkar
  • Machine Learning and Deep Learning on VMware vSphere: GPUs Are Invading the Software-Defined Data Center [VIRT1997BU] by Uday Kurkure and Ziv Kalmanovich
  • Managing Your Hybrid Cloud with VMware Cloud on AWS [LHC2971BU] by Frank Denneman and Emad Younis
  • The Top 10 Things to Know About vSAN [STO1264BU] by Duncan Epping and Cormac Hogan

There are way more interesting sessions going on! Be sure to find and schedule your favorite ones as rooms tend to fill up quickly!

See you in Vegas and Barcelona!!

 

Read More

Host Resources Deep Dive Released!

The time has finally come: we have published our (e)book. It is available via Amazon using the following links:

  • Paperback: Amazon US, Amazon DE, Amazon Mexico, Amazon UK, Amazon Japan, Amazon India
  • Kindle (ebook): Amazon US, Amazon DE, Amazon NL, Amazon UK, Amazon Japan, Amazon India

 

Countless hours, weeks, and months have gone into this project. It all began at the start of 2016. Frank and I had a lot of discussions about consistent performance and how to optimize while keeping consolidation ratios in mind. At the time, I was working on an NFV platform plotted on a virtual datacenter, and Frank was working on his epic NUMA content. It all led to the idea of writing down our findings in a book.

It is almost like going against the current; we were looking into ESXi host behaviour while the world is advancing to higher-level services running on top of vSphere. Why even bother with the hypervisor? It is all commodity after all, right? Well… not necessarily. It is all about understanding what you are doing on each level, not just throwing hardware at a performance/capacity requirement. We began to talk about our ideas with our peers to see if they were interesting and feasible to include in a book.

We quickly came to the conclusion that there are a lot of unknowns when it comes to tuning your virtual environment to accommodate distributed services like vSAN and NSX, latency-sensitive workloads, and right-sizing your VMs in general. We hope our book will help vSphere administrators, architects, consultants, aspiring VCDXes, and people eager to learn more about the elements that control the behavior of CPU, memory, storage, and network resources.

I am extremely grateful that I was part of realising this book. It is really inspiring to work closely with Frank. He has a tremendous way of expressing his train of thought and has a lot of experience in creating tech books. Our discussions about tech or general topics are always a blast.

All our effort led to our (and my first) publishing which contains:

  • 122,543 words
  • 5217 paragraphs
  • 23 chapters
  • 569 pages
  • 311 screenshots and diagrams

The main challenge was to dig deep into the world of host resources while working a very busy day job at my beloved customers. The effort required was immense. Think a VCDX path times 10. At least, that is how it felt to me. But it was so much fun. I remember, a few weeks ago, we were working on the last storage content until deep into the night, like we did almost every day for the last few months, and we were still so psyched at 3:30 AM. So even after a very intense period, we still got a kick out of talking about the content and the book in the middle of the night!

We tried to keep the price as low as possible to allow everybody interested to be able to buy it. We feel we managed to do that, even though the book contains 569 pages. The digital book will follow after some PTO and both VMworlds.

As discussed in a previous post, we attended several VMUGs to talk about content included in the book. We are now also confirmed to have a ‘part deux’ of last year’s VMworld top-10 session, this year in Las Vegas and Barcelona (session 1872). We really look forward to seeing you there! Please let us know your thoughts on our book, and even write a review on Amazon if you will. It is very rewarding to hear it helped you in some way. Thanks!

Read More