Last week I was going through 'What's New: VMware Virtual SAN 6.0', and it seems VSAN 6.0 is bigger, better, and faster. The latest installment of VMware's distributed storage platform promises a significant IOPS boost: up to twice the performance in hybrid mode. The new VirstoFS on-disk format is also capable of high-performance snapshots and clones. Time to put it to the test.
Disclaimer: this benchmark was performed on a home lab setup; the components used are not listed on the VSAN HCL. My goal is to confirm the overall IOPS and snapshot performance increase by comparing VSAN 5.5 with 6.0, using a synthetic IOmeter workload.
VMware has a really nice blog post on more advanced VSAN performance testing using IOmeter.
Hardware
My lab consists of 3 Shuttle SZ87R6 nodes, connected by a Cisco SG300 switch.
| Component | Spec |
| --- | --- |
| Chipset | Z87 |
| Processor | Intel Core i5-4590S |
| Memory | 32 GB |
| NIC 1 | 1 GbE (management) |
| NIC 2 | 1 GbE (VSAN) |
| SSD | Samsung 840 EVO (120 GB) |
| HDD | HGST Travelstar 7K1000 (1 TB) |
ESXi/VSAN versions
- ESXi 5.5 Update 2 (build 2068190)
- ESXi 6.0 (build 2494585)
IOmeter VM
1 Windows Server 2012 VM running IOmeter is deployed on each ESXi host. Each VM has 2 vCPUs and 4 GB of memory. Besides the OS disk, 2 extra disks are configured, each backed by its own VMware Paravirtual SCSI controller.
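For repeatability, attaching the extra disks can be scripted. Below is a minimal pyVmomi sketch that adds one Paravirtual SCSI controller with one thin-provisioned disk; the `vm` object, bus number, and 10 GB size are illustrative assumptions, not a description of my exact setup. Calling it twice (bus 1 and bus 2) reproduces the two-disk layout.

```python
from pyVmomi import vim

def add_pvscsi_disk(vm, bus_number=1, size_gb=10):
    """Attach a new PVSCSI controller plus one thin data disk to `vm`."""
    # New controller; the negative key is a temporary placeholder so the
    # disk can reference the controller within the same reconfigure call.
    ctrl = vim.vm.device.VirtualDeviceSpec()
    ctrl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    ctrl.device = vim.vm.device.ParaVirtualSCSIController()
    ctrl.device.key = -101
    ctrl.device.busNumber = bus_number
    ctrl.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    # Thin-provisioned data disk on the new controller.
    disk = vim.vm.device.VirtualDeviceSpec()
    disk.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    disk.device = vim.vm.device.VirtualDisk()
    disk.device.capacityInKB = size_gb * 1024 * 1024
    disk.device.controllerKey = -101   # points at the placeholder key above
    disk.device.unitNumber = 0
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = True
    disk.device.backing = backing

    spec = vim.vm.ConfigSpec(deviceChange=[ctrl, disk])
    return vm.ReconfigVM_Task(spec=spec)
```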
IOmeter profiles
3 workload profiles are configured. Each worker is assigned its own vDisk and uses a 100 MB (204800 sectors) working set; I chose a small working set to ensure a fast cache warm-up. Each worker runs 16 outstanding IOs, which makes for a total of 32 outstanding IOs per IOmeter VM (a quick sanity check of the arithmetic follows the table).
| | Profile 1 | Profile 2 | Profile 3 |
| --- | --- | --- | --- |
| Worker 1 | vDisk 1 | vDisk 1 | vDisk 1 |
| Sectors | 204800 | 204800 | 204800 |
| Outst. IO | 16 | 16 | 16 |
| Worker 2 | vDisk 2 | vDisk 2 | vDisk 2 |
| Sectors | 204800 | 204800 | 204800 |
| Outst. IO | 16 | 16 | 16 |
| Read | 100% | 0% | 65% |
| Write | 0% | 100% | 35% |
| Random | 100% | 100% | 60% |
| Sequential | 0% | 0% | 40% |
| Block size | 4 KB | 4 KB | 4 KB |
| Alignment | 4096 B | 4096 B | 4096 B |
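The working-set and outstanding-IO numbers in the table check out as follows, assuming IOmeter's standard 512-byte sector size:

```python
# 204800 sectors of 512 bytes = the 100 MB working set per worker.
sectors, sector_size = 204800, 512
print(sectors * sector_size / (1024 * 1024))  # 100.0 MB

# 16 outstanding IOs per worker, 2 workers per IOmeter VM.
workers, oio_per_worker = 2, 16
print(workers * oio_per_worker)  # 32 outstanding IOs per VM
```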
Results
The results are clear: VSAN 6.0 delivers more IOPS at lower latency across all three profiles.
- 100% READ | 100% RANDOM: +37% IOPS | -27% latency
- 65% READ | 60% RANDOM: +53% IOPS | -35% latency
- 100% WRITE | 100% RANDOM: +25% IOPS | -24% latency
Snapshot performance
VSAN 5.5 uses the vmfsSparse snapshot format, which is redo-log based. The new snapshot format in VSAN 6.0 is called vsanSparse and uses a redirect-on-write mechanism. Watch this Storage Field Day video for more information on the subject.
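To illustrate why the distinction matters, here is a deliberately simplified Python model of the two read paths; it is a conceptual sketch, not how either format is actually implemented. With a redo log, a read that misses the topmost delta has to walk down the snapshot chain; with redirect-on-write, a lookup table points straight at the current location of each block.

```python
# Conceptual sketch only: contrast a redo-log chain (vmfsSparse-style)
# with a redirect-on-write lookup (vsanSparse-style).

def redo_log_read(block, deltas, base):
    # Walk from the newest delta down to the base disk until the block
    # is found; every snapshot deepens the chain, so misses get costlier.
    for delta in reversed(deltas):
        if block in delta:
            return delta[block]
    return base[block]

def redirect_on_write_read(block, block_map, extents):
    # A single metadata lookup resolves the block's current location,
    # no matter how many snapshots have been taken.
    return extents[block_map[block]]
```

In the redo-log model every snapshot adds one more level the read path may have to traverse, which is consistent with the growing read traffic visible in the bandwidth graphs below; the redirect-on-write model keeps lookups flat.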
The benchmark is really straightforward: I start an IOmeter workload (profile 3 with 8 outstanding IOs), take successive snapshots, and observe the performance in VSAN Observer.
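The snapshot part can also be scripted. A minimal pyVmomi sketch of the loop looks like this; the `vm` object, the snapshot count, and the 60-second interval are illustrative assumptions:

```python
import time

def snapshot_series(vm, count=5, interval_s=60):
    """Take a series of snapshots while the IOmeter workload keeps running."""
    for i in range(count):
        vm.CreateSnapshot_Task(
            name="bench-snap-%d" % i,
            description="snapshot during IOmeter profile 3",
            memory=False,    # disk-only snapshot, no memory dump
            quiesce=False)   # do not quiesce the guest file system
        # Wait before the next snapshot so its impact shows up as a
        # distinct step in VSAN Observer.
        time.sleep(interval_s)
```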
I've grouped the VSAN 6.0 (light grey background) and VSAN 5.5 (dark grey background) graphs. Please note that the values on the x- and y-axes are not always identical!
Latency
Latency actually looks somewhat better on VSAN 5.5: its added latency never exceeds 0.5 ms, while VSAN 6.0 adds up to 1.0 ms.
IOPS
- VSAN 6.0: IOPS decrease by an average of 8%
- VSAN 5.5: IOPS decrease by an average of 49%
The performance optimizations in VSAN 6.0 are obvious. There doesn’t seem to be any noticeable performance impact after the first snapshot.
Bandwidth
Interesting result! With the VSAN 5.5 snapshot format, read traffic increases after each snapshot. VSAN 6.0 does not show this behaviour; yet another improvement in vsanSparse.
Conclusion
VSAN 6.0 shows some nice performance improvements over its older brother. Looking at snapshot performance, though, I do feel there's still a fairly large gap to close; zero-impact snapshots are becoming increasingly popular.