I had access to some 10GE NICs and decided to do some performance testing and try out Multi-NIC vMotion.
The 10GE NICs were Intel X520-DA2 cards, and I used SFP-H10GB-CU3M twinax cables to connect the two hosts directly, as I do not have access to a 10GE switch at the moment.
Without reading any documentation, I tried adding both 10GE ports to a single vSwitch, but vMotion did not work. After some googling and reading the documentation, I found that the 10GE ports have to be separated into their own vSwitches, each with its own vMotion-enabled VMkernel port, before Multi-NIC vMotion will work.
Single-NIC 10GE vMotion Configuration
Single-NIC 10GE vMotion Tests
Test 1: 14 seconds
Test 2: 14 seconds
Test 3: 10 seconds
Test 4: 11 seconds
Single-NIC 10GE vMotion Network Performance Graphs
Multi-NIC 2X10GE vMotion Configuration
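For reference, the configuration above can also be done from the ESXi command line. This is a minimal sketch; the vSwitch, port group, VMkernel, NIC names and IP addresses are placeholders, not my actual values:

```shell
# One vSwitch per 10GE port, each with its own vMotion-enabled VMkernel port.
# Repeat this block for the second NIC (e.g. vSwitch2 / vmnic3 / vmk2 / 10.0.1.x).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=10.0.0.11 --netmask=255.255.255.0
# Enable vMotion on the new VMkernel interface
vim-cmd hostsvc/vmotion/vnic_set vmk1
```

Doing the same on the second vSwitch and on the peer host gives each host two independent vMotion paths, which is what allows vMotion to stream across both NICs at once.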
Multi-NIC 2X10GE vMotion Tests
Test 1: 10 seconds
Test 2: 12 seconds
Test 3: 9 seconds
Test 4: 12 seconds
Multi-NIC 2X10GE vMotion Network Performance Graphs
Although Multi-NIC vMotion appears slightly faster than Single-NIC, the difference is not significant. This is probably due to the size and the number of VMs I used for the vMotion tests.
The network performance graphs do show that vMotion traffic was load-shared between the two 10GE ports: with a single NIC, network usage was about 200MBps, while in the multi-NIC scenario usage was about 100MBps per NIC, i.e. 50% of the single-NIC figure on each port.
With VSAN 6.1, Virtual SAN monitors solid state drive and magnetic disk drive health and proactively isolates unhealthy devices by unmounting them. This happened in my home lab, and I would like to share how I recovered the unmounted disks.
You can read more about this feature on http://cormachogan.com/2015/09/22/vsan-6-1-new-feature-problematic-disk-handling/
As you can see, the disks in one of my hosts were unmounted. I tried to find a menu option to remount the disks but could not find one. Putting the host into maintenance mode and restarting it also did not recover the unmounted disks.
I had to erase the partitions (both HDD and flash) on that host and claim the disks again before the disks were mounted correctly.
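For anyone who prefers to do this from the ESXi shell, here is a sketch of the equivalent steps. The device names and disk-group UUID are placeholders, and note that removing a disk group destroys the data on it (VSAN will resync components from the other hosts afterwards):

```shell
# List VSAN storage and note the disk group UUID of the unmounted group
esxcli vsan storage list

# Remove the disk group via its flash device UUID (deletes the whole group)
esxcli vsan storage remove --uuid=<vsan-disk-group-uuid>

# Inspect and clear leftover partitions on each device (HDD and flash);
# naa.xxxxxxxx is a placeholder for the real device identifier
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxx 1

# Re-create the disk group: one flash device plus the magnetic disks
esxcli vsan storage add --ssd=naa.ssdxxxxxxx --disks=naa.hddxxxxxxx
```

Only attempt this while the cluster still has enough healthy hosts to rebuild the affected components; otherwise the evacuated data cannot be resynced.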