I first want to start off by saying that I love this feature, which was introduced in Windows Server 2012 R2! It has allowed me to deploy many lab scenarios with serious ease and get entire cluster environments up and running using nothing but PowerShell. I get a lot of customers who are really excited about this feature and sometimes jump right in, missing a couple of things that should be seriously considered before going into production. Rather than continuing to have this conversation, I figured I would write up a quick post to explain, so I can point people in this direction when it comes up.

IO Redirection with Traditional SAN

Let's take the scenario where you have a traditional SAN infrastructure today and you wish to simply implement Shared VHDx on top of Cluster Shared Volumes which reside on the Hyper-V cluster nodes - no Scale-Out File Servers in this case.

You have a single CSV and build up a guest cluster on top of your Hyper-V cluster; some Shared VHDx files that reside on that CSV are presented to the guest cluster.
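
For reference, creating and attaching a Shared VHDx like this only takes a couple of lines of PowerShell. This is a minimal sketch of the sort of thing I mean - the VM names, size and CSV path are all illustrative, so adjust them to your environment:

#Minimal sketch - VM names, size and CSV path are illustrative
New-VHD -Path 'C:\ClusterStorage\Volume1\SQL-Data.vhdx' -SizeBytes 100GB -Dynamic

#Attach the same VHDx to both guest cluster nodes with shared access enabled
Add-VMHardDiskDrive -VMName 'SQL01' -Path 'C:\ClusterStorage\Volume1\SQL-Data.vhdx' -SupportPersistentReservations
Add-VMHardDiskDrive -VMName 'SQL02' -Path 'C:\ClusterStorage\Volume1\SQL-Data.vhdx' -SupportPersistentReservations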

You would expect that the IO path to these Shared VHDx files would look like the diagram below, and you would be correct - provided the active guest cluster node currently resides on the Hyper-V host which owns the underlying CSV (the coordinator node).

Note: The below diagram may appear as though I have no redundancy on Management, LiveMigration or Cluster. Please keep in mind I am using Cisco UCS in this design and making use of the CNA + Fabric interconnect to handle convergence and failover on the hardware side.

[Diagram: direct IO path - the active guest cluster node resides on the Hyper-V host that owns the CSV (coordinator node)]

But let's flip things up a bit and say that the active guest cluster node resides on a Hyper-V cluster node which does not currently own the CSV (not the coordinator node). You may be surprised to find out that the IO to this Shared VHDx file is redirected over the cluster interface(s) (using SMB, and SMB Multi-Channel if multiple cluster networks are available) to the node which owns the CSV (the coordinator node), and that node then processes the IO against the Storage Area Network.

[Diagram: redirected IO path - the active guest cluster node resides on a non-coordinator node and IO is redirected over the cluster network to the coordinator]
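
A quick way to see whether you are in this situation is to compare where the guest cluster VM is running with which node owns the CSV. Here is a minimal sketch using the FailoverClusters module - the VM and CSV names are illustrative, and it assumes the VM's cluster group is named after the VM:

#Which Hyper-V cluster node is currently running the guest cluster VM?
(Get-ClusterGroup -Name 'SQL01').OwnerNode

#Which node is the coordinator (owner) of the CSV that holds the Shared VHDx?
(Get-ClusterSharedVolume -Name 'Cluster Disk 1').OwnerNode

#If the two differ, Shared VHDx IO is being redirected over the cluster network(s)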

Considering this scenario, you need to put some serious thought into your network topology - specifically the networks supporting your Hyper-V cluster communication - if you are deploying any high IO workload that will be using Shared VHDx. I usually see folks deploying this for SQL Server clustered databases, so there are typically a lot of performance considerations.

So What Performance Impact Might This Have?

I went ahead and set up a scenario in my environment to get an idea of the performance variance we may see. My SAN is accessible via FCoE and I have a dedicated cluster interface presented via a Cisco UCS vNIC - in this case an RSS-capable interface for cluster communication, so multiple CPU cores are available for processing.
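
Before running the tests, it is worth confirming that the cluster interface really is RSS capable. A quick check - the adapter name 'Cluster' is illustrative:

#Confirm RSS is enabled on the cluster vNIC (adapter name is illustrative)
Get-NetAdapterRss -Name 'Cluster'

#SMB's view of the client interfaces, including RSS/RDMA capability
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RssCapable, RdmaCapable, Speed -AutoSize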

Two tests were run:

  • An IO test while the active guest cluster node resides on the Hyper-V node which owns the underlying CSV (direct IO).
#Ran while Direct IO was capable
Diskspd.exe -b64K -d60 -h -L -o2 -t8 -r -c50G h:\io.dat > direct.txt  

[Diskspd results: direct IO test]

  • An IO test while the active guest cluster node resides on a Hyper-V node which does not own the underlying CSV (redirected IO).
#Ran while in Redirected IO scenario
Diskspd.exe -b64K -d60 -h -L -o2 -t8 -r -c50G h:\io.dat > redirected.txt  

[Diskspd results: redirected IO test]

While the redirected scenario still performs quite well, I have to point out that the decrease in performance, when looked at in percentage terms, is pretty significant. The big issue here in my eyes is that you have the potential for things to be a bit unpredictable as workload moves around during maintenance or host optimization. I could always add multiple cluster interfaces to let SMB Multi-Channel come into effect, or start to consider RDMA via RoCE or something like that, but that is the point of this post - you need to consider this scenario, the performance variance you may see depending on where things live, and what network topology you can implement to meet your performance needs.
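
If you do go down the multiple-cluster-interface route, two quick checks are worth doing: make sure the networks allowed for cluster traffic are actually your fastest interfaces, and confirm SMB Multi-Channel is spreading connections across them while redirection is happening. A minimal sketch:

#Cluster networks, their roles and metrics - make sure the networks carrying cluster traffic are your fastest interfaces
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric -AutoSize

#Run on the non-coordinator node while redirected IO is occurring to see the SMB Multi-Channel connections
Get-SmbMultichannelConnection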

What About Scale-Out File Server (SOFS)?

This changes things quite a bit. In the interest of not making things any more confusing, let's leave Storage Spaces out of the picture and go with the scenario of a traditional SAN infrastructure with SOFS nodes sitting in front of my SAN.

In this case the negotiation over SMB from Hyper-V to the SOFS automatically determines the best SOFS node through which to access the VHDx, eliminating a redirection scenario within the SOFS cluster - Hyper-V will talk directly to the SOFS node which owns the underlying CSV for that Shared VHDx file. Of course the connection is still happening over SMB, but ideally you have configured your network to meet your performance needs with SMB, whether that be via RDMA, RSS, etc. The main point here is that this will have a consistent performance characteristic no matter where the active guest cluster node lives or who owns the underlying CSV on the SOFS side, unlike the scenario I explained above.
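
If you want to see this for yourself, you can look at the SMB connections from the Hyper-V host and, on the SOFS side, which file server node is servicing each client. A minimal sketch - the SOFS name is illustrative:

#Run on a Hyper-V host - shows the SMB connections (and dialect) to the SOFS
Get-SmbConnection -ServerName 'SOFS' | Format-Table ServerName, ShareName, Dialect -AutoSize

#Run on a SOFS cluster node - shows which file server node each client is currently being serviced by
Get-SmbWitnessClient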

This is not to say that Shared VHDx will only work in the SOFS scenario; it is just that you need to be aware of the IO path and be sure you have configured the network appropriately. A scenario I have seen in the past was 1Gbps networking on the Hyper-V hosts with 8Gbps Fibre Channel on the storage side, resulting in extremely poor performance when the active guest cluster node was not aligned with the Hyper-V host that owned the CSV. For this particular customer I ended up building an SMA runbook that would run every couple of minutes, determine which node of the guest cluster was active and which Hyper-V cluster node it lived on, and then make sure the underlying CSV(s) for the shared disks were owned by that node - this scenario may work its way into a future blog post! :-)
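
To give an idea of the logic, here is a simplified sketch of the alignment idea (not the actual runbook) - the cluster, role, and CSV names are all illustrative, and it assumes the VM's cluster group in the Hyper-V cluster is named after the guest node:

#1. Which guest cluster node currently owns the clustered workload?
$activeGuestNode = (Get-ClusterGroup -Cluster 'SQLGuestCluster' -Name 'SQLRole').OwnerNode.Name

#2. Which Hyper-V cluster node is that guest VM running on? (assumes the VM's cluster group matches the guest node name)
$hyperVHost = (Get-ClusterGroup -Cluster 'HVCluster' -Name $activeGuestNode).OwnerNode.Name

#3. Move CSV ownership to that Hyper-V node if it is not already the coordinator
$csv = Get-ClusterSharedVolume -Cluster 'HVCluster' -Name 'Cluster Disk 1'
if ($csv.OwnerNode.Name -ne $hyperVHost) {
    Move-ClusterSharedVolume -Cluster 'HVCluster' -Name 'Cluster Disk 1' -Node $hyperVHost
}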

Management of VMs with Shared VHDx from SCVMM

If System Center Virtual Machine Manager is your management tool of choice, you need to understand that you can only deploy Shared VHDx from SCVMM as part of a Service Template. I find that not a lot of people really use Service Templates, and neither do I. You can still create the VMs using SCVMM, then outside of SCVMM (Hyper-V Manager or PowerShell) create and attach the Shared VHDx files, and then refresh the VMs in SCVMM.
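
After creating and attaching the Shared VHDx outside of SCVMM (the same couple of lines shown earlier in this post), the refresh is straightforward from the VMM PowerShell module. A minimal sketch - the VMM server and VM names are illustrative:

#Connect to the VMM server and refresh both guest cluster VMs so SCVMM picks up the new disks
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName 'VMM01' | Out-Null

foreach ($name in 'SQL01','SQL02') {
    $vm = Get-SCVirtualMachine -Name $name
    Read-SCVirtualMachine -VM $vm | Out-Null
}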

Wrap Up

Shared VHDx is an awesome feature!

Be sure to understand that you may see a performance variance when you implement Shared VHDx on top of CSVs that are attached to the Hyper-V cluster (no SOFS) depending on where active guest cluster nodes live / who owns the CSV.

When implementing Shared VHDx with SOFS you will not see a variance in performance when active guest cluster nodes move to different Hyper-V hosts or when CSV ownership changes on the SOFS side.
