There has been a lot of information floating around about converged fabric with Hyper-V in Windows Server 2012, and a lot of information about how to configure a converged fabric using PowerShell. The purpose of this blog post is to explain in detail how to deploy a converged fabric using System Center 2012 SP1 – Virtual Machine Manager. The reason there are so many examples explaining how to deploy a converged fabric using PowerShell is that a lot of the things you need to configure are not available in Hyper-V Manager, Failover Cluster Manager, or any other GUI tool built into Windows Server 2012. You can, however, deploy a converged fabric using SCVMM 2012 SP1 once you have all of your fabric components put into place.

So What is Converged Fabric?

Converged Fabric is not a feature that can simply be enabled in Windows Server 2012; rather, it is the implementation of a number of features built into Windows Server 2012 that together make a converged fabric possible:

  • NIC Teaming (load balancing and failover – LBFO)
  • Hyper-V Extensible Switch
  • Management OS Virtual NICs
  • VLAN Support
  • Hyper-V QoS (there are multiple methods to implement QoS; this blog post will focus on Hyper-V QoS)

A typical Hyper-V host network configuration prior to Windows Server 2012 often dedicated physical network interfaces to each host workload (1 NIC for Management, 1 NIC for Cluster/Heartbeat, 1 NIC for Live Migration, 1 NIC for the Hyper-V Switch, etc.). This required a lot of often underutilized physical NICs and a number of port configurations per Hyper-V host on the physical switch.

Legacy fabric example:

Legacy_Fabric

With a converged fabric in Windows Server 2012, for the same workload requirements you could:

  1. Create a team comprising your physical links
  2. Create a Hyper-V Extensible switch utilizing that team
  3. Create Virtual Network Adapters for the Management OS to utilize plugged directly into the Virtual Switch
  4. VLAN tag the Management OS Virtual Network Adapters
  5. Apply QoS policies to ensure a minimum bandwidth requirement for specific virtual adapters should network congestion occur on a single physical network link.

Converged Fabric example:

Converged_Fabric_Example

Sample Converged Fabric deployment via PowerShell: http://technet.microsoft.com/en-us/library/jj735302.aspx#bkmk_2
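For comparison with the VMM approach that follows, here is a minimal sketch of those five steps using only the in-box Windows Server 2012 cmdlets. The adapter names (“NIC1”, “NIC2”) and the team/switch names are placeholders; the VLAN IDs and bandwidth weights match the values used later in this post.

# 1. Team the physical links
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# 2. Create the Hyper-V Extensible Switch on top of the team, with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# 3. Create Management OS virtual network adapters plugged into the switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# 4. VLAN tag the vNICs (Management rides the native VLAN in this design, so it stays untagged)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 27
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 28

# 5. Apply minimum bandwidth weights (QoS) per vNIC
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10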

What is required in VMM?

First things first, we need to deploy all of the required fabric components in VMM to support Converged Fabric. To summarize, we need to configure the following:

  • Logical Network
  • Logical Network Definition (Site)
  • Native Uplink Port Profile
  • Native Virtual Adapter Port Profiles (VMM has some built in we can use)
  • Logical Switch
  • VM Networks

So let's dig in, starting with the Logical Network.

Logical Network and Associated Logical Network Definition

Logical Networks are a container for Logical Network Definitions, which in turn contain the associated VLANs/subnets for a specific location – that's a lot of containing!

  1. Navigate to the “Fabric” pane in VMM > and select “Logical Networks” under “Networking” > click “Create Logical Network” in the action pane.
  2. Give your Logical Network a useful name and check the box for “Network sites within this logical network are not connected” – This will enable us to use VLAN isolation.

Create_Logical_Network

  3. On the next page you need to create your Logical Network Definition (Network Site) and associate it with your host group. In my example I am creating a Logical Network Definition for my Engineering location which utilizes the following VLANs and network subnets:

  • VLAN 0 (Native) / 10.168.31.0/24: Hyper-V Host Network
  • VLAN 27 / 10.168.27.0/24: Hyper-V Live Migration Network
  • VLAN 28 / 10.168.28.0/24: Hyper-V Cluster Network
  • VLAN 30 / 10.168.30.0/24: Virtual Machine Network

Note: Although I am specifying VLAN 0 for my Hyper-V Host Network, the actual VLAN on the switch side is 31; 31 is simply configured as the native VLAN on the switchport trunk for these hypervisors. Below is an example switchport configuration for my Hyper-V hosts (Cisco Nexus 5548UP):

interface Ethernet1/13  
description hv3  
switchport mode trunk  
switchport trunk native vlan 31  
spanning-tree port type edge  

Create_Logical_NetworkDefinition
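If you would rather script this part as well, a rough equivalent using the VMM 2012 SP1 PowerShell module is sketched below. The logical network name (“Engineering”), the site name and the host group name are examples from my lab, not required values, and the parameters are worth validating against your VMM build.

# Hedged sketch: VLAN-isolated logical network plus a network site containing the VLAN/subnet pairs above
$logicalNetwork = New-SCLogicalNetwork -Name "Engineering" -LogicalNetworkDefinitionIsolation $true

$subnetVlans = @(
    (New-SCSubnetVLan -Subnet "10.168.31.0/24" -VLanID 0),
    (New-SCSubnetVLan -Subnet "10.168.27.0/24" -VLanID 27),
    (New-SCSubnetVLan -Subnet "10.168.28.0/24" -VLanID 28),
    (New-SCSubnetVLan -Subnet "10.168.30.0/24" -VLanID 30)
)

New-SCLogicalNetworkDefinition -Name "Engineering - Network Site" `
    -LogicalNetwork $logicalNetwork `
    -VMHostGroup (Get-SCVMHostGroup -Name "Engineering") `
    -SubnetVLan $subnetVlans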

Native Uplink Port Profile

Our next step is to create a “Native Uplink Port Profile”. This defines the physical NIC teaming settings as well as which Logical Network Definitions (Network Sites) are available to it.

  1. Navigate to the “Fabric” pane in VMM > and select “Native Port Profiles” under “Networking” > right click “Native Port Profiles” and select “Create Native Port Profile”
  2. Give your port profile a useful name and select “Uplink port profile” under “Type of native port profile”
  3. It's time to select your “Teaming mode” and “Load balancing algorithm” > for the purposes of this walk through I will be demonstrating SwitchIndependent with HyperVPort
  4. On the next page select your Logical Network Definition (Network site) for this Uplink Port Profile > click next > click finish.

Note: You will notice that on the “Network configuration” page of the “Create Native Port Profile” wizard there is a box you can check for “Enable Windows Network Virtualization”. Checking it enables the “Windows Network Virtualization Filter driver” on the physical links for the purposes of NVGRE. We will not be digging into NVGRE in this post, but it should be noted that this is where you can globally enable the filter so you can use that feature in Hyper-V.

Create_Native_Port_Profile_1

Create_Native_Port_Profile_2
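For reference, a hedged equivalent with the VMM PowerShell module might look like this; the uplink profile name is an example and the network site is the one created earlier. The parameter names are worth double-checking against your VMM build.

# Hedged sketch: uplink port profile using SwitchIndependent / HyperVPort, bound to the Engineering site
$netDefinition = Get-SCLogicalNetworkDefinition -Name "Engineering - Network Site"

New-SCNativeUplinkPortProfile -Name "Engineering Uplink" `
    -LBFOTeamMode SwitchIndependent `
    -LBFOLoadBalancingAlgorithm HyperVPort `
    -LogicalNetworkDefinition $netDefinition `
    -EnableNetworkVirtualization $false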

Native Virtual Adapter Port Profiles

When we go to deploy this configuration to a Hyper-V host we will actually assign Virtual Network Adapters to the host (just like we do with a VM). With Native Virtual Adapter Port Profiles we can define the Offload Settings (adapter offloads), Security Settings (things like DHCP guard) and Bandwidth Settings (QoS) for those adapters. Fortunately, for the virtual adapter workloads we plan on deploying (Host Management, Live Migration and Cluster), VMM already ships with some good examples.

We are going to use these built-in profiles, so go ahead and review them; pay particular attention to the “Bandwidth Settings” tab and, more specifically, the “Minimum bandwidth weight”.

Review_Port_Profiles
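If you prefer to review them from PowerShell, something along these lines should list the built-in profiles and their weights (hedged; the property name is assumed from the corresponding New- cmdlet parameter):

# List the built-in virtual adapter port profiles and their minimum bandwidth weights
Get-SCVirtualNetworkAdapterNativePortProfile |
    Select-Object Name, MinimumBandwidthWeight |
    Sort-Object Name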

Logical Switch

Instead of going Hyper-V host to Hyper-V host and manually creating your virtual switches, this is where we build a single switch definition which we then deploy to our Hyper-V hosts. This Logical Switch will contain the Uplink Port Profile, which determines the teaming settings for the physical adapters on the Hyper-V host, as well as the Virtual Adapter Port Profiles available on the switch for both ManagementOS workloads (Host Management, Live Migration, Cluster, etc.) and Virtual Machine workloads (high bandwidth, medium bandwidth, etc.).

  1. Navigate to the “Fabric” pane in VMM > and select “Logical Switches” under “Networking” > Click “Create Logical Switch” in the action pane.
  2. On the “General” tab give your logical switch a useful name
    Create_Logical_Switch_1
  3. On the “Extensions” tab leave this as default as we are not messing with additional extensions as a part of this walk through
    Create_Logical_Switch_2
  4. On the “Uplink” tab specify your “Uplink Mode” to be “Team” > Click “Add” under “Uplink port profiles” and select the uplink port profile you created in the previous step.
    Create_Logical_Switch_3
  5. On the “Virtual Port” tab click “Add” and select the “Port classification” we want to add > check the box for “Include a virtual network adapter port profile in this virtual port” > select the appropriate port profile for the classification. Repeat this step until you have all of the required classifications for your converged fabric design
    Create_Logical_Switch_4 Create_Logical_Switch_5
  6. Click Next > click finish
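Once the wizard completes you can sanity-check the result from the VMM PowerShell module. This is a hedged example; “Converged Logical Switch” is simply whatever name you gave your switch on the “General” tab:

# Confirm the logical switch exists and review the port classifications available for virtual ports
Get-SCLogicalSwitch -Name "Converged Logical Switch"
Get-SCPortClassification | Select-Object Name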

VM Networks

We now need to create the VM Networks. A VM Network is basically the object we use to plug a virtual network adapter (assigned to the host or to a VM) into a specific network – VLANs in our example.

  1. Navigate to the “VMs and Services” pane in VMM > click on “VM Networks” > click “Create VM Network”
  2. On the “Name” tab give your VM network a useful name and select the “Logical network” you created earlier
    Create_VM_Network_1
  3. On the “Isolation” tab select the radio for “Specify a VLAN” > choose your “Logical network definition” > choose your “Subnet VLAN”
    Create_VM_Network_2
  4. Click next > click finish
  5. Repeat steps 1–4 for each of your VM networks
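The same thing can be scripted; here is a hedged sketch for the Live Migration VM network (VLAN 27) using the example names from earlier in this post – repeat it per VLAN. As before, validate the parameters against your VMM build.

# Hedged sketch: VLAN-isolated VM network tied back to the Engineering network site
$logicalNetwork = Get-SCLogicalNetwork -Name "Engineering"
$netDefinition  = Get-SCLogicalNetworkDefinition -Name "Engineering - Network Site"

$vmNetwork  = New-SCVMNetwork -Name "Live Migration" -LogicalNetwork $logicalNetwork -IsolationType "VLANNetwork"
$subnetVlan = New-SCSubnetVLan -Subnet "10.168.27.0/24" -VLanID 27
New-SCVMSubnet -Name "Live Migration" -VMNetwork $vmNetwork -SubnetVLan $subnetVlan `
    -LogicalNetworkDefinition $netDefinition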

Deploying your Fabric to Hyper-V Hosts

Now that we have all of that built in VMM, we can finally deploy our converged fabric to our Hyper-V host(s) via the Logical Switch. To get ready for this, have a Hyper-V host added to VMM that does not yet have a virtual switch. Be sure this Hyper-V host is added to the Host Group that your Logical Network Definition (Site) is assigned to.

  1. Navigate to the “Fabric” pane in VMM > expand your host group > right click on your Hyper-V host and click “Properties”
  2. Select the “Virtual Switches” tab and click “New Virtual Switch” > select “New Logical Switch”
    Apply_Logical_Switch_1
  3. Select your “Logical switch” and click “Add” under “Physical adapters” for each adapter you wish to be a part of this team > also select your “Uplink Port Profile” – In my case I have 2 x 10G adapters
    Apply_Logical_Switch_2
  4. Select “New Virtual Network Adapter” > give it a name >  select the box for “This virtual network adapter inherits settings from the physical management adapter” for the first one which will be your Management adapter > select the appropriate “VM Network” and “Port profile classification” > Repeat this step for each of your adapters (Management, LiveMigration and Cluster in my example)
    Apply_Logical_Switch_3

Note: Only select the box for “This virtual network adapter inherits settings from the physical management adapter” on the “Management” virtual network adapter. This will move the IP settings from the current management adapter to this virtual adapter, ensuring VMM can continue to connect to this host during the deployment of the logical switch. I have found that without this, the host will likely pick up a new IP from DHCP, and unless an administrator flushes the DNS cache on the VMM server during the deployment of the Logical Switch, the job will not complete and you will be left with half of your converged fabric deployed.

  5. Select OK > and click OK to continue after reading the warning
    Apply_Logical_Switch_4

  6. Go to the Jobs view in VMM and be sure the Logical Switch is applied successfully
    Apply_Logical_Switch_5
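Once the job completes, you can confirm what the logical switch deployment actually produced on the host using the in-box Windows Server 2012 cmdlets (the team and switch names will be whatever VMM generated in your environment):

Get-NetLbfoTeam                        # the NIC team created from the uplink port profile
Get-VMSwitch | Select-Object Name, BandwidthReservationMode
Get-VMNetworkAdapter -ManagementOS     # the Management, LiveMigration and Cluster vNICs
Get-VMNetworkAdapterVlan -ManagementOS # VLAN tagging per vNIC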

What about QoS?

Remember all of those Native Virtual Adapter Port Profiles VMM shipped with which we used when adding virtual adapters to the ManagementOS during the deployment of the logical switch? Each of those adapters has a minimum bandwidth weight assigned to them:

  • Host management = 10
  • Live migration = 40
  • Cluster = 10

If you run the following command on your Hyper-V host, you will see how this can be calculated as a percentage of bandwidth:

QoS_Calculation
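The screenshot above shows the output on one of my hosts. A hedged equivalent you can run yourself (the BandwidthSetting property is on the Hyper-V VMNetworkAdapter object in Windows Server 2012) is:

# Share of the link under contention = this vNIC's weight / sum of all weights
$adapters = Get-VMNetworkAdapter -ManagementOS
$weights  = $adapters | ForEach-Object { $_.BandwidthSetting.MinimumBandwidthWeight }
$total    = ($weights | Measure-Object -Sum).Sum
$adapters | ForEach-Object {
    $w = $_.BandwidthSetting.MinimumBandwidthWeight
    "{0,-15} weight {1,3} = {2:P1}" -f $_.Name, $w, ($w / $total)
}

With the built-in weights above (10 + 40 + 10 = 60) that works out to roughly 16.7% for Host management, 66.7% for Live migration and 16.7% for Cluster, ignoring any weight reserved for VM traffic on the same switch.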

Weighted QoS policies are nice because they will not kick in unless traffic contention actually occurs on one of the physical links. This means Live Migration can consume 100% of a physical link until one of the other adapters contends for bandwidth, at which point QoS will throttle the traffic.

Limitations

Currently VMM is not capable of setting Jumbo Frames on either the physical NICs or specific ManagementOS vNICs. In my case I would want each physical link to have an MTU of 9014, and I would also want the Cluster and LiveMigration vNICs to have an MTU of 9014. I currently have a PowerShell script I run post deployment to handle this (a rough sketch of the idea follows below), but I hope future releases of the product will resolve this so you can manage your entire fabric from one pane of glass.
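A rough sketch of what such a post-deployment fix-up can look like is below. The adapter names are placeholders, and the accepted values for the “*JumboPacket” advanced property can vary by NIC driver, so check what your hardware exposes first:

# Physical team members
Set-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# ManagementOS vNICs that should carry jumbo frames
Set-NetAdapterAdvancedProperty -Name "vEthernet (LiveMigration)","vEthernet (Cluster)" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify
Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" | Select-Object Name, DisplayValue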

Summary

We have now deployed the example converged fabric we talked about at the beginning of this post. I hope you find this useful!
