Today I realized SQLIO can be a great tool for verifying full use of your iSCSI paths. I was simply testing IO / throughput on my lab SAN from a VM sitting on a Cluster Shared Volume and I noticed that my two iSCSI NICs on the Hyper-V hosts looked like this during the test:
Highlighted in red are my two iSCSI NICs. Clearly something is not balanced here: if MPIO is set up properly I should see an even distribution across these two NICs.
Setting up SQLIO
Here is a quick explanation on how I got SQLIO configured for my testing:
- Set up a VM for testing: Server 2012 VM sitting on a CSV, with an additional 250GB disk added for SQLIO test files
- Download and Install SQLIO from Microsoft
- Modify the SQLIO configuration file (C:\Program Files (x86)\SQLIO\param.txt)
Below is what my param.txt file looks like. This is a 50GB file so be careful! – In my case E: is a dedicated 250GB VHDX for SQLIO testing purposes which sits right on my Cluster Shared Volume.
e:\testfile.dat 2 0x0 51200 #d:\testfile.dat 2 0x0 100
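For reference, each non-commented line in param.txt follows the format described in SQLIO's readme.txt: the path to the test file, the number of threads to run against that file, a thread affinity mask, and the file size in MB (51200 MB is the 50GB file mentioned above):

```
<path to test file> <threads against file> <affinity mask> <file size in MB>
e:\testfile.dat 2 0x0 51200
```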
Below is the command I am executing to kick off SQLIO. I am running this from the SQLIO installation directory (C:\Program Files (x86)\SQLIO)
sqlio.exe -kW -s100 -fsequential -t4 -o4 -b64 -LS -F"param.txt"
To better understand the parameters I am using, and what else is available to you, go through the readme.txt file (C:\Program Files (x86)\SQLIO\readme.txt)
- -kW: use Write IO
- -s100: run for 100 seconds
- -fsequential: use sequential IO (rather than random)
- -t4: use 4 threads
- -o4: keep 4 outstanding IO requests per thread (queue depth)
- -b64: use 64KB IO block size
- -LS: capture latency statistics using the system timer
- -F"param.txt": read parameters from file
Note: There are many more options, this is just what I selected to use. Read the readme.txt file!
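As a quick sanity check on how these flags combine (my own back-of-the-envelope arithmetic, not something SQLIO reports this way): 4 threads each keeping 4 outstanding 64KB IOs means 1MB is in flight at any given moment, and SQLIO's reported IOs/sec figure can be converted to throughput by multiplying by the block size.

```python
# Rough arithmetic for the flags above -- not part of SQLIO itself.
threads = 4        # -t4
outstanding = 4    # -o4 (per thread)
block_kb = 64      # -b64

in_flight_kb = threads * outstanding * block_kb
print(in_flight_kb)  # 1024 KB (1 MB) in flight at any moment

# Converting SQLIO's "IOs/sec" figure to MB/sec; 2000 is a made-up sample value.
ios_per_sec = 2000
mb_per_sec = ios_per_sec * block_kb / 1024
print(mb_per_sec)  # 125.0 MB/sec
```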
Note: The first time you run the above command it is going to create e:\testfile.dat. This is going to take a really long time depending on your system, as it's creating a large flat file 50GB in size. – Once this file is created, every time you use SQLIO with the same parameters it will use the existing file.
Obviously my environment does not apply to everyone, in my case I have two iSCSI NICs and two iSCSI endpoints on my SAN. I would like each NIC to see each iSCSI endpoint (single fault domain – 1 VLAN) and utilize all paths evenly. Take into consideration what your actual environment looks like, I am just walking you through my testing scenario.
- Since I am running a Hyper-V cluster, I am pausing the affected node and letting all the VMs migrate off of it (draining)
- Next I will go into the iSCSI initiator and disconnect from my targets and remove the discovery portals (starting fresh)
- Run a Quick Connect to your target's discovery interface (I am testing on a Compellent, so I have a control port which will respond with all the available paths) – Don't click Connect! – Click Done instead!
Instead of just letting the iSCSI initiator do whatever it feels like when you simply click Connect, we are going to go into the advanced settings and specify which interface we want connected to each target, as well as enabling MPIO. Here's how:
- Click on your first target and click Properties
- Under “Properties” click “Add session” > select the box for Enable multi-path > click Advanced > Specify your local adapter, Initiator IP and Target portal IP like so:
Now you need to repeat this until you get each adapter connected to each target node address (this is how I wanted it in my case – you may have an environment or vendor where a 1-to-1 mapping is required or desirable). My end result will look something like this:
Get-IscsiSession

AuthenticationType    : NONE
InitiatorInstanceName : ROOT\ISCSIPRT