Technical notes on IP-based storage for EMC VNX and VMware 5.5

When it comes to IP-based storage, there are two major choices for use in VMware environments: NFS and iSCSI.

This article will discuss iSCSI options.

NFS is a perfectly valid design choice with very flexible deployment options. NFS with VMware is fast and flexible, in other words a solid choice, but NFS is for another time and a different discussion.

Why IP-based storage?

In a single word: SPEED and flexibility. Okay, two words. Core networking speeds are no longer limited to 10BASE-T or 100BASE-T; 10 Gigabit Ethernet is increasingly the standard. You already have an IP network, so why not leverage what is already installed?

I have seen greater interest in deploying iSCSI-based storage for VMware lately. Not just 1 Gb: 10 Gigabit Ethernet is gaining a foothold in datacenters as customers upgrade their core networking from 1 Gb to 10 Gb. The potential performance gains are a good thing, but a solid network design is fundamental to the deployment.

What I mean is end-to-end. What kind of switch are you using? Are you limited to a certain number of 10 Gb ports? How many switches do you have? Do you have a single point of failure? This is all the more critical because you will be asking your network to do double duty: instead of a discrete network dedicated to storage, as FC provides, you will now run storage traffic across your IP network. Two separate physical switches are ideal, but at a minimum use VLANs for logical separation. And let me go ahead and say it: “VLAN 0 (zero) is not really a scalable enterprise option!” That is a huge red flag that more network analysis and work will be required to deploy an IP-based storage solution.
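
As a quick illustration of the “at a minimum use VLANs” point, here is a minimal sketch of tagging a dedicated iSCSI port group on an ESXi standard vSwitch from the command line. The vSwitch name, port group name, and VLAN ID below are made-up example values, not recommendations; adjust them to your own design.

    # Create a dedicated port group for iSCSI traffic on an existing standard vSwitch
    # ("vSwitch1", "iSCSI-A" and VLAN 100 are example values for this sketch)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A

    # Tag the port group with the iSCSI VLAN so storage traffic is logically separated
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-A --vlan-id=100

    # Verify the port group and its VLAN assignment
    esxcli network vswitch standard portgroup list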

There are many considerations for a successful iSCSI implementation.

1) Gather customer requirements
2) Design
3) Verify
4) Implement / deploy
5) TEST (user acceptance testing)

Ideally, having two 10 Gb switches for redundancy is a good thing! Be careful in the selection of an enterprise-grade switch; I have seen horrible experiences when the proper features are not enabled (e.g., flow control can be a good thing!).

Software-based iSCSI initiators: don’t forget about Delayed ACK. See Sean’s excellent article, “Delayed ACK setting with VNX,” and read more VMware details about how to implement Delayed ACK in KB 1002598:

“Modify the delayed ACK setting on a discovery address (recommended):

  1. On a discovery address, click the Dynamic Discovery tab.
  2. Click the Server Address tab.
  3. Click Settings > Advanced.”
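
If you prefer the ESXi command line to the client steps quoted above, something along these lines should check and change the setting on the software iSCSI adapter. This is a sketch only: the adapter name vmhba33 is an example, and you should confirm the exact syntax for your ESXi build against KB 1002598 (a host reboot may be required before the change takes effect).

    # Show the current iSCSI adapter parameters, including DelayedAck
    # (vmhba33 is an example adapter name; use "esxcli iscsi adapter list" to find yours)
    esxcli iscsi adapter param get --adapter=vmhba33

    # Disable Delayed ACK on the software iSCSI adapter
    esxcli iscsi adapter param set --adapter=vmhba33 --key=DelayedAck --value=false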

Double, no, TRIPLE check the manufacturer's recommended CABLE TYPE and LENGTH. For example: does your 10GbE use fiber optic cable? Do you have the correct type? What about the SFP? And if you are not using fiber optic but choose to use TwinAx cabling, do you have the correct cable per the manufacturer's requirements?

For example, Meraki only makes and supports a 1-meter 10 Gb passive copper cable for their switches. Any normal Cisco business-class switch, by contrast, supports 1-, 3-, and 5-meter passive and 7- and 10-meter active cables.

Active cables are generally more expensive, but could be a requirement depending on your datacenter and/or colocation layout.

I try to approach the solution from both ends: storage to host, and the reverse, host to storage. Examine end-to-end dependencies. Even though the network may not be your area of responsibility, you will be reliant on network services, and any misconfiguration will impact your ability to meet the stated design requirements. You may not have bought, or had any input into, the existing infrastructure, but you will be affected by what is currently in use. Keyword: interoperability, that is, how each independent system will interact with the others, including upstream and downstream dependencies.

Other considerations:

For example, VMkernel port binding. The following is from VMware KB 2038869, “Considerations for iSCSI Port Binding”:

“Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet to allow multiple paths to an iSCSI array that broadcasts a single IP address. When using port binding, you must remember that:

  • Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.
  • All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.
  • All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
  • Currently, port binding does not support network routing.”

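To make the port binding idea concrete, here is a minimal sketch of binding two VMkernel ports to the software iSCSI adapter from the ESXi shell, the command-line equivalent of the network port binding configuration in the client. The names vmhba33, vmk1, and vmk2 are examples only, and both VMkernel ports must already satisfy the same-subnet / same-vSwitch rules quoted above.

    # Bind two VMkernel ports (each backed by its own physical uplink) to the software iSCSI adapter
    # (vmhba33, vmk1 and vmk2 are example names; substitute your own)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # Confirm the bindings and their compliance status
    esxcli iscsi networkportal list --adapter=vmhba33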

While there isn’t FC zoning for IP-based storage, there will be a requirement for subnetting and VLAN separation.

For VNX, here are some considerations for iSCSI design:

The following points are best practices for connecting iSCSI hosts to a CLARiiON or VNX:

  • iSCSI subnets must not overlap the management port or Service LAN (128.221.252.x).

  • For iSCSI, there is no zoning (unlike an FC SAN) so separate subnets are used to provide redundant paths to the iSCSI ports on the CLARiiON array. For iSCSI you should have mini-SANs (VLANs) with only one HBA per host in each VLAN with one port per storage processor (SP) (for example, A0 and  B0 in one VLAN, A1 and  B1 in another).  All connections from a single server to a single storage system must use the same interface type, either NIC or HBA, but not both.

  • It is a good practice to create a separate, isolated IP network/VLAN for the iSCSI subnet. This is because the iSCSI data is unencrypted and also having an iSCSI-only network makes troubleshooting easier.

  • If the host has only a single NIC/HBA, then it should connect to only one port per SP. If there are more NICs or HBAs in the host, then each NIC/HBA can connect to one port from SP A and one port from SP B. Connecting more SP ports to a single NIC can lead to discarded frames due to the NIC being overloaded.

  • In the iSCSI initiator, set a different “Source IP” value for each iSCSI connection to an SP.  In other words, make sure that each NIC IP address only appears twice in the host’s list of iSCSI Source IP addresses: once for a port on SP A and once for a port on SP B.

  • Make sure that the Storage Processor Management ports do not use the same subnets as the iSCSI ports – see EMC knowledgebase article emc235739, “Changing configuration on one iSCSI port may cause I/O interruption to all iSCSI ports on this storage processor (SP) if using IP addresses from same subnet,” for more information.

  • It is also a best practice to use a different IP switch for the second iSCSI port on each SP. This is to prevent the IP switch being a single point of failure. In this way, were one IP switch to completely fail, the host can failover (via PowerPath) to the paths on the other IP switch. In the same way, it would be advisable to use different switches for multiple IP connections in the host.

  • Gateways can be used, but the ideal configuration is for each HBA to be on the same subnet as one SP A port and one SP B port, without using the gateway.

For example, a typical configuration for the iSCSI ports on a CLARiiON, with two iSCSI ports per SP would be:

A0: 10.168.10.10 (Subnet mask 255.255.255.0)
A1: 10.168.11.10 (Subnet mask 255.255.255.0)
B0: 10.168.10.11 (Subnet mask 255.255.255.0)
B1: 10.168.11.11 (Subnet mask 255.255.255.0)

A host with two NICs should have its connections configured similar to the following in the iSCSI initiator to allow for load balancing and failover:

NIC1 (for example, 10.168.10.180) – SP A0 and SP B0 iSCSI connections
NIC2 (for example, 10.168.11.180) – SP A1 and SP B1 iSCSI connections
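
To tie the host side to the example addressing above, a sketch of pointing the ESXi software iSCSI adapter at the array might look like the following. Again, vmhba33 is an example adapter name, and one send-target address per subnet is used here on the assumption that dynamic discovery will report the remaining ports.

    # Add one send-target (dynamic discovery) address per iSCSI subnet, using the example IPs above
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.168.10.10:3260
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.168.11.10:3260

    # Rescan the adapter so the new targets and paths are discovered
    esxcli storage core adapter rescan --adapter=vmhba33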

Similarly, if there were four iSCSI ports per SP, four subnets would be used. Half of the hosts with two HBAs would then use the first two subnets, and the rest would use the other two.

The management ports should also not overlap the iSCSI ports. As the iSCSI network is normally separated from the LAN used to manage the SPs, this is rarely an issue, but to follow the example iSCSI addresses above, the other IP addresses used by the array could be, for example:

VNX Control Station 1: 10.20.30.1
VNX Control Station 2: 10.20.30.2
SP A management IP address: 10.20.30.3
SP B management IP address: 10.20.30.4


The High Availability Validation Tool will log an HAVT warning if it detects that a host is connected via a single iSCSI initiator. Even if that initiator has a path to both SPs, it is still an HA risk from a host connectivity point of view. You will also see this warning if using unlicensed PowerPath.

Caution! Do not use the IP address range 192.168.1.x, because this range is used by the serial port PPP connection.

Oh, and I haven’t even discussed VMware storage path policy, as that really depends on your array. However, VNX uses ALUA (failover mode 4), and Round Robin works really well if you don’t have or don’t want PowerPath as an option!
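
For what it is worth, here is a minimal sketch of setting Round Robin with the native multipathing plugin (NMP), either per LUN or as the default for the VNX/CLARiiON ALUA SATP. The naa identifier is a placeholder, and if you run PowerPath/VE it claims the devices instead, so none of this applies.

    # Set Round Robin on a single VNX LUN (replace the naa identifier with your own device ID)
    esxcli storage nmp device set --device=naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

    # Or make Round Robin the default path policy for devices claimed by the VNX/CLARiiON ALUA SATP
    esxcli storage nmp satp set --satp=VMW_SATP_ALUA_CX --default-psp=VMW_PSP_RR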

References:

VMware Storage Guide 5.5 (PDF)

VMware Storage Guide 6.0 (PDF)

“Best Practices for Running VMware vSphere on iSCSI” (VMware Technical Marketing Documentation, v2.0A)

“Using VNX Storage with VMware vSphere” EMC TechBook

