Change Control Station IP for VNX

The installation of a VNX2

The first day is the toughest and the easiest. Like baking a cake, you need to make sure all the ingredients are there and ready to go. You verify that what was ordered is installed and in working order. Sounds simple enough?

As it turns out, the initial VNX “Rack and Stack” was done correctly.

  • All cables were plugged in correctly.
  • No amber lights on the system.
  • The DAEs (Disk Array Enclosures) were recognized.
  • No faults in Unisphere.

Ok. Good to go? No. Why? Mr. Customer provided a duplicate IP for one of the Control Stations. The correct IP is on the same subnet with the same mask, but the address itself needs to be changed.

Here are the steps:

  1. Log onto Unisphere and confirm the IP information for the Control Stations.
  2. Log onto the Control Station via SSH.

Use root/nasadmin (the default credentials).
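For example, the connection might look like this (a sketch only: the Control Station IP is a placeholder, and some sites disable direct root SSH, in which case log in as nasadmin and then switch to root):

    ssh root@10.20.30.1
    # or, if direct root login is disabled:
    ssh nasadmin@10.20.30.1
    su -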

3. Verify the status of the Control Stations. You can only make changes from the primary, not the standby.

In this case the primary Control Station was IP’d correctly. I needed to change the secondary Control Station, the standby.

/nasmcd/getreason   <– run this command as root

Here is what you will see:

output:
10 – slot_0 primary control station
11 – slot_1 secondary control station
 5 – slot_2 contacted
 5 – slot_3 contacted
4. Fail over the Control Station with this command:
    /nasmcd/sbin/cs_standby -failover

The failover reboots the primary and fails over to the secondary Control Station; expect it to take about 5 minutes.
5. Run this: /nasmcd/getreason
output:
11 – slot_0 secondary control station
10 – slot_1 primary control station
 5 – slot_2 contacted
 5 – slot_3 contacted

6. Log into Unisphere

Normally you log in to the VNX using the primary Control Station IP. However, since the system is in failover mode, use the secondary IP. In this case I am logging in with the duplicate IP and will change it to the correct IP using the Unisphere GUI.

[Screenshot: Unisphere login screen]

Log into the Unisphere GUI as root. Make sure you choose the scope “Local,” not “Global.”

Navigate to the “Control Station Properties” tab.

[Screenshot: Control Station Properties tab]

Change the IP of the Control Station

[Screenshot: changing the Control Station IP address]

 

Click Apply.

In Confirm Action, click OK.

WARNING:

Modifying the Control Station hostname, IP address, subnet mask, or gateway may disrupt the Unisphere software connection and any other client access to the Control Station. It may be necessary to reconnect to continue administrative activities. If you make a mistake changing the network, the Control Station may no longer be reachable remotely.

7. SSH to the primary Control Station and fail back. Verify that the primary Control Station is back in slot 0:

#  /nasmcd/getreason
10 – slot_0 primary control station
11 – slot_1 secondary control station
 5 – slot_2 contacted
 5 – slot_3 contacted

That is it.

 

Remember: when you log in to Unisphere, you connect via the primary Control Station IP; if you are failed over, you connect via the secondary Control Station’s IP.

 

Thanks for reading! If you have a more efficient way, please share. Yes, you can SSH in and do all of this via the CLI. 😉

 

 

 


EMC vVNX – 3.1.2 FULL-FEATURED NO TIME LIMITS



What is it?

“The vVNX Community Edition (also referred to as vVNX) provides a flexible storage alternative for test and development environments.”

SDS = Software Defined Storage!!

“Download a full-featured version of vVNX available for non-production use without any time limits.”

File type: *.ova
File size: 2.1 GB
Release date: 5/4/2015
Version: 3.1.2

Download here! https://www.emc.com/products-solutions/trial-software-download/vvnx.htm

Architecture and functionality:

White Paper details: vVNX Community Edition https://community.emc.com/docs/DOC-44552

Installation Guide: https://community.emc.com/docs/DOC-42029
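If you prefer to deploy the OVA from the command line instead of the vSphere Client, VMware’s ovftool can do it. This is only a rough sketch: the VM name, datastore, network label, and vCenter inventory path below are placeholders, and the vVNX appliance may expect more than one network mapping, so check the Installation Guide above for the exact settings.

    # hypothetical ovftool deployment of the vVNX OVA (all names and paths are examples)
    ovftool --acceptAllEulas \
      --name=vVNX01 \
      --datastore=datastore1 \
      --network="VM Network" \
      ./vVNX-3.1.2.ova \
      'vi://administrator@vcenter.example.local/Datacenter/host/Cluster01/'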

Prerequisites

Before you can obtain and activate the trial vVNX license, you must have completed the following tasks:

  1. Registered to create a product support account.
  2. Downloaded the vVNX software.
  3. Installed vVNX.
  4. Launched Unisphere.
The Configuration Wizard runs when you log in to Unisphere for the first time.
Procedure
  1. Note the vVNX system UUID provided on the License dialog in the Configuration Wizard.
  2. Go to the Electronic License Management System (ELMS) download page at www.emc.com/auth/elmeval.htm
  3. Click Obtain evaluation license for vVNX.
  4. Enter the vVNX system UUID and select vVNX as the product type. 
  5. Click Download to save the license to your local system.
    Note: An email confirming that you have successfully obtained the evaluation license is sent to the email address you provided when you registered.
  6. Return to the License dialog in the Configuration Wizard and click Install License File.
  7. Locate the license file, select it, and click Upload to install and activate it. 

Note: Do not repeat this procedure once you have saved the license and received the confirmation email. If you try to enter the vVNX system UUID again, you will receive a “Duplicate UUID” error message.

Technical notes on IP-based storage for EMC VNX and VMware 5.5

When it comes to IP-based storage, there are two major choices for use in VMware environments: NFS and iSCSI.

This article will discuss iSCSI options.

NFS is a perfectly valid design choice with very flexible deployment options. NFS with VMware is fast and flexible; in other words, a solid choice. But NFS is for another time and a different discussion.

Why IP-based storage?

In a single word: SPEED. And flexibility. Okay, two words. Core networking is no longer limited to 10BASE-T or 100BASE-T; 10 Gigabit Ethernet is closer to the standard now. You already have an IP network, so why not leverage what is installed?

I have seen greater interest in deploying iSCSI-based storage for VMware lately. Not just 1 Gb: 10 Gigabit Ethernet is gaining a foothold in datacenters as customers upgrade their core networking from 1 Gb to 10 Gb. The potential performance gains are a good thing, but a solid network design is fundamental to the deployment.

What I mean is end-to-end. What kind of switch are you using? Are you limited to a certain number of 10 Gb ports? How many switches do you have? Do you have a single point of failure? This matters more here because your network will be doing double duty: instead of the discrete storage network that FC provides, you will now run storage traffic across your IP network. Two separate physical switches are ideal, but at a minimum use VLANs for logical separation. And let me go ahead and say it: VLAN 0 (zero) is not a scalable enterprise option! That is a huge red flag that more network analysis and work will be required to deploy an IP-based storage solution.

There are many considerations for a successful iSCSI implementation.

1) Gather customer requirements, 2) design, 3) verify, 4) implement/deploy, 5) TEST (user acceptance testing).

Ideally, having two 10 Gb switches for redundancy is a good thing! Be careful in the selection of an enterprise-grade switch; I have seen horrible results when the proper features are not enabled (e.g., flow control can be a good thing!).

Software-based iSCSI initiators: don’t forget about Delayed ACK. See Sean’s excellent article here: Delayed ACK setting with VNX. Read more VMware details about how to change Delayed ACK in KB 1002598:

“Modify the delayed ACK setting on a discovery address (recommended):

  1. On a discovery address, click the Dynamic Discovery tab.
  2. Click the Server Address tab.
  3. Click Settings > Advanced.”
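If you want to double-check the current Delayed ACK state from the ESXi shell, one commonly cited method is to dump the software iSCSI initiator database and grep for the setting. This is an assumption on my part (it is not in the KB excerpt above), so verify the exact procedure in KB 1002598 for your build:

    # dump the software iSCSI initiator database and look for the delayed ACK flag
    vmkiscsid --dump-db | grep -i delayed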

Double, no, TRIPLE check the manufacturer-recommended CABLE TYPE and LENGTH. For example: does your 10GbE use fiber optic cable? Do you have the correct type? What about the SFPs? And if you are not using fiber optic but choose Twinax cabling, do you have the correct cable per the manufacturer’s requirements?

For example, Meraki only makes and supports a 1-meter 10 Gb passive copper cable for its switches. By contrast, any typical Cisco business-class switch supports 1, 3, and 5 meter passive and 7 and 10 meter active cables.

Active cables are generally more expensive, but may be a requirement depending on your datacenter and/or colocation layout.

I try to approach the solution from both ends: storage to host, and the reverse, host to storage. Examine the end-to-end dependencies. Even if the network isn’t your area of responsibility, you will rely on network services, and any misconfiguration will affect your ability to meet the stated design requirements. You may not have bought, or had any input into, the existing infrastructure, but you will be impacted by what is currently in use. Keyword: interoperability, how each independent system interacts with the others, upstream and downstream dependencies.

Other considerations:

For example, VMkernel port binding. The list and diagram below are from VMware KB 2038869, “Considerations for iSCSI Port Binding”:

“Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet to allow multiple paths to an iSCSI array that broadcasts a single IP address. When using port binding, you must remember that:

  • Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.
  • All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.
  • All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
  • Currently, port binding does not support network routing.”

[Image: iSCSI port binding diagram from VMware KB 2038869]
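From the ESXi shell, port binding comes down to attaching the iSCSI VMkernel ports to the software iSCSI adapter. A minimal sketch, assuming the software adapter is vmhba33 and the iSCSI VMkernel ports are vmk1 and vmk2 (your names will differ):

    # bind each iSCSI VMkernel port to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    # verify the bindings
    esxcli iscsi networkportal list --adapter=vmhba33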

While there isn’t FC zoning for IP-based storage, there will be a requirement for subnetting and VLAN separation.

For VNX, here are some design considerations for iSCSI:

The following points are best practices for connecting iSCSI hosts to a CLARiiON or VNX:

  • iSCSI subnets must not overlap the management port or Service LAN (128.221.252.x).

  • For iSCSI, there is no zoning (unlike an FC SAN) so separate subnets are used to provide redundant paths to the iSCSI ports on the CLARiiON array. For iSCSI you should have mini-SANs (VLANs) with only one HBA per host in each VLAN with one port per storage processor (SP) (for example, A0 and  B0 in one VLAN, A1 and  B1 in another).  All connections from a single server to a single storage system must use the same interface type, either NIC or HBA, but not both.

  • It is a good practice to create a separate, isolated IP network/VLAN for the iSCSI subnet. This is because the iSCSI data is unencrypted and also having an iSCSI-only network makes troubleshooting easier.

  • If the host has only a single NIC/HBA, then it should connect to only one port per SP. If there are more NICs or HBAs in the host, then each NIC/HBA can connect to one port from SP A and one port from SP B. Connecting more SP ports to a single NIC can lead to discarded frames due to the NIC being overloaded.

  • In the iSCSI initiator, set a different “Source IP” value for each iSCSI connection to an SP.  In other words, make sure that each NIC IP address only appears twice in the host’s list of iSCSI Source IP addresses: once for a port on SP A and once for a port on SP B.

  • Make sure that the Storage Processor management ports do not use the same subnets as the iSCSI ports – see EMC knowledgebase article emc235739, “Changing configuration on one iSCSI port may cause I/O interruption to all iSCSI ports on this storage processor (SP) if using IP addresses from same subnet,” for more information.

  • It is also a best practice to use a different IP switch for the second iSCSI port on each SP. This is to prevent the IP switch being a single point of failure. In this way, were one IP switch to completely fail, the host can failover (via PowerPath) to the paths on the other IP switch. In the same way, it would be advisable to use different switches for multiple IP connections in the host.

  • Gateways can be used, but the ideal configuration is for each HBA to be on the same subnet as one SP A port and one SP B port, without using the gateway.

For example, a typical configuration for the iSCSI ports on a CLARiiON, with two iSCSI ports per SP would be:

A0: 10.168.10.10 (Subnet mask 255.255.255.0)
A1: 10.168.11.10 (Subnet mask 255.255.255.0)
B0: 10.168.10.11 (Subnet mask 255.255.255.0)
B1: 10.168.11.11 (Subnet mask 255.255.255.0)

A host with two NICs should have its connections configured similar to the following in the iSCSI initiator to allow for load balancing and failover:

NIC1 (for example, 10.168.10.180) – SP A0 and SP B0 iSCSI connections
NIC2 (for example, 10.168.11.180) – SP A1 and SP B1 iSCSI connections
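On the ESXi side, the matching send-target discovery entries might look like the following. Again a sketch: vmhba33 is an assumed software iSCSI adapter name, and the target IPs are taken from the example addresses above.

    # point dynamic discovery at one iSCSI target port per subnet
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.168.10.10:3260
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.168.11.10:3260
    # rescan to pick up the new paths
    esxcli storage core adapter rescan --adapter=vmhba33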

Similarly, if there were four iSCSI ports per SP, four subnets would be used. Half of the hosts with two HBAs would then use the first two subnets, and the rest would use the other two.

The management ports should also not overlap the iSCSI ports. As the iSCSI network is normally separated from the LAN used to manage the SPs, this is rarely an issue, but to follow the example iSCSI addresses above, the other IPs used by the array could be, for example:

VNX Control Station 1: 10.20.30.1
VNX Control Station 2: 10.20.30.2
SP A management IP address: 10.20.30.3
SP B management IP address: 10.20.30.4


The High Availability Validation Tool will log an HAVT warning if it detects that a host is connected via a single iSCSI initiator. Even if that initiator has a path to both SPs, it is still an HA risk from a host-connectivity point of view. You will also see this warning if using unlicensed PowerPath.

Caution! Do not use the IP address range 192.168.1.x, because this is used by the serial port PPP connection.

Oh, I haven’t even discussed the VMware storage path policy, as that really depends on your array. However, the VNX is ALUA (failover mode 4), and Round Robin works really well if you don’t have, or don’t want, PowerPath as an option!
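For what it’s worth, here is a quick sketch of setting Round Robin from the ESXi shell. The naa device ID is a placeholder, and changing the default PSP for a SATP affects every device claimed by it, so review your claim rules first.

    # set Round Robin on a single VNX device
    esxcli storage nmp device set --device=naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
    # or make Round Robin the default PSP for the ALUA CX/VNX SATP on this host
    esxcli storage nmp satp set --satp=VMW_SATP_ALUA_CX --default-psp=VMW_PSP_RR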

References:

VMware Storage Guide 5.5 (PDF)

VMware Storage Guide 6.0 (PDF)

“Best Practices for Running VMware vSphere on iSCSI” (TECHNICAL MARKETING DOCUMENTATION v 2.0A)

“Using VNX Storage with VMware vSphere” EMC TechBook

VNX and NFSv4

Just a note to self (actually, for when discussing NFS with your customer):

If you are using VNX, make sure you use OE 7.1 or greater. Why?

NFSv4 support is there by default; it’s just not turned on!!

$ server_nfs <movername> -v4 -service -start

where <movername> = the name of the Data Mover
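A concrete example, using server_2 as the Data Mover name (yours may differ):

    $ server_nfs server_2 -v4 -service -start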

There are other considerations when implementing NFSv4:

  • NFSv4 Domain
  • Access Policy: Mixed is recommended
  • Delegation Mode off
  • You can even restrict access to NFSv4 only, as normally a file system is exported to all versions of the NFS protocol

Please see:

EMC White Paper: h10949-configuring-nfsv4-vnx-wp.pdf

HLU ID.. aka Host LUN ID.. what?

Putting on the storage admin hat, you have to remember the little details that a good storage admin knows instinctively. I was assisting a customer with allocating new LUNs to their file systems (CIFS/NFS)… I gave general high-level instructions, the customer hit some errors, and here are the lessons learned.

The Host LUN ID (HLU) is important.
For example, a boot LUN needs to have HLU = 0.
That is important when you boot from SAN.
Each ESXi host must have a separate storage group for its boot LUN, as there can only be one HLU = 0 per storage group.

Also: HLU IDs 6 to 15 are reserved for system use only.

Note that this is different from the LUN ID.

EMC tip:
Adding a LUN to a storage group is the opportunity (the only time, in fact) you have to set the HLU ID.

BUT if you forgot, or didn't pay attention, when adding LUNs to your storage group...
(This is disruptive. You will lose all data on the LUN, as it gets a new HLU ID.)

The remedy (see the sketch below):
1. Remove the LUNs from the storage group.
2. Re-add the LUNs to the storage group, remembering to assign an HLU ID of 16 or greater at this time.
3. I try to match the LUN ID to the HLU ID if possible.
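Here is roughly what that looks like with naviseccli. This is a sketch under assumptions, not verified syntax for your Block OE release: the SP IP, storage group name, and HLU/ALU numbers are all placeholders, and -o suppresses the confirmation prompt.

    # remove the LUN (ALU 25, currently presented at HLU 5) from the storage group
    naviseccli -h 10.20.30.3 storagegroup -removehlu -gname ESX_SG01 -hlu 5 -o
    # add it back with an HLU of 16 or greater, matching the HLU to the ALU (LUN ID) where possible
    naviseccli -h 10.20.30.3 storagegroup -addhlu -gname ESX_SG01 -hlu 25 -alu 25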

Possible errors if the HLU ID is < 16:
Scan failure message = Message ID 1360601492
Scan from the GUI (Unisphere), or
from the CLI: nas_diskmark -m -a (log in to the Control Station as nasadmin and run it there)

Also, when dealing with VNX file systems: do not remove a backend LUN without first removing its dvols. There is a binding between the two, and, well, let's just say your storage pool for file really doesn't like it.

Also...
clarsas_archive, what the heck??

What is it?
"This is a system-defined pool. If you create a RAID group at the backend with RAID 5 type and add the LUNs from that RG to the ~fs group, it will automatically create a file pool by the name clarsas_archive."

Well, I don't want it around, so remove the dvols and then remove the supporting backend LUNs.
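A rough sketch of that cleanup order from the Control Station. The nas_pool/nas_disk options shown are from memory and d25 is a placeholder dvol name, so verify against the documentation for your OE release before deleting anything:

    nas_pool -list                 # find the clarsas_archive pool and note its member volumes
    nas_disk -list                 # identify the dvols that back that pool
    nas_disk -delete d25 -perm     # remove the dvol first (d25 is a placeholder)
    # only after the dvols are gone, remove/unbind the backing LUN on the block side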

Just a few VNX storage tips. I hope you find them helpful. Be social, share.