HLU ID.. aka Host LUN ID.. what?

Putting on the storage admin hat… you have to remember the little details that a good storage admin knows instinctively. I was assisting a customer with allocating new LUNs to their file system (CIFS/NFS), and while I gave general high-level instruction, the customer hit some errors. Here are the lessons learned.

Host LUN ID.
It is important.
For example, a boot LUN needs to have HLU = 0.
That is important when you boot from SAN.
It also means each ESXi host must have its own storage group for its boot LUN, as there can only be one HLU = 0 per storage group.
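For reference, the per-host boot storage group can be set up from the CLI along these lines (a sketch only — the SP address, group name, host name, and ALU number below are placeholders I made up; check your own values first):

```shell
# Hypothetical sketch: one storage group per ESXi host, boot LUN at HLU 0.
# SP_A_IP, ESXi01_Boot, esxi01, and ALU 25 are all placeholder values.
naviseccli -h SP_A_IP storagegroup -create -gname ESXi01_Boot
naviseccli -h SP_A_IP storagegroup -addhlu -gname ESXi01_Boot -hlu 0 -alu 25
naviseccli -h SP_A_IP storagegroup -connecthost -host esxi01 -gname ESXi01_Boot
```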

Also: HLU IDs 6 to 15 are reserved for system use only.

Now, this is different from the LUN ID on the array side.

EMC tip:
When adding a LUN to a storage group, that is the opportunity (in fact, the only time) you have to set the HLU ID.

BUT if you forgot and didn't pay attention when adding LUNs to your storage group…
(This is disruptive. You will lose all data on the LUN, as it gets a new HLU ID.)

1. Remove the LUNs from the storage group.
2. Re-add the LUNs to the storage group; remember to assign an HLU ID of 16 or higher at this time.
3. I try to match the LUN ID with the HLU ID if possible.
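The steps above can be sketched from the CLI (again a sketch, not gospel — the SP address, group name, and LUN numbers are placeholders; the -o flag suppresses the confirmation prompt):

```shell
# Hypothetical sketch of the remove/re-add dance; all names and IDs are made up.
# Step 1: remove the LUN from the storage group (disruptive!).
naviseccli -h SP_A_IP storagegroup -removehlu -gname FileSG -hlu 5 -o
# Step 2: re-add it with an HLU of 16 or higher, matching the ALU where possible.
naviseccli -h SP_A_IP storagegroup -addhlu -gname FileSG -hlu 17 -alu 17
```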

A possible error when the HLU ID is < 16:
Rescan failure, Message ID 1360601492.
Rescan from the GUI (Unisphere) or
from the CLI: nas_diskmark -m -a (log in to the Control Station as nasadmin and run it there).

Also… when dealing with VNX file systems, do not remove a backend LUN without first removing its dvols. There is some binding that happens between the two, and, well, let's just say your storage pool for file really doesn't like it.
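The safe order, sketched from the Control Station (the dvol name d7 below is a placeholder; confirm which dvol actually maps to your LUN before deleting anything):

```shell
# Hypothetical sketch: remove the dvol before touching the backend LUN.
nas_disk -list                 # identify the dvol backed by the LUN (inuse=n)
nas_disk -delete d7 -perm      # delete the dvol first (d7 is a placeholder)
# Only after this, remove the LUN from the storage group / delete it in Unisphere.
```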

clarsas_archive what the heck??

What is it?
"This a system definded pool. If you create a RAID group at the backend with RAID 5 type and add the LUNs from that RG to ~fs group. It will automatically create a file pool by name clarsas_archive"

Well, I don't want it around… so remove the dvols and then remove the supporting backend LUNs.

Just a few VNX storage tips. Hope you find it helpful.. be social, share.


The answer is… VMware EVO:RAIL?? VMworld 2014

VMworld 2014 has just wrapped up. WOW!
There is SO MUCH to share and so much to learn.
One of the biggest things to make public is EVO:RAIL.

That is just one of many.. other items to be discussed later…


What is EVO:RAIL
An overview demo from Duncan Epping.


EVO:RAIL is considered an HCIA: a Hyper-Converged Infrastructure Appliance.

Some points to consider: "With an EVO:RAIL system, within 15 minutes of power-on you can start provisioning VMs." WOW. Think about the decrease in complexity for the implementation. That is not just converged but HYPER-CONVERGED.

Like the Wikipedia definition states, this is a single optimized computing package. You have virtualized compute and virtualized storage, all underneath an optimized management layer. vCenter and Log Insight are the backbone, but you get an optimized HTML5 interface.

These are the core fundamentals of the SDDC: the Software-Defined Data Center.

EVO:RAIL Management
EVO:RAIL enables deployment, configuration, and management through a new, intuitive HTML5-based user interface. EVO:RAIL provides non-disruptive updates for VMware software with zero downtime and automatic scale-out of EVO:RAIL appliances.

Software components of EVO:RAIL
• EVO:RAIL Deployment, Configuration, and Management
• VMware vSphere® Enterprise Plus, including ESXi for compute
• Virtual SAN for storage
• vCenter Server™
• vCenter Log Insight™

What does one EVO:RAIL provide?
Virtual Machine Density
• EVO:RAIL is sized to run approximately 100 average-sized, general-purpose, data center VMs. Actual capacity varies by VM size and workload. There are no restrictions on application type. EVO:RAIL supports any application that a customer would run on vSphere.

(General-purpose VM profile: 2 vCPU, 4GB vMEM, 60GB of vDisk, with redundancy)
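Some back-of-the-envelope math on that profile (the 100-VM count and per-VM sizing come from the quote above; treating "with redundancy" as one extra mirror copy, i.e. FTT=1 on Virtual SAN, is my assumption):

```shell
# Aggregate demand for 100 VMs of the quoted general-purpose profile.
vms=100; vcpu=2; vmem_gb=4; vdisk_gb=60
copies=2  # assumption: redundancy = one extra mirror copy (FTT=1)
echo "vCPUs:  $((vms * vcpu))"                  # 200 vCPUs
echo "Memory: $((vms * vmem_gb)) GB"            # 400 GB
echo "Disk:   $((vms * vdisk_gb * copies)) GB"  # 12000 GB raw with mirroring
```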

But, but .. I need more power!!
There will be a follow-up to EVO:RAIL… EVO:RACK!!!

Highly Resilient by Design

A highly resilient HCIA design starts with four independent server nodes within a 2U footprint from qualified EVO:RAIL partners. Each node runs vSphere and Virtual SAN, configured as a single vSphere cluster with a single distributed Virtual SAN datastore. Add vMotion, HA, and DRS into the mix for additional resiliency, and you have all the key ingredients for zero VM downtime during planned maintenance or during a disk or host failure.

Some key notes from Duncan’s post: http://blogs.vmware.com/tribalknowledge/2014/08/vmworld-2014-vmware-evorail-building-block-software-defined-data-center.html
Customer Choice

EVO:RAIL is delivered as a fully integrated HCIA offering via a single SKU to the customer. There are two important things to note:

• EVO:RAIL is not a reference architecture.
• A customer cannot purchase the EVO:RAIL software standalone and attempt to build their own HCIA, whether on an EVO:RAIL partner's qualified and optimized hardware or on non-qualified server hardware.

This is just the beginning of many conversations. Some questions to follow up on would be to define the use case for EVO:RAIL:
How does it integrate into a brownfield environment? How do you scale out this solution? Can you use other storage in addition to VSAN? What are the licensing costs?

It’s alive!!! Marvin lives…


What is Converged Infrastructure?

From Wikipedia:
“Converged infrastructure operates by grouping multiple information technology (IT) components into a single, optimized computing package. Components of a converged infrastructure may include servers, data-storage devices, networking equipment and software for IT infrastructure management, automation and orchestration.”