Is your IT infrastructure an Oil Tanker?

I had the opportunity to attend an EVO:RAIL VSPEX BLUE boot camp for channel partners, sponsored by Avnet, EMC, VMware, and Brocade.

In a nutshell, it was all-you-can-drink information from a firehose about EVO:RAIL, specifically the EMC VSPEX BLUE.

EVO:RAIL is a new beast of an animal. It is a different breed. No, not in any single dimension you measure. The combination of technology presented is a synergy. Definition: synergy is the creation of a whole that is greater than the simple sum of its parts.

Yes, you can get the form factor for compute separately. You can also do the same for VSAN, ESXi/vSphere 5.5, and networking. But you cannot get what the entire VSPEX BLUE offering of EVO:RAIL provides TOGETHER.

I get ahead of myself.

You have to have perspective to understand where we are today. To me that means: if you don't know where you came from, you cannot know where you are today or where you will be tomorrow. It is all relative.

EVO:RAIL is a clustered system. It is a datacenter in a 2U form factor. Not just compute, but a modern hyper-converged solution.


Sure. Another IT buzzword. Is it just talk? I would say no.

Everyone is talking about it but only a few are "doing it"… more often than not, the IT industry is abuzz with the new technology of the day.
In this case it is hyper-convergence.
To put it simply: storage is local to compute. Wait a minute, how is that different from 15 years ago, when the client-server model was the norm and storage was already local to the compute? (Compute meaning the processing of the server CPU.) Well, lots has changed.
How is it better? A snapshot of the currently available technology:
  • Compute is way, way faster and more dense.
  • Networks are 10 Gigabit Ethernet; Fast Ethernet (100BASE-T) and FDDI optical rings are no longer the only viable choices.
  • Storage is IOPS crazy.
  • And add to that the agility of VMware Virtualization!
So look at the speed of compute, network and storage. Technology will continue to get faster and better (lower cost for the return on investment).
But what really hasn't changed much is the complexity of the solution. There are many moving parts, but how do you deliver your solution today to support legacy applications and still have the agility to respond to changing business objectives?
Remember the old phrase about "turning an oil tanker on a dime"? Is IT today an oil tanker? Does your private cloud have the agility your business requires? How will the current toolset respond? How will your staff? Oh, what was that? "IT staffing has been reduced, and that is a trend that hasn't gone away."
IT organizations are forced to do more with less… cue the viable alternatives…
Do you outsource? It is the simplest short-term gain, but not always the best long-term investment.
I view EVO:RAIL as a datacenter in a box. EVO:RAIL VSPEX BLUE is not the cluster-in-a-box solution of old, but much, much more.
This reminds me of its forerunner, the MSCS cluster in a box. Like I said: perspective. Where has the IT industry been, relative to where it is today? I recall back in the day, circa 2000, when cluster in a box was viable (and the best solution at the time). That was only a high-availability MSCS cluster with one node active at a time.
Yes, I deployed and supported a few of these solutions. It was cutting edge back then.
Adjusted for inflation:
$1,000.00 USD in 2000 is $1,395.20 in 2015 dollars,
so the CL1850's $27,864.00 (2000 dollars)
is roughly $38,569 in 2015 dollars.
But what did you get for your IT dollar?
Each "node" in the CL1850:
RAM:
  • 1 GB RAM (128-MB 100-MHz registered ECC SDRAM memory)
Compute:
  • 2 Pentium III processors @ 550MHz
Network:
  • 3 NICs (one dedicated to internode communication), 100BASE-T
Storage:
  • 2 RAID controllers
Shared Storage System:
  • 218.4 GB (6 x 36.4-GB 1″ Ultra3 10,000-RPM drives)
Form Factor:
  • 10U
Fast forward to 2015… what does EVO:RAIL VSPEX BLUE provide?
Each EMC VSPEX BLUE appliance includes:
AKA WHAT’S UNDER THE HOOD??
FORM FACTOR
  • 4 nodes of integrated compute and storage, including flash (SSD) and HDD — 2U
  • VMware EVO:RAIL software including VMware Virtual SAN (VSAN), Log Insight
COMPUTE
  • 12 cores @ 2.1GHz per node
RAM
  • 128 or 192 GB memory per node
NETWORKING
  • Choice of 10 Gigabit Ethernet network connectivity: SFP+ or RJ45
STORAGE
  • Drives: up to 16 (four per node)
  • Drives per node: 1 x 2.5” SSD, 3 x 2.5” HDD
  • Drive capacities
    • HDD: 1.2TB (max total 14.4TB)
    • SSD for caching: 400GB (max total 1.6TB)
  • Total raw capacity: 14.4TB
Additional Software Solutions – Exclusive to VSPEX BLUE
  • EMC VSPEX BLUE Manager providing a system health dashboard and support portal
  • EMC CloudArray to expand storage capacity into the cloud (license for 1 TB cache and 10 TB cloud storage included)
  • VMware vSphere Data Protection Advanced (VDPA) for centralized backup and recovery
  • EMC RecoverPoint for Virtual Machines for continuous data protection of VMs. Includes licenses for 15 VMs.

The question still remains. Is your IT infrastructure an oil tanker? Or can you turn on a dime?

How does your IT respond to the ever-changing business demands?

Is EVO:RAIL for everyone? There are a lot of use cases that EVO:RAIL VSPEX BLUE will work perfectly for. But no, it isn't for everyone. BUT what it does is usher in a new, different manner of consuming IT. You will no longer have to provision your datacenter in the same piecemeal fashion. You can commoditize that to a pre-validated solution that is supported by a single vendor.

[Graphic: VSPEX BLUE solution overview]

This graphic has a lot more details than this single blog post can explain! I will try to explain each section that helps make VSPEX BLUE a different, redefined EVO:RAIL solution.

EMC VSPEX BLUE MANAGER

– VSPEX BLUE Manager users can conveniently access electronic services, such as EMC knowledge base articles, the VSPEX Community for online and real-time information, and EMC VSPEX BLUE best practices.

EMC VSPEX BLUE WITH ESRS

– ESRS is a two-way, secure remote connection between your EMC environment and EMC Customer Service that enables remote monitoring, diagnosis, and repair, assuring availability and optimization of your EMC products.

EMC VSPEX BLUE SUPPORT


EMC VSPEX BLUE WITH RECOVERPOINT FOR VMs

– Protects at VM-level granularity, with operational and disaster recovery to any point in time.

EMC VSPEX BLUE WITH VDPA AND DATA DOMAIN

– Built-in deduplicated backups, powered by EMC Avamar.

EMC VSPEX BLUE & CLOUD ARRAY

– Block and file, up to 10 TB free. "EMC CloudArray software provides scalable cloud-based storage with your choice of many leading cloud providers, enabling limitless network-attached storage, offsite backup and disaster recovery, and the ability to support both block and file simply."

VSPEX BLUE MARKET

– Built into the VSPEX BLUE Manager dashboard. This unique feature enables customers to browse complementary EMC and 3rd-party products that easily extend the capabilities of the appliance.

 ===
Again, there are a lot of takeaways for the VSPEX BLUE EVO:RAIL solution. Contact me if you would like more information.

Part 2/2: Troubleshooting VSAN errors. VSAN misconfiguration??

The VSAN cluster isn't 100% healthy; there are problems seeing all of the storage.
Symptoms:
1. Cannot write to the VSAN Datastore
2. Correct amount of capacity isn’t present
How to resolve with ESXCLI
Log into each host via SSH. You do remember the root password, right?? This example is a three-node VSAN cluster.
NODE 1:
 
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-31T01:11:33Z
Local Node UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Local Node State: MASTER
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Backup UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 1
Sub-Cluster Member UUIDs: 551374b5-03f9-7bd6-6257-a0369f58b8e8, 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster Membership UUID: d6da1955-e2f8-38eb-d7f0-a0369f58b8e8
NODE 2:
 
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-31T00:12:02Z
Local Node UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4
Local Node State: MASTER  << a different master for a different UUID
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4  <<< that is a different UUID!
Sub-Cluster Backup UUID:
Sub-Cluster UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster Membership Entry Revision: 0
Sub-Cluster Member UUIDs: 55197cee-f530-4966-5ea6-a0369f58b8e4
Sub-Cluster Membership UUID: 60df1955-b9f1-d685-13b0-a0369f58b8e4
~ # esxcli vsan cluster leave
~ # esxcli vsan cluster join -u 551374b5-03f9-7bd6-6257-a0369f58b8e8   <<<- join the correct UUID (cluster)
 
Validate on NODE 2
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-31T00:12:52Z
Local Node UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4
Local Node State: AGENT
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Backup UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 2
Sub-Cluster Member UUIDs: 551374b5-03f9-7bd6-6257-a0369f58b8e8, 54f9dc6f-8674-f412-364d-a0369f58b5a8, 55197cee-f530-4966-5ea6-a0369f58b8e4  <<< three members
Sub-Cluster Membership UUID: d6da1955-e2f8-38eb-d7f0-a0369f58b8e8
~ # esxcli vsan network list
Interface
VmkNic Name: vmk1
IP Protocol: IPv4
Interface UUID: 17dd1955-0bdf-abac-aba6-a0369f58b8e4
Agent Group Multicast Address: 224.2.3.4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group Multicast Port: 12345
Multicast TTL: 5
Validate on NODE 3 and NODE 1
 
NODE 3:
 
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-31T00:14:33Z
Local Node UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Local Node State: BACKUP
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Backup UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 2
Sub-Cluster Member UUIDs: 551374b5-03f9-7bd6-6257-a0369f58b8e8, 54f9dc6f-8674-f412-364d-a0369f58b5a8, 55197cee-f530-4966-5ea6-a0369f58b8e4
Sub-Cluster Membership UUID: d6da1955-e2f8-38eb-d7f0-a0369f58b8e8
~ # esxcli vsan network list
Interface
VmkNic Name: vmk1
IP Protocol: IPv4
Interface UUID: f16e1355-c174-11f6-2602-a0369f58b5a8
Agent Group Multicast Address: 224.2.3.4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group Multicast Port: 12345
Multicast TTL: 5
NODE 1: FIXED
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-31T01:12:22Z
Local Node UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Local Node State: MASTER
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Backup UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 2
Sub-Cluster Member UUIDs: 551374b5-03f9-7bd6-6257-a0369f58b8e8, 54f9dc6f-8674-f412-364d-a0369f58b5a8, 55197cee-f530-4966-5ea6-a0369f58b8e4 <<— all three members!
Sub-Cluster Membership UUID: d6da1955-e2f8-38eb-d7f0-a0369f58b8e8
~ # esxcli vsan network list
Interface
VmkNic Name: vmk1
IP Protocol: IPv4
Interface UUID: 3b5e1955-5eb6-2bbc-57bc-a0369f58b8e8
Agent Group Multicast Address: 224.2.3.4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3 <<< all the same multicast address
Master Group Multicast Port: 12345
Multicast TTL: 5
~ #
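If two nodes refuse to merge even after a leave/join, it is worth sanity-checking the VSAN network itself, since cluster membership rides on multicast over the VSAN vmknic. A minimal sketch of that check, using vmk1 from the output above; the peer address 192.168.100.12 is a hypothetical VSAN IP of another node:

~ # vmkping -I vmk1 192.168.100.12   # basic reachability over the VSAN vmknic
~ # tcpdump-uw -i vmk1 multicast     # you should see traffic to 224.1.2.3 / 224.2.3.4

If the pings fail or no multicast traffic shows up, look at VLANs, IGMP snooping, and physical switch configuration before blaming VSAN.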

VMware Virtual SAN networking

VSAN networking can be a bit tricky to troubleshoot. Before I go deeper into the topic here is a very important concept to remember about VSAN clusters.

Given any VSAN cluster remember the following:

** “Introduction to Virtual SAN Networking

Before getting into network in detail, it is important to understand the roles that nodes/hosts can play in Virtual SAN. There are three roles in Virtual SAN: master, agent and backup. There is one master that is responsible for getting CMMDS (clustering service) updates from all nodes, and distributing these updates to agents. Roles are applied during cluster discovery, when all nodes participating in Virtual SAN elect a master. A vSphere administrator has no control over roles.”

** from Cormac’s troubleshooting guide

That is a lot to digest, but if you break it down you can see some key principles to remember about a VSAN cluster.

The roles in VSAN:
  A. master
  B. agent
  C. backup

There is one master.
If you see more than one master, there is something not quite right with your VSAN cluster.

The VSAN admin does not control which node will be the master.
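A quick way to see which role each node holds without wading through the full output (a sketch run from an admin workstation; the esx01–esx03 hostnames are hypothetical, and SSH must be enabled on each host):

for h in esx01 esx02 esx03; do
  echo "== $h =="
  ssh root@$h "esxcli vsan cluster get | grep -E 'Local Node State|Sub-Cluster Master UUID'"
done

If more than one host reports MASTER, or the Sub-Cluster Master UUIDs disagree, you have a partitioned cluster.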

Example:
Log into each node of a three-node VSAN cluster. The normal prerequisite for troubleshooting: make sure SSH is enabled.
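If SSH is off, it can be enabled from the vSphere Client (Security Profile) or, if you have console/shell access to the host, with vim-cmd. A sketch, run on each host:

~ # vim-cmd hostsvc/enable_ssh   # enable the SSH service
~ # vim-cmd hostsvc/start_ssh    # start it now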

Run the following command on each node:
~ # esxcli vsan cluster get

The cluster information is output as shown below.
NODE 1

Cluster Information
Enabled: true
Current Local Time: 2015-03-30T22:38:38Z
Local Node UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4
Local Node State: MASTER
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4
Sub-Cluster Backup UUID:
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 0
Sub-Cluster Member UUIDs: 55197cee-f530-4966-5ea6-a0369f58b8e4
Sub-Cluster Membership UUID: a5ce1955-f5e5-5663-d338-a0369f58b8e4

NODE 2
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-30T22:38:38Z
Local Node UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4
Local Node State: MASTER
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 55197cee-f530-4966-5ea6-a0369f58b8e4
Sub-Cluster Backup UUID:
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 0
Sub-Cluster Member UUIDs: 55197cee-f530-4966-5ea6-a0369f58b8e4
Sub-Cluster Membership UUID: a5ce1955-f5e5-5663-d338-a0369f58b8e4

NODE 3
~ # esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-30T22:56:46Z
Local Node UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Local Node State: BACKUP
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Backup UUID: 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster UUID: 551374b5-03f9-7bd6-6257-a0369f58b8e8
Sub-Cluster Membership Entry Revision: 1
Sub-Cluster Member UUIDs: 551374b5-03f9-7bd6-6257-a0369f58b8e8, 54f9dc6f-8674-f412-364d-a0369f58b5a8
Sub-Cluster Membership UUID: d6da1955-e2f8-38eb-d7f0-a0369f58b8e8

See the image below for the error seen in the web client.

From the output above, can you see the problem?

[Screenshots: the VSAN misconfiguration error as seen in the vSphere Web Client]

Open Source tool: Vagrant + VMware Fusion + EMC ScaleIO = ?

OPEN SOURCE. Learn it. Use it. Software tools are great, and sharing tools and supporting the open source community is a good thing. This is part of a multi-part post; I will share my experience setting up ScaleIO in my VMware Fusion lab. First, find a tool to make provisioning quicker, more consistent, and automated. The tool: Vagrant. From the Vagrant website:

"Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine."
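To give a flavor of the workflow before the deeper posts, here is a minimal sketch. It assumes VMware Fusion is installed, the Vagrant VMware Fusion provider plugin is licensed (it is a paid plugin), and the box name is only an example:

$ vagrant plugin install vagrant-vmware-fusion
$ vagrant init chef/centos-6.5          # example box; substitute any box you like
$ vagrant up --provider vmware_fusion   # boot the VM under Fusion
$ vagrant ssh                           # log in to the running VM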

More later on Vagrant… and more about ScaleIO below.

Intro to ScaleIO

 

EMC ScaleIO at a Glance

"EMC ScaleIO is a software-only solution that uses application hosts' local disks to realize a virtual SAN that is comparable to or better than external SAN storage, but at a fraction of the cost and complexity. ScaleIO makes a convergence of the storage and application layers possible, ending up with a wall-to-wall single layer of hosts. The lightweight software components of ScaleIO are installed on the application hosts alongside applications like databases and hypervisors.

Breaking traditional barriers of storage scalability, ScaleIO scales out to hundreds and thousands of nodes. ScaleIO’s performance scales linearly with the number of application servers and disks. With ScaleIO, any administrator can add, move, or remove servers and capacity on demand during I/O operations. ScaleIO helps ensure the highest level of enterprise-grade resilience while maintaining maximum storage performance.

ScaleIO natively supports all the leading Linux distributions, Windows Server and hypervisors and works agnostically with any SSD, HDD, and network. The product includes encryption at rest and quality of service (QoS) of performance. ScaleIO can be managed from both a command-line interface (CLI) and an intuitive graphical user interface (GUI). Deploying ScaleIO in both greenfield and existing data center environments is a simple process and takes only a few minutes.”
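Once a ScaleIO cluster is running, a first health check can be done from its CLI. A minimal sketch, assuming scli is installed on the primary MDM node; the credentials below are placeholders:

# scli --login --username admin --password <your-admin-password>
# scli --query_all    # summary of the system: MDM state, SDS nodes, pools, capacity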

VMware Product Walk-thru

This is pretty cool..
How do you explain a VMware product quickly?
Use this tool!
It won’t provide a deep-dive technical explanation but it IS a good starting point for conversations.

This will cover:
vCloud Suite
NSX
VSAN
Big Data Extensions
vSphere Upgrades
VSOM (vSphere with Operations Management)
OpenStack + VMware
VDS (vSphere Distributed Switch)
Storage Management

It is pretty cool.. click thru and learn something new!