EMC Elect 2016.. by the numbers

The EMC Elect 2016 members have been announced.


Wow, what an honor and a privilege to be part of this list. This year will be filled with new challenges as EMC works through a flurry of internal changes to help customers make their goals a success; I'm referring to the Dell + EMC adventure. I look forward to helping my customers and fellow IT professionals in 2016. And don't forget, that is just the business side. The never-ending change of technology is still in full force! The information to consume is non-stop, and the opportunity for customers to take advantage of it is incredible. I know I am excited for 2016. Let's share information and empower each other!

EMC Elect 2016 — Official list

FACT:

EMC Elect Summary

  • 2015 – 102 members chosen out of 450 nominations
  • 2016 – 71 members chosen out of 700 nominations

In 2015 – there were 450 nominations leading to 200 finalists, from which the official directory of 102 members for 2015 was chosen.
In 2016 – there were 700 nominations leading to 150 finalists. From those finalists, under the toughest selection process the EMC Elect has had yet, here are the 71 members for 2016.

In my opinion, it seems that for 2016 roughly 10% of the nominated people were chosen. There was an increase in nominations, but a decrease in the number selected. I can only imagine how difficult a task this was. Thanks for all the hard work, Mr. Mark Brown @DathBrun!
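As a quick sanity check on that figure, here's the arithmetic as a tiny Python snippet, using only the nomination and member counts listed above:

```python
# Selection rates from the nomination/member counts quoted above.
years = {2015: (450, 102), 2016: (700, 71)}   # year: (nominations, members)

for year, (nominations, members) in years.items():
    rate = members / nominations * 100
    print(f"{year}: {members}/{nominations} selected = {rate:.1f}%")
# 2015: 102/450 selected = 22.7%
# 2016: 71/700 selected = 10.1%
```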

Here are some interesting facts by the numbers:

 

Number crunching for 2015 – great job, Mr. Henry!

My attempt, inspired by Mr. Henry (@davemhenry):

[Charts: EMC Elect 2016 by the numbers]

 

I didn't have a chance to calculate repeat EMC Elect members from former years. Metrics tell all sorts of stories, but more importantly, take a look at the content provided by the EMC Elect. There is A LOT!

Follow an EMC Elect member to become informed and part of the discussion. Search for the tag #EMCElect. Read the blogs (like this one) and read up on the ECN (EMC Community Network). If you have a question, reach out to a member via Twitter or post on the ECN. You will find very helpful individuals who are generally very open to objective discussion on the technical merits of X, Y, or Z!

Congrats to ALL the EMC ELECT 2016! And to all past EMC Elect… A famous quote to consider.

“If I have seen further than others, it is by standing upon the shoulders of giants.” – Sir Isaac Newton

@Digital_kungfu

 

 

Is your IT infrastructure an Oil Tanker?

I had the opportunity to attend an EVO:RAIL VSPEX BLUE boot camp for channel partners, sponsored by Avnet, EMC, VMware, and Brocade.

In a nutshell, it was all-you-can-drink information from a firehose about EVO:RAIL, and specifically the EMC VSPEX BLUE.

EVO:RAIL is a new beast of an animal. It is a different breed. No, not in any single dimension you measure; the combination of technology presented is a synergy. Definition: synergy is the creation of a whole that is greater than the simple sum of its parts.

Yes, you can get the compute form factor separately. You can also do the same for VSAN, ESXi/vSphere 5.5, and networking. But you cannot get what the entire VSPEX BLUE offering of EVO:RAIL provides TOGETHER.

I get ahead of myself.

You have to have perspective to understand where we are today. To me that means: if you don't know where you came from, you cannot know where you are today or where you will be tomorrow. It is all relative.

EVO:RAIL is a clustered system. It is a datacenter in a 2U form factor; not just compute, but a modern hyper-converged solution.


Sure. Another IT buzzword. Is it just talk? I would say no.

Everyone is talking about it, but only a few are “doing it”… more often than not the IT industry is abuzz with the new technology of the day.
In this case it is hyper-convergence.
To put it simply, storage is local to compute. Wait a minute: how is that different from 15 years ago, when the client-server model was the norm and storage was already local to the compute (compute meaning the processing done by the server CPU)? Well, a lot has changed.
How is it better? A snapshot of the currently available technology:
  • Compute is way, way faster and more dense.
  • Networks are 10 Gigabit Ethernet; Fast Ethernet (100BASE-T) and FDDI optical rings are no longer the only viable choices.
  • Storage is IOPS-crazy.
  • And add to that the agility of VMware Virtualization!
So look at the speed of compute, network, and storage. Technology will continue to get faster and better (more return on investment at a lower cost).
But what really hasn't changed much is the complexity of the solution. There are many moving parts, so how do you deliver your solution today to support legacy applications while keeping the agility to respond to changing business objectives?
Remember the old phrase about “turning an oil tanker on a dime.” Is IT today an oil tanker? Does your private cloud have the agility your business requires? How will the current toolset respond? How will your staff? Oh, what was that? “IT staffing has been reduced, and that is a trend that hasn't gone away.”
IT organizations are forced to do more with less… cue the viable alternatives…
Do you outsource? It's the simplest short-term gain, but not always the best long-term investment.
I view EVO:RAIL as a datacenter in a box. EVO:RAIL VSPEX BLUE is not the old cluster-in-a-box solution, but much, much more.
This reminds me of its forerunner, the MSCS cluster in a box. Like I said: perspective. Where has the IT industry been relative to where it is today? I recall back in the day, circa 2000, when cluster in a box was a viable option (and the best solution at the time). That was only a high-availability MSCS cluster with one node active at a time.
Yes, I deployed and supported a few of these solutions. It was cutting edge back then.
Adjusted for inflation:
$1,000.00 USD in 2000 is roughly $1,395.20 in 2015 dollars,
so the CL1850's $27,864.00 (in 2000 dollars)
is about $38,569 in 2015 dollars.
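For what it's worth, here is that arithmetic as a tiny Python sketch. The multiplier is simply the per-$1,000 figure quoted above; different CPI tables or reference months will move the result by a few hundred dollars, which is why it doesn't land exactly on $38,569:

```python
# Inflation adjustment for the CL1850 list price, using the per-$1,000 figure above.
price_2000 = 27_864.00                 # CL1850 list price in year-2000 dollars
multiplier = 1395.20 / 1000.00         # $1,000 (2000) -> $1,395.20 (2015)

price_2015 = price_2000 * multiplier
print(f"CL1850 in 2015 dollars: ${price_2015:,.2f}")   # ~ $38,876
# The post quotes ~$38,569; the gap comes down to which CPI table/month is used.
```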
But what did you get for your IT dollar?
Each “node” in the CL1850:
RAM:
  • 1 GB RAM (128-MB 100-MHz registered ECC SDRAM memory)
Compute:
  • 2 Pentium III processors @ 550 MHz
Network:
  • 3 NICs (one dedicated for internode communication), 100BASE-T
Storage:
  • 2 RAID controllers
Shared storage system:
  • 218.4 GB (6 x 36.4-GB 1″ Ultra3 10,000-rpm drives)
Form factor: 10U
Fast forward to 2015… what does EVO:RAIL VSPEX BLUE provide?
Each EMC VSPEX BLUE appliance includes:
AKA WHAT’S UNDER THE HOOD??
FORM FACTOR
  • 4 nodes of integrated compute and storage, including flash (SSD) and HDD — 2U
  • VMware EVO:RAIL software, including VMware Virtual SAN (VSAN) and Log Insight
COMPUTE
  • 12 cores @ 2.1 GHz per node
RAM
  • 128 or 192 GB memory per node
NETWORKING
  • Choice of 10 Gigabit Ethernet network connectivity: SFP+ or RJ45
STORAGE
  • Drives: up to 16 (four per node)
  • Drives per node: 1 x 2.5” SSD, 3 x 2.5” HDD
  • Drive capacities
    • HDD: 1.2 TB (max total 14.4 TB)
    • SSD for caching: 400 GB (max total 1.6 TB)
  • Raw capacity: 14.4 TB
Additional Software Solutions – Exclusive to VSPEX-BLUE
  • EMC VSPEX BLUE Manager providing a system health dashboard and support portal
  • EMC CloudArray to expand storage capacity into the cloud (license for 1 TB cache and 10 TB cloud storage included)
  • VMware vSphere Data Protection Advanced (VDPA) for centralized backup and recovery
  • EMC RecoverPoint for Virtual Machines for continuous data protection of VMs. Includes licenses for 15 VMs.
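To put those specs in context, here's a quick back-of-the-envelope tally of what a single 2U appliance adds up to, using only the figures from the list above (RAM assumes the 192 GB configuration):

```python
# Rough per-appliance totals for a 4-node EMC VSPEX BLUE, using the spec above.
nodes = 4
cores_per_node = 12
ram_per_node_gb = 192            # or 128, depending on the model chosen
hdd_per_node, hdd_tb = 3, 1.2
ssd_per_node, ssd_tb = 1, 0.4

print(f"Cores:     {nodes * cores_per_node}")                    # 48
print(f"RAM:       {nodes * ram_per_node_gb} GB")                # 768 GB
print(f"Raw HDD:   {nodes * hdd_per_node * hdd_tb:.1f} TB")      # 14.4 TB
print(f"SSD cache: {nodes * ssd_per_node * ssd_tb:.1f} TB")      # 1.6 TB
```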

The question still remains: is your IT infrastructure an oil tanker, or can you turn on a dime?

How does your IT respond to ever-changing business demands?

Is EVO:RAIL for everyone? There are a lot of use cases that EVO:RAIL VSPEX BLUE will work perfectly for, but no, it isn't for everyone. What it does is usher in a new and different manner of consuming IT. You will not have to provision your datacenter in the same piecemeal fashion; you can commoditize that with a pre-validated solution supported by a single vendor.

[Graphic: EMC VSPEX BLUE solution overview]

This graphic has a lot more detail than this single blog post can explain! I will try to cover each section that helps make VSPEX BLUE a different, redefined EVO:RAIL solution.

EMC VSPEX BLUE MANAGER

– VSPEX BLUE Manager users can conveniently access electronic services, such as EMC knowledge base articles, the VSPEX Community for online, real-time information, and EMC VSPEX BLUE best practices.

EMC VSPEX BLUE WITH ESRS

– ESRS is a two-way, secure remote connection between your EMC environment and EMC Customer Service that enables remote monitoring, diagnosis, and repair, assuring availability and optimization of your EMC products.

EMC VSPEX BLUE SUPPORT

[Graphic: EMC VSPEX BLUE support overview]

EMC VSPEX BLUE WITH RECOVERPOINT FOR VMs

– Protection at VM-level granularity, with operational and DR recovery to any point in time.

EMC VSPEX Blue with VDPA and DATA DOMAIN

– Built-in deduplicated backups, powered by EMC Avamar.

EMC VSPEX BLUE & CLOUDARRAY

– Block and file, up to 10 TB free. “EMC CloudArray software provides scalable cloud-based storage with your choice of many leading cloud providers, enabling limitless network-attached storage, offsite backup and disaster recovery, and the ability to support both block and file simply.”

VSPEX BLUE MARKET

– Built into the VSPEX BLUE Manager dashboard. This unique feature enables customers to browse complementary EMC and third-party products that easily extend the capabilities of the appliance.

 ===
Again, there are a lot of takeaways for the VSPEX BLUE EVO:RAIL solution. Contact me if you would like more information.

EMC vVNX – 3.1.2 FULL-FEATURED NO TIME LIMITS



What is it: 

“The vVNX Community Edition (also referred to as vVNX) provides a flexible storage alternative for test and development environments.”

SDS = Software Defined Storage!!
“Download a full-featured version of vVNX, available for non-production use without any time limits.”
File type: *.ova
File Size: 2.1 GB
Release date: 5/4/2015
Version: 3.1.2

Download here! https://www.emc.com/products-solutions/trial-software-download/vvnx.htm

Architecture and functionality:

White Paper details: vVNX Community Edition https://community.emc.com/docs/DOC-44552

Installation Guide: https://community.emc.com/docs/DOC-42029

Prerequisites

Before you can obtain and activate the trial vVNX license, you must have completed the following tasks:

  1. Registered to create a product support account.
  2. Downloaded the vVNX software.
  3. Installed vVNX.
  4. Launched Unisphere.
The Configuration Wizard runs when you log in to Unisphere for the first time.
Procedure
  1. Note the vVNX system UUID provided on the License dialog in the Configuration Wizard.
  2. Go to the Electronic License Management System (ELMS) download page at www.emc.com/auth/elmeval.htm
  3. Click Obtain evaluation license for vVNX.
  4. Enter the vVNX system UUID and select vVNX as the product type. 
  5. Click Download to save the license to your local system.
    Note: An email confirming that you have successfully obtained the evaluation license is sent to the email address you provided when you registered.
  6. Return to the License dialog in the Configuration Wizard and click Install License File
  7. Locate the license file, select it, and click Upload to install and activate it. 

Note: Do not repeat this procedure once you have saved the license and received the confirmation email. If you try to enter the vVNX system UUID again, you will receive a “Duplicate UUID” error message.

Technical notes on IP-based storage for EMC VNX and VMware vSphere 5.5

When it comes to IP-based storage, there are two major choices for VMware environments: NFS and iSCSI.

This article will discuss iSCSI options.

NFS is a very valid design choice with very flexible deployment options. NFS with VMware is fast and flexible; in other words, a solid choice. But NFS is for another time and a different discussion.

Why IP-based storage?

In a single word: SPEED. And flexibility. Okay, two words. Core networking speeds are no longer limited to 10BASE-T or 100BASE-T; 10 Gigabit Ethernet is increasingly the standard. You already have an IP network, so why not try to leverage what is installed?

I have seen greater interest in deploying iSCSI-based storage for VMware lately. Not just 1 Gb: 10 Gigabit Ethernet is gaining more of a foothold in datacenters as customers upgrade their core networking from 1 Gb to 10 Gb. The potential performance gains are a really good thing, but fundamental to any deployment is a solid network design.

What I mean is end to end. What kind of switch are you using? Are you limited to a certain number of 10 Gb ports? How many switches do you have? Do you have a single point of failure? This is more critical because you will really be leveraging your network for “double duty”: instead of the discrete network designated for storage that FC can provide, you will now run storage traffic across your IP network. Two separate physical switches are ideal, but at a minimum use VLANs for logical separation. And let me go ahead and say it: “VLAN 0 (zero) is not really a scalable enterprise option!” That is a huge red flag that you will need more network analysis and work to deploy an IP-based storage solution.

There are many considerations for a successful iSCSI implementation.

1) Gather customer requirements, 2) design, 3) verify, 4) implement/deploy, 5) test (user acceptance testing).

Ideally, having two 10 Gb switches for redundancy is a good thing! Be careful in the selection of an enterprise-grade switch. I have seen horrible experiences when the proper features are not enabled; for example, flow control can be a good thing!

Software-based iSCSI initiators: don't forget about Delayed ACK. See Sean's excellent article here: Delayed ACK setting with VNX. Read more VMware details about how to implement Delayed ACK in KB 1002598.

“Modify the delayed ACK setting on a discovery address (recommended):

  1. On a discovery address, click the Dynamic Discovery tab.
  2. Click the Server Address tab.
  3. Click Settings > Advanced.”

Double, no, TRIPLE check the manufacturer-recommended CABLE TYPE and LENGTH. For example: does your 10GbE use fiber optic cable? Do you have the correct type? What about the SFP? And if you are not using fiber optic but choose to use TwinAx cabling, do you have the correct cable per the manufacturer's requirements?

For example, Meraki only makes and supports a 1-meter 10 Gb passive copper cable for their switches. If you look at any normal Cisco business-class switch, they support 1-, 3-, and 5-meter passive and 7- and 10-meter active cables.

Active cables are generally more expensive, but they could be a requirement depending on your datacenter and/or colocation layout.
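As a trivial illustration of that kind of pre-flight check, here is a hypothetical Python sketch that validates a proposed DAC cable against a vendor support list. The lists below simply mirror the examples in this post and are not authoritative; always confirm against the vendor's current compatibility matrix:

```python
# Hypothetical pre-flight check for twinax (DAC) cable selection.
# The supported-cable lists mirror the examples in this post and are
# NOT authoritative -- always confirm against the vendor compatibility matrix.

SUPPORTED_DAC = {
    "meraki": {("passive", 1)},
    "cisco":  {("passive", 1), ("passive", 3), ("passive", 5),
               ("active", 7), ("active", 10)},
}

def cable_ok(vendor: str, cable_type: str, length_m: int) -> bool:
    """Return True if the (type, length) combination is on the vendor list."""
    return (cable_type, length_m) in SUPPORTED_DAC.get(vendor.lower(), set())

print(cable_ok("Meraki", "passive", 3))  # False -> only the 1 m passive is listed
print(cable_ok("Cisco", "active", 10))   # True
```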

I try to approach the solution from both ends: storage to the host, and the reverse, host to the storage. Examine end-to-end dependencies. Even though your area of responsibility isn't the network, you will be reliant on network services, and any misconfiguration will impact your ability to meet the stated design requirements. You may not have bought or had any input into the existing infrastructure, but you will be impacted by what is currently in use. Keyword: interoperability, that is, how each independent system will interact with another system, including upstream and downstream dependencies.

Other considerations:

For example: VMkernel port binding. The diagram below is from VMware KB 2038869, “Considerations for iSCSI Port Binding”:

“Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet to allow multiple paths to an iSCSI array that broadcasts a single IP address. When using port binding, you must remember that:

  • Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.
  • All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.
  • All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
  • Currently, port binding does not support network routing.”

[Diagram: iSCSI port binding (VMware KB 2038869)]
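To make those rules concrete, here is a minimal, hypothetical Python sketch that checks whether a set of VMkernel ports and array targets all sit in the same subnet before you enable port binding. All addresses below are made up for illustration:

```python
# A minimal sanity check (hypothetical) for the port-binding rules above:
# every bound VMkernel port and every array target must sit in the same
# broadcast domain / IP subnet. Addresses here are made up for illustration.
import ipaddress

SUBNET = ipaddress.ip_network("10.10.50.0/24")

vmkernel_ports = ["10.10.50.11", "10.10.50.12"]   # e.g. vmk1, vmk2
array_targets  = ["10.10.50.20", "10.10.50.21"]   # SP A / SP B iSCSI ports

for ip in vmkernel_ports + array_targets:
    if ipaddress.ip_address(ip) not in SUBNET:
        print(f"WARNING: {ip} is outside {SUBNET} -- do not use port binding here")
    else:
        print(f"OK: {ip} is in {SUBNET}")
```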

While there isn't FC zoning for IP-based storage, there will be a requirement for subnetting and VLAN separation.

For VNX, here are some considerations for iSCSI design:

The following points are best practices for connecting iSCSI hosts to a CLARiiON or VNX:

  • iSCSI subnets must not overlap the management port or Service LAN (128.221.252.x).

  • For iSCSI, there is no zoning (unlike an FC SAN) so separate subnets are used to provide redundant paths to the iSCSI ports on the CLARiiON array. For iSCSI you should have mini-SANs (VLANs) with only one HBA per host in each VLAN with one port per storage processor (SP) (for example, A0 and  B0 in one VLAN, A1 and  B1 in another).  All connections from a single server to a single storage system must use the same interface type, either NIC or HBA, but not both.

  • It is a good practice to create a separate, isolated IP network/VLAN for the iSCSI subnet. This is because the iSCSI data is unencrypted and also having an iSCSI-only network makes troubleshooting easier.

  • If the host has only a single NIC/HBA, then it should connect to only one port per SP. If there are more NICs or HBAs in the host, then each NIC/HBA can connect to one port from SP A and one port from SP B. Connecting more SP ports to a single NIC can lead to discarded frames due to the NIC being overloaded.

  • In the iSCSI initiator, set a different “Source IP” value for each iSCSI connection to an SP.  In other words, make sure that each NIC IP address only appears twice in the host’s list of iSCSI Source IP addresses: once for a port on SP A and once for a port on SP B.

  • Make sure that the Storage Processor management ports do not use the same subnets as the iSCSI ports. See EMC KB article emc235739, “Changing configuration on one iSCSI port may cause I/O interruption to all iSCSI ports on this storage processor (SP) if using IP addresses from same subnet,” for more information.

  • It is also a best practice to use a different IP switch for the second iSCSI port on each SP. This is to prevent the IP switch being a single point of failure. In this way, were one IP switch to completely fail, the host can failover (via PowerPath) to the paths on the other IP switch. In the same way, it would be advisable to use different switches for multiple IP connections in the host.

  • Gateways can be used, but the ideal configuration is for each HBA to be on the same subnet as one SP A port and one SP B port, without using the gateway.

For example, a typical configuration for the iSCSI ports on a CLARiiON, with two iSCSI ports per SP would be:

A0: 10.168.10.10 (Subnet mask 255.255.255.0)
A1: 10.168.11.10 (Subnet mask 255.255.255.0)
B0: 10.168.10.11 (Subnet mask 255.255.255.0)
B1: 10.168.11.11 (Subnet mask 255.255.255.0)

A host with two NICs should have its connections configured similar to the following in the iSCSI initiator to allow for load balancing and failover:

NIC1 (for example, 10.168.10.180) – SP A0 and SP B0 iSCSI connections
NIC2 (for example, 10.168.11.180) – SP A1 and SP B1 iSCSI connections

Similarly, if there were four iSCSI ports per SP, four subnets would be used. Half of the hosts with two HBA would then use the first two subnets, and the rest would use the other two.
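Continuing with the example addressing above, a small Python sketch (illustrative only) can confirm the pairing rule: each NIC should see exactly two targets, one on SP A and one on SP B, within its own subnet:

```python
# Hypothetical check of the example addressing above: each host NIC should land
# on exactly two targets -- one SP A port and one SP B port -- in its own subnet.
import ipaddress
from collections import defaultdict

sp_ports = {"A0": "10.168.10.10", "A1": "10.168.11.10",
            "B0": "10.168.10.11", "B1": "10.168.11.11"}
host_nics = {"NIC1": "10.168.10.180", "NIC2": "10.168.11.180"}
MASK = "255.255.255.0"

targets_per_nic = defaultdict(list)
for nic, nic_ip in host_nics.items():
    nic_net = ipaddress.ip_network(f"{nic_ip}/{MASK}", strict=False)
    for port, port_ip in sp_ports.items():
        if ipaddress.ip_address(port_ip) in nic_net:
            targets_per_nic[nic].append(port)

for nic, ports in targets_per_nic.items():
    sps = {p[0] for p in ports}          # 'A' / 'B' taken from the port name
    ok = len(ports) == 2 and sps == {"A", "B"}
    print(nic, ports, "OK" if ok else "CHECK CONFIG")
# NIC1 ['A0', 'B0'] OK
# NIC2 ['A1', 'B1'] OK
```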

The management ports should also not overlap with the iSCSI ports. As the iSCSI network is normally separate from the LAN used to manage the SPs, this is rarely an issue, but to follow the example iSCSI addresses above, the other IPs used by the array could be, for example:

VNX Control Station 1: 10.20.30.1
VNX Control Station 2: 10.20.30.2
SP A management IP address: 10.20.30.3
SP B management IP address: 10.20.30.4


The High Availability Validation Tool will log an HAVT warning if it detects that a host is connected via a single iSCSI initiator. Even if the initiator has a path to both SPs, it is still an HA risk from a host-connectivity point of view. You will also see this warning if you are using unlicensed PowerPath.

Caution! Do not use the IP address range 192.168.1.x because this is used by the serial port PPP connection

Oh, I haven't even discussed VMware storage path policy, as that really depends on your array. However, VNX is ALUA (failover mode 4), and Round Robin works really well if you don't have, or don't want, PowerPath as an option!

References:

VMware Storage Guide 5.5 (PDF)

VMware Storage Guide 6.0 (PDF)

“Best Practices for Running VMware vSphere on iSCSI” (TECHNICAL MARKETING DOCUMENTATION v 2.0A)

“Using VNX Storage with VMware vSphere” EMC TechBook

VNX and NFSv4

Just a note to self (actually, a note for when discussing NFS with your customer):

If you are using VNX, make sure you use OE 7.1 or greater. Why?

NFSv4 support is included by default; the service just isn't started!

$ server_nfs <movername> -v4 -service -start
where <movername> = the name of the Data Mover

There are other considerations when implementing NFSv4:

  • NFSv4 Domain
  • Access Policy: Mixed is recommended
  • Delegation Mode off
  • You can even restrict access to NFSv4 only, as normally a file system is exported to all versions of the NFS protocol

Please see:

EMC White Paper: h10949-configuring-nfsv4-vnx-wp.pdf

HOT HOT HOT… hot spare, that is! (VNX and VNX2)

The other day I had a customer purchase a brand new DAE for his VNX. Awesome! A full shelf of 25 drives: 900 GB SAS drives in a 2.5″ form factor. Well, do some quick math: you have 5 RAID 5 groups (4+1).

But… wait a sec. What about a hot spare? You can run parity and have RAID 5 for protection, but you still need to be in compliance with your hot spare policy. This customer has the older 3.5″ DAEs (15 slots), and the newer drives are 2.5″… what to do?

Will you have a valid hot spare on hand?
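Here's that quick math as a small Python sketch. The 1-spare-per-30-drives ratio below is the commonly cited guideline, so treat it as an assumption and check your own hot spare policy:

```python
# Quick check: does a 25-slot DAE of 900 GB SAS drives carved into 5 x R5(4+1)
# leave any drive available as a hot spare?
# The 1-spare-per-30-drives ratio is the commonly cited guideline (assumption).

drives_in_dae = 25
raid5_groups = 5
drives_per_group = 4 + 1        # 4 data + 1 parity

drives_used = raid5_groups * drives_per_group
spares_recommended = -(-drives_in_dae // 30)     # ceil(25 / 30) = 1

print(f"Drives used by RAID groups: {drives_used} of {drives_in_dae}")
print(f"Drives left for hot spares: {drives_in_dae - drives_used}")
print(f"Hot spares recommended:     {spares_recommended}")
# Drives used by RAID groups: 25 of 25 -> nothing left for a hot spare
```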

After some online research:

Based on the discussion and the reference white papers for both VNX and VNX2, drive form factor isn't what matters; the key factors are drive type and capacity. VNX2 hot sparing is global and won't take drive speed into consideration, so you could potentially end up with a slower drive of the same type as the replacement. This may go unnoticed by the admin, as the policy is set differently.

https://community.emc.com/thread/123197?start=0&tstart=0 — a great discussion about this and a fantastic resource for EMC-related issues. The following is taken from the above thread.
Hot spare algorithm
The appropriate hot spare is chosen from the provisioned hot spares algorithmically.  If there were no hot spares provisioned of appropriate type and size when a drive fails, no rebuild occurs.  (See the Drive Rebuilds section.)  The RAID group with the failed drive remains in a degraded state until the failed drive is replaced; then the failed drive’s RAID group rebuilds.
The hot spare selection process uses the following criteria in order:
  1. Failing drive in-use capacity – The smallest capacity hot spare drive that can accommodate the in-use capacity of the failing drive’s LUNs will be used.
  2. Hot spare location – Hot spares on the same back-end port as the failing drive are preferred over other like-size hot spares.
  3. Same Drive type – Hot spare must be of the same drive type.
Failing drive in-use capacity
It is the in-use capacity of the failing drive's LUNs that determines the required capacity of the hot spare candidates.  Note this is a LUN-dependent criterion, not a raw drive capacity dependency.  It is measured by totaling the capacity of the drive's bound LUNs.  The in-use capacity of a failing drive's LUNs is not predictable.  This rule can lead to an unlikely hot spare selection.  For example, it is possible for a smaller-capacity hot spare to be automatically selected over a hot spare drive identical to, and adjacent to, the failing drive in the same DAE.  This occurs because the formatted capacity of the smaller hot spare (the highest-order selection criterion) matches the in-use capacity of the failing drive's LUNs more closely than the identical hot spare drive.
Note that a hot spare’s form factor and speed are not a hot spare criteria within the type.
For example, a 3.5” format 15K rpm drive can be a hot spare for a failing 2.5” 10K rpm SAS drive.
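Purely as an illustration of that selection order (this is not EMC's actual code), here is a small Python sketch that applies the three criteria, including the counter-intuitive case where a smaller spare beats an identical neighbor:

```python
# A sketch of the selection order described above (not EMC's actual code):
# 1) smallest spare whose capacity covers the failing drive's in-use capacity,
# 2) prefer spares on the same back-end port, 3) same drive type only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Drive:
    capacity_gb: float
    drive_type: str          # e.g. "SAS", "NL-SAS", "FLASH"
    backend_port: int

def pick_hot_spare(failing: Drive, in_use_gb: float,
                   spares: list[Drive]) -> Optional[Drive]:
    candidates = [s for s in spares
                  if s.drive_type == failing.drive_type        # rule 3
                  and s.capacity_gb >= in_use_gb]              # rule 1 (must fit)
    if not candidates:
        return None   # no suitable spare -> RAID group stays degraded
    # rule 1: smallest capacity first; rule 2: same back-end port as tiebreaker
    candidates.sort(key=lambda s: (s.capacity_gb,
                                   s.backend_port != failing.backend_port))
    return candidates[0]

failing = Drive(900, "SAS", backend_port=0)
spares = [Drive(600, "SAS", 0), Drive(900, "SAS", 0)]
print(pick_hot_spare(failing, in_use_gb=550, spares=spares))
# -> the 600 GB spare wins, even though an identical 900 GB spare is available
```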
[Graphic: hot spare selection for the VNX]
For the VNX2

[Graphic: hot spare selection for the VNX2]

Bottom line, this is good to know, because the customer had open slots in their existing 15-slot 3.5″ DAE; if drive form factor did matter, they would need to buy another DAE for the 2.5″ drives!

Here is the Hot Spare drive matrix. It illustrates the Failed drive and compatible spare.

[Matrix: failed drive types and compatible hot spares]

Software Defined Storage: VMware VVOL

Closer to reality… Closer to GA

Yesterday VMware announced public beta 2 for vSphere 6 and VVOL.

Why separate Beta programs?

While vSphere 6 can be leveraged on top of most existing hardware that supports vSphere 5.5, not all storage vendors will be ready for VVOL.

All major storage vendors will be supporting the new VVOL feature. And the impact of this is very big.

But let’s start with some background information….

What is VVOL

**please note the capital V 🙂

VVOL is a new paradigm: a standard, a model, a template for storage. You won't be doing storage like you have in the past; well, not 100% the same. Sure, you will still have the traditional tasks a storage admin performs for fabric-based storage arrays, but what makes things really interesting is what I call the integration point.

Currently Storage arrays are not VM aware; VVOL changes that. I guess you can call it adding more “intelligence” to your storage.

The conversation is: storage, meet this VM/application. It isn't just about consuming a LUN; it's more about making a VMDK a native object on the storage system.

This point/ intersection is where the conversation should grab your attention.

I won't try to recap the status quo for storage and VMware today, but know this: it is complex and time-consuming, carries a high operational cost, and requires specialized training to ensure availability and to manage SLAs for performance and capacity.

Very important reference reading: (much more detailed information)

Cormac Hogan’s blog:

http://blogs.vmware.com/vsphere/2012/10/virtual-volumes-vvols-tech-preview-with-video.html

Duncan Epping:

http://www.yellow-bricks.com/2012/08/07/vmware-vstorage-apis-for-vm-and-application-granular-data-management/

Chad Sakac

http://virtualgeek.typepad.com/virtual_geek/2013/08/vmworld-2013-vvol-update-and-vm-granular-storagenow.html

 

VMworld 2013 VVOL Session

VMworld 2012 VVOL Session

 

And when you are ready… go and DO THE BETA! (or at least join the beta community and learn all you can!!)

Rawlinson shows you all about the beta and examples of storage vendors using VVOL

http://www.punchingclouds.com

 

Open source tool: Vagrant + VMware Fusion + EMC ScaleIO = ?

Open source. Learn it. Use it. Software tools are great, and sharing tools and supporting the open source community is a good thing. This is part of a multi-part post; I will share my experience setting up ScaleIO in my VMware Fusion lab. First, find a tool to make provisioning quicker, more consistent, and automated. The tool: Vagrant. From the Vagrant website:

“Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.”

More on Vagrant later. And now, more about…

ScaleIO

Intro to ScaleIO

 

EMC ScaleIO at a Glance

“EMC ScaleIO is a software-only solution that uses application hosts’ local disks to realize a virtual SAN that is comparable to or better than external SAN storage, but at a fraction of the cost and complexity. ScaleIO makes a convergence of the storage and application layers possible, ending up with a wall-to-wall single layer of hosts. The lightweight software components of ScaleIO are installed on the application hosts alongside applications like databases and hypervisors.

Breaking traditional barriers of storage scalability, ScaleIO scales out to hundreds and thousands of nodes. ScaleIO’s performance scales linearly with the number of application servers and disks. With ScaleIO, any administrator can add, move, or remove servers and capacity on demand during I/O operations. ScaleIO helps ensure the highest level of enterprise-grade resilience while maintaining maximum storage performance.

ScaleIO natively supports all the leading Linux distributions, Windows Server and hypervisors and works agnostically with any SSD, HDD, and network. The product includes encryption at rest and quality of service (QoS) of performance. ScaleIO can be managed from both a command-line interface (CLI) and an intuitive graphical user interface (GUI). Deploying ScaleIO in both greenfield and existing data center environments is a simple process and takes only a few minutes.”