
Core Knowledge: vSAN HBA

The fundamentals cannot be over-emphasized. You need to ensure that the key components of your vSAN hosts are configured per recommendations.

Just a reminder of the HBA controller configuration.

  1. Make sure the device is on the Hardware Compatibility Guide (HCG).
  2. Verify the firmware is up to date.

I have seen firsthand the impact different firmware revisions can have on your environment.

Example: Dell Perc H310

Controller queue depth impacts rebuild/resync times. A low controller queue depth may impact the availability of your production VMs during a rebuild/resync. A minimum queue depth of 256 is required for vSAN, and some vSAN Ready Node profiles, such as All-Flash configurations, require a minimum queue depth of 512.
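To make those thresholds concrete, here is a minimal sanity-check sketch. It assumes you have already read the controller queue depth from esxtop (the AQLEN column under the disk adapter view) or the vendor spec sheet; the function name and messages are my own, only the 256/512 thresholds come from the guidance above.

```shell
# Compare a reported controller queue depth against the vSAN minimums.
# The thresholds (256 minimum, 512 for some All-Flash Ready Node profiles)
# are from the text above; everything else is illustrative.
qd_check() {
  qd=$1
  if [ "$qd" -ge 512 ]; then
    echo "OK for All-Flash Ready Node profiles (>= 512)"
  elif [ "$qd" -ge 256 ]; then
    echo "Meets the vSAN minimum (>= 256), below the 512 All-Flash profile minimum"
  else
    echo "Below the vSAN minimum of 256 - expect rebuild/resync pain"
  fi
}

qd_check 25    # e.g. the Dell PERC H310's notoriously low default depth
qd_check 600
```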

For more details, see the vSAN Hardware Quick Reference Guide.

vSAN and VMFS can end up vying for the same resource: the HBA.

Do NOT mix disk access modes on your Host Bus Adapter (HBA), also called an I/O Controller. A pass-through configuration is preferred, but RAID-0 can work. vSAN prefers more direct access to the devices attached to the I/O Controller. So, for example, if the HBA is set up with some logical configuration that groups all the devices together before presenting them to the ESXi host, you have some prep work to do. Several array controllers do not support pass-through mode; to use this type of controller for vSAN, we need to create a single-disk RAID-0 group for every SSD and HDD.

 


Dell PERC H740

Examples of what to check:

  • RAID level access modes for the attached devices.
  • vSAN and VMFS devices on the same HBA.

From the VMware KB:

  • Do not mix the controller mode for vSAN and non-vSAN disks.
    • If the vSAN disks are in pass-through/JBOD mode, the non-vSAN disks must also be in pass-through/JBOD mode.
    • If the vSAN disks are in RAID mode, the non-vSAN disks must also be in RAID mode.
    • Mixing the controller mode will mean that various disks will be handled in different ways by the storage controller. This introduces the possibility that issues affecting one configuration could also affect the other, with possible negative consequences for vSAN.
    • https://kb.vmware.com/s/article/2129050

If you absolutely must use the same HBA:

  1. Limit the use of the VMFS datastore that shares the HBA with vSAN.
  2. Do NOT use RDMs for that shared device/HBA.
  3. Do NOT put the boot device on the same controller as vSAN.

Again from the KB:

  • If the non-vSAN disks are in use for VMFS, the VMFS datastore should be used only for scratch, logging, and coredumps.
    • Virtual machines should not be running from a disk or RAID group that shares its controller with vSAN disks or RAID groups.
    • ESXi host installation is permitted on non-vSAN disks attached to the same controller.
  • Do not pass through non-vSAN disks to virtual machine guests as Raw Device Mappings (RDMs).

The number and type of drives, plus their disk group configuration, are not covered here; that is another topic of important discussion!

 

 

 


vSphere Content Libraries (CL)


The Content Libraries feature was introduced with vSphere 6. The goal is to reduce the complexity of managing the VM templates, vApps, ISO images, and scripts that your virtual environment needs for day-to-day operations. Content libraries are container objects.
A content library can be:

  1. Local to the vCenter Server you create it in.
  2. Published externally to other vCenter Servers, with password authentication.
  3. Subscribed to another published content library.

The flexibility of content library topologies enables your organization to maximize operational efficiency. How? Here are some scenarios administrators face:
“What template did you use to build this VM?”
“Is it patched? Is it the latest one?”

Now imagine this conversation across business units that span geographic regions, time zones, etc.
What and where?
A key thing a CL helps prevent is the bad practice of building workflows and processes around a single person. Increase efficiency in your organization: by using a central repository of essential files, you can avoid using the “wrong” VM template. A content library answers both “what is the latest version?” and “where is the latest version?”

How do you setup a CL?

  1. In the vSphere Web Client navigator, select vCenter Inventory Lists > Content Libraries.
  2. Click the Objects tab.
  3. Click the Create a New Library icon (create a content library).
  4. Enter a name for the content library, and in the Notes text box, enter a description for the library and click Next.
  5. Select the type of content library that you want to create.

  • Local content library: Accessible only in the vCenter Server instance where you create it.
  • Published content library: Select Publish externally to make the content of the library available to other vCenter Server instances. If you want users to use a password when accessing the library, select Enable authentication and set a password.
  • Optimized published content library: Select Optimize for syncing over HTTP to create an optimized published library. This library is optimized to ensure lower CPU usage and faster streaming of the content over HTTP. Use this library as a main content depot for your subscribed libraries. You cannot deploy virtual machines from an optimized library. Use an optimized published content library when the subscribed libraries reside on a remote vCenter Server system and Enhanced Linked Mode is not used.
  • Subscribed content library: Creates a content library that is subscribed to a published content library. You can sync the subscribed library with the published library to see up-to-date content, but you cannot add or remove content from the subscribed library. Only an administrator of the published library can add, modify, and remove content from it.

Provide the following settings to subscribe to a library:

  1. In the Subscription URL text box, enter the URL address of the published library.

  2. If authentication is enabled on the published library, enter the publisher password.

  3. Select a download method for the contents of the subscribed library.

    • If you want to download a local copy of all the items in the published library immediately after subscribing to it, select Download all library content immediately.

    • If you want to save storage space, select Download library content only when needed. You download only the metadata for the items in the published library.

      If you need to use an item, you can synchronize it to download its content.

  4. When prompted, accept the SSL certificate thumbprint.

    The SSL certificate thumbprint is stored on your system until you delete the subscribed content library from the inventory.

6. Click Next.
7. Select a datastore, or enter the path to a remote storage location where the contents of this library will be kept.

  • Enter an SMB or an NFS server and path: If you use a vCenter Server instance that runs on a Windows system, enter the SMB machine and share name. If you use the vCenter Server Appliance, enter a path to NFS storage. You can store your templates on NFS storage that is mounted to the appliance. After the create-a-new-library operation is complete, the vCenter Server Appliance mounts the shared storage to the host OS.
  • Select a datastore: Select a datastore from your vSphere inventory. A vSAN datastore will appear here as a choice.

8. Review the information on the Ready to Complete page and click Finish.

Great, now you have a Content Library. What next?

ADD CONTENT to your Content Library.
You can clone a VM as a template into your Content Library: right-click the VM and choose
Actions > Clone > Clone to Template in Library.


Now for another time saver!
So, you already realize the importance of a repository, and you have a single folder on a datastore named /iso-templates. Now what? You need to copy all of that into your new Content Library so you can publish the CL and enable other vCenters to subscribe.
The tricky part is dealing with ISO images.

Sure, templates and VMs can be handled with clone-to-template actions, but here is an option for existing files in your datastore. This will save you a bit of time in re-copying the ISO back into the content library.

 

When I first started to use the CL, I didn’t see an option in the CL to add ISO files. I reached out to Roman Konarev, and he provided this excellent guide.

 

How to import your ISOs from a datastore:
1) Get a URL to the ISO file that you want to import into the Content Library. The structure of that URL is: [DataStore url]/[ISOs folder]/[file_name].

Here is my ISOs folder:
Here is my DS url:

So, the final URL will be the following: ds:///vmfs/volumes/56cd1758-86602854-5166-020019640efe/RK_ISOs/small_ISO.iso

2) Open the standard “Import library item” wizard and paste the URL above there.
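The URL construction in step 1 is just string assembly, so here is a tiny sketch of it. The datastore UUID, folder, and file name are the article’s own example values; the helper name is mine.

```shell
# Build the ds:// import URL from its three parts:
# datastore UUID, folder name, and file name.
ds_iso_url() {
  printf 'ds:///vmfs/volumes/%s/%s/%s' "$1" "$2" "$3"
}

ds_iso_url 56cd1758-86602854-5166-020019640efe RK_ISOs small_ISO.iso
```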

 

** vSphere 6.5 update **

Update to vSphere 6.5 and make it easier! What a difference a version makes!

Procedure

  1. In the vSphere Web Client navigator, select vCenter Inventory Lists > Content Libraries.
  2. Right-click a content library and select Import Item.

    The Import Library Item dialog box opens.

  3. Under the Source section, select the option to import an item from a local file. Click Browse to navigate to the file that you want to import from your local system. You can use the drop-down menu to filter files in your local system.
  4. Under the Destination section, enter a name and description for the item, and click OK.

Content Libraries can even extend into the Cloud!

Create a content library that is subscribed to the content library you published from your on-premises data center. Content is synchronized from your on-premises data center to your SDDC in VMware Cloud on AWS.


vSphere Web Client cool feature! Topology maps

Anyone who works with or administers VMware vSphere needs a top-down view. You can review uplink settings and uplinks per host, see how each distributed port group relates to the VMs defined on it, and view VMkernel port (vmk) IP addresses, all at a glance. Very helpful for seeing what is online or offline.

Here is how to access a topology map of a vSphere Distributed Switch and its virtual machine networking.

There are also advanced features to check out for your use.

example: filter and save views!

Procedure

  1. Navigate to the vSphere distributed switch in the vSphere Web Client.
  2. On the Configure tab, expand Settings and select Topology.



Technical notes on IP Based Storage Guide for EMC VNX and VMware 5.5

When it comes to IP-based storage, there are two major choices for use in VMware environments: NFS and iSCSI.

This article will discuss iSCSI options.

NFS is a very valid design choice with very flexible deployment options. NFS with VMware is fast and flexible; in other words, a solid choice. But NFS is for another time and a different discussion.

Why IP-based storage?

In a single word: SPEED. And flexibility. Okay, two words. Core networking speeds are no longer limited to 10BASE-T or 100BASE-T; 10 Gigabit Ethernet is now closer to the standard. You already have an IP-based network, so why not leverage what is installed?

I have seen greater interest in deployments of iSCSI-based storage for VMware lately. Not just 1 Gb; 10 Gigabit Ethernet is gaining more of a foothold in datacenters as customers upgrade their core networking from 1 Gb to 10 Gb. The potential for performance gains is a good thing, but fundamental to any deployment is a solid network design.

What I mean is end-to-end. What kind of switch are you using? Are you limited to a certain number of 10 Gb ports? How many switches do you have? Do you have a single point of failure? This is critical because you will be leveraging your network for double duty: instead of a discrete network designated for storage, such as FC provides, you will now run storage traffic across your IP network. Two separate physical switches are ideal, but at a minimum use VLANs for logical separation. And let me go ahead and say it: “VLAN 0 (zero) is not really a scalable enterprise option!” That is a huge red flag that you will need more network analysis and work before deploying an IP-based storage solution.

There are many considerations for a successful iSCSI implementation.

  1. Gather customer requirements
  2. Design
  3. Verify
  4. Implement/deploy
  5. Test (user acceptance testing)

Ideally, have two 10 Gb switches for redundancy! Be careful in the selection of an enterprise-grade switch. I have seen horrible experiences when the proper features are not enabled; for example, flow control can be a good thing!

For software-based iSCSI initiators, don’t forget about Delayed ACK. See Sean’s excellent article here: Delayed ACK setting with VNX. Read more VMware details about how to implement Delayed ACK in KB 1002598.

“Modify the delayed ACK setting on a discovery address (recommended):

  1. On a discovery address, click the Dynamic Discovery tab.
  2. Click the Server Address tab.
  3. Click Settings > Advanced.”

Double, no, TRIPLE check the manufacturer’s recommended CABLE TYPE and LENGTH. For example: does your 10GbE use fiber optic cable? Do you have the correct type? What about the SFP? And if you are not using fiber optic but choose TwinAx cabling, do you have the correct cable per the manufacturer’s requirements?

For example, Meraki only makes and supports a 1-meter 10 Gb passive copper cable for their switches. A typical Cisco business-class switch supports 1-, 3-, and 5-meter passive and 7- and 10-meter active cables.

Active cables are generally more expensive, but they could be a requirement depending on your datacenter and/or colocation layout.

I try to approach the solution from both ends: storage to host, and the reverse, host to storage. Examine end-to-end dependencies. Even though your area of responsibility isn’t the network, you will be reliant on network services, and any misconfiguration will impact your ability to meet the stated design requirements. You may not have bought, or had any input into, the existing infrastructure, but you will be impacted by what is currently in use. Keyword: interoperability, meaning how each independent system interacts with the others, including upstream and downstream dependencies.

Other considerations:

For example: VMkernel port binding. The guidance below is from VMware KB 2038869, “Considerations for iSCSI Port Binding”:

“Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet to allow multiple paths to an iSCSI array that broadcasts a single IP address. When using port binding, you must remember that:

  • Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.
  • All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.
  • All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
  • Currently, port binding does not support network routing.”
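Once the VMkernel ports exist on the right vSwitch, the actual binding is done with `esxcli iscsi networkportal add` on the host (the procedure KB 2038869 describes). Since you can’t run esxcli off-host, here is a dry-run sketch that just prints the commands you would run; vmhba33, vmk1, and vmk2 are assumed names, so check yours with `esxcli iscsi adapter list` and `esxcli network ip interface list` first.

```shell
# Dry run: print the port-binding commands for a software iSCSI adapter.
# Nothing here touches the host; it only echoes the esxcli invocations.
print_bind_cmds() {
  adapter=$1; shift
  for vmk in "$@"; do
    echo "esxcli iscsi networkportal add --adapter $adapter --nic $vmk"
  done
}

print_bind_cmds vmhba33 vmk1 vmk2
```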


While there isn’t FC zoning for IP based storage there will be a requirement for subletting and VLAN separation.

For VNX, here are some iSCSI design considerations. The following points are best practices for connecting iSCSI hosts to a CLARiiON or VNX:

  • iSCSI subnets must not overlap the management port or Service LAN (128.221.252.x).

  • For iSCSI, there is no zoning (unlike an FC SAN), so separate subnets are used to provide redundant paths to the iSCSI ports on the CLARiiON array. For iSCSI you should have mini-SANs (VLANs) with only one HBA per host in each VLAN, with one port per storage processor (SP) (for example, A0 and B0 in one VLAN, A1 and B1 in another). All connections from a single server to a single storage system must use the same interface type, either NIC or HBA, but not both.

  • It is a good practice to create a separate, isolated IP network/VLAN for the iSCSI subnet. This is because the iSCSI data is unencrypted and also having an iSCSI-only network makes troubleshooting easier.

  • If the host has only a single NIC/HBA, then it should connect to only one port per SP. If there are more NICs or HBAs in the host, then each NIC/HBA can connect to one port from SP A and one port from SP B. Connecting more SP ports to a single NIC can lead to discarded frames due to the NIC being overloaded.

  • In the iSCSI initiator, set a different “Source IP” value for each iSCSI connection to an SP.  In other words, make sure that each NIC IP address only appears twice in the host’s list of iSCSI Source IP addresses: once for a port on SP A and once for a port on SP B.

  • Make sure that the Storage Processor management ports do not use the same subnets as the iSCSI ports; see the EMC article “Changing configuration on one iSCSI port may cause I/O interruption to all iSCSI ports on this storage processor (SP) if using IP addresses from same subnet” for more information.

  • It is also a best practice to use a different IP switch for the second iSCSI port on each SP. This is to prevent the IP switch being a single point of failure. In this way, were one IP switch to completely fail, the host can failover (via PowerPath) to the paths on the other IP switch. In the same way, it would be advisable to use different switches for multiple IP connections in the host.

  • Gateways can be used, but the ideal configuration is for HBA to be on the same subnet as one SP A port and one SP B port, without using the gateway.

For example, a typical configuration for the iSCSI ports on a CLARiiON, with two iSCSI ports per SP would be:

A0: 10.168.10.10 (Subnet mask 255.255.255.0)
A1: 10.168.11.10 (Subnet mask 255.255.255.0)
B0: 10.168.10.11 (Subnet mask 255.255.255.0)
B1: 10.168.11.11 (Subnet mask 255.255.255.0)

A host with two NICs should have its connections configured similar to the following in the iSCSI initiator to allow for load balancing and failover:

NIC1 (for example, 10.168.10.180) – SP A0 and SP B0 iSCSI connections
NIC2 (for example, 10.168.11.180) – SP A1 and SP B1 iSCSI connections

Similarly, if there were four iSCSI ports per SP, four subnets would be used. Half of the hosts with two HBAs would then use the first two subnets, and the rest would use the other two.

The management ports should also not overlap with the iSCSI ports. As the iSCSI network is normally separated from the LAN used to manage the SPs, this is rarely an issue, but to follow the example iSCSI addresses above, the other IPs used by the array could be as in the following examples:

VNX Control Station 1: 10.20.30.1
VNX Control Station 2: 10.20.30.2
SP A management IP address: 10.20.30.3
SP B management IP address: 10.20.30.4
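Since every example address above uses a 255.255.255.0 mask, the “don’t overlap” rule reduces to comparing the first three octets. Here is a throwaway sketch of that check; it assumes /24 masks throughout and is not a general-purpose subnet calculator.

```shell
# Report whether two IPs share a /24 subnet (pure string comparison,
# valid only because the example config uses 255.255.255.0 everywhere).
same_subnet24() {
  [ "${1%.*}" = "${2%.*}" ] && echo overlap || echo ok
}

same_subnet24 10.168.10.10 10.20.30.3     # iSCSI A0 vs SP A management: should be ok
same_subnet24 10.168.10.10 10.168.10.11   # A0 vs B0: same subnet by design
```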


The High Availability Validation Tool will log an HAVT warning if it detects that a host is connected via a single iSCSI initiator. Even if the initiator has a path to both SPs, the host is still at HA risk from a connectivity view. You will also see this warning if using unlicensed PowerPath.

Caution! Do not use the IP address range 192.168.1.x, because this is used by the serial port PPP connection.

Oh, I haven’t even discussed VMware storage path policies, as those really depend on your array. However, VNX is ALUA (failover mode 4), and Round Robin works really well if you don’t have, or don’t want, PowerPath as an option!

References:

VMware Storage Guide 5.5 (PDF)

VMware Storage Guide 6.0 (PDF)

“Best Practices for Running VMware vSphere on iSCSI” (TECHNICAL MARKETING DOCUMENTATION v 2.0A)

“Using VNX Storage with VMware vSphere” EMC TechBook
