HP C7000 iLO

What happened? Why doesn't my iLO have an IP address? The blade I inserted has the correct profile, but the iLO cannot obtain an IP address, and I cannot start a remote session to it...

What? What about configuring the iLO IP address?

HP iLO uses DHCP to obtain an IP address.

The Onboard Administrator (OA) configures this parameter: the iLOs have DHCP enabled, and the OA distributes the addresses. The feature is called Enclosure Bay IP Addressing, or EBIPA.
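If you prefer the command line, EBIPA can also be set up from an SSH session to the OA. This is only a sketch: the bay number, IP addresses, and netmask below are placeholders you would replace with your own values.

```
SHOW EBIPA                                         # review current EBIPA settings per bay
SET EBIPA SERVER 192.168.1.101 255.255.255.0 5     # assign IP/netmask to device bay 5
SET EBIPA SERVER GATEWAY 192.168.1.1 5             # default gateway for bay 5
ENABLE EBIPA SERVER 5                              # enable EBIPA for that bay
SAVE EBIPA                                         # apply; the iLO resets and picks up the address
```

The GUI wizard described below accomplishes the same thing; the CLI is just handy when you are scripting a whole enclosure.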

I was scratching my head, as it had been a while since I had configured the iLO on a new blade.

I forgot to run the full iLO configuration in the Onboard Administrator.

Verify the blade is functional, powered on, and recognized, and that the chassis isn't throwing any errors.


Go into the OA.

Locate the menu: First Time Setup Wizard >>> EBIPA section.

  1. Make sure the enable box is checked for the new blade's bay
  2. Notice how the blade has not obtained an IP address yet; it is pending the setting being applied and a reboot


Once the blade or its iLO has rebooted, it will have an IP address and you will be able to open a remote console to the blade.
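A quick sanity check from your workstation confirms the iLO is reachable (the address here is a placeholder for whatever EBIPA assigned):

```
ping 192.168.1.101
# then browse to https://192.168.1.101 to launch the remote console
```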

Remember to patch the blade to the latest firmware. Download the full ISO image of the SPP (Service Pack for ProLiant) from HP's site and boot from that ISO.


VMworld 2015 Highlights, Day 3: VSAN 6.1

Well, it is almost the end of VMworld 2015. I have been incredibly busy attending so many sessions that it is numbing for both the mind and my feet! The content this year has a different feel for me. In years past, the talk on the tip of everyone's tongue was a new release of x, y, or z, but this year it is more than that. For me it is Day 3, as I have been at VMworld 2015 since Sunday.

VSAN 6.1

This is the third major release of this product. VSAN isn't the new kid on the block anymore. It has shown its ability to deliver enterprise-class performance with the flexibility of the Software-Defined Data Center, and it delivers flexibility and ease of operations as a foundation of SDS. Performance is just one of its attributes. The beta for a long-awaited feature was announced: DEDUPLICATION.


“VSAN 6.next beta – A glimpse of the future”

VSAN stretched clusters.

If you have a chance, watch the session delivered by Rawlinson Rivera (@PunchingClouds) and Duncan Epping (@DuncanYB). It is a deep dive into the design of using VSAN as a stretched cluster. Key points to remember are the L2/L3 requirements and the newest feature, the witness VM. While VSAN requires L2 multicast between the data sites, the witness VM only requires L3 communication. The impact: you can build a stretched cluster across Site A and Site B, while a third site with more relaxed latency requirements hosts your witness VM over L3. Since the fault domains depend on the use of a witness VM, this redesign of the witness also lends itself to using VSAN as a ROBO solution.

Look for it: STO5333, "Building a Stretched Cluster with Virtual SAN."

Consider the impact of VSAN on the design and deployment of a stretched/metro cluster, combined with the versatility and flexibility of vSphere 6 (think PSC HA; think the flexibility of vMotion across VDS, VSS, and different datacenters!).



What is it? In a nutshell, VSAN can now be deployed at remote sites (Remote Office/Branch Office) with only two nodes! Technically the third node is a witness VM.


I am very excited to leverage this ROBO configuration for many customers. There is a lot of flexibility in how to implement this design. It decreases the complexity and cost of shared storage, but that is the crux of VSAN and of VMware's SDS.

One last note about VSAN 6.1.

I have had many conversations with customers, as well as the VMware VSAN team, about the requirements and benefits of such an architecture. The focus of those conversations was addressed directly in session STO5336, "VMware Virtual SAN – Architecture Deep Dive," delivered by Rawlinson Rivera (@PunchingClouds) and Christos Karamanolis.

Note that the hybrid design and the all-flash (AFA) VSAN are NOT the same. Is that flame bait? No.

I will try to summarize it here. Both are hyper-converged SDS solutions employing the benefits of clustered storage and SSDs. Both offer simplified operations and lower TCO. Both deliver fantastic, consistent, predictable performance.

Cache is where it is at: reads and writes.

SSDs are leveraged in both the hybrid model and the AFA model, but the key is that they are leveraged DIFFERENTLY.

Hybrid splits its cache roughly 70/30 between reads and writes; moreover, and this is the crux, the algorithm used to deliver performance in an AFA is different from the hybrid approach.

  1. LRU vs. ARC. That is why you cannot, I repeat, CANNOT bolt all-SSD onto a licensed hybrid VSAN design and get the same performance as an AFA solution. I have tried to explain this to a customer, but one person's results skewed their reasoning. I would tend to believe results that come from countless man-hours of R&D, vetted against multiple manufacturers/vendors and various reference-architecture benchmarks.
  2. Read/write characteristics of SSD/HDD. Not all SSDs are created equal, and SSDs and HDDs are different. VMware knows this. Hardware manufacturers know this. Storage engineers know this. But consumers only sometimes know this. You have the flexibility of choosing your own hardware to fulfill your requirements for VSAN (ref: VMware Compatibility Guide).
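To make the eviction-policy point concrete, here is a minimal LRU cache sketch in Python. This is purely illustrative and is not VMware's implementation; the relevant contrast is that ARC tracks both recency and frequency of access, while plain LRU, as below, evicts on recency alone.

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache: evicts whatever was touched longest ago."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # touching a key makes it most recent
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least-recently-used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used entry
cache.put("c", 3)      # capacity exceeded: "b" is evicted, "a" survives
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

A frequently re-read block survives under LRU only if it was also read recently; ARC's extra bookkeeping is what lets an all-flash tier make smarter choices under mixed workloads, which is part of why swapping SSDs into a hybrid design does not turn it into an AFA.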

“A candle that burns twice as bright burns half as long” 

The VSAN 6.1 AFA design understands this at a much deeper level. It is designed and tested not only for performance but for longevity, and VSAN 6.1 helps you sort out which SSD is appropriate. HDDs, aka spinning magnetic disks, can handle long-term data and dense capacity at a price: IOPS. Conversely, SSDs can deliver a 10x increase in IOPS, but at the cost of density and longevity.

Pick your components carefully. To borrow from the culinary world: the best ingredients make for the best food, and you will consume something awesome if you cook something awesome. "You can't make chicken salad out of chicken sh*t," lol.

Reference Duncan’s explanation of the VSAN Ready Nodes http://www.yellow-bricks.com/2015/08/25/virtual-san-ready-nodes-taking-charge/

Well, there is more to VSAN 6.1 and VMworld 2015... more when I can find some time to share. Thanks for reading.