
Troubleshoot SAN boot from UCS!!

Well!! There are a lot of articles published on this topic, but I want to publish my own version by consolidating the available material, adding my own experience, and making it easy to follow for people with moderate knowledge of UCS.

I am not going to explain how to create a service profile, because plenty of videos on that are already available.

Here is a simple diagram with an explanation:


MDS (native FC, NPIV mode) <-> Fabric Interconnect (native FC, NPV mode) <-> UCS

   1) Configure the FC links as uplink interfaces on both the FIs.

                 – Equipment –> FI section –> click on FI-A –> FC Ports –> select 1/1 as an Uplink port (do the same for Fabric B).

   2) Create VSAN 10 and VSAN 20 on the MDS and the FIs as per the diagram.

            – Example: on FI-A, port 1/1 is mapped to VSAN 10; on FI-B, port 1/1 is mapped to VSAN 20.

              – Go to SAN (click on SAN Cloud) –> Fabric A –> VSANs –> create VSAN 10 with FCoE VLAN 10, scoped to Fabric A (repeat the same step for Fabric B with VSAN 20).

              – Go to SAN (click on SAN Cloud) –> Fabric A –> Uplink FC Interfaces –> click on port 1/1 and map VSAN 10 (repeat the same on FI-B with VSAN 20).
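On the MDS side, the matching VSAN can be created from the CLI. Here is a minimal sketch; the switch name, VSAN name, and interface are assumptions for illustration:

```
! MDS-A: create VSAN 10 and place the FI-facing port in it
MDS-A# configure terminal
MDS-A(config)# vsan database
MDS-A(config-vsan-db)# vsan 10 name UCS-FAB-A
MDS-A(config-vsan-db)# vsan 10 interface fc1/1
! Repeat on MDS-B with VSAN 20 for the Fabric-B side
```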

   3) Ensure that the MDS is configured in NPIV mode:

          – feature npiv

         – feature fport-channel-trunk
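On the MDS CLI this looks like the following (the switch name is a placeholder):

```
MDS-A# configure terminal
MDS-A(config)# feature npiv
MDS-A(config)# feature fport-channel-trunk
MDS-A(config)# end
! Verify that NPIV is now enabled
MDS-A# show feature | include npiv
```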

  4) Ensure that the FI is configured in "FC end-host mode" (which means NPV is enabled by default).

        – Check the NPV logins: show npv flogi-table / show npv status.

        – Then configure the boot policies.
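These checks can be run from the FI's NX-OS shell and cross-checked on the MDS; a sketch (device names are placeholders):

```
! On the FI, drop into the NX-OS shell first
UCS-A# connect nxos a
UCS-A(nxos)# show npv status
UCS-A(nxos)# show npv flogi-table

! On the MDS, confirm the blade's vHBA has logged in to the fabric
MDS-A# show flogi database
MDS-A# show fcns database vsan 10
```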

   5) Create a boot policy:

           – Boot order 1: CD-ROM.

           – Boot order 2: the HBAs (vHBA/FC template).

          – 1st HBA (fc0) as primary, pointing to the Fabric-A storage controller WWPN (get it from the storage team).

           – 2nd HBA (fc1) as secondary, pointing to the Fabric-B storage controller WWPN (get it from the storage team).
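The same boot policy can also be built from the UCSM CLI. This is only a rough sketch, assuming a policy named SAN-Boot and a made-up target WWPN; verify the exact command syntax against your UCSM version's CLI guide:

```
UCS-A# scope org /
UCS-A /org # create boot-policy SAN-Boot
UCS-A /org/boot-policy* # create storage
UCS-A /org/boot-policy/storage* # create san-image primary
UCS-A /org/boot-policy/storage/san-image* # set vhba fc0
UCS-A /org/boot-policy/storage/san-image* # create path primary
! The WWPN below is a placeholder - use the value from your storage team
UCS-A /org/boot-policy/storage/san-image/path* # set wwn 50:0A:09:81:88:3C:D7:01
UCS-A /org/boot-policy/storage/san-image/path* # set lun 0
UCS-A /org/boot-policy/storage/san-image/path* # commit-buffer
```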

         6) One more thing: you need to give the LUN ID for each WWPN. By default 0 or 1 is used by most storage vendors, but some organizations assign specific LUN IDs to their LUNs, so get those details from the storage team. Because of a wrong LUN ID I once got a LUN access error (see the troubleshooting link below).

      7) Reboot and open a KVM console to the blade. While it boots you should see the LUN mapped to the blade, which means you are good from the storage perspective.

        8) Map the ESX (or any OS) image to the blade before booting (for the first-time installation).

       9) In the KVM console to the blade, click Virtual Media –> Activate Virtual Devices –> CD/DVD, map the ISO image, and install ESX (or any other OS).

     10) If you are having any issue with the LUN, here is the link to the troubleshooting document, because that is where I got the solution.

My research on infrastructure design for BigData

The future trend is going to change for networking folks: we should be ready to handle application-aware networks and have a better understanding of application functionality to come up with the best network design.

As part of this transformation, fortunately/unfortunately 🙂 I got a chance to work on a Hadoop solution. During my research on the Internet I came to know that handling big data requires special hardware (compute/network/storage), and I also learned how big data works and why we need special infrastructure.

Hadoop is an open-source data-mining platform that processes and converts large sets of varied unstructured data into structured data in a data lake. It integrates with big-data platforms such as Cassandra, MongoDB, and CouchDB; clusters are managed using Ambari and coordinated by ZooKeeper, Sqoop handles data loads from RDBMS into HDFS, and so on…
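As an aside, a Sqoop load from an RDBMS into HDFS looks roughly like this (the database host, credentials, table, and target path are made-up placeholders):

```
# Import the "orders" table from MySQL into HDFS; -P prompts for the password
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /datalake/raw/orders
```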

Today the market-leading Hadoop ecosystem distributions are:

  1. MapR
  2. Cloudera
  3. Hortonworks

As for the Hadoop ecosystem itself, please don`t ask me how it all functions 🙂


Here are my inputs on choosing the right hardware for a big-data platform.

Key principles that should be considered while designing a Hadoop environment:

  • Usually not virtualized (a hypervisor only adds overhead)
  • Usually not blade servers (not enough local storage)
  • Usually not highly oversubscribed (significant east-west traffic)
  • Usually not SAN/NAS (HDFS prefers local disks)
  • Servers should have 10-Gig ports

Network options:

  1. To handle a Hadoop platform's high-density traffic, the data center requires 10/40-Gigabit ports and low-latency switches such as the Cisco Nexus platform (5K/3K), plus the UCS Common Platform Architecture, to deliver high performance.
  2. The Cisco ACI kit (Nexus 9K), though I haven`t yet seen the right use cases for ACI with big data.

Personally, I would prefer to go with option 1. Anyone interested in a next-generation network can go with ACI, but will definitely have more challenges during deployment and integration.

Compute options:

  1. UCS C240 M3 rack servers
  2. UCS CPA (Common Platform Architecture)

A single UCS CPA domain scales up to 10 racks (160 servers). The reference four-rack build consists of:

  • Two Cisco UCS 6296UP Fabric Interconnects
  • Eight Cisco Nexus 2232PP Fabric Extenders (two per rack)
  • 64 Cisco UCS C240 M3 rack-mount servers (16 per rack)
  • Four Cisco R42610 standard racks

Of course, we don`t need to go with all of the components mentioned above. Initially, go with one rack (two FEXes) and a few rack servers, then keep adding servers as required.

Recommended FEX connectivity by Cisco:


10 Hadoop Hardware Leaders:

  • Source: Cisco Live BRKAPP-2033 / BRKCOM-2011