If you don’t know, an FC-VC module looks like this; you install them in redundant pairs in adjacent interconnect bays at the rear of the chassis.
You then patch each of the FC Ports into a FC switch.
The supported configuration is one FC-VC module to one FC switch (below).
Connecting one VC module to more than one FC switch is unsupported (below).
So, in essence, you treat one FC-VC module as terminating all of the HBA port 1s and the other FC-VC module as terminating all of the HBA port 2s.
The setup we had:
- A number of BL460c blades with dual-port Qlogic Mezzanine card HBAs.
- HP c7000 blade chassis with 2 x FC-VC modules plugged into interconnect bays 3 & 4 (shown below)
The important point to note is that while each FC-VC module has 4 uplinks, that does not mean you get 2 x 16Gb/s connection “pools” or trunks that you just connect into.
Put differently, if you unplug one uplink the overall bandwidth does not simply drop to 12Gb/s; instead you disconnect a single HBA port on a number of servers and force them to fail over to the other path and FC-VC module.
It does not do any dynamic load balancing or anything like that; it is literally a physical port concentrator, which is why it needs NPIV to pass through the WWNs from the physical blade HBAs.
There is a concept of oversubscription; in the Virtual Connect GUI it is managed by setting the number of uplink ports used.
Most people will probably choose 4 uplink ports per VC module, which is 4:1 oversubscription: each FC-VC port (and there are 4 per module) has 4 individual HBA ports connected to it. If you reduce the number of uplinks you increase the oversubscription (2 uplinks = 8:1, 1 uplink = 16:1).
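As a rough sketch, the oversubscription arithmetic works out like this (assuming a c7000 with 16 half-height blade bays, each presenting one HBA port to a given FC-VC module; the function name is mine, not an HP tool):

```python
# Oversubscription ratio for an FC-VC module, as described above.
# Assumes 16 HBA ports (one per half-height blade bay) behind up to
# 4 uplink ports per module.

HBA_PORTS_PER_MODULE = 16  # one port from each of the 16 blade bays
MAX_UPLINKS = 4

def oversubscription(uplinks: int) -> str:
    if not 1 <= uplinks <= MAX_UPLINKS:
        raise ValueError("an FC-VC module has 1 to 4 uplink ports")
    return f"{HBA_PORTS_PER_MODULE // uplinks}:1"

for uplinks in (4, 2, 1):
    print(uplinks, "uplinks ->", oversubscription(uplinks))
# 4 uplinks -> 4:1, 2 uplinks -> 8:1, 1 uplink -> 16:1
```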
Which FC-VC Port does my blade’s HBA map to?
The front bay you insert your blade into determines which individual 4Gb/s port it maps to (and shares with other blades) on the FC-VC module; it’s not just a virtual “pool” of connections. This is important when you plan your deployment, as it can affect the way failover works.
The following table is what we found from experimentation and a quick glance at the “HP Virtual Connect Cookbook” (more on this later).
|FC-VC Port|Maps to HBA in blade chassis bays (these ports are shared by)|
|---|---|
|Bay 3 Port 1, Bay 4 Port 1|1, 5, 11, 15|
|Bay 3 Port 2, Bay 4 Port 2|2, 6, 12, 16|
|Bay 3 Port 3, Bay 4 Port 3|3, 7, 9, 13|
|Bay 3 Port 4, Bay 4 Port 4|4, 8, 10, 14|
Each individual blade has a dual-port HBA, so, for example, the HBA within the blade in bay 12 maps out as follows:
HBA Port 1 –> Interconnect Bay 3, Port 2
HBA Port 2 –> Interconnect Bay 4, Port 2
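The table can be expressed as a simple lookup. The sketch below is illustrative (the helper name is mine, not an HP tool); it encodes the mapping we observed, where both modules use the same port numbering, so HBA port 1 lands on interconnect bay 3 and HBA port 2 on interconnect bay 4, at the same FC-VC port number:

```python
# Bay-to-port mapping from the table above (from experimentation and
# the HP Virtual Connect Cookbook).

FCVC_PORT_FOR_BAY = {
    1: 1, 5: 1, 11: 1, 15: 1,
    2: 2, 6: 2, 12: 2, 16: 2,
    3: 3, 7: 3, 9: 3, 13: 3,
    4: 4, 8: 4, 10: 4, 14: 4,
}

def paths_for_blade(bay: int):
    """Return the two FC-VC paths for the dual-port HBA in a given bay."""
    port = FCVC_PORT_FOR_BAY[bay]
    return [("Interconnect Bay 3", port),   # HBA port 1
            ("Interconnect Bay 4", port)]   # HBA port 2

print(paths_for_blade(12))
# [('Interconnect Bay 3', 2), ('Interconnect Bay 4', 2)]
```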
Looking at it from the point of view of a single SAN-attached blade, the following diagram shows how it all hooks together.
Unplugging an FC cable from bay 3, port 4 will disconnect one of the HBA connections on all of the blades in bays 4, 8, 10 and 14, forcing each blade’s host OS to handle a failover to its secondary path via the FC-VC module in bay 4.
A key takeaway from this is that your blade hosts still need to run some kind of multipathing software, like MPIO or EMC PowerPath, to handle the failover between paths; the FC-VC modules don’t handle this for you.
A further takeaway is that if you plan to fill your blade chassis with SAN-attached blades, each with an HBA connected to a pair of FC-VC modules, then you need to plan your bay assignment carefully based on your server load.
Imagine you were to put heavily used SAN-attached VMware ESX servers in bays 1, 5, 11 and 15 and lightly used servers in the rest of the bays. You would have a bottleneck, as your ESX blades would all be contending with each other for a single pair of 4Gb/s ports (one on each of the FC-VC modules). If you instead distributed them into (for example) bays 1, 2, 3 and 4, you would spread the load across individual 4Gb/s FC ports.
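As a hypothetical planning aid (not an HP tool, just a sketch built on the bay-to-port table earlier in the post), you can group a proposed bay assignment by the FC-VC port each bay shares, to spot heavy workloads piling onto one 4Gb/s uplink:

```python
# Group a proposed bay assignment by shared FC-VC port, so contention
# on a single 4Gb/s uplink is visible before you rack the blades.
from collections import defaultdict

FCVC_PORT_FOR_BAY = {
    1: 1, 5: 1, 11: 1, 15: 1,
    2: 2, 6: 2, 12: 2, 16: 2,
    3: 3, 7: 3, 9: 3, 13: 3,
    4: 4, 8: 4, 10: 4, 14: 4,
}

def blades_per_port(assignment):
    """assignment maps blade bay -> workload name."""
    groups = defaultdict(list)
    for bay, workload in sorted(assignment.items()):
        groups[FCVC_PORT_FOR_BAY[bay]].append(workload)
    return dict(groups)

# Bad plan: all four busy ESX hosts land behind FC-VC port 1.
print(blades_per_port({1: "esx1", 5: "esx2", 11: "esx3", 15: "esx4"}))
# {1: ['esx1', 'esx2', 'esx3', 'esx4']}

# Better plan: spread them across ports 1-4.
print(blades_per_port({1: "esx1", 2: "esx2", 3: "esx3", 4: "esx4"}))
# {1: ['esx1'], 2: ['esx2'], 3: ['esx3'], 4: ['esx4']}
```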