WHEN CISCO JUMPED into the blade server market with its first Project California blades, that was big news, but the hardware was a yawner. The new ‘Ventura’ blades we were shown at IDF are going to change the game, radically.
The first rev of the Cisco blades was a basic Nehalem board that made people shout “meh” with an evident lack of excitement. The big trick was that the blade had a network convergence board (UCS) on the back. It looks like this.
Cisco Intel + Qlogic convergence board
The idea is simple. Instead of having a NIC or two, sometimes four, per blade, a couple more for iSCSI or FCoE, plus a handful of others for management and whatnot, there is just one connector. The UCS takes all of that and stuffs it into a single cable. Since not many servers can actually saturate a 10GigE cable, this is a very sane use for an up ’til now underutilized resource.
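A quick napkin check on that claim, with a per-blade link mix that is my illustrative guess rather than any Cisco spec:

```python
# Back-of-envelope look at the consolidation argument. The per-blade
# link mix here is an illustrative assumption, not a Cisco spec.
legacy_links_gbps = [1.0, 1.0, 4.0, 4.0, 0.1]  # 2x GigE, 2x 4Gb FC, mgmt
nominal = sum(legacy_links_gbps)

# Servers rarely run links anywhere near line rate; assume a generous
# 30% average utilization across the lot.
assumed_utilization = 0.30
actual = nominal * assumed_utilization

print(f"Nominal aggregate: {nominal:.1f} Gbps")              # 10.1 Gbps
print(f"Realistic load: {actual:.2f} Gbps")                  # ~3 Gbps
print(f"Fraction of one 10GigE cable: {actual / 10.0:.0%}")  # ~30%
```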
There are at least three of these adapters out there. The one pictured above pairs a Qlogic SAN chip with an Intel Oplin 10GigE NIC, both stuffed into one channel by the Cisco ASIC, and another swaps the Qlogic part for an Emulex chip. Both of these are code named Menlo, and a future rev named Palo is either all Cisco, or Cisco plus Intel.
The Palo boards will also support up to 100 virtual NICs in hardware, though they need a driver to do so. With a 1:1 hardware-to-vNIC mapping there is no need for a vSwitch anymore, and that alone is well worth the driver hassle. Palo will save a lot of network overhead.
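If it helps, here is a toy model of what 1:1 hardware vNIC allocation amounts to. Everything here is illustrative; none of it is Cisco's actual driver interface:

```python
# Toy sketch of Palo-style hardware vNIC allocation: each VM gets its
# own hardware vNIC (1:1), up to the 100-vNIC limit cited above. The
# class and names are made up for illustration, not Cisco's API.
class PaloAdapter:
    MAX_VNICS = 100

    def __init__(self):
        self.vnics = {}  # vm_name -> vnic_id

    def attach(self, vm_name: str) -> int:
        if vm_name in self.vnics:
            return self.vnics[vm_name]
        if len(self.vnics) >= self.MAX_VNICS:
            raise RuntimeError("out of hardware vNICs, fall back to a vSwitch")
        vnic_id = len(self.vnics)
        self.vnics[vm_name] = vnic_id  # 1:1 mapping, no software switch hop
        return vnic_id

adapter = PaloAdapter()
print(adapter.attach("web01"))  # 0
print(adapter.attach("db01"))   # 1
```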
The UCS adapters route to a little box in the chassis called a Fabric Extender (FEX) that has four 10GigE uplinks, and those connect to a 40 port Fabric Interconnect (FI). Depending on your bandwidth needs, a single FI can control between 10 and 40 blade chassis.
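The port math is simple enough to check. Using only the numbers above, here is how the 10-to-40 chassis range falls out:

```python
# Fabric math: each chassis FEX has four 10GigE uplinks, and the
# Fabric Interconnect has 40 ports. How many chassis one FI hosts
# depends on how many uplinks you burn per chassis.
FI_PORTS = 40
for uplinks_per_chassis in (4, 2, 1):
    chassis = FI_PORTS // uplinks_per_chassis
    bw = uplinks_per_chassis * 10  # Gbps per chassis
    print(f"{uplinks_per_chassis} uplink(s)/chassis: "
          f"{chassis} chassis at {bw} Gbps each")
# 4 uplinks -> 10 chassis, 1 uplink -> 40 chassis, matching the
# 10-to-40 range above.
```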
In the end, the idea is simple – you see an FI through your management software, and everything on it is a resource. There is a hypervisor, likely VMware, running on the box, and each CPU is basically a compute unit that you can assign VMs to. Everything, including the MACs and IPs, is fully virtualized, so you can move it all around with impunity. The best part is that you essentially only have to manage a single FI instead of a rack of blades.
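Conceptually, the management model boils down to something like the toy sketch below. The class, the MAC, and the unit count are placeholders of mine, not Cisco's actual management API:

```python
# Toy model of the single-pane-of-glass idea: the FI exposes a pool of
# compute units, and a VM's identity (MAC, IP) lives in a profile that
# can be re-pinned to any unit. Purely illustrative.
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    mac: str
    ip: str

class FabricInterconnect:
    def __init__(self, compute_units: int):
        self.placement = {}       # profile name -> compute unit index
        self.units = compute_units

    def place(self, profile: VMProfile, unit: int) -> None:
        assert 0 <= unit < self.units
        self.placement[profile.name] = unit  # MAC/IP travel with the profile

fi = FabricInterconnect(compute_units=80)  # placeholder: 40 blades x 2 sockets
vm = VMProfile("web01", mac="00:25:b5:00:00:01", ip="10.0.0.5")
fi.place(vm, unit=3)
fi.place(vm, unit=42)  # 'migration': the identity moves, nothing is re-cabled
```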
It is very clear to me that Cisco gets what is going on in the data center. It might be accused of being a stodgy company that jumps on trends after it is too late, but this time it is way ahead of the curve. HP, IBM and everyone else have a long, long way to go to catch up with this.
That said, the first blades it came out with were only differentiated from the sea of nameless boards by the UCS. That by itself wasn’t all that impressive, especially given the Cisco premium attached to every box that bears the logo. Then I saw the Ventura blade.
Yup, 48 DIMMs per Ventura
Everyone knows that virtualization is all about memory, and how much of it you can stuff into a single box. MetaRAM understood this, but unfortunately it didn’t make it. Cisco is taking a different route to the same place: more DIMM slots. This is a full width, dual Nehalem blade that packs an astounding 48 DIMMs into a single box.
The Cisco buffer itself
Instead of two or three DIMMs per channel, Cisco put a buffer chip, labeled Catalina, in these blades to extend that to eight DIMMs per channel. Eight DIMMs times three channels per CPU times two CPUs, and you are at 48 DIMMs. If MetaRAM were still around, you could potentially cram 1.5TB into this box. That is enough to run hundreds of VMs or nearly three copies of Vista without feeling crowded.
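For the skeptical, here is the slot and capacity math. The 32GB per DIMM figure is backed out of that 1.5TB number, not from any published MetaRAM spec:

```python
# The DIMM-count math behind the 48-slot claim.
dimms_per_channel = 8   # with Catalina buffers, up from the usual 2-3
channels_per_cpu = 3    # Nehalem has a triple-channel memory controller
cpus = 2

dimm_slots = dimms_per_channel * channels_per_cpu * cpus
print(dimm_slots)       # 48

# The 1.5TB ceiling implies 32GB per DIMM (an assumption backed out of
# that figure, not a shipping MetaRAM part).
print(dimm_slots * 32)  # 1536 GB, i.e. 1.5TB
```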
The buffers themselves are said to add less than one clock of latency to memory requests, so basically add one CAS cycle to whatever you use as a speed ballpark. For almost three times the DIMM count, that is a worthwhile tradeoff.
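To put that penalty in wall-clock terms, here is the math for an assumed DDR3-1066 CL7 baseline, since no speed grade was quoted:

```python
# What '+1 CAS' costs in nanoseconds, assuming a DDR3-1066 CL7 baseline
# (the speed grade is an assumption; none was quoted).
io_clock_mhz = 533            # DDR3-1066 uses a 533MHz I/O clock
cycle_ns = 1000 / io_clock_mhz

for cas in (7, 8):            # stock vs. buffered (+1 CAS)
    print(f"CL{cas}: {cas * cycle_ns:.1f} ns")
# CL7: 13.1 ns, CL8: 15.0 ns, roughly a 14% hit on CAS latency in
# exchange for almost three times the DIMM slots.
```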
The Ventura blades are what Cisco had in mind for the whole blade and fabric concept. They let the company cram VMs into a box and manage them and their resource utilization with relative ease. This is where everyone will be going, but Cisco is there now. HP and IBM should be very afraid, because their piecemeal approaches just became obsolete in the data center. S|A
Note: We are not using the Cisco logo because of the moronic terms attached to using it. Normally we would just use it under fair use, but we don’t want to reward companies that stupid. Instead we used an Apple logo. Besides, I used that logo before it did.
Charlie Demerjian