10 things you should know about Cisco UCS
Two years ago, Randy Seidl, HP's Vice President for Enterprise Servers, Storage and Networking, said the following:
“A year from now the difference will be (Cisco) UCS (Unified Computing System) is dead and we have had phenomenal market share growth in the networking space.”
Man, he must feel pretty stupid right now. In Q2 of 2012 Cisco held a 22% share of the blade server market in North America and a 15% share worldwide. How’s that for being dead, Mr. Seidl?
I have sold Cisco UCS solutions for about 18 months now, and Cisco has definitely made a good impression in the server market. But there are still a lot of people (customers, (former) colleagues, VMware enthusiasts) who have vaguely heard of Cisco UCS but don’t see how it differs from blade server solutions like those from HP, IBM, Dell or Fujitsu.
So I decided to sum up
10 things you should know about Cisco UCS
- First of all, Cisco UCS is based on industry standards and standard x86 hardware architecture. This makes it fully compatible with other systems: UCS fits into any existing infrastructure and can be integrated with existing management and monitoring applications.
- The most distinctive feature of Cisco UCS, and the one that sets it apart most from other vendors’ solutions, is I/O consolidation. Cisco UCS reduces infrastructure components and costs by converging separate I/O networks onto a single Ethernet infrastructure. This consolidation is not limited to FCoE deployments; it extends the same benefits to NFS, iSCSI and any other protocol that uses Ethernet for Layer 2 communication. You can connect up to 20 blade server chassis to a single fabric interconnect.
A small example: 20 chassis with 8 blade servers per chassis equals 160 blade servers, connected to the storage and network using four 10Gbps uplinks for network and four 8Gbps uplinks for storage. With two redundant fabric interconnects this is 80Gbps bandwidth for LAN and 64Gbps for storage on 16 uplinks. This means you need eight 10Gbps ports on your core switch and eight 8Gbps FC ports on your storage network for 160 servers. Imagine how much you can save on cabling!
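The arithmetic in the example above can be spelled out in a few lines; all the numbers come straight from the text:

```python
# Back-of-the-envelope check of the uplink consolidation example above.
chassis = 20
blades_per_chassis = 8
servers = chassis * blades_per_chassis            # 160 blade servers

# Per fabric interconnect: four 10 Gbps LAN uplinks and four 8 Gbps FC uplinks,
# with two redundant fabric interconnects in total.
fabric_interconnects = 2
lan_uplinks = 4 * fabric_interconnects            # 8 ports on the core switch
fc_uplinks = 4 * fabric_interconnects             # 8 FC ports on the SAN

lan_bandwidth = lan_uplinks * 10                  # 80 Gbps for LAN
fc_bandwidth = fc_uplinks * 8                     # 64 Gbps for storage

print(f"{servers} servers share {lan_bandwidth} Gbps LAN and "
      f"{fc_bandwidth} Gbps FC over {lan_uplinks + fc_uplinks} uplinks")
```

Sixteen uplinks for 160 servers; with a classic per-chassis cabling scheme you would easily need ten times that.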
- Current UCS hardware can provide a maximum of 160Gbps of converged I/O to each chassis of 4 to 8 blades. This is done through two redundant 8-port 10Gbps Fabric Extenders in an active/active configuration. The uplinks to the chassis carry both network and storage traffic. Depending on the required bandwidth, you can choose to connect 1, 2, 4 or 8 uplinks per fabric extender to the fabric interconnect. Can you fill 160Gbps with 8 servers?
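The per-chassis bandwidth options follow directly from the uplink counts mentioned above:

```python
# Per-chassis converged bandwidth for each supported uplink count:
# two fabric extenders (active/active), each with 1, 2, 4 or 8 x 10 Gbps uplinks.
FEX_PER_CHASSIS = 2
PORT_SPEED_GBPS = 10

for uplinks_per_fex in (1, 2, 4, 8):
    total = FEX_PER_CHASSIS * uplinks_per_fex * PORT_SPEED_GBPS
    print(f"{uplinks_per_fex} uplink(s) per FEX -> {total} Gbps to the chassis")
# With 8 uplinks per FEX the chassis gets the full 160 Gbps.
```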
- Many people only think of blade servers when Cisco UCS is mentioned, but Cisco UCS is not just about blades. The management and I/O infrastructure is designed to manage the entire server estate, including Cisco C-series rack-mount servers like the C210. Even with Cisco B-series blade servers, we still need rack-mount servers, for example to attach a backup device or to add graphics offload cards for heavy VDI desktops. UCS’s ability to manage both rack-mount and blade servers on one single platform is a key differentiator with major ROI benefits. Can you manage your blades and rack servers from one console?
- Virtual Interface Cards (VIC) or Converged Network Adapters (CNA) provide network connectivity on the (blade) server side. Cisco UCS has a unique capability of detecting network failures and failing over traffic paths in hardware, on the card itself. This allows network administrators to design and configure network failover end-to-end, ensuring consistent policies and bandwidth utilization, and it provides faster failover and higher redundancy than other systems. Fabric Failover, found only in Cisco UCS, allows a server adapter to maintain a highly available connection to two redundant network switches without any NIC teaming drivers or NIC failover configuration in the OS, hypervisor or virtual machine.
With Fabric Failover, the intelligent network provides the server adapter with a virtual cable that can be quickly and automatically moved from one upstream switch to another. The upstream switch that this virtual cable connects to doesn’t have to be the first switch inside a blade chassis; it can be extended through a fabric extender and connected to the fabric interconnects.
- Cisco has a number of Virtual Interface Cards (VIC) available. They are called virtual because you can use these adapters to present 128 (M81KR, P81E) or 256 (VIC1240, VIC1280) virtual adapters (VIFs) to the operating system. These virtual adapters are managed centrally from the Cisco UCS Manager software. With VMware vSphere, these virtual adapters eliminate the need for VMware virtual switches: a virtual adapter can be assigned directly to a virtual machine, so the network administrator can once again manage the entire network stack from endpoint to endpoint. The maximum number of VIFs is determined by the type of VIC used and the number of uplinks from the chassis to the fabric interconnect. Check out this article if you want to know more.
- The flexibility and ease-of-use of the Cisco UCS platform comes from service profiles. Service profiles integrate server identity, network identity and storage access into a simple, easy-to-use profile that can be used to configure hundreds of servers quickly and painlessly. Infrastructure policies, such as network memberships, cabling requirements, server configuration and performance characteristics, are encapsulated in the service profile. This allows datacenter admins to scale without adding complexity and helps reduce the operational costs of server management and administration.
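Conceptually, a service profile is a declarative description of everything that makes a server *that* server. A minimal sketch in Python (the field names are illustrative, not the actual UCS Manager object model):

```python
from dataclasses import dataclass, field

# Illustrative sketch of what a service profile encapsulates --
# not the real UCS Manager API, just the concept.
@dataclass
class ServiceProfile:
    name: str
    uuid: str                                   # server identity
    mac_addresses: list                         # network identity (one per vNIC)
    wwpns: list                                 # storage identity (one per vHBA)
    vlan_memberships: list = field(default_factory=list)
    boot_policy: str = "boot-from-SAN"
    bios_policy: str = "default"
    firmware_package: str = "default"

# Applying this profile to any compatible blade configures it identically.
esx_host = ServiceProfile(
    name="esx-host-01",
    uuid="1b4e28ba-2fa1-11d2-883f-b9a761bde3fb",
    mac_addresses=["00:25:B5:00:00:01"],
    wwpns=["20:00:00:25:B5:00:00:01"],
    vlan_memberships=["vlan-100", "vlan-200"],
)
```

Stamping out hundreds of servers then becomes a matter of generating profiles from a template rather than configuring each machine by hand.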
- One of the coolest features of Cisco UCS (in my opinion) is so-called ‘stateless computing’. With Cisco UCS the underlying hardware can be made completely transparent to the OS and applications that run on it. The environment an OS or application requires can be moved from one server to another, or changed, very easily. This is made possible by moving resources such as MAC addresses, WWN values, IP addresses, UUIDs, firmware versions and even server BIOS settings from one server to another at deployment time, using the service profiles mentioned above, which act as a software definition of a server. When you combine service profiles, server pools and this ‘identity virtualization’ with diskless servers that boot from SAN, you get a stateless computing environment. This allows you to replace failed servers or change server functions within minutes: change the service profile of a server (or group of servers), boot the server from its new SAN location defined in the profile, and your VMware host is now an Oracle database server. Stateless computing facilitates much greater scalability and can be used in conjunction with virtualization to achieve maximum data center utilization. Imagine handing out MAC addresses and BIOS settings like IPs from a DHCP server. How cool can it get?
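The DHCP analogy can be sketched as an identity pool that leases out MAC addresses when a profile is associated with a blade and reclaims them on disassociation. This is a hypothetical illustration, not the UCS Manager API:

```python
# Hypothetical MAC address pool: hands out identities the way a DHCP
# server hands out IPs. Illustrative only -- not the UCS Manager API.
class MacPool:
    def __init__(self, prefix="00:25:B5:00:00:", start=1, size=255):
        self.free = [f"{prefix}{i:02X}" for i in range(start, start + size)]
        self.leased = {}

    def assign(self, profile_name):
        mac = self.free.pop(0)
        self.leased[profile_name] = mac
        return mac

    def release(self, profile_name):
        # The identity returns to the pool when the profile moves or is deleted.
        self.free.append(self.leased.pop(profile_name))

pool = MacPool()
mac = pool.assign("oracle-db-01")     # profile leases an identity...
pool.release("oracle-db-01")          # ...and returns it on disassociation
```

Because the identity lives in the profile rather than burned into the hardware, the same MAC, WWN and UUID follow the workload to whichever blade it lands on next.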
- Another nice ‘virtualization-like’ technique used in Cisco UCS is the way memory is addressed in the blade and rack servers. Usually memory speed decreases when large quantities of DIMMs are used. Cisco UCS blade and rack servers deliver 1333MHz memory access speeds with all 48 DIMMs populated by using Extended Memory Technology. Instead of directly connecting one DDR3 channel to the memory controller, four DDR3 subchannels are connected to the memory controller indirectly. Each DDR3 subchannel can support two single-, dual- or quad-rank DDR3 DIMMs. The extended memory is transparent to software. Cisco Extended Memory Technology overcomes the electrical issues associated with high DIMM counts by using Cisco application-specific integrated circuits (ASICs) interposed between the processor and the memory DIMMs. This technology enables an increase in the memory capacity of conventional two-socket systems.
Currently, other mainline two-socket architectures drop to 800 or 1066MHz speeds when using more than 12 DIMMs. The result is roughly a 30% decrease in memory access performance for non-Cisco UCS servers. Moreover, because this increased density is gained through additional DIMM slots, lower-density DIMMs can be used at significantly lower cost to reach large amounts of memory. How much RAM can your blade servers handle, and how much will that cost you?
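A quick sanity check of that ~30% figure, based on the clock speeds quoted above (note this compares memory clock, not measured throughput):

```python
# Relative memory clock when a fully populated two-socket server
# drops from 1333 MHz to 1066 or 800 MHz.
full_speed = 1333
for throttled in (1066, 800):
    drop = (1 - throttled / full_speed) * 100
    print(f"{full_speed} -> {throttled} MHz: ~{drop:.0f}% lower clock")
# 1066 MHz is ~20% lower and 800 MHz is ~40% lower, which averages out
# to the roughly 30% decrease quoted above.
```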
- Last but certainly not least is the availability of Cisco Validated Designs. Cisco creates these designs together with partners like VMware, NetApp, EMC and Microsoft. So if you want to build a VDI environment, an Exchange 2010 infrastructure, a virtualization infrastructure or an Oracle database server, Cisco already has a reference architecture for you, so you don’t need to worry about sizing, connectivity, interoperability, etc. Cisco has already done most of the work for you.
If I got you interested, check out one of the many technical videos on the Cisco Data Center YouTube channel, contact a local Cisco partner for a demo, or visit a local event where Cisco demonstrates the UCS server platform. You will see that it’s a unique way of handling blade server resources.