HyperConverged Infrastructure, or HCI, is hot. Or not? I notice that when I'm talking to my customers, all of them are looking at HCI in one form or another, though some of them end up going for the "old ways" because they don't see the fit. And where HCI used to be the realm of "the new ones" like Nutanix, it is now being picked up by HPE with SimpliVity and Dell-EMC with VxRail. If so many big players are betting on it and so many people are talking about it, maybe it's worth a closer look. What is the state of things in the fast-changing world of HCI?

What is HCI?

Let's do a little trip down memory lane, for those of you who aren't busy with infrastructures all day. Where does HCI come from? The way we have experienced it, it's the evolution of the "Converged Infrastructure", which in turn was the evolution of the "build your own" infrastructure, which was the way we started modern IT datacenters back in the 2000s.

Build your own: The name says it all. You pick the servers you want, you install the network interfaces, the local or centralized storage solution, and the hypervisor, and you are responsible for making them all play nice together. This is the way we did it for a decade or more. The big benefit: you pick and choose. You pick your favorite server vendor, your favorite network vendor, and your favorite storage vendor based on the features you want and need, and you make it work for you. This is still the most flexible way of making an infrastructure fit your needs. Old-school architects might say it prevents "vendor lock-in". Vendor lock-in, in short, means that you are completely dependent upon one IT brand to keep your whole infrastructure working and updated. The other side of that coin is that "build your own" can also lead you into "vendor sprawl": a situation where you have to manage a multitude of IT brands for your infrastructure. That might eventually end in "vendor hell", where during a disaster the support services of the different IT brands point at each other for the cause of your outage and it's up to you to figure out how to get out of the misery you're in. Freedom of choice comes with a price, one might say. Maintaining a BYO infrastructure, also known as "day two operations", can take a lot of time out of an admin's day.
Converged Infrastructure: This really picked up pace with Cisco's UCS combined with either EMC storage (a.k.a. the vBlock) or NetApp storage (a.k.a. the FlexPod). Shortly after that, other vendors also introduced their own versions of converged infrastructure. The difference from build your own is that the hardware (the server, network and storage) and the software layer are made to fit. It basically comes down to: if you put it together like this (in the case of FlexPod) or if you buy it like that (in the case of a vBlock), it will play nice together, and if not, "we" (the vendors) will sort it out and fix it for you. It comes with a validated design you need to follow and implement to make sure your setup matches the combination the vendors tested. This actually was a big step forward, as everybody was used to doing it their own way. With CI, you gave up a slice of your freedom of choice for the assurance that it will not cause you any problems. As mentioned, converged infrastructures usually come with a design that states which versions of hardware and software you can use together. If you deviate because of a specific problem or demand, you risk not getting support in case of problems, as you no longer comply with the validated design.
Converged infrastructures also made it clear that if you prep an infrastructure by following a set of validated rules, you can greatly reduce the time you spend on building and implementing it. Cisco has videos on YouTube that show a server deployment from unboxing to a running server, including hypervisor, within 25 minutes. It meant that companies started dreaming of provisioning datacenters in days rather than in weeks or months. Fewer choices, less time, more assurances.
Hyperconverged Infrastructure: A hyperconverged infrastructure is defined by the fact that everything is in one box: you plug it in and press power-on. It leaves you with a very limited set of choices (how much RAM do you want, what CPU power should be in there, how much storage space do you need). The deployment can be fully automated, and the result is a working infrastructure with, for instance, four nodes within 15 minutes after the initial power-on. Do you want more power? Just stack the appliances. It scales linearly: with every node you add, you add a specific amount of compute, memory and storage to your infrastructure (see the sketch below). The software versions are set. It works with what the vendor supplies you with, which might not be the latest and greatest, but your vendor guarantees that it will work. You depend upon that vendor to support you and provide you with (critical) fixes and updates. Everything is supported by the one vendor. HCI solutions generally come with a software management layer that manages all appliances together and offers a way to carve up your HCI into the slices you need. That dream of provisioning datacenters in days might actually come true. Limited choice in exchange for a very quick time to deploy and low running costs.
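To make that linear scaling concrete, here is a minimal sketch in Python. The per-node numbers are made up for illustration and are not taken from any vendor's spec sheet; real appliances differ per vendor and model.

```python
# Minimal sketch of HCI's linear scaling: every appliance you stack
# adds the same fixed slice of resources. Per-node numbers are
# hypothetical placeholders, not actual vendor specs.
PER_NODE = {"cpu_cores": 24, "ram_gb": 512, "storage_tb": 10}

def cluster_capacity(nodes: int) -> dict:
    """Total cluster resources for a given number of appliances."""
    return {resource: amount * nodes for resource, amount in PER_NODE.items()}

for n in (4, 8, 12):
    print(f"{n:>2} nodes -> {cluster_capacity(n)}")
```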

Around the years 2010-2011, when everyone was just getting used to the idea of converged infrastructures, a few "start-ups" came up with the idea of putting it all in one box. At that time, start-ups popped up like mushrooms in autumn, with appliances of all sorts. A few visionaries predicted HCI was hype and would not last. And the facts initially supported that: a lot of startups died quickly. But a couple of those startups survived and the evangelization of HCI continued, despite what those visionaries said.

Today, HCI is very much alive. Nutanix is one of those startups that is still strong in the HCI market. But a couple of the big IT brands have also jumped into that market. Very recently HPE bought SimpliVity, Dell-EMC has VxRail, which it co-developed with VMware, and Cisco has its HyperFlex.

Why is HCI successful?

When HCI was first introduced, a lot of people used words like "inflexible" and "vendor lock-in". Over time, the market has learned that flexibility and vendor sprawl come with a price. A price a lot of IT managers are no longer willing to pay. Another factor is time: HCI can greatly reduce your time to deploy. Many system integrators are not fans of HCI because it cuts into their most profitable business: service hours.

Then there is maintenance, or as it's called nowadays: day two operations. Day two operations cost a lot of money. With previous infrastructures, companies needed skilled and trained IT technicians to keep the 'engine' running. And they had to pay for it. CCNA-, MCSE- and VCP-certified personnel were needed to make sure that one business application kept running. And then there is the downtime. A lot of downtime. Downtime for updates: updates to the firmware, updates for the hypervisor, updates for the storage, updates to the network. It all meant that the infrastructure was not available. HCI promises to reduce that cost and improve on that downtime.

HCI actually is the ultimate result of the Software-Defined Datacenter, or SDDC. You don't care about the underlying tech; it just works. The software layer provides the flexibility you need to make it work in the scenario you pick. And it also makes sure everything keeps working when you add another HCI appliance, by redistributing your workload.

Companies that embrace HCI are doing it because it has a fast time to deploy and a low-cost day two operations model. Their IT folks can focus on what runs on top of the HCI solution rather than getting caught in deep technical conflicts between different vendors. They are also doing it because in case of problems there is only one party to converse with, one "throat to choke", which should result in a much quicker solution in case of trouble.

Every day, more companies come to the conclusion that what runs on top of the infrastructure is much more important than what it is running on. Cloud offerings confirm that belief. The only things that really matter are the performance and availability of the company's apps and data at all times; any hardware will do to satisfy that demand. The SDDC mindset, one could state.

Current players in the HCI marketspace

Where this used to be the field of start-ups, it has now become the battlefield for the large IT brands of this world. Take a look at the list below of the most obvious brands and their propositions:

Dell-EMC: VxRail series. In total there are five series, the G, E, V, P and S series, all with different sizing and CPU configurations. The VxRail Appliance architecture consists of modular nodes, including models based on Dell PowerEdge servers, and VMware Virtual SAN.
Cisco: HyperFlex series. Cisco has three different series in its portfolio under the HyperFlex brand: the hybrid node, the all-flash node and a branch office model. HyperFlex combines the software-defined networking and computing power of Cisco UCS.
Nutanix: NX series. Nutanix is one of the few startups that kept itself alive in the HCI market. The appliances come in a variety of possible configurations with several software options. There are the NX1000, NX3000, NX6000 and NX8000 machines. Next to those, they also sell as OEM equipment to Dell, Lenovo and IBM.
Hewlett Packard Enterprise: SimpliVity 380. HPE is quite a new player in the HCI field. Its recent purchase of SimpliVity has to date resulted in one product, the HPE SimpliVity 380. The appliance is based on HPE ProLiant DL380 Gen9 servers and available in a variety of configurations.

These are the largest ones currently in the HCI market. Of course there are other players, but these determine the current playing field.

How about VMware? Wasn't there a product in the past that did something similar? Yes, there was. It was called EVO:RAIL and it was introduced by VMware at VMworld 2014. We actually did a few articles on it. In 2016, VMware decided to move this initiative into the federation with EMC, which, more or less, resulted in the VxRail product line. VMware is focusing on the SDDC software side of things with NSX, vSAN and the vRealize suite, and is no longer involved in the hardware side of it. You could say that the all-new Cloud Foundation is VMware's HCI solution, but without the hardware. Then again, HCI is really all about SDDC and SDDC is the basis of cloud, so no surprise there.

Keep in Mind

Is it all gold on the horizon? Mostly, it looks like it is. Does it come with a disclaimer? Of course it does. Even though it sounds like HCI is a "fire and forget" form of infrastructure, if you dig into the products listed in the previous paragraph, you will find that there still remains a lot to think about. HCI does not dismiss you from having a proper plan for your datacenter. It relies on you knowing what you want so you can size it the right way. So you need to know everything (a rough sizing sketch follows the list):

  • How much compute power do I currently consume?
  • How much memory do I currently consume?
  • How much storage do I currently consume?
  • Does all that storage need to be in my HCI?
  • How much growth will we see over the next 1, 2, 3 or 5 years?
  • How much electrical power does everything I need consume?
  • What should my disaster recovery scenario look like with HCI?
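As a back-of-the-envelope illustration, a sizing exercise based on those questions could look something like the sketch below. All numbers (current consumption, growth rate, per-node specs) are hypothetical placeholders, not vendor figures.

```python
import math

# Rough sizing sketch based on the questions above. Every number here is
# a hypothetical placeholder; use your own measurements and the specs of
# the appliance you are actually evaluating.
current = {"cpu_cores": 60, "ram_gb": 1500, "storage_tb": 40}  # measured today
growth_per_year = 0.25                                         # 25% yearly growth
years = 3
per_node = {"cpu_cores": 24, "ram_gb": 512, "storage_tb": 10}  # one appliance

# Project demand forward, then size each resource dimension independently.
projected = {k: v * (1 + growth_per_year) ** years for k, v in current.items()}
nodes_per_resource = {k: math.ceil(projected[k] / per_node[k]) for k in per_node}

# The cluster must satisfy the most demanding resource, plus one spare
# node so a single failure does not leave you short (N+1). Keep in mind
# that usable storage is lower than raw storage once replication is in play.
nodes_needed = max(nodes_per_resource.values()) + 1
print(nodes_per_resource, "-> order", nodes_needed, "nodes")
```

Even this toy calculation shows why the questions matter: the resource that grows fastest dictates how many appliances you end up buying.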

These are just a few of the questions a proper infrastructure architect will ask you when planning your next datacenter. And remember that HCI is a means to an end, not the destination. It should help you reach what you want, not be the goal itself.

And yes, there are scenarios where HCI does not fit the bill. If you have special hardware you need to make your business work, and this hardware needs to be in the server, it might not fit. There are lots of options to host, for instance, USB dongles in your network, but that might not work for you. Another scenario could be that you have large amounts of data, but not all of it is hot data. Then you might want to look at an additional storage solution and not just rely on what the HCI solution brings you. Or maybe the linear scaling of HCI does not work for your compute demand because you do lots of large data calculations and your compute demand grows much quicker than the other resources in your datacenter (the sketch below shows what that does to a purchase). It all boils down to proper planning and design. Stuff you still need to do, even if you are aiming for an HCI solution.
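To illustrate that last scenario, here is a small sketch of what happens when you size a cluster of fixed resource bundles for a compute-heavy workload; once again, all numbers are invented for the example.

```python
import math

# Because every HCI node adds a fixed bundle of resources, sizing for
# CPU alone can leave storage (and budget) stranded. Numbers are
# purely illustrative.
per_node = {"cpu_cores": 24, "storage_tb": 10}
demand = {"cpu_cores": 480, "storage_tb": 60}  # compute-heavy workload

# Buy enough nodes to cover the most demanding resource...
nodes = max(math.ceil(demand[r] / per_node[r]) for r in per_node)

# ...and see how much of the bundled storage goes unused.
bought_storage = per_node["storage_tb"] * nodes
unused = bought_storage - demand["storage_tb"]
print(f"{nodes} nodes -> {unused} TB of storage bought but not needed")
```

In a case like this, a separate compute tier or an external storage array may be the better fit, which is exactly the planning exercise described above.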

Future of HCI

In 2016, Gartner said HCI would be mainstream in the IT market within the next five years. Looking at the growth rate of HCI in 2017, my guess is that we will reach that point earlier. The challenge for current HCI solutions is that they do not fit every workload just yet. HCI still has different flavors and use cases.

When you look at cloud and cloud models, the future of HCI is that it will host any workload, any time, anywhere, on premises and off premises, because the true intelligence is provided by the software layer on top. We might eventually go to an even dumber model where there is virtually no intelligence left in the hardware and everything is software-controlled. Running an application in the cloud or not is then just a matter of location, not of infrastructure. The differences between the hardware vendors would consist of how much performance fits in a box, how many boxes work together, and at what price point. As a customer, you just pick and choose. Where have we heard that before?