HCI - State of the Union

Hyper-Converged Infrastructure, or HCI, is hot. Or not? I notice that when I’m talking to my customers, all of them are looking at HCI in one form or another, though some of them end up going for the “old ways” because they don’t see the fit. And where HCI used to be the realm of “the new ones” like Nutanix, it is now being picked up by HPE with SimpliVity and Dell-EMC with VxRail. If so many big players are betting on it and so many people are talking about it, maybe it’s worth a closer look. What is the state of things in the fast-changing world of HCI?

What is HCI?

Let’s take a little trip down memory lane, for those of you who aren’t busy with infrastructures all day. Where does HCI come from? The way we have experienced it, it’s the evolution of the “Converged Infrastructure”, which in turn was the evolution of “build your own” infrastructure, which is how we started modern IT datacenters back in the 2000s.

Around 2010-2011, when everyone was just getting used to the idea of Converged Infrastructures, a few start-ups came up with the idea of putting it all in one box. At that time, start-ups popped up like mushrooms in autumn with appliances of all sorts. A few visionaries predicted HCI was hype and would not last. And the facts initially supported that: a lot of startups died quickly. But a couple of those startups survived and the evangelization of HCI continued, despite what those visionaries said.

Today, HCI is very much alive. Nutanix is one of those startups that is still strong in the HCI market, but a couple of the big IT brands have also jumped into that market. Very recently HPE bought SimpliVity, Dell-EMC has VxRail, which it co-developed with VMware, and Cisco has its HyperFlex.

Why is HCI successful?

When HCI was first introduced, a lot of people used words like “inflexible” and “vendor lock-in”. Over time, the market has learned that flexibility and vendor sprawl come with a price, a price a lot of IT managers are no longer willing to pay. Another factor is time: HCI can greatly reduce your time to deploy. Many system integrators are not fans of HCI because it cuts into their most profitable business: service hours.

Then there is maintenance, or as it’s called nowadays: day-two operations. Day-two operations cost a lot of money. With previous infrastructures, companies needed skilled and trained IT technicians to keep the ‘engine’ running, and they had to pay for it. CCNA-, MCSE- and VCP-certified personnel were needed to make sure that one business application kept running. And then there is the downtime. A lot of downtime. Downtime for updates: updates to the firmware, updates for the hypervisor, updates for the storage, updates to the network. It all meant that the infrastructure was not available. HCI promises to reduce that cost and improve on that downtime.

HCI is actually the ultimate result of the Software-Defined Datacenter, or SDDC. You don’t care about the underlying tech; it just works. The software layer provides the flexibility you need to make it work in the scenario you pick. It also makes sure everything keeps working when you add another HCI appliance, by redistributing your workload.

Companies that embrace HCI are doing it because it offers a fast time to deploy and a lean day-two operations model. Their IT folks can focus on what runs on top of the HCI solution rather than getting caught up in deep technical conflicts between different vendors. They are also doing it because in case of problems there is only one party to converse with, one “throat to choke”, which should result in a much quicker solution in case of trouble.

Every day, more companies come to the conclusion that what runs on top of the infrastructure is much more important than what it is running on. Cloud offerings confirm that belief. The only things that really matter are the performance and availability of the company’s apps and data at all times; any hardware will do to satisfy that demand. The SDDC mindset, one could say.

Current players in the HCI marketspace

Where this used to be the field of start-ups, it has now become the battlefield for the large IT brands of this world. Take a look at the list below of the most obvious brands and their propositions:

  • Nutanix – one of the original HCI start-ups, still going strong
  • HPE – SimpliVity
  • Dell-EMC – VxRail, co-developed with VMware
  • Cisco – HyperFlex

These are the largest ones currently in the HCI market. Of course there are other players, but these determine the current playing field.

How about VMware? Wasn’t there a product in the past that did something similar? Yes, there was. It was called EVO:RAIL and it was introduced by VMware at VMworld in 2014. We actually did a few articles on it. In 2016, VMware decided to move this initiative into the federation with EMC, which, more or less, resulted in the VxRail product line. VMware is focusing on the SDDC software side of things with NSX, vSAN and the vRealize suite and is no longer involved in the hardware side of it. You could say that the all-new Cloud Foundation is VMware’s HCI solution, but without the hardware. Then again, HCI is really all about SDDC and SDDC is the basis of cloud, so no surprise there.

Keep in Mind

Is it all gold on the horizon? Mostly, it looks like it is. Does it come with a disclaimer? Of course it does. Even though it sounds like HCI is a “fire and forget” form of infrastructure, if you follow the links in the previous paragraphs you will find that there still remains a lot to think about. HCI does not excuse you from having a proper plan for your datacenter. It relies on you knowing what you want so you can size it the right way. So you need to know everything:

  • How much compute power do I currently consume?
  • How much memory do I currently consume?
  • How much storage do I currently consume?
  • Does all that storage need to be in my HCI?
  • How much growth will we see over the next 1, 2, 3 or 5 years?
  • How much electrical power will everything I need consume?
  • What should my disaster recovery scenario look like with HCI?

These are just a few of the questions a proper infrastructure architect will ask you when planning your next datacenter. And remember that HCI is a means to an end, not the destination. It should help you reach what you want, not be the goal itself.
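To make those sizing questions a bit more concrete, here is a minimal back-of-the-envelope sketch of the kind of math an architect might do. All node specifications, growth figures and workload numbers in it are hypothetical assumptions for illustration, not figures from any particular HCI vendor.

```python
# Hypothetical HCI sizing sketch. All node specs and demand figures
# below are made-up examples -- replace them with your own measurements.

import math

# What a single (hypothetical) HCI node offers.
NODE_CORES = 32          # physical cores per node
NODE_RAM_GB = 512        # usable memory per node
NODE_STORAGE_TB = 20     # usable storage per node, after replication overhead

def nodes_needed(cores, ram_gb, storage_tb, growth_per_year, years, spare_nodes=1):
    """Estimate how many nodes cover today's demand plus projected growth.

    growth_per_year: e.g. 0.2 for 20% yearly growth across all resources.
    spare_nodes: extra nodes for failover/maintenance headroom (N+1 by default).
    """
    factor = (1 + growth_per_year) ** years
    per_resource = {
        "cores": math.ceil(cores * factor / NODE_CORES),
        "ram": math.ceil(ram_gb * factor / NODE_RAM_GB),
        "storage": math.ceil(storage_tb * factor / NODE_STORAGE_TB),
    }
    # The scarcest resource dictates the cluster size.
    return max(per_resource.values()) + spare_nodes, per_resource

if __name__ == "__main__":
    total, breakdown = nodes_needed(cores=96, ram_gb=1536, storage_tb=60,
                                    growth_per_year=0.2, years=3)
    print(f"Nodes needed per resource: {breakdown}")
    print(f"Total nodes including headroom: {total}")
```

The point of the exercise is not the exact numbers, but the fact that you need answers to the questions above before you can even fill in the variables.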

And yes, there are scenarios where HCI does not fit the bill. If you have special hardware you need to make your business work and this hardware needs to be in the server, it might not fit. There are lots of options to host, for instance, USB dongles in your network, but that might not work for you. Another scenario could be that you have large amounts of data, but not all of it is hot data; then you might want to look at an additional storage solution and not just rely on what the HCI solution brings you. Or maybe the linear scaling of HCI does not work for your compute demand, because you do lots of large data calculations and your compute demand grows much quicker than the other resources in your datacenter. It all boils down to proper planning and design. Stuff you still need to do, even if you are aiming for an HCI solution.
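As a rough, hypothetical illustration of that last point: when compute demand grows much faster than storage demand, a fixed-ratio appliance forces you to buy capacity you will never use. The growth rates and node specs below are again made-up assumptions, chosen only to show the shape of the problem.

```python
# Hypothetical illustration: compute-heavy growth on fixed-ratio HCI nodes.
# Growth rates and node specs are made-up assumptions for the example.

import math

NODE_CORES = 32
NODE_STORAGE_TB = 20

cores_now, storage_now = 96, 60          # today's demand
core_growth, storage_growth = 0.5, 0.1   # 50% vs 10% yearly growth

for year in range(1, 4):
    cores = cores_now * (1 + core_growth) ** year
    storage = storage_now * (1 + storage_growth) ** year
    nodes_for_cores = math.ceil(cores / NODE_CORES)
    nodes_for_storage = math.ceil(storage / NODE_STORAGE_TB)
    # With fixed-ratio nodes you must buy for the larger of the two numbers,
    # so the gap between them is effectively wasted capacity.
    print(f"year {year}: {nodes_for_cores} nodes for compute, "
          f"{nodes_for_storage} nodes for storage")
```

Under these assumed numbers the compute requirement pulls further ahead of the storage requirement every year, which is exactly the kind of imbalance that makes a compute-only or hybrid design worth considering.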

Future of HCI

In 2016, Gartner said HCI would be mainstream in the IT market within the next five years. When we look at the growth rate of HCI in 2017, my guess is that we will reach that point earlier. The challenge with current HCI solutions is that they do not fit every workload just yet. HCI still comes in different flavors and use cases.

When you look at cloud and cloud models, the future of HCI is that it will host any workload, any time, anywhere, on premises and off premises, because the true intelligence is provided by the software layer on top. We might eventually move to an even dumber model where there is virtually no intelligence left in the hardware and everything is software-controlled. Running an application in the cloud or not then becomes just a matter of location, not of infrastructure. The differences between the hardware vendors would consist of how much performance fits in a box, how many boxes work together, and at what price point. As a customer, you just pick and choose. Where have we heard that before?