At VMworld 2014 US, VMware introduced EVO:Rail. EVO:Rail is the next step in converged infrastructure: the world's first hyperconverged infrastructure appliance, running entirely on VMware software. A number of hardware vendors are on board, including Dell, Fujitsu and Supermicro, and recently, at VMworld 2014 Europe, HP stepped up to the plate as well. EVO:Rail is sold as a single SKU: one product that includes all the hardware, software and licenses you need to deploy and run a software-defined datacenter, built on trusted technology.

A hyperconverged appliance: what does that mean, how does it do what it does, what are the benefits and what might be the pitfalls? Here's a brief deep dive into what you should know about EVO:Rail before you jump down the rabbit hole. All information in this article is based on what is currently available; things may change over time.

How it came to be

In 2013, Mornay Van Der Walt, vice president of the Emerging Solutions Group in the SDDC division at VMware, pitched the idea to Bogomil Balkansky, who at that time was the senior vice president for cloud infrastructure products.

“With VSAN on the horizon, we now have all the core components, virtualized compute, network and storage, to build a 100% VMware powered appliance that can deliver time to value to first VMs in minutes. It can be as simple as setting up a TiVo box.”

Bogomil’s response: “Sounds interesting, let’s start with a prototype”

And so Project Marvin was born.

Over time, Marvin evolved into EVO:Rail, with a web-based setup and management interface. The whole concept of the appliance is "cut features, not corners", so the interface of EVO:Rail is not overwhelmed with options, checkboxes and pull-down lists. It's neat, clean and simple.


What it is – hardware specs

Every EVO:Rail appliance is based on the same set of parameters: a 2U chassis containing 4 small form factor servers, which together form a 4-node cluster. Per node you get:

  • 2 Intel E5-2620v2 six-core CPUs
  • 192 GB of memory
  • 1 SLC SATADOM or SAS HDD for the ESXi™ boot device (the choice is up to the vendor)
  • 3 SAS 10K RPM 1.2 TB HDDs for the VMware Virtual SAN™ datastore
  • 1 400GB MLC enterprise-grade SSD for read/write cache
  • 1 Virtual SAN-certified pass-through disk controller
  • 2 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • 1 1GbE IPMI port for remote (out-of-band) management

To connect the appliance to your infrastructure, you need a top-of-rack switch solution (either a single switch or a stack, that's up to you). Every node needs 2 connections to the production network and 1 connection for management, so for 1 appliance you will have 8 times 10 Gbit data connections plus 4 times 1 Gbit for management. You can stack EVO:Rail up to 4 appliances, giving you a total of 16 nodes in one cluster. This effectively gives you the following amount of resources (the short script after the table reproduces the arithmetic):

                           1 EVO:Rail node   1 EVO:Rail appliance   Maximum (16 nodes)
CPU cores (native)                      12                     48                  192
CPU cores (HT enabled)                  24                     96                  384
CPU GHz (total)                       25.2                  100.8                403.2
Memory (GB)                            192                    768                 3072
Disk (raw, TB)                         3.6                   14.4                 57.6
NICs (production, 10 GbE)                2                      8                   32
NICs (management, 1 GbE)                 1                      4                   16
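If you want to sanity-check these numbers, the arithmetic is simple enough to script. The Python sketch below reproduces the table from the per-node specs; the 2.1 GHz base clock of the E5-2620v2 is the only figure not listed in the spec sheet above.

```python
# Reproduce the EVO:Rail resource table from the per-node specification.
NODES_PER_APPLIANCE = 4
MAX_APPLIANCES = 4

node = {
    "cpu_cores_native": 2 * 6,     # 2x Intel E5-2620v2, 6 cores each
    "cpu_cores_ht": 2 * 6 * 2,     # Hyper-Threading doubles the logical cores
    "cpu_ghz_total": 2 * 6 * 2.1,  # E5-2620v2 base clock is 2.1 GHz
    "memory_gb": 192,
    "disk_raw_tb": 3 * 1.2,        # 3x 1.2 TB SAS HDDs per node for VSAN
    "nics_10gbe": 2,
    "nics_1gbe": 1,
}

for label, factor in (("1 node", 1),
                      ("1 appliance", NODES_PER_APPLIANCE),
                      ("16 nodes", NODES_PER_APPLIANCE * MAX_APPLIANCES)):
    totals = {key: round(value * factor, 1) for key, value in node.items()}
    print(label, totals)
```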

What it is – software specs

So, how about that EVO:Rail (or Project Marvin, as it was called before)? VMware states that the basis for creating EVO:Rail was simplicity. Anyone, with any skillset in IT, should be able to deploy EVO:Rail within 15 minutes, from plugging it in to deploying the first VM, without any knowledge of VMware products. The EVO engine does all the heavy lifting: configuring ESXi, the cluster, HA, DRS, etcetera. From a VMware products point of view, EVO:Rail contains the following:

  • VMware ESXi (v5.5)
  • VMware vCenter Server Appliance (v5.5)
  • VMware Virtual SAN (v1)
  • VMware Log Insight
  • VMware EVO:Rail Engine

The EVO:Rail engine enables the appliance to be patched and upgraded without disruption. When a patch is uploaded, the engine automatically starts a sequence in which it puts each node in turn into maintenance mode (evacuating the node and all VMs on it), deploys the patch, reboots if needed and exits maintenance mode again.
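Conceptually this is a classic rolling upgrade. The sketch below is illustrative Python with made-up helper functions, not actual EVO:Rail engine code; it only shows the order of operations the engine walks through.

```python
# Conceptual sketch of EVO:Rail's rolling patch sequence.
# All helper functions are hypothetical stand-ins for the real engine.

def enter_maintenance_mode(node):
    # Evacuate all running VMs; on a VSAN node the stored data must also
    # be accessible elsewhere before this step completes.
    print(f"{node}: entering maintenance mode")

def apply_patch(node, patch):
    print(f"{node}: applying {patch}")

def reboot_if_needed(node):
    print(f"{node}: rebooting (only if the patch requires it)")

def exit_maintenance_mode(node):
    print(f"{node}: back in service")

def rolling_patch(nodes, patch):
    # One node at a time, so the cluster keeps serving VMs throughout.
    for node in nodes:
        enter_maintenance_mode(node)
        apply_patch(node, patch)
        reboot_if_needed(node)
        exit_maintenance_mode(node)

rolling_patch([f"node-{i:02d}" for i in range(1, 5)], "ESXi-5.5-patch01")
```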

What it is – the guts

The VMware products are the foundation of EVO:Rail, but something is needed to glue it all together. In the case of EVO:Rail, that glue consists of these ingredients:

For the frontend:

  • HTML 5
  • CSS 3
  • JavaScript

For the backend:

  • Java
  • Spring Platform
  • Python (LoudMouth and Scripts)
  • BASH Scripts

Further core components:

  • Backbone.JS
  • jQuery
  • WebSockets

EVO:Rail uses mDNS and DNS-SD to 'advertise' itself to other appliances. This way, a new node or appliance is detected by the EVO:Rail engine even before it is configured. This is where Zeroconf and LoudMouth come in; it works much like Apple's Bonjour or the iTunes sharing service. A service is announced via multicast DNS, the EVO:Rail engine picks this up and offers you a choice of what to do.
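LoudMouth itself is not publicly available, but the underlying mechanism is standard DNS-SD. As a rough illustration, this is what advertising and discovering a service looks like with the third-party python-zeroconf library; the `_evorail._tcp.local.` service type, the port and the properties are invented for the example.

```python
# Illustration of DNS-SD advertise/discover with python-zeroconf.
# The service type, port and properties below are made up.
import socket
from zeroconf import Zeroconf, ServiceInfo, ServiceBrowser

SERVICE_TYPE = "_evorail._tcp.local."  # hypothetical service type

class Listener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"discovered {name} at {info.parsed_addresses()}:{info.port}")

    def remove_service(self, zc, type_, name):
        print(f"{name} went away")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()

# Advertise: roughly what a new, unconfigured node would do.
info = ServiceInfo(
    SERVICE_TYPE,
    f"node-01.{SERVICE_TYPE}",
    addresses=[socket.inet_aton("192.168.1.10")],
    port=7070,  # made-up port
    properties={"state": "unconfigured"},
)
zc.register_service(info)

# Browse: roughly what the EVO:Rail engine does to detect new nodes.
browser = ServiceBrowser(zc, SERVICE_TYPE, Listener())

input("Press Enter to stop...\n")
zc.close()
```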

So, basically, if you were a skilled programmer and had access to VMware's internal APIs, you could build such an appliance yourself: the components are mainly open source. VMware worked out the workflows to make the rollout and management functions work perfectly, and a lot of time and effort went into tuning the software to make sure the end-user experience is flawless. This appliance is built by VMware from the ground up; every part is programmed and tested by VMware. And thanks to modern technology like WebSockets, every browser should give you the same experience, with no need to refresh screens for updates or reconfigurations. Any modern browser is supported (Firefox, Chrome, IE10 or newer, Safari).

Things you should know

This is a vendor-supported product. You depend on the vendor for all patches, as the product is tightly integrated with the hardware and firmware of the appliance. When you experience problems, you need to call your vendor, NOT VMware.

Technically, you might expect to mix vendors and still have a harmonized experience, but because EVO:Rail is so tightly integrated with the appliance hardware, you cannot mix and match vendors. Dell might include a BIOS update in its EVO:Rail patch; deploy that onto an HP-based system and you would flash a Dell BIOS into an HP server. You can probably imagine the trouble you're in when that happens.

When you patch, bear in mind that every EVO:Rail node is also a VSAN node. Putting a node into maintenance mode also affects the data stored on it: that data must be made available to the other hosts before patching can continue, and depending on available resources this might take a while. This is true for all VSAN deployments, but in this case you don't have a choice. There is no alternative.
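For reference, this is what that precaution looks like in a regular vSphere environment, sketched with the pyVmomi SDK: entering maintenance mode on a VSAN node while keeping its objects accessible. Hostnames and credentials are placeholders, and task waiting and SSL handling are omitted for brevity.

```python
# pyVmomi sketch: maintenance mode on a VSAN node (placeholder values).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret")
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi-node-01.example.com",
                                         vmSearch=False)

# 'ensureObjectAccessibility' evacuates just enough data to keep VSAN
# objects reachable; 'evacuateAllData' would move everything off the node.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(
        objectAction="ensureObjectAccessibility"))

task = host.EnterMaintenanceMode_Task(timeout=0,
                                      evacuatePoweredOffVms=True,
                                      maintenanceSpec=spec)
# ...wait for the task, patch, reboot, then host.ExitMaintenanceMode_Task(0)
Disconnect(si)
```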

No hardware vendor was able to give us an exact price for the EVO:Rail appliance at this time. HP plans to introduce its appliance, based on the SL2500, in February. Dell will introduce pricing soon, SuperMicro had no pricing available, and Fujitsu is currently testing with selected clients and will introduce its appliance in Q1 2015. The ballpark figure we heard from vendors is between €120,000 and €140,000 per appliance, including all licensing.

So, you think this is the solution for automating all your remote and branch offices. Can you centrally manage all those single instances of EVO:Rail? Short answer: No, you can't (right now). Long answer: This is a function that VMware is working on and it will come in the near future. But it's not there now.

Do you want to add your own tools and services into the appliance? Do you want to rebrand it so you can actually make it your own? Short answer: You can’t (right now). Long answer: This is a function that VMware is working on and will come in the future. But it’s not there now.

And last but not least: do you want to integrate EVO:Rail into your existing VMware infrastructure, adding it to your already running vCenter and managing it like the rest of your hosts? Short answer: You can't (right now). Long answer: VMware is aware of this. The whole idea of EVO:Rail, however, is for it to function on its own and not be part of an existing infrastructure. Then again, you may need that integration. VMware is working on a solution for this, but for now there is no definitive timeframe.

Do you want to extend the hardware? For instance, would you like to add an NVIDIA GRID card to support VDI graphics? You can't. The hardware solution is delivered as is. You can use the software rendering engine, but extending the hardware with PCI Express cards is, for now, not an option.

Do you want to attach your central storage and access LUNs from the appliance? You can, as long as it's NFS-based. For instance, Dell is shipping a Nexenta appliance connected to EVO:Rail via NFS for file access. There are no current plans to integrate external block storage.
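Attaching an NFS export is standard vSphere functionality, nothing EVO:Rail-specific. Below is a minimal pyVmomi sketch, assuming you already hold a HostSystem object; the function and its arguments are illustrative placeholders.

```python
# pyVmomi sketch: mount an NFS export as a datastore on an ESXi host.
from pyVmomi import vim

def mount_nfs(host, remote_host, remote_path, datastore_name):
    # 'host' is a vim.HostSystem; the other arguments are placeholders
    # for the NFS server (e.g. a Nexenta appliance) and its export.
    spec = vim.host.NasVolume.Specification(
        remoteHost=remote_host,    # NFS server address
        remotePath=remote_path,    # exported path on the server
        localPath=datastore_name,  # datastore name as shown in vSphere
        accessMode="readWrite",
    )
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```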

Does EVO:Rail use NSX for networking? No, it doesn't. EVO:Rail uses core vSphere networking to connect the dots. There is no NSX integration for now: NSX consists of too many parts, which goes against EVO:Rail's design principle of keeping things as simple as possible.

Conclusion

EVO:Rail will redefine the way datacenters are deployed. As JP Morgan stated, "This could be an iPhone moment for enterprise use of cloud computing." That is a big deal. For greenfield deployments in SMBs and small enterprises, this could become the de facto standard for running a local infrastructure. No more tiresome building of local compute, storage and networking. No more worrying about picking the right licenses for the job: everything is integrated. No more expensive specialists keeping the lights on for a company whose core business isn't IT at all.

Then again, there are some things to consider. Choosing your vendor wisely should be your first step, as you will be even more dependent on their services for patches and hardware replacement than you already are. Central management across several locations would be a huge benefit; it is not here yet, and there is no definitive public timeframe for when it will be. That could break your case. Also, your workload has to fit the box: if you need additional hardware such as flash cards or graphics cards, you have to build your own solution. And last but not least, the storage profile for EVO:Rail is described as Tier 2 storage. If your workload needs more, you need to build that "more" yourself.

We think EVO:Rail will permanently change the way we look at datacenter deployments. It’s a very big first step into automating tasks that were never properly automated and sold as a full service solution before. But we also think that it will take a bit more time for EVO:Rail to mature and, more importantly, for the market to get used to the idea of this level of automation.

Technical Whitepaper by VMware