Containers, containers, containers... even in 2017?
Last year, Docker suddenly was everywhere. Containers were the new thing: scalable, fast and agile. The future is boxed! Last month, vSphere Integrated Containers went 1.0, so VMware is ready for this new way of life. But is it really the new way of life? Some say containers will be the end of virtualisation. I was always under the impression that containers may be good for deploying apps, but apps are only half the story. What you really want is data. And as containers are clones that deploy rapidly but hold no specific data such as customer information or order data, I always had my reservations about their practical usability. Apps and data alone do not address all enterprise challenges either. All the more reason to take a dive with the blue whale and find out how far along this trend is.
What are containers?
Containers differ from hardware virtualization but share a similar vision: consolidating multiple secured environments on one host. The big difference is that a standard hypervisor emulates hardware so the “instance” can install its own operating system and applications, whereas containers use the same operating system to run different applications side by side in secured, contained environments. The advantage over a full-blown virtual machine: a container is much more agile because it does not need to replicate a complete OS. It simply presents the same APIs every time, so you can run 12 instances of Apache in containers and clone another 20 in the blink of an eye.
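To make this a bit more tangible, here is a minimal sketch of running several Apache instances side by side on one host. It assumes a host with the Docker engine installed and uses the stock httpd image from Docker Hub; the names and ports are just examples:

```
# Pull the official Apache httpd image once; every instance reuses it.
docker pull httpd:2.4

# Start three isolated Apache containers on the same host,
# each mapped to a different host port.
docker run -d --name web1 -p 8081:80 httpd:2.4
docker run -d --name web2 -p 8082:80 httpd:2.4
docker run -d --name web3 -p 8083:80 httpd:2.4

# All three share the host kernel but run in their own contained environments.
docker ps
```

Starting another handful of instances is just a matter of repeating the run command with a different name and port, which takes seconds rather than the minutes a full VM deployment would need.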
Why are containers such a big thing all of a sudden? Because they fix what is called “dependency hell”. A developer builds an app and it runs fine on his system. He deploys it to a test system and all of a sudden it doesn’t work like it should. This leaves the system admin and the developer pointing fingers at each other over why it doesn’t work, and it takes time and effort to fix what really never was broken. Containers solve this problem: the developer can pack every dependency into the container, and as long as the host offers the correct APIs, it will run one or multiple instances of the application until the host runs out of resources. Platforms such as Mesos and operating systems such as CoreOS make it possible to join multiple servers into one pool of compute and memory to run containers on, much like vSphere clusters servers to act as one compute and memory platform to run VMs on.
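As an illustration of packing dependencies with the app, a minimal Dockerfile might look like the sketch below. The base image, file names and dependency list are assumptions for the sake of the example, not something prescribed by Docker:

```
# Hypothetical example: a small Python web app with its dependencies baked in.
FROM python:3.6-slim

WORKDIR /app

# Copy the application and its pinned dependency list into the image.
COPY requirements.txt app.py /app/

# Install the exact dependency versions the developer tested against.
RUN pip install -r requirements.txt

# Every container started from this image runs the app the same way, on any host.
CMD ["python", "app.py"]
```

Build it once with `docker build -t myapp .` and the exact same image runs on the developer's laptop, on the test system and in production, which is precisely what takes the finger-pointing out of the equation.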
How it all began – a very short story
The idea of a contained environment is anything but new. Unix offered a “change root” or “chroot” option by the end of the seventies, changing the root directory of a process and its children to a different location in the filesystem, with access secured by file-level rights.
Fast forward to the year 2000: FreeBSD implemented “Jails”. A FreeBSD Jail allows administrators to partition a FreeBSD system into several independent, smaller systems, called “jails”, each with its own IP address and configuration.
A short jump to 2006: Sun released Solaris 10. Included in that release were Solaris Zones and Solaris Containers. A Solaris Container is the combination of system resource controls and the boundary separation provided by Zones. Zones act as completely isolated virtual servers within a single operating system instance. By consolidating multiple sets of application services onto one system and placing each into an isolated virtual server container, system administrators can reduce cost and provide most of the same protections as separate machines on a single machine. One could say this is the original model Docker derived from.
Then, fast forward to 2013: Docker sees the light of day. Linux had implemented its own form of containers with LXC in 2008, and Docker takes advantage of these technologies. Later versions replaced the LXC technology with Docker’s own library, libcontainer. With Docker, developers can create and run application containers quickly. And with the release of Docker Hub, developers can download and run application containers even faster. Docker offers an entire ecosystem for container management, and the Docker format is more or less a standard.
Where it is now
Docker’s success did not go unnoticed. More technologists presented their take on the container concept. In 2014, CoreOS CEO Alex Polvi introduced the company’s new container project, called Rocket, as a direct response to Docker’s “fundamentally flawed” approach. Docker’s technical approach is not secure because it requires a central Docker daemon, Polvi said. Docker’s format is widely regarded as a standard, but whether it becomes the official one remains to be seen.
“It remains to be seen what the official standard for containers is going to be,” 451 Research’s Lyman said. “I think we’ll see something more like what we’ve seen with hypervisors. VMware is the most prominent and widespread, but it’s certainly not the standard, and we’re likely to see a similar thing with Docker and Rocket, and maybe others.”
“In practice, Docker is implemented in a vast majority of cases where containers are running on top of virtual machines. The notion that containers are a replacement for VMs is certainly not one that we perpetuate,” said David Messina, a marketing vice president at Docker. “The core values of Docker containers are complete portability of applications … as opposed to higher densities on hardware.”
According to figures published at DockerCon 2016, the conference by and about Docker, adoption by enterprises and for production use is increasing. Currently, the Docker Hub library contains over 460,000 images and more than 4 billion pulls have been made from that repository. Another interesting point was brought up by Datadog, a company mostly known for infrastructure monitoring. According to Datadog, they see the following among their users:
- A 30% increase in Docker adoption in one year
- Docker runs on 10% of the hosts they monitor, up from 2% eighteen months ago
- Docker is mostly used by large companies with a large number of hosts
Finally, the increasing use of Docker in enterprise environments also shows up in the RightScale 2016 State of the Cloud report, which includes a survey of DevOps tools. They claim to see these figures:
- Overall Docker adoption more than doubles to 27% vs. 13% in 2015, and another 35% have plans to use Docker
- An even higher percentage of enterprises use Docker (29%) and plan to use it (38%)
- 26% have workloads already running in containers: 8% in development and 18% in production
Being a sceptic, the first question I ask while looking at these statistics is: 27% of what? When you look more closely at the report, it states:
- Docker is the fastest growing DevOps tool, with adoption more than doubling year-over-year from 13 percent in 2015 to 27 percent in 2016. In the enterprise, Docker also saw more than 2x growth (from 14 percent to 29 percent).
- Docker could soon be the most used DevOps tool in the enterprise as 38 percent of enterprises have plans to use it. This compares to 20 percent that plan to use Chef and 19 percent that plan to use Puppet.
When you look at their stats, it becomes clear that more than 60% of the respondents in their survey are USA-based and more than 30% work in DevOps. If I compare that to the companies I see on a regular basis, which are in Europe only and do not have any DevOps people employed, I am a bit puzzled by the figures presented. Also, we do not perceive Chef and Puppet as DevOps tools only. The customers I see use Chef and/or Puppet for configuration management and consistency. Not quite a DevOps task, but rather a systems management task, I would say. Comparing one to the other feels a bit odd.
But there we are, January 2017. Containers are here to stay. VMware released VIC 1.0 in December 2016. You can officially run any Docker container in your vSphere environment, supporting DevOps in your enterprise environment while staying in control of resource access and usage. Both can coexist wonderfully.
VIC and Docker
When you look at all of the above, you might get the impression that the hypervisor is bound to go extinct, replaced by containers. VMware thinks it will not go quite so fast, and I agree. There are a lot of enterprise IT challenges you can’t, and probably wouldn’t want to, solve with containers. But VMware thinks containers are too important to ignore, and hence vSphere Integrated Containers was born. What is VIC? Here’s the introduction from the installation document, which describes it better than I could have:
What is vSphere Integrated Containers?
vSphere Integrated Containers Engine allows you, the vSphere administrator, to provide a container management endpoint to a user as a service. At the same time, you remain in complete control over the infrastructure that the container management endpoint service depends on. The main differences between vSphere Integrated Containers Engine and a classic container environment are the following:
- vSphere, not Linux, is the container host:
  - Containers are deployed as VMs, not in VMs.
  - Every container is fully isolated from the host and from the other containers.
  - vSphere provides per-tenant dynamic resource limits within a vCenter Server cluster.
- vSphere, not Linux, is the infrastructure:
  - You can select vSphere networks that appear in the Docker client as container networks.
  - Images, volumes, and container state are provisioned directly to VMFS.
- vSphere is the control plane:
  - Use the Docker client to directly control selected elements of vSphere infrastructure.
  - A container endpoint Service-as-a-Service presents as a service abstraction, not as IaaS.
vSphere Integrated Containers Engine is designed to be the fastest and easiest way to provision any Linux-based workload to vSphere, if that workload can be serialized as a Docker image.
vSphere Integrated Containers comprises the following major components:
- vSphere Integrated Containers Engine: A container engine that is designed to integrate all of the packaging and runtime benefits of containers with the enterprise capabilities of your vSphere environment.
- vSphere Integrated Containers Registry: A Docker image registry with additional capabilities such as role-based access control (RBAC), replication, and so on.
Both components currently support the Docker image format. vSphere Integrated Containers is entirely Open Source and free to use. Support for vSphere Integrated Containers is included in the vSphere Enterprise Plus license.
vSphere Integrated Containers is designed to solve many of the challenges associated with putting containerized applications into production. It directly uses the clustering, dynamic scheduling, and virtualized infrastructure in vSphere and bypasses the need to maintain discrete Linux VMs as container hosts.
Compatibility
If you were to ask a die-hard container fan, he (or she) would state that this is not the way containers should be run and that it violates the concept: a container is not a VM. And that is true. But from an operational standpoint, creating a huge Linux VM and handing over full control to a DevOps department would violate more security and management principles than I can think of right now. VIC is a compromise that offers the interface DevOps wants with the control Operations needs. I wouldn’t call it the best of both worlds, but it is certainly the best compromise you can think of.
The current version supports the Docker image format and a control plane that offers container management similar to a native Docker installation, which you can talk to using your Docker client.
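In practice this means pointing a standard Docker client at the endpoint a Virtual Container Host exposes. A minimal sketch, assuming a VCH that answers on vch.example.com port 2376 (address and port are placeholders, and depending on how TLS was configured you may also need to supply client certificates):

```
# Point the regular Docker client at the VIC endpoint instead of a local daemon.
export DOCKER_HOST=tcp://vch.example.com:2376

# Familiar Docker commands now provision container VMs on vSphere.
docker info
docker run -d -p 8080:80 httpd:2.4
docker ps
```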
Getting and Installing VIC
Getting VIC is easy. Log on to My VMware, click Go to Downloads and you should be presented with the two parts VIC consists of: vSphere Integrated Containers Engine and vSphere Integrated Containers Registry. After downloading, you can find the installation and configuration manual on VMware’s GitHub pages. The installation manual was written for version 0.8 but will serve you fine for v1.0.
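For reference, deploying a Virtual Container Host is done with the vic-machine utility that ships in the Engine download. The sketch below uses placeholder values, and the option names are taken from the 0.8/1.0-era documentation, so check the GitHub docs for the exact flags in your version:

```
# Deploy a Virtual Container Host (VCH) against vCenter Server.
# All values are placeholders; consult the VIC documentation for your release.
./vic-machine-linux create \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --name my-first-vch \
  --image-store datastore1 \
  --bridge-network vic-bridge \
  --no-tlsverify
```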
Challenges with Containers
So there we are. Downloaded VIC and getting ready to dive into containers? Hold your horses for a moment, as it’s not all golden. As with any emerging technology, there are some drawbacks and issues you need to be aware of:
- If your core application currently runs natively on Windows, it will not run in a container just like that. Docker containers are Linux-based, so if your application is Windows-based you either stick with what you have or have a DevOps team rewrite the app into a containerised version. That might be a challenge.
- If you end up with data in your container that you need to keep, you have to take action yourself. Docker containers offer backup and restore, but it’s all manual. It requires either creating those backups and recoveries by hand each and every time or writing scripts. Although scripting is not that difficult, scripts do have a tendency to break when software changes, infrastructure is altered and so on. Rings a bell? You bet it does. (A minimal volume sketch follows this list.)
- Building an app in a container can be problematic when your application requires access to specialized devices such as a GPU, when it needs to work across multiple nodes, when you want to run containers in a multi-tenant environment with arbitrary third-party code inside, when your application has a GUI, or when you don’t have reliable centralized data storage (see also the previous point). That is not to say it cannot be done, but it brings special challenges that may make you wonder whether it isn’t simpler to install your app the “traditional” way in a VM and be done with it, rather than shoehorning it into a container.
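As a small counterpoint to the data concern above: Docker does offer named volumes that keep data outside the container’s writable layer, so the container itself stays disposable. A minimal sketch, with made-up volume and container names, using the official mysql image as an example:

```
# Create a named volume that lives outside any single container.
docker volume create orderdata

# Run a database container with its data directory backed by that volume.
docker run -d --name orders-db -e MYSQL_ROOT_PASSWORD=changeme \
  -v orderdata:/var/lib/mysql mysql:5.7

# The container can be removed and recreated; the data in the volume survives.
docker rm -f orders-db
docker run -d --name orders-db -e MYSQL_ROOT_PASSWORD=changeme \
  -v orderdata:/var/lib/mysql mysql:5.7
```

Backing up still means archiving the volume contents yourself, so the point about manual work stands, but the data does survive the container.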
Closing thoughts
In my opinion, containers do have a wonderful future. Especially with service providers who need to service tens of thousands of requests for websites, webshops, weblogs and vlogs. However, the current state of technology makes me wonder if a generic enterprise will have a use for an extended containerised infrastructure. Containers still struggle with data you want or need to keep. I don’t think it’s enterprise ready just yet. But that being said, development is going faster than ever. The problems of yesterday are the solutions of tomorrow.
Cloud Native Apps and containers are here to stay. It would be wise to observe the tech and make sure you know your part for when a vendor or developer knocks on your door asking for your support in deploying a container. Will containers replace hypervisors? I don’t think so. The use cases for the two technologies are different, and both are current. Coexistence will be the key word. And your world will get more complicated: private, hybrid and public clouds, local containers and public containers, securing access and making sure data is kept safe across all platforms.
My advice to you:
- Download VIC, play around, make sure you know what it takes to deploy and manage a container and its resources.
- Watch the IT landscape for that moment where this knowledge is needed. It might be sooner than you think.
- Prepare for a multi-platform, multi-cloud world. Take a look at (multi-cloud) management platforms and make sure you stay in control.
References:
- A brief history of containers, courtesy of the Aqua Security blog: http://blog.aquasec.com/a-brief-history-of-containers-from-1970s-chroot-to-docker-2016
- A brief history of Docker containers, courtesy of TechTarget: http://searchservervirtualization.techtarget.com/feature/A-brief-history-of-Docker-Containers-overnight-success
- Container your enthusiasm, courtesy of TechTarget: http://searchservervirtualization.techtarget.com/feature/Container-your-enthusiasm-Docker-doesnt-want-to-replace-the-VM
- RightScale 2016 State of the Cloud report, DevOps trends: http://assets.rightscale.com/uploads/pdfs/rightscale-2016-state-of-the-cloud-report-devops-trends.pdf
Comments
Unless you are calling mounting persistent volumes a script, you do not need to do anything magical to maintain data within the context of a container. The container itself does not hold the data, but it can consume and update it with ease. It requires slightly different thinking, but it is neither hard nor impossible. Virtual machines also have issues with giving you direct access to hardware. The hypervisor isn’t really going to allow every virtual machine to take ownership of the one GPU in the machine. Containers have their place, as do virtual machines and raw hardware. Finding the right balance is what IT architects and systems professionals do.
It is unfortunate that misinformation is being passed as fact to sell a product when truthfully showing the limitations and the advantages of virtualization in specific use cases would have been sufficient.
Dear MidnightCPT,
I’ve done my best to research all aspects of containers in the enterprise. I find it a bit disappointing that you reduce this article to a sales pitch with “misinformation”. Also, it’s an opinion editorial and you are allowed to disagree. Stating, however, that this is trying to sell you something is simply incorrect.
What is also incorrect is your statement that a hypervisor does not allow GPU access. It does. It can even do this exclusively if you want a VM to take full advantage. That has drawbacks, and therefore Xen and ESX have a graphics abstraction layer that can be shared among multiple VMs, allowing them the advantage of the GPU’s power and capabilities without the drawbacks of a hard hardware connection. You can also assign direct connections to storage devices from within a VM and, if that storage is networked, still move the VM within the cluster using live migration. So it seems you have a couple of facts wrong, leading to an incorrect statement.
Generally I am always open to discussion, as I am very aware that there are so many rapid developments that I can’t possibly keep track of them all. Maybe you can change your approach next time so we can have that open conversation to the benefit of us all.
Alex – perhaps you should reread your article and see how it comes across to someone on the outside. It is selling VMware’s new product offering. If you had started out by simply saying that VMware has a new container offering that allows you to blend some of the best of the container model with the virtualization model within a single control system, that would have been fine. But it comes off as you criticizing Docker containers for something they are actually fully capable of doing, in some cases without any real difficulty at all. Then you talk about downloading your copy today. This comes off as a sales pitch even if that isn’t your intent.
And it is still true that you can’t have every VM on a single machine connecting directly to a single piece of hardware in that machine and expecting exclusive access; this just isn’t possible. You can abstract it and then offer it to multiple guests, but even that is something of a kludge and removes some of the advantages of direct access.
I am generally accepting of the fact that people make mistakes or don’t 100% understand all the technologies out there, I mean really, who does? But just as you ask that we be accepting of you, you also have to accept that if you pitch your piece as a product highlight, you are likely to get people questioning it when you say one product does something the other can’t when in reality it can.