Containers
The ultimate deployment tool, or just another tech fad?
Ok, let’s start with an easy one: what is Docker?
That’s simple: it’s a tool set for managing deployments of containers.
You’re stretching the definition of ‘simple’ a bit. Can you break it down for me a bit more? Let’s start by explaining what a container is.
OK. Remember that Linux is just the kernel and the operating system is this plus all the tools that sit on top of it?
Yes, but I thought I got to ask the questions?
Sorry. A mild digression only. The Linux kernel is the bit that sits on top of the hardware and controls access to the CPU, memory, and all the other stuff that makes up your computer. Normally, you can only use one kernel on a computer at a time. However, you can have many copies of everything else. Containers are a way of encapsulating this ‘everything else’ so that they can share a kernel, and this enables you to run multiple distros on the same hardware.
Like having a dual-booting Linux system?
No. Using containers, you can run multiple distros at the same time or, as is more common, run the same distro multiple times.
Ah, so containers are a form of virtualisation, like Virtualbox or Qemu?
From a user’s perspective they’re pretty similar. You have a host OS, and within that host OS, you can boot more versions of Linux. However, at a technical level, they work in very different ways. In virtualisation, you have an application on the host OS that simulates a CPU, then you have another entire OS (including kernel) that runs on this simulated CPU.
In containers, you only ever have one kernel. It’s the same for the host operating system and the other operating systems that you run. The containers have their own chunk of the filesystem where they keep all their data, and behave exactly like independent OSes, but it all runs atop the same kernel.
There are advantages and disadvantages to this. Because they run the same kernel, you can't run different operating systems like you can with virtualisation. On the other hand, because they don't simulate the CPU, the performance is better.
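You can see this kernel sharing for yourself. As a rough sketch (assuming Docker is installed and you have an Ubuntu image to hand), asking for the kernel version inside a container gives exactly the same answer as asking the host:
# On the host, print the running kernel version
uname -r
# Ask a throwaway Ubuntu container the same question: it reports the
# same version, because there is only ever one kernel
docker run ubuntu uname -r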
Great! Now I understand it, and we’ve still got a page to go. Shall we just stick a picture of Linus Torvalds in there and nip off to the pub early?
Not so fast! That’s containers. We need to get onto Docker itself.
Oh right, yes. You said before that it’s a tool set for managing deployments on containers. I know what containers are, but why would you want to deploy them anywhere?
The big advantage of containers is that you can encapsulate an entire environment into a single block. This enables developers to pull all the libraries, data, and software into a single container and distribute this. By shipping the container rather than just the software, they don't need to worry about dependencies, different configurations, or anything like that.
So it’s a bit like statically compiling software, but including the whole OS?
I’d never thought of it like that, but I suppose it is really.
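You can even move an image around as a single file in much the same way. As a rough sketch (the image name and filename here are just placeholders), docker save and docker load let you carry a whole environment from one machine to another as one archive:
# Export an image, complete with its filesystem, as a single tar archive
docker save <image-name> > myimage.tar
# On another machine, import that archive back into Docker
docker load < myimage.tar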
Sounds awesome! What’s the inevitable downside?
Nothing major, but the containers will take up more disk space than just the plain software, and they have to be updated separately to keep them current with the latest bugfixes and security patches.
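Keeping an image current is usually just a case of pulling it again and recreating your containers from the new version. A rough sketch (the image name is a placeholder):
# Fetch the latest published version of an image; containers started
# from the old version keep using it until you recreate them
docker pull <image-name>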
So is Docker poised to become a universal replacement for apt-get, Yum, and all the other package managers?
Not really. No one’s suggesting that containers are a sensible way of installing all your regular software. The main target market for Docker is people providing services across a network. For example, you could have a Docker image for OwnCloud, and another for WordPress. They would each have their own environments with everything installed, set up, and ready to run.
By keeping everything contained in this way, it's really easy to customise and deploy. A developer could pull the Docker image to their development machine, make any changes they like, then push it to the server. They don't need to worry about the development environment being different to the live environment, because the whole environment is included in the container. It doesn't matter if it's developed on bleeding-edge Arch Linux and deployed on ultra-stable CentOS; it will always run the same. As well as making it easy to develop, this should remove much of the hassle of setting up test servers or migrating to a new environment.
The developer can also make any changes they need to the environment without worrying about how these may affect other software running on the server, because that software will be in a separate container.
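As a rough sketch of that deployment step (the image and container names here are illustrative, and assume a ready-made wordpress image is available in the public registry; in practice you'd also point it at a database container), getting a service running can be as simple as:
# Fetch the ready-made image from the public registry
docker pull wordpress
# Run it in the background, mapping port 80 inside the container
# to port 8080 on the host
docker run -d -p 8080:80 --name my-blog wordpress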
I almost understand your first point now: ‘a tool set for managing deployments of containers’. What sort of tools are typically in the set?
The main tool is (unsurprisingly) called docker, and it has options for getting and manipulating containers. There's a public repository of pre-built images for common purposes. You can grab these with:
docker pull <name>
Then, once you’ve got one installed, you can run commands on it with:
docker run <image-name> <command>
That’s the basic use. There are also a few options to help you manage the containers. It’s complex technology, but it’s surprisingly easy to use.
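For example, a few of the everyday management commands look like this (the names are placeholders, in the same style as above):
# List running containers; add -a to include ones that have exited
docker ps -a
# List the images you have downloaded locally
docker images
# Stop a running container, then remove it
docker stop <container-name>
docker rm <container-name>
# Remove an image you no longer need
docker rmi <image-name>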
This all sounds so good, you must have found it really useful when setting up the Linux Voice web services.
Actually, no. We didn’t use Docker. It is, as you say, really good, but it’s also really new and still under heavy development. When we set up our system at the start of 2014, the current version was 0.8, and it wasn’t quite ready for production use. It was quite stable, but not yet as mature as we like our server software to be.
Ah… so is it going to be another of these projects that keeps promising, but never seems to get to a stable release?
No! The first release was in March 2013, so it wasn't yet a year old when we set up LinuxVoice.com. In that time it had gone all the way to version 0.8. By August 2014, it was up to version 1.2, and it's looking more and more stable every day. Of course, just because it's called version 1.2 and the team behind it say that it's production-ready doesn't mean it's ready for everyone. Sysadmins are a conservative species by nature, so we don't expect many people to start using it in important services for a while yet.
I’m not a conservative sysadmin, I’m a reckless maverick programmer. How can I get started with Docker?
There are packages and instructions for most major Linux distributions at http://docs.docker.io/en/latest/installation.
You said it ran on Linux containers. I have this friend who runs a commercial OS and won’t listen to reason. Can he run it?
Sort of. You can run Linux inside VirtualBox on OS X or Windows, then run Docker in that virtual machine. There are instructions at the installation website above. Of course, it's far better just to give your friend a talking-to about the advantages of open source systems.