Intro to Docker: Why and How to Use Containers on Any System

If you have your ear even slightly to the ground of the software community, you’ll have heard of Docker. Having recently enjoyed a tremendous rise in popularity, it continues to attract users at a rapid pace, including many global firms whose infrastructure depends on it. Part of Docker’s rise to fame can be attributed to its users becoming instant fans with evangelical tendencies.

But what’s behind the popularity, and how does it work? Let’s go through a conceptual introduction and then explore Docker with a bit of hands-on playing around.

What is Docker?

Docker allows you to run software in an isolated environment called a container. A container is similar to a virtual machine (VM) but operates in a completely different way (which we’ll go into soon). Whilst providing most of the isolation that a VM does, containers use just a fraction of the resources.

Why it’s great

Before we dive into technical details, why should you care?

Consistency

Let’s say you’re coding a web app. You’re developing it on your local machine, where you test it. You sometimes run it on a Raspberry Pi, and soon you’re going to put it on a live server for the world to see.

Wouldn’t it be great if you could consistently spin up the same exact environment on all of your devices? If your web app runs correctly inside of a Docker container on your local box, it runs on your Pi, it runs on the server, it runs anywhere.

This makes managing project dependencies incredibly easy. Not only is it simple to deal with external libraries and modules called directly by your code, but the whole system can be configured to your liking. If you have an open source project which a new user wants to download and run, it’s as simple as starting up a container.

It’s not just running code that benefits from a reproducible environment — building code in containers is commonplace as well; we wrote about using Docker to cross-compile for the Raspberry Pi.

Scalability

If you use Docker to create services which have varying demand (such as websites or APIs), it’s incredibly easy to scale your provisioning by simply firing up more containers (providing everything is correctly architected to do so). There are a number of frameworks for orchestrating container clusters, such as Kubernetes and Docker Swarm, but that’s a story for another day.

How it works

Containers are pretty clever. A virtual machine runs on simulated hardware and is an entirely self-contained OS, whereas containers natively share the kernel of the host.

This means that containers perform far better than virtual machines. When we talk about Docker, we’re talking about how many Linux processes we can run, not how many OSes we can keep afloat at the same time. Depending on what they’re doing, it’s possible to spin up hundreds if not thousands of containers on your PC. Furthermore, starting a container takes seconds or less, compared to minutes for many VMs. Since containers are so lightweight, it’s common practice to run each aspect of an application in its own container, which makes for excellent maintainability and modularity. For example, you might have separate containers for your database server, redis, nginx and so on.

But if containers share the host kernel, then how are they separated? There’s some pretty neat low-level trickery going on, but all you need to know is that Linux namespaces are heavily leveraged, resulting in what appears to be a fully independent container, complete with its own network interfaces and more. However, the barrier between a container and its host is still weaker than with a VM, so for security-critical applications, some would advise steering clear of containers.
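As a small illustration of the mechanism (a Linux-only sketch — the exact namespace IDs will differ from system to system), every process's namespace memberships are visible under /proc:

```python
import os

# On Linux, /proc/<pid>/ns/ exposes which namespaces a process belongs to.
# Two processes in the same namespace see the same ID; a containerised
# process gets different IDs from the host, which is what isolates it.
for ns in ("net", "pid", "mnt", "uts"):
    print(ns, os.readlink(f"/proc/self/ns/{ns}"))
```

Run this on the host and then inside a container, and you'll see different IDs for each namespace.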

How do I use it?

As an example, we’re going to build a minimal web server in a Docker container. In the interests of keeping it simple we’ll use Flask, a Python web microframework. This is the program that we want to run:

 """main.py"""
from flask import Flask app = Flask(__name__) @app.route('/')
def home(): return "Hello from inside a Docker container!" if __name__ == '__main__': app.run(host='0.0.0.0', port=80)

Don’t worry if you’re not familiar with Flask; all you need to know is that this code serves up a string on localhost:80.
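If you happen to have Flask installed locally, you can sanity-check the route before involving Docker at all, using Flask's built-in test client:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello from inside a Docker container!"

# Flask's test client exercises the route in-process, no server needed.
response = app.test_client().get('/')
print(response.data.decode())  # → Hello from inside a Docker container!
```

This is purely optional — the whole point of Docker is that the container will carry Flask for us.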

Running a Process Inside a Container

So how do we run this inside of a container? Containers are defined by an image, which is like a recipe for a container. A container is just a running instance of an image (this means you can have multiple running containers of the same image).

How do we acquire an image? Docker Hub is a public collection of images, which holds official contributions but also allows anyone to push their own. We can take an image from Docker Hub and extend it so that it does what we want. To do this, we need to write a Dockerfile — a list of instructions for building an image.

The first thing we’ll do in our Dockerfile is specify which image we want to use/extend. As a starting point, it would make sense to pick an image which has Python already installed. Thankfully, there’s a well-maintained Python image, which comes in many flavours. We’ll use one with Python 3.7 running on Debian stretch.

Here’s our Dockerfile:

FROM python:3.7-stretch
COPY app/ /app
WORKDIR /app
RUN pip install Flask
CMD ["python", "main.py"]

After the first line, which we discussed above, the rest of the Dockerfile is pretty self-explanatory. We have a directory hierarchy set up like so:

app
└── main.py
Dockerfile

So our app/ directory gets copied into the container, and we run the rest of the commands using that as the working directory. We use pip to install Flask, before finally specifying the command to run when the container starts. Note that the process that this command starts will be inherently tied to the container: if the process exits, the container will die. This is (usually) a good thing if you’re using Docker properly, with only a single process/chunk of a project running inside each container.

Building an Image

Ok, we’ve written some instructions on how to build our image, so let’s build it.

$ docker build .

This tells Docker to look for a Dockerfile in the current directory (.). But to make it a bit easier to run, let’s give our built image a name, called a tag.

$ docker build -t flask-hello .

Since this is the first time we’ve used this Python image from the hub, it takes a minute or two to download, but in future, it will be available locally.

Now we have a shiny new image, which we can run to produce a container! It’s as simple as this:

$ docker run flask-hello

Our container runs successfully, and we get some output from Flask saying that it’s ready to serve our page. But if we open a browser and visit localhost, there’s nothing to be seen. The reason is of course that our server is running inside the container, which has an isolated network stack. We need to publish port 80 to be able to access our server, which the -p flag does by mapping a host port to a container port (in HOST:CONTAINER form). So let’s kill the container with CTRL+C and run it again with the port published.

$ docker run -p 80:80 flask-hello

…and it works!

Summary

The first time you see this process, it might seem a bit long-winded. But there are tools like docker-compose to automate this workflow, which become very powerful when running multiple containers/services.
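For instance, a minimal docker-compose.yml for our example might look something like this (a sketch — the service name is just illustrative):

```yaml
services:
  web:
    build: .            # build the image from our Dockerfile
    ports:
      - "80:80"         # same port mapping as the -p flag
```

With that file in place, a single docker-compose up builds the image and starts the container in one step.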

We’ll let you in on a secret: once you get your head around Docker, it feels like being part of an elite club. You can easily pull and run anyone’s software, no setup required, with only a flourish of keystrokes.

  • Docker is a fantastic way to run code in a reproducible, cross-platform environment.
  • Once you’ve set up your image(s), you can run your code anywhere, instantly.
  • Building different components of an app in different containers is a great way to make your application easy to maintain.
  • Though this article focused on web services as an example, Docker can help in an awful lot of places.

Containers, container orchestration, and scalable deployment are exciting areas to watch right now, with developments happening at a rapid pace. It’s a great time to get in on the fun!