Docker has become one of those buzzwords that usually annoy me, because too often there is no substance behind them.
But Docker actually has substance, it is really useful, and I like it.
What is Docker and what can it do for you?
Docker is built upon Linux Containers (LXC), which provides virtual environments on an operating-system level.
It lets you run multiple isolated systems (called containers) on a single host. All these systems are completely separated
with respect to processes, networking, users, mounted filesystems and so on.
Combined with cgroups, this allows limiting and prioritizing resources like CPU, memory and networking.
LXC with cgroups is quite complex, but Docker is not.
So, it is a virtual machine, right?
Kind of… but no. Docker uses your currently running kernel, so it does not load a complete operating system like VMs do.
It just isolates the environment from your host and from other containers.
This makes Docker quite lightweight; it is no big deal to have several containers running in parallel, and they are fast!
Possible use cases could be:
- Run services, like a webserver, in an isolated container. If there is any vulnerability in the software, it will not affect the host system at all. It is also incredibly easy to replicate and distribute the container.
- Put your build environment in a container. I use Jenkins a lot at work. One big drawback is that the Jenkins server needs to have all build environments installed, and it is a mess to reinstall them all if the server needs to be migrated. A solution to this is to use Docker for all kinds of build environments and set them up as Jenkins slaves. For example, one container could have a cross toolchain for ARM, another for PPC. The obvious advantage is that the environments may be replicated on any number of machines with no effort. Another big advantage is that I may share such a container with our customers, and they will have exactly the same environment as we are using on our build server.
- Experiment with your system without being afraid to break things. If you screw things up, just discard the image and start over.
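The last use case needs nothing but a stock image; a hypothetical throw-away session (the container ID is just an example placeholder) could look like:

```shell
# Start a disposable shell in a stock Ubuntu image
docker run -ti ubuntu /bin/bash
# ...experiment and break things inside, then exit...
# Throw the container away afterwards
docker rm <container-id>
```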
Create a container
Ok, let's create a container from the beginning. There are several public container images that Docker may use, and you can of course create your own repositories. Here we will take one of those public Ubuntu images and put an ARM toolchain on it.
Create a Dockerfile; this file will contain all changes that you will overlay on top of the Ubuntu image. All commands will run sequentially.
Here is an example of a Dockerfile:
FROM ubuntu
MAINTAINER Marcus Folkesson <email@example.com>
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get install -y openssh-server
ADD gcc-linaro-5.1-2015.08-x86_64_arm-linux-gnueabihf.tar.xz /opt
ADD start_services.sh /
RUN useradd -s /bin/bash -m build
ADD authorized_keys /home/build/.ssh/authorized_keys
RUN mkdir /var/run/sshd
CMD ["/bin/bash", "/start_services.sh"]
EXPOSE 8080/tcp 22/tcp
This Dockerfile will:
- Download an ubuntu image
- Run several apt-get commands
- Put an ARM toolchain (that I have in the same directory as the Dockerfile) into /opt
- Put my start_services.sh into /
- Add my public RSA key so that I don’t need to type a password when I ssh
- Run my start_services.sh when the container starts
- Expose ports 8080 and 22
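The start_services.sh referenced by ADD and CMD is not shown here; assuming its only job is to keep the sshd we installed above running in the foreground, a minimal sketch, created next to the Dockerfile, could be:

```shell
# start_services.sh is an assumption -- the original file is not shown in
# this post. All it needs to do is keep sshd in the foreground so that the
# container does not exit.
cat > start_services.sh <<'EOF'
#!/bin/sh
# Run the SSH daemon in the foreground; the container lives as long as it does
exec /usr/sbin/sshd -D
EOF
chmod +x start_services.sh
```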
Docker will automatically create a snapshot after each step, so you can magically jump in between two steps.
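Each snapshot is an image layer of its own; you can list them and even start a shell from an intermediate one (a session sketch; the image ID is just an example placeholder):

```shell
# List the layers that each Dockerfile step produced
docker history experiment
# Jump in between two steps by starting a container from an intermediate layer
docker run -ti <intermediate-image-id> /bin/bash
```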
Create an image called ‘experiment’ (the path should point to the directory containing your Dockerfile):
#docker build -t experiment <path/to/build/directory>
This will do all the steps in Dockerfile.
You can now inspect your image with:
#docker images
REPOSITORY          TAG       IMAGE ID        CREATED               VIRTUAL SIZE
experiment          latest    68a982bf6ebe    About a minute ago    188.7 MB
Start the container in interactive mode with bash:
#docker run -ti experiment /bin/bash
Running ls / inside the container shows the isolated root filesystem:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
There are of course many other things that you may look at.
For example: remap ports from your host to the container (otherwise a running webserver would be pretty useless), share folders between the host and a container (or between containers!), and control your containers with stop, start and exec (see the Docker manual), and much much more.
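A hypothetical sketch of some of those options (the ports, paths and container ID are just example placeholders):

```shell
# Map host port 8080 to container port 8080 and share a host directory
docker run -ti -p 8080:8080 -v <host-dir>:<container-dir> experiment /bin/bash
# Stop and restart a container, or run an extra command in a running one
docker stop <container-id>
docker start <container-id>
docker exec -ti <container-id> /bin/bash
```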
Also, try to keep your fingers away from changing things inside the running container (unless you are experimenting with some obscure stuff); instead, describe it all in your Dockerfile. Otherwise it is hard to reproduce the same container again.
With that said, let’s start doing some obscure stuff in our own isolated container!