The people who make Docker like to describe it with a metaphor to a much older piece of technology: the shipping container.
While we hardly think of or notice them now, the shipping container was actually a revolutionary piece of technology in its time. No matter what shape or size the original item was, by using a standardized container the owner of the boat, plane, or truck could easily figure out how many resources they needed to allocate.
Docker tries to take this same level of convenience and bring it to the server world. It's the natural extension of tools like Vagrant that let you deploy the same virtual machine you use in development to production environments. Vagrant-style virtual machines are great, but they're heavyweight: they take a lot of resources to run, and much of that is redundant, since a Vagrant image loads an entire new copy of Linux inside an existing one. Wouldn't it be better if you could have Vagrant's convenience and uniformity without reloading the entire operating system? Well, that's exactly what Docker does.
Introduction
In this tutorial, I'll walk you through the entire Docker workflow. First we'll get a simple Python web app up and running: it has a couple of Python dependencies and relies on a Redis database for persistence. Then we'll install Docker and bundle all the web app's requirements (Redis, Python, and the Python dependencies) into one Docker image. Finally, we'll use that image to deploy the app onto a different server.
We'll just be deploying a toy sample app, but the steps to deploy your own real apps would be very similar.
To get started, you'll need either a physical or virtual machine running a recent version of Ubuntu. If you want to follow the tutorial all the way through deployment, you'll also need a second machine (physical or virtual) to deploy to.
Installing Docker
The first step is to install Docker itself. Docker is under rapid development, so the easiest way to install it changes often. Check out Docker's getting started section if you want to try the cutting edge.
Otherwise, follow the steps below and we'll set up a Vagrant virtual-machine-based Docker install that will work on any of the major operating systems. First, head over to Vagrant's website and install the latest Vagrant and VirtualBox for your OS.
Once Vagrant is installed, make a new folder, open a command prompt there and do the following:
vagrant init hashicorp/precise64
vagrant up
# (... wait a while ...)
vagrant ssh
Vagrant just took care of creating a virtual machine running Ubuntu 12.04 for you, and you're now SSH'd into its prompt. We can now follow Docker's Ubuntu installation instructions. Check the website in case anything has changed since this was written, but most likely you can paste the following commands directly into the terminal:
# install the backported kernel
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
# reboot
sudo reboot
You'll get dropped back to your local machine's prompt when the VM reboots, so wait a few moments and do another:
vagrant ssh
... to SSH back into your VM. Now that Docker's prerequisites have been installed successfully, we need to go ahead and install Docker itself. Paste in the following command:
curl -s https://get.docker.io/ubuntu/ | sudo sh
... which will grab a simple Docker install script from Docker's site and run it. Docker should now be installed successfully, so let's start playing with it.
Getting Started With Docker
Once the install script has finished its magic, do the following:
sudo docker run -i -t ubuntu /bin/bash
... to check that the installation was successful. If it works, Docker will proceed to download an Ubuntu Docker image, and after some time you'll end up at what looks like a root prompt. Feel free to play around for a bit; you'll notice that you're in an environment completely separate from your host machine. Notice the root and # in the prompt: you're running as the root user in a new virtual environment. If you issue a users command, you'll see that your other users are no longer present.
It's worth taking a minute to explain what the docker command you just typed did and how this magic happened.
The run Command
The Docker utility seems to have taken a lot of inspiration from git's command line interface and, as a result, it makes use of subcommands. In this case, we ran the run subcommand, which requires two arguments: an image and a command.
It's also smart: if (as in this case) you don't have that image installed, it will query the central Docker repository and download one for you. Here we told it to run an ubuntu image and informed Docker that it should start /bin/bash inside that image. The -t and -i options tell Docker to assign a TTY and run in "interactive mode", in other words, to get you a command prompt. The reason for this is that Docker works a bit differently from other virtualization software you might be familiar with: Docker images don't "boot", they just run. They use the host's existing Linux kernel, so starting a Docker image can be immediate. In some ways, Docker is closer to Linux's chroot command than to more traditional virtualization tools like VMware or VirtualBox.
There are some other key differences from standard virtualization tools. Let's do a quick experiment: we'll create a file and print out its contents:
echo An experiment > experiment.txt
Now when you do:
cat experiment.txt
It will happily print out:
An experiment
So far so good, our silly experiment is working exactly as expected. Let's exit Docker and get back to our host machine's command prompt:
exit
If you restart Docker with the same command you used before:
sudo docker run -i -t ubuntu /bin/bash
... you'll notice that things are no longer behaving quite the way you would expect. If you try to cat the file we created last time, you now get an error message:
root@e97acdf4160c:/# cat experiment.txt
cat: experiment.txt: No such file or directory
So what's going on? Changes to Docker images don't persist by default. To save your changes to a Docker image, you have to commit them, git style. This might take a little getting used to, but it's quite powerful, because it means you can also "branch" them git style (more on that later).
Saving New Images
For now, let's do something a little more useful. Let's install python, redis and a few other utilities that we'll use to run our demo app shortly. Afterwards, we'll commit to persist our changes. Start up a container from the latest Ubuntu image:
docker run -t -i ubuntu /bin/bash
The Ubuntu base image may not include Python, so check whether you've got a copy by typing python at the prompt. If you get an error message, let's install it:
apt-get update
apt-get install python
So far, so good. It's possible that later we'll want to make other projects that make use of Python, so let's go ahead and save these changes. Open up another command prompt (if you're using the Vagrant install recommended above, you'll have to vagrant ssh again from a separate prompt) and do the following:
docker ps
You'll get a list, like the one below, of all the Docker containers currently running:
ID                  IMAGE               COMMAND             CREATED             STATUS              PORTS
54a11224dae6        ubuntu:12.04        /bin/bash           6 minutes ago       Up 6 minutes
The number under the ID column is important: this is the ID of your container. IDs are unique; if you exit your container and run the same image again, you'll see a new number there.
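Since nearly every commit below starts with copying an ID out of docker ps, it can help to pull the ID out programmatically. Here's a minimal sketch; it parses a sample of the output shown above with awk (real docker ps output has more columns, but the ID is always the first one):

```shell
# Sample "docker ps" output: a header row plus one container row
ps_output='ID                  IMAGE               COMMAND
54a11224dae6        ubuntu:12.04        /bin/bash'

# Skip the header (row 1) and print the first column of the next row
latest_id=$(printf '%s\n' "$ps_output" | awk 'NR==2 {print $1}')
echo "$latest_id"
```

You could then write docker commit "$latest_id" tuts/python instead of copying IDs by hand.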
So now that we have Python installed, let's save our changes. To do this, use the commit command, which takes two arguments: the container whose changes you want to store and an image name. The Docker convention is to use a userid followed by a / and the short name of the image, so in this case let's call it tuts/python. Issue the following command to save your Python installation, making sure to substitute your container's ID from the last step:
docker commit <container_id> tuts/python
After a few seconds, it will return with a series of letters and numbers. This is the ID of the image you just saved. You can run this image whenever you want and refer to it either by this ID number or by the easier to remember tuts/python name we assigned to it.
Let's run a copy of the image we just made:
docker run -t -i tuts/python /bin/bash
At this point, you should have two terminal windows open running two separate Docker sessions.
You'll notice now that if you type python in either one, you won't get an error message anymore. Try creating a file in the second window:
touch /testfile
Now switch back to your original Docker window and try to look at the file:
cat /testfile
You'll get an error message. This is because you're running an entirely different "virtual machine", based on the image you created with the docker commit command. Your file systems are entirely separate.
If you open up yet another terminal (again, you'll have to run vagrant ssh if using Vagrant) and do the following:
docker ps
... you'll see that docker now lists two running containers, not just one. You can commit each of these separately. To continue with the git metaphor, you're now working with two branches, and they are free to "diverge".
Let's go ahead and close the last window we opened. If you run docker ps again, there will now be only one ID listed. But what if you want to get back to a previous container? If you type:
docker ps -a
Docker will list out all previous containers as well. You can't run a container that has exited, but you can use a previous container's ID to commit a new image. Running the new image will then effectively get you back to your previous container's state.
Let's close the new windows we opened and switch back to the terminal for the first Docker session that we started. Once back, go ahead and install some more tools for our little app. In this case, we need to install the Python package manager, two Python modules to let Python act as a web server and interact with redis, and the Redis server itself:
apt-get install python-pip redis-server
pip install redis bottle
Once these finish, let's commit this image. From another terminal window, run the following command:
docker ps
... and take note of the ID, then commit it under the name tuts/pyredis:
docker commit <container_id> tuts/pyredis
So we now have a Docker image that contains the necessary tools to run a small Python web app with Redis serving as a backend. If you have any future projects that use the same stack, all you have to do to get them started is run docker run -t -i tuts/pyredis /bin/bash and commit once you've added your source code.
Ok, so our backend is set up. Now to set up the app itself!
Getting Your Source App Into the Image
I've created a small sample app that makes use of the Redis and Python modules we've installed so far. The app is quite simple: all it does is display a list of the Redis keys and provide a rudimentary interface to add and edit them. Let's get the source code onto your host machine (the vagrant ssh session) first:
cd
git clone https://github.com/nikvdp/tuts-docker.git pyredis
In your host machine's home directory you'll now have a pyredis folder containing the Python script we'll be using. So, how do we go about copying this app into our Docker image?
Well, Docker has a nice feature that lets you mount a local directory inside your container. Let's run another Docker image and mount the folder:
docker run -v ~/pyredis:/tuts:rw -t -i tuts/pyredis /bin/bash
This is just like our run commands from before, with the addition of the -v parameter. In effect, this command lets you share a folder between Docker and your host machine. The :'s separate the paths to share. In our case, we're sharing our pyredis folder, located at ~/pyredis on our machine, and mounting it at /tuts inside the Docker image. The rw on the end is for 'read-write' and means that changes made inside the Docker image will also show up on our machine.
At your Docker prompt, you can now do:
cd /tuts
ls
... and see the contents of the ~/pyredis folder on your machine.
This share is temporary, though: if you run this Docker image on another computer, or re-run this image without the -v option, the image will no longer have access to it. Let's copy it to another location inside the actual Docker image:
cp -R /tuts/ /pyredis
Since changes to Docker file systems are ephemeral by default, let's save this to the image by again doing docker ps to get our container ID and committing our changes:
docker commit <container_id> tuts/pyredis
You'll notice here that we committed to the same image name we committed to last time, tuts/pyredis. Docker will update the image and keep a log of all your changes for you. Like git, if you mess up, you can go back to a good version simply by docker run'ing its ID. To see the history of an image, try the following:
docker history tuts/pyredis
You'll see something like this:
ID                  CREATED             CREATED BY
tuts/pyredis:latest 17 seconds ago      /bin/bash
4c3712e7443c        25 hours ago        /bin/bash
ubuntu:12.10        6 months ago        /bin/bash
27cf78414709        6 months ago
This is a history of all the commits we made in the process of creating the tuts/pyredis image, including those we committed to different names, like tuts/python. If you want to go back to the commit right before we copied our pyredis app into /pyredis, you could try (changing the ID to match what yours shows):
docker run -t -i 4c3712e7443c /bin/bash
... and you'll find there's no /pyredis directory.
Running the App
So now we have all the pieces in place. The next step is to actually run the app from inside its container. Since we're deploying a web app, we'll also need to specify some way to access it over the web. The run command has you covered (again): it supports a -p option that lets you specify how ports will be mapped.
If you're using Vagrant to run Docker, you'll need to set up Vagrant's port forwarding before we can do any meaningful tests. If you're not using Vagrant, then just skip this step.
Setting Up Vagrant Port Forwards
If you're using Vagrant to test this, you'll need to set up port forwarding so that your local machine's web browser can access ports on the Vagrant VM, which in turn will forward to the Docker instance's port. In our case, we'll set up our local machine's port 9000 to forward to our Vagrant VM's port 9000, which in turn forwards to port 8080 on our tuts/pyredis Docker instance.
On your local machine, go back to the folder where you first typed vagrant init. You'll find a text file there called simply Vagrantfile. Open it in your favorite text editor and look for the following portion:
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
Uncomment the final line and change the ports from 80 and 8080 to 9000 and 9000, so that your local machine's port 9000 forwards to port 9000 on the VM (where Docker will publish our app). The result should look like this:
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
config.vm.network "forwarded_port", guest: 9000, host: 9000
Now run:
vagrant reload
... which will cause the Vagrant VM to restart itself with the correct port forwards. Once this is complete, you can run vagrant ssh again and continue the tutorial.
Our little pyredis app opens a small web server on port 8080 by default. The following command will let you access port 8080 via port 9000 on your host machine:
docker run -t -i -p 9000:8080 tuts/pyredis /bin/bash
You'll get a Docker root prompt, so let's start up our app:
redis-server > /dev/null 2>&1 &
python /pyredis/app.py
If all goes well, you'll see the following:
Bottle v0.11.6 server starting up (using WSGIRefServer())...
Listening on http://localhost:8080/
Hit Ctrl-C to quit.
This means the server is running. On your local machine, fire up a web browser and point it at localhost:9000 (if you're doing this tutorial on a remote server, make sure you have network access to port 9000 and replace localhost with the address of your web server).
With a little luck, you should see the main screen for our tiny app. Go ahead and add a few keys and change some values. The data should persist. However, if you exit out of your Docker prompt and restart Docker, the database will be empty again which is something to keep in mind if you plan to host your database inside a Docker container.
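A side note on the shell redirection used to quiet redis-server: in POSIX shells, redirection order matters, and writing 2>&1 before > /dev/null still lets stderr through. Here's a minimal sketch of the difference (plain shell, no Redis or Docker required; noisy is just a stand-in for a chatty server):

```shell
# Stand-in for a chatty server: one line to stdout, one to stderr
noisy() { echo out; echo err >&2; }

# stderr-first: 2>&1 points stderr at the CURRENT stdout before stdout
# is redirected, so the error line still escapes (and is captured here)
a=$(noisy 2>&1 > /dev/null)

# null-first: stdout goes to /dev/null first, then stderr follows it,
# so both streams are silenced
b=$(noisy > /dev/null 2>&1)

echo "stderr-first leaked: '$a'"
echo "null-first leaked: '$b'"
```

So if you want a truly quiet background process, redirect stdout to /dev/null first and then send stderr after it.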
Saving Your Run Config
So this is all great for testing, but the goal here is to be able to deploy your app. You don't want to have to type in the commands to start your app manually each time.
Docker again comes to the rescue. When you commit, Docker can automatically save some run information, such as which ports to map and what commands to run when the image starts. That way, all you have to do is type docker run <image_name> and Docker will take care of the rest. True containerization.
For our script, we only have two commands to run at startup:
redis-server > /dev/null 2>&1 &
python /pyredis/app.py
The easiest way to do that is to create a small launch script that runs these two commands. Let's start our tuts/pyredis image again and add a small launch script (directly copy and paste the following into the Docker prompt):
docker run -t -i tuts/pyredis /bin/bash
cat > /pyredis/launch.sh <<EOF
#!/bin/sh
redis-server > /dev/null 2>&1 &
python /pyredis/app.py
EOF
chmod +x /pyredis/launch.sh
This saves the commands we use to launch our Python server and Redis into a small bash script called launch.sh, and sets the executable bit so that it's easier to run.
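The cat > file <<EOF pattern above (a heredoc) is worth knowing on its own. Here's a minimal stand-alone sketch that writes a throwaway script to /tmp instead of the image's filesystem; the quoted 'EOF' keeps the shell from expanding anything inside the script body:

```shell
# Write a tiny launch script via a heredoc, then make it executable
cat > /tmp/demo-launch.sh <<'EOF'
#!/bin/sh
echo "services started"
EOF
chmod +x /tmp/demo-launch.sh

# Run it to confirm it works
/tmp/demo-launch.sh
```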
Now that the script is in the image, from another terminal, commit it so that it will persist (remember to do a docker ps to get your latest container's ID first):
docker commit <container_id> tuts/pyredis
Let's test this. If you exit out of your Docker prompt and run the image again with the following command, you should be able to access the pyredis web app at localhost:9000, just like last time:
docker run -t -i -p 9000:8080 tuts/pyredis /bin/bash /pyredis/launch.sh
Ok, so now we can run our little app with a single command. But there's more! Docker lets you save some default config information with your commits. That way, we don't have to remember to type our port mapping and launch command information each time, and you can just give a Docker image to somebody else. They can then run it with a simple docker run <image_name> and Docker takes care of the rest.
To configure this, you need to pass in some JSON information to the commit command. There are a lot of parameters you can use, but for now we'll just concern ourselves with mapping ports and initialization scripts. Fire up your favorite text editor and paste in the following:
{
  "cmd": [
    "/bin/bash",
    "/pyredis/launch.sh"
  ],
  "PortSpecs": [
    "9000:8080"
  ]
}
This represents the information we typed with the -p option, as well as the path to the launch script. One important bit to note: in the cmd option, every place where you would normally use a space is actually a separate element of the array.
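That last point deserves a quick illustration: each element of the cmd array is handed to the process as exactly one argument, with no further word-splitting, much like an exec()-style argv. A small sketch in plain shell (count_args is just a hypothetical stand-in that reports how many arguments it received):

```shell
# Report how many separate arguments were received
count_args() { echo $#; }

# Two elements, like ["/bin/bash", "/pyredis/launch.sh"]: two arguments
count_args "/bin/bash" "/pyredis/launch.sh"   # prints 2

# One element containing a space: a single argument, NOT split further
count_args "/bin/bash /pyredis/launch.sh"     # prints 1
```

That's why the JSON above lists "/bin/bash" and "/pyredis/launch.sh" as two separate entries rather than one string with a space in it.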
Save that JSON snippet to a file called runconfig.json and let's update Docker to use it:
docker commit -run="$(cat runconfig.json)" <container_id> tuts/pyredis
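One caution about the $(cat runconfig.json) substitution: if it isn't wrapped in double quotes, the shell word-splits the JSON on whitespace before docker ever sees it. A quick sketch of the difference:

```shell
json='{ "cmd": "/bin/bash" }'

set -- $json        # unquoted: the JSON is split on whitespace
echo "unquoted: $# arguments"

set -- "$json"      # quoted: the JSON stays one intact argument
echo "quoted: $# arguments"
```

The unquoted form hands over four separate word fragments, while the quoted form passes the JSON through as a single argument, which is what the commit command expects.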
Now if you do:
docker run tuts/pyredis
You'll see bottle start up, and you can access the app via the browser.
Deploying Public Images to a Server Via the Public Docker Registry
Docker's creators have created a public registry that anyone can push and pull Docker images from. This means that deploying your new app to a remote server is as easy as pushing it to Docker's central registry and then pulling it from a server where you have Docker installed.
This is pretty straightforward, so I'll refer you to Docker's own documentation. If you'd rather deploy privately, then read on to the next sections.
Deploying Private Images to a Server (the Easy Way)
Great, so now we've got an easy to use Docker image running on your machine. The next step is to deploy it to a server!
This part is a bit complicated. Docker's distribution model is based around the idea of repositories. You can push and pull your Docker images to a Docker repository as much as you'd like and different servers can all happily pull different images. This is great, but unfortunately a bit of work is required to host your own repository. If you're hosting or creating open source software, then you can just use the public Docker repository directly to store your images. However, if you're deploying proprietary code, you probably don't want to do that. This leaves you with two choices:
- You can bypass Docker's repository features entirely and manually transfer images.
- You can create your own repository.
The first is simpler, but loses many of Docker's cooler features such as keeping the histories of your images and the ability to store the port mapping and run configuration inside the image. If these are important to you, then skip to the next section to learn how to set up your own (private) Docker repository. If you just want to be able to deploy your images to your servers, then you can use this method.
The first step is to export your container into a .tar archive, using Docker's export command. To deploy the example app we've been using in this tutorial, you would do something like this (substituting your container's ID as before):
docker export <container_id> > pyredis.tar
Docker will sit and process for some time, but afterwards you'll have a pyredis.tar file that contains the image you created. Copy pyredis.tar to your server and run the following:
cat pyredis.tar | docker import -
Docker will again sit for a while and eventually spit out the ID of the new image it has created. You can tag this with a more memorable name by doing this:
docker tag <image_id> tuts/pyredis
Our tutorial app is now deployed and you can run it with the same run command as before:
docker run -t -i -p 9000:8080 tuts/pyredis /bin/bash /pyredis/launch.sh
Deploying Private Images to a Server (the Cool Way)
The cooler way to deploy your app is to host your own Docker repository. Get Docker installed on a machine and run the following command:
docker run -p 5000:5000 samalba/docker-registry
Wait a bit for it to download the pieces and you should soon see some messages about starting unicorn and booting workers.
This means your Docker registry is up and running (inside its own Docker container), and is accessible to your local machine at port 5000. Slightly mind-bending, but awesome. Let's set our login credentials first and then push the Docker image we created earlier in the tutorial to our new registry. In a new terminal, run the following:
docker login localhost:5000
Go ahead and enter in the username, password and email you'd like to use with your Docker repository.
In order to push the tuts/pyredis app into the repo, we first have to "tag" it with the private repository's address information, like so:
docker tag tuts/pyredis localhost:5000/tuts/pyredis
This tells Docker to create a new "tag" of tuts/pyredis and associate it with the repo running at localhost:5000. You can think of this tag as the image's name in the repository. For consistency, I have kept the names the same and tagged it localhost:5000/tuts/pyredis, but this name could easily be something completely different (like localhost:5000/pyredis_prod).
If you run docker images now, you will see a new image listed with the name localhost:5000/tuts/pyredis. Docker's mechanism for specifying repositories is closely linked to its mechanism for naming (or tagging, as Docker puts it), so this is all you need. To push the image we've created into our repository, just do docker push with the full tagged image name (including the address):
docker push localhost:5000/tuts/pyredis
Docker will connect to your repository running at localhost:5000 and start to push your changes. You'll see a lot of messages about the various HTTP requests involved in the other terminal window (the one running samalba/docker-registry), and information about the upload will go by in this one. This will take a while, so you might want to grab a coffee.
One caveat: since our Docker repository is itself running inside a Docker container, we need to commit the repository's image after we finish pushing. Otherwise, since Docker file system changes are ephemeral by default, the image we pushed to the repo will disappear as soon as we shut down our local samalba/docker-registry container.
To do this, do the usual docker ps to get the ID of the running samalba/docker-registry container, then commit it to a new image. This isn't ideal: in production you'd want to configure Docker volumes or use the -v option from above to persist the repo's files directly on the server instead of inside the container, but that's outside the scope of this tutorial.
Now for the fun part: deploying our new Docker image on a new server. Since, at the time of this writing, Docker repositories don't have any security or authentication mechanisms, we'll do our work over secure SSH tunnels. From the virtual machine where you set up the tuts/pyredis tutorial, ssh into your production server and forward port 5000 like so:
ssh -R 5000:localhost:5000 -l <username> <server_address>
The -R flag tells ssh that when you connect to localhost:5000 on your production server, SSH will forward the connection back to port 5000 on your virtual machine, which is in turn getting forwarded to the samalba/docker-registry container where our repo is living.
If Docker is not installed on this server, go ahead and install it as per the installation directions. Once you have Docker running, deploying your image is as simple as:
docker pull localhost:5000/tuts/pyredis
Since we created a tunnel via SSH, Docker will think it is pulling from the remote server's localhost:5000, but this will in fact get tunneled to your local VM's localhost:5000, which in turn gets redirected to the registry container. Give it some time to download, but once it's done, you should be able to run our pyredis app in exactly the same way we ran it on the original VM, including the saved run config and ports:
docker run localhost:5000/tuts/pyredis
Wrap-Up & Next Steps
So that's the basics of Docker. With this information you can now create and manage Docker images, push and pull them to public and private repos and deploy them to separate servers.
This is an intro tutorial, so there are lots of Docker features that weren't covered here. Two of the most notable ones are Dockerfiles and volumes.
Dockerfiles are text files which Docker can run to initialize and provision Docker images. They can make the process of creating a lot of Docker images significantly faster, but since the focus here was on how to go about creating and deploying one single image, they weren't covered. If you'd like to learn more about them, check out Docker's own documentation.
The second feature is volumes. Volumes work a bit like the shared folders we covered with the -v option, in that they allow Docker containers to persist their data. Unlike folder sharing with -v, they don't share with the host system, but they can be shared between images. You can check out this tutorial for a good introduction.