
Docker in Action – Fitter, Happier, More Productive

by Real Python

With Docker you can easily deploy a web application along with its dependencies, environment variables, and configuration settings - everything you need to recreate your environment quickly and efficiently.

This tutorial looks at just that.

We’ll start by creating a Docker container for running a Python Flask application. From there, we’ll look at a nice development workflow to manage the local development of an app as well as continuous integration and delivery, step by step …

I (Michael Herman) originally presented this workflow at PyTennessee on February 8th, 2015. You can view the slides here, if interested.

Updated 04/04/2019: Upgraded Docker (v18.09.2), Docker Compose (v1.23.2), Docker Machine (v0.16.1), Python (v3.7.3), and CircleCI (v2). Thanks Florian Dahlitz!

Updated 02/28/2015: Added Docker Compose and upgraded Docker and boot2docker to the latest versions.

Workflow

  1. Code locally on a feature branch
  2. Open a pull request on GitHub against the master branch
  3. Run automated tests against the Docker container
  4. If the tests pass, manually merge the pull request into master
  5. Once merged, the automated tests run again
  6. If the second round of tests passes, a build is created on Docker Hub
  7. Once the build is created, it’s then automatically (err, automagically) deployed to production

Docker steps

This tutorial is meant for Mac OS X users, and we’ll be using the following tools and technologies: Python v3.7.3, Flask v1.0.2, Docker v18.09.2, Docker Compose v1.23.2, Docker Machine v0.16.1, and Redis v5.0.4.

Let’s get to it…

First, some Docker-specific terms:

  • A Dockerfile is a file that contains a set of instructions used to create an image.
  • An image is a saved snapshot (the state) of an environment, built from a Dockerfile.
  • A container is an instantiated, live image that runs a collection of processes.

Be sure to check out the Docker documentation for more info on Dockerfiles, images, and containers.
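
These terms map directly to the Docker CLI: docker build turns a Dockerfile into an image, and docker run starts a container from that image. A quick illustration (my-image is just an arbitrary tag, not something from this tutorial):

$ docker build -t my-image .    # Dockerfile in the current directory -> image
$ docker run my-image           # image -> running container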

Why Docker?

You can truly mimic your production environment on your local machine. No more having to debug environment-specific bugs or worrying that your app will perform differently in production.

  1. Version control for infrastructure
  2. Easily distribute/recreate your entire development environment
  3. Build once, run anywhere - aka The Holy Grail!

Docker Setup

To be able to run Docker containers on our Mac OS X system, we need to install Docker Desktop for Mac. If you are using a Windows system, make sure to check out Docker Desktop for Windows. If you are using older versions of Mac OS X or Windows, you should try Docker Toolbox instead. No matter which installation you pick based on your OS, you will end up with the three main Docker tools installed: Docker (CLI), Docker Compose, and Docker Machine.

Now let’s check your Docker installation:

$ docker --version
Docker version 18.09.2, build 6247962
$ docker-compose --version
docker-compose version 1.23.2, build 1110ad01
$ docker-machine --version
docker-machine version 0.16.1, build cce350d7

Create A New Machine

Before we can start developing, we need to create a new Docker machine. Since we’ll use it for development, let’s call the new machine dev:

$ docker-machine create -d virtualbox dev
Creating CA: /Users/realpython/.docker/machine/certs/ca.pem
Creating client certificate: /Users/realpython/.docker/machine/certs/cert.pem
Running pre-create checks...
(dev) Image cache directory does not exist, creating it at /Users/realpython/.docker/machine/cache...
(dev) No default Boot2Docker ISO found locally, downloading the latest release...
(dev) Latest release for github.com/boot2docker/boot2docker is v18.09.3
(dev) Downloading /Users/realpython/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v18.09.3/boot2docker.iso...
(dev) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(dev) Copying /Users/realpython/.docker/machine/cache/boot2docker.iso to /Users/realpython/.docker/machine/machines/dev/boot2docker.iso...
(dev) Creating VirtualBox VM...
(dev) Creating SSH key...
(dev) Starting the VM...
(dev) Check network to re-create if needed...
(dev) Found a new host-only adapter: "vboxnet0"
(dev) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env dev

Point the Docker client at the new machine via:

$ eval $(docker-machine env dev)
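
If you’re curious what that eval is doing, docker-machine env dev simply prints the environment variables the Docker client needs, roughly along these lines (your IP and paths will differ):

$ docker-machine env dev
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/realpython/.docker/machine/machines/dev"
export DOCKER_MACHINE_NAME="dev"
# Run this command to configure your shell:
# eval $(docker-machine env dev)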

Run the following command to view the currently running Machines:

$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
dev    *        virtualbox   Running   tcp://192.168.99.100:2376           v18.09.3

Compose Up!

Docker Compose is an orchestration framework that handles the building and running of multiple services (via separate containers) using a single .yml file. It makes it super easy to link together services running in different containers.

Start by cloning the repository from GitHub using git:

$ git clone https://github.com/realpython/fitter-happier-docker
$ cd fitter-happier-docker
$ tree .
.
├── article.md
├── circle.yml
├── docker-compose.yml
├── presentation
│   ├── images
│   │   ├── circleci.png
│   │   ├── docker-logo.jpg
│   │   ├── fig.png
│   │   ├── figup.png
│   │   ├── heart.jpg
│   │   ├── holy-grail.jpg
│   │   ├── oh-my.jpg
│   │   ├── rp_logo_color_small.png
│   │   └── steps.jpg
│   └── presentation.md
├── readme.md
└── web
    ├── Dockerfile
    ├── app.py
    ├── requirements.txt
    └── tests.py

3 directories, 18 files

Now let’s get our Flask application up and running along with Redis.

Let’s have a look at the docker-compose.yml in the root directory:

version: '3'

services:
    web:
        build: ./web
        volumes:
            - ./web:/code
        ports:
            - "80:5000"
        links:
            - redis:redis
        command: python app.py
    redis:
        image: redis:5.0.4
        ports:
            - "6379:6379"
        volumes:
            - db-data:/data

volumes:
    db-data:

Here we add the services that make up our stack:

  1. web: First, we build the image from the “web” directory and then mount that directory to the “code” directory within the Docker container. The Flask app is run via the python app.py command. This exposes port 5000 on the container, which is forwarded to port 80 on the host environment.
  2. redis: Next, the redis service is based on the official Redis image from Docker Hub. Port 6379 is exposed and forwarded. Furthermore, we persist the data to our host system via the db-data volume.

Did you notice the Dockerfile in the “web” directory? This file is used to build our image: starting from an official Python base image, the required dependencies are installed and the app is added.
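
The Dockerfile itself isn’t reproduced in this article, but a minimal sketch of what such a file might look like follows (treat it as an assumption, not the repo’s exact contents):

FROM python:3.7.3

# install the dependencies first so this layer stays cached
# until requirements.txt changes
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt

# add the application code
COPY . .

Since docker-compose.yml mounts ./web to /code, using /code as the working directory keeps the image layout consistent with the volume mount.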

Build and Run

With one simple command, we can build the images and run the containers:

$ docker-compose up --build

This command builds an image for our Flask app, pulls the Redis image, and then starts everything up.

Grab a cup of coffee. Or two. This will take some time the first time you build the container. That said, since Docker caches each step (or layer) of the build process from the Dockerfile, rebuilding will happen much quicker because only the steps that have changed since the last build are rebuilt.

If you do change a line/step/layer in your Dockerfile, every layer from that line on will be rebuilt - so be mindful of this when you structure your Dockerfile.

Docker Compose brings the containers up in parallel. Each container has a unique name, and the log output for each process in the stack is color-coded for readability.

Ready to test?

Open your web browser and navigate to the IP address associated with the DOCKER_HOST variable: http://192.168.99.100/ in this example.
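
You can get the address via docker-machine ip; it should match the URL from the docker-machine ls output:

$ docker-machine ip dev
192.168.99.100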

You should see the text, “Hello! This page has been seen 1 times.” in your browser:

Test Flask app running on Docker

Refresh. The page counter should have incremented.
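
Curious how the counter works? app.py isn’t reproduced in this article, but a minimal sketch of such a Flask plus Redis hit counter might look like this (the repo’s actual code may differ):

from flask import Flask
from redis import Redis

app = Flask(__name__)

# "redis" resolves to the linked redis service from docker-compose.yml
redis = Redis(host="redis", port=6379)

@app.route("/")
def hello():
    count = redis.incr("hits")  # atomically increment the page counter
    return "Hello! This page has been seen {} times.".format(count)

if __name__ == "__main__":
    # bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host="0.0.0.0", port=5000, debug=True)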

Kill the processes (Ctrl+C), and then run the following command to start the containers in the background:

$ docker-compose up -d

We don’t need to pass the --build flag since the images were already built.
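
Even when detached, you can still tail the log output from the containers:

$ docker-compose logs -f

Press Ctrl+C to stop following the logs; the containers will keep running.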

Want to view the currently running processes?

$ docker-compose ps
            Name                           Command               State           Ports         
-----------------------------------------------------------------------------------------------
fitter-happier-docker_redis_1   docker-entrypoint.sh redis ...   Up      0.0.0.0:6379->6379/tcp
fitter-happier-docker_web_1     python app.py                    Up      0.0.0.0:80->5000/tcp  

Each process is running in a separate container, connected via Docker Compose!

Next Steps

Once done, bring everything down via docker-compose down. Commit your changes locally, and then push to GitHub.

So, what did we accomplish?

We set up our local environment, detailing the basic process of building an image from a Dockerfile and then creating an instance of the image called a container. We tied everything together with Docker Compose to build and connect different containers for both the Flask app and Redis process.

Now, let’s look at a nice continuous integration workflow powered by CircleCI.

Docker Hub

Thus far we’ve worked with Dockerfiles, images, and containers (abstracted by Docker Compose, of course).

Are you familiar with the Git workflow? Images are like Git repositories, while containers are similar to a cloned repository. Sticking with that metaphor, Docker Hub, which is a repository of Docker images, is akin to GitHub.

  1. Sign up here, using your GitHub credentials.
  2. Then add a new automated build. This can be done by clicking on “Create Repository”, scrolling down, and clicking on the GitHub symbol. This lets you specify an organization (your GitHub name) and the repository you want to create an automated build for. Just accept all the default options, except for the “Build Context” - change this to “/web”.

Once added, this will trigger an initial build. Make sure the build is successful.

Docker Hub for CI

Docker Hub, in itself, acts as a continuous integration server since you can configure it to create an automated build every time you push a new commit to Github. In other words, it ensures you do not cause a regression that completely breaks the build process when the code base is updated.

There are some drawbacks to this approach - namely that you cannot push (via docker push) updated images directly to Docker Hub. Docker Hub must pull in changes from your repo and create the images itself to ensure that there are no errors. Keep this in mind as you go through this workflow. The Docker documentation is not clear on this point.

Let’s test this out. Add an assert to the test suite:

self.assertNotEqual(four, 102)
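
For context, the surrounding test might look something like the sketch below; the actual tests.py in the repo may differ, so treat the names as assumptions:

import unittest


class FlaskRedisTestCase(unittest.TestCase):  # hypothetical name

    def test_math(self):
        four = 2 + 2
        self.assertEqual(four, 4)
        self.assertNotEqual(four, 102)  # the new assert


if __name__ == "__main__":
    unittest.main()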

Commit and push to GitHub to generate a new build on Docker Hub. Success?

Bottom line: It’s good to know that Docker Hub will catch a regression-causing commit, but since this is the last line of defense before deploying (to either staging or production), you ideally want to catch any breaks before generating a new build on Docker Hub. Plus, you also want to run your unit and integration tests from a true continuous integration server - which is exactly where CircleCI comes into play.

CircleCI

CircleCI is a continuous integration and delivery platform that supports testing within Docker containers. Given a Dockerfile, CircleCI builds an image, starts a new container, and then runs tests inside that container.

Remember the workflow we want? See the steps at the top of this post.

Let’s take a look at how to achieve just that…

Setup

The best place to start is the excellent Getting started with CircleCI guide…

Sign up with your GitHub account, then add the GitHub repo to create a new project. This will automatically add a webhook to the repo so that a new build is triggered anytime you push to GitHub. You should receive an email once the hook is added.

The CircleCI configuration file is located in the .circleci directory. Let’s have a look at the config.yml:

version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7.3

    working_directory: ~/repo

    steps:
      - checkout

      - setup_remote_docker:
          docker_layer_caching: true
          version: 18.06.0-ce

      - run:
          name: Install Docker client
          command: |
            set -x
            VER="17.03.0-ce"
            curl -L -o /tmp/docker-$VER.tgz https://download.docker.com/linux/static/stable/x86_64/docker-$VER.tgz
            tar -xz -C /tmp -f /tmp/docker-$VER.tgz
            sudo mv /tmp/docker/* /usr/bin

      - run:
          name: run tests
          command: |
            docker image build -t fitter-happier-docker web
            # run the tests in the foreground (no -d) so a failing test fails this step
            docker container run fitter-happier-docker python -m unittest discover web

      - store_artifacts:
          path: test-reports
          destination: test-reports

  publish-image:
    machine: true
    steps:
      - checkout

      - deploy:
          name: Publish application to Docker Hub
          command: |
            docker login -u $DOCKER_HUB_USER_ID -p $DOCKER_HUB_PWD
            docker image build -t fitter-happier-docker web
            docker tag fitter-happier-docker $DOCKER_HUB_USER_ID/fitter-happier-docker:$CIRCLE_SHA1
            docker tag fitter-happier-docker $DOCKER_HUB_USER_ID/fitter-happier-docker:latest
            docker push $DOCKER_HUB_USER_ID/fitter-happier-docker:$CIRCLE_SHA1
            docker push $DOCKER_HUB_USER_ID/fitter-happier-docker:latest

workflows:
  version: 2
  build-master:
    jobs:
      - build
      - publish-image:
          requires:
            - build
          filters:
            branches:
              only: master

Basically, we define two jobs. First, we set up our Docker environment, build the image, and run the tests. Second, we publish the image to Docker Hub.

Furthermore, we define a workflow. Why do we need one? As you can see, the build job is always executed, whereas the publish-image job only runs on master, since we don’t want to publish a new image when opening a pull request.

You can set the environment variables DOCKER_HUB_USER_ID and DOCKER_HUB_PWD in the project settings of CircleCI.

With the config.yml file created, push the changes to GitHub to trigger a new build. Remember: this will also trigger a new build on Docker Hub.

Success?

Before moving on, we need to change our workflow since we won’t be pushing directly to the master branch anymore.

Feature Branch Workflow

For those unfamiliar with the Feature Branch workflow, check out this excellent introduction.

Let’s run through a quick example…

Create the Feature Branch

$ git checkout -b circle-test master
Switched to a new branch 'circle-test'

Update the App

Add a new assert in tests.py:

self.assertNotEqual(four, 60)

Issue a Pull Request

$ git add web/tests.py
$ git commit -m "circle-test"
$ git push origin circle-test

Even before you create the actual pull request, CircleCI starts building. Go ahead and create the pull request; then, once the tests pass on CircleCI, press the Merge button. Once merged, the build is triggered on Docker Hub.

Conclusion

So, we went over a nice development workflow that included setting up a local environment coupled with continuous integration via CircleCI (steps 1 through 6):

  1. Code locally on a feature branch
  2. Open a pull request on GitHub against the master branch
  3. Run automated tests against the Docker container
  4. If the tests pass, manually merge the pull request into master
  5. Once merged, the automated tests run again
  6. If the second round of tests passes, a build is created on Docker Hub
  7. Once the build is created, it’s then automatically (err, automagically) deployed to production

What about the final piece - delivering this app to the production environment (step 7)? You can actually follow another one of my Docker blog posts to extend this workflow to include delivery.

Comment below if you have questions. Grab the final code here. Cheers!


If you have a workflow of your own, please let us know. I am currently experimenting with Salt as well as Tutum to better handle orchestration and delivery on Digital Ocean and Linode.
