As our company transitions from an older Chef/Capistrano-based deployment model and delves into Docker, those of us not familiar with Docker (myself included) have had to learn a lot to keep up. Taking 20+ legacy projects, Dockerizing them all, standardizing the deployment model, and getting all of the apps to play nicely together in this new paradigm is no small undertaking. However, even if you or your company aren't quite ready to fully commit to Docker, there's no reason you can't start using Docker today for your own development work, both to make your life easier and to give you some insight into how powerful a tool Docker can be.

A Brief Intro to Docker

For anyone not overly familiar with Docker, a brief introduction is in order.

Docker is a lot like a virtual machine, but without the overhead of having to virtualize all of the basic system functionality. Containers share the host's kernel, so any Docker application you run is granted more or less direct access to the system resources of the host computer, making it significantly faster to start and allowing for much smaller image sizes, since you don't have to duplicate an entire operating system.

In practice, when you boot up something in Docker, you'll start with an image you either created yourself or downloaded off the internet. This image is basically a snapshot of what the application and surrounding system looked like at a given point in time. Docker takes this image and starts up a process that they call a container, using the image as the starting point. Once the container is running, you can treat it just like any normal server and application. You can modify the file system inside the container, access the running application, edit config files, start and stop the processes that are running... anything you'd do with a normal application. Then, at any point, you can discard the entire container and start a new container with the original image again.
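That lifecycle can be sketched with a few core Docker commands (using the public redis image as a stand-in; the container name here is arbitrary):

```shell
# Start a container from an image (Docker downloads the image if needed)
docker run -d --name scratch-redis redis:latest

# Treat it like a normal server: open a shell inside the running container
docker exec -it scratch-redis sh

# When you're done, throw the container away...
docker rm -f scratch-redis

# ...and start a fresh container from the original image at any time
docker run -d --name scratch-redis redis:latest
```

Any changes you made inside the first container are gone; the new one starts from the pristine image.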

These independent and disposable containers are at the heart of what makes Docker such a powerful tool. In production, this allows you to scale your system rapidly and reduces the burden of configuring new hosts, since the majority of your application-specific configuration will now be stored inside your container. In this manner, Docker images can conceivably be run on any system capable of running Docker, without any per-application setup involved. Even if your company's applications aren't yet running on Docker, you can still leverage these traits to make your development environment trivially easy to set up.

Using Docker Compose to Bootstrap Your Computer

Setting up your workspace for the first time can be fairly tedious, depending on the number of services your application needs to have running in order to work. A simple Rails app could easily have several such dependencies, just to respond to simple requests. Most of our applications at Avvo require things like Redis, Memcached and MySQL... and that's before we even get into anything unusual that an application might require. When you jump into working on an application that you haven't touched before, it can sometimes take the better part of a day just to get the app to boot up locally. Luckily for us, Docker can help to greatly reduce this burden, with a little bit of help from Docker Compose.

While Docker itself gives us a great starting point for building and running images, starting up and configuring containers manually can be a little tricky. Docker Compose provides an easy and much more readable way to configure and run your containers. We can set up Docker Compose for those three services listed above, by creating a docker-compose.yml file like so:

    version: '2'
    services:
      mysql:
        image: mysql:5.6
        ports:
          - "3306:3306"
        environment:
          MYSQL_ROOT_PASSWORD: supersecretpassword

      memcached:
        image: memcached:latest
        ports:
          - "11211:11211"

      redis:
        image: redis:latest
        ports:
          - "6379:6379"

Even if you're not all that familiar with Docker Compose, the above file is fairly self-explanatory.

  • Declare three services: mysql, memcached, and redis.
  • Tell Docker Compose to use the Docker images of the corresponding names for these services.
  • Publish a port for each service, so that we can reach it on the host machine from outside its container.
  • Apply some small configuration settings via environment variables, such as MYSQL_ROOT_PASSWORD.
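For comparison, the mysql service above corresponds to roughly this manual docker run invocation; one of these per service, typed by hand, is what Compose saves us from:

```shell
# Roughly equivalent to the mysql service in the Compose file:
# detached, with the same published port and root password
docker run -d \
  --name mysql \
  -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=supersecretpassword \
  mysql:5.6
```

The Compose file captures the same flags declaratively, for all three services at once.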

To start these services, you just need to give the "up" command to Docker Compose from the same directory as the docker-compose.yml file above:

   ~/workspace:> docker-compose up
   Pulling redis (redis:latest)...
   latest: Pulling from library/redis
   357ea8c3d80b: Pull complete
   7a9b1293eb21: Pull complete
   f306a5223db9: Pull complete
   18f7595fe693: Pull complete
   9e5327c259f9: Pull complete
   72669c48ab1f: Pull complete
   895c6b98a975: Pull complete
   Digest: sha256:82bb381627519709f458e1dd2d4ba36d61244368baf186615ab733f02363e211
   Status: Downloaded newer image for redis:latest
   Pulling memcached (memcached:latest)...
   latest: Pulling from library/memcached
   357ea8c3d80b: Already exists
   1ef673e51c1f: Pull complete
   5dfcd2189a7d: Pull complete
   32d0f07db7eb: Pull complete
   fced47673b60: Pull complete
   e7d3555f9ff2: Pull complete
   Digest: sha256:58f4d4aa5d9164516d8a51ba45577ba2df2a939a03e43b17cd2cb8b6d10e2e02
   Status: Downloaded newer image for memcached:latest
   Pulling mysql (mysql:5.6)...
   5.6: Pulling from library/mysql
   357ea8c3d80b: Already exists
   256a92f57ae8: Pull complete
   d5ee0325fe91: Pull complete
   a15deb03758b: Pull complete
   7b8a8ccc8d50: Pull complete
   1a40eeae36e9: Pull complete
   4a09128b6a34: Pull complete
   587b9302fad1: Pull complete
   c0c47ca2042a: Pull complete
   588a9948578d: Pull complete
   fd646c55baaa: Pull complete
   Digest: sha256:270e24abb445e1741c99251753d66e7c49a514007ec1b65b47f332055ef4a612
   Status: Downloaded newer image for mysql:5.6
   Creating redis
   Creating memcached
   Creating mysql
   Attaching to mysql, memcached, redis
   mysql        | Initializing database
   mysql        | 2016-08-30 22:51:47 0 [Note] /usr/sbin/mysqld (mysqld 5.6.32) starting as process 30 ...
   redis        | 1:C 30 Aug 22:51:48.345 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
   ...
   redis        | 1:M 30 Aug 22:51:48.346 * The server is now ready to accept connections on port 6379
   mysql        | 2016-08-30 22:51:48 30 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
   ...
   mysql        | 2016-08-30 22:51:55 1 [Note] mysqld: ready for connections.
   mysql        | Version: '5.6.32'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)

You should get output similar to that above, indicating that all three services are running. If you didn't already have the images stored in Docker locally, they should get downloaded automatically from Docker Hub. At this point, you should have fully usable services running locally, without having had to do any manual downloading, configuring, compiling, etc... Docker takes care of everything for you. You can even easily share these Docker Compose files around your company for easy bootstrapping of new workstations.
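If you happen to have the standard command-line clients installed on your host, you can confirm that each service is reachable through its published port (127.0.0.1 is used explicitly so the mysql client connects over TCP rather than a local socket):

```shell
# Redis should answer PONG
redis-cli -h 127.0.0.1 -p 6379 ping

# Memcached speaks a plain-text protocol; ask it for its stats
echo stats | nc 127.0.0.1 11211 | head -n 1

# MySQL, using the root password from the Compose file
mysql -h 127.0.0.1 -P 3306 -u root -psupersecretpassword -e "SELECT VERSION();"
```
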

Another benefit of running Dockerized services is that it eliminates a pet peeve of mine, where MySQL will unexpectedly get itself into a bad state and refuse to restart. If you're running MySQL natively, you're probably going to have to either do some surgery on the MySQL file system to remedy things, or else completely uninstall and reinstall MySQL to get it back into a working state, which can be pretty tedious and error-prone. With Docker, you simply delete the container and, the next time you boot it up, Docker creates a new container from the original image. You'll still have to set up your DB tables and data again, but that's far simpler and faster than reinstalling the native application.
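With the Compose file above, that reset amounts to a couple of commands (note that the container's data is discarded, so expect to reload your schema and data afterwards):

```shell
# Stop and throw away the broken mysql container, along with its state
docker-compose stop mysql
docker-compose rm -f mysql

# Recreate it fresh from the mysql:5.6 image
docker-compose up -d mysql
```
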

Using Docker Compose for bootstrapping can also be great in cases where you have some obscure service that's required for an application. For example, one app that we use depends on Neo4j. What does it do? I have no idea, something to do with graphs I think. And I'm pretty sure the 'j' stands for Java. But assuming I'm not touching any of the graph stuff in the code that I need to work on, it would be really nice to not have to spend hours getting this thing running locally. Docker Compose makes this a cinch, even if the application that depends on Neo4j isn't yet Dockerized:

    version: '2'
    services:
      neo4j:
        image: neo4j:3.0
        ports:
          - "7474:7474"
        environment:
          NEO4J_AUTH: none

Now a single "docker-compose up" from the same directory as the above Compose file gives us a running Neo4j instance that our app can use.
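As a quick sanity check (assuming curl is available on the host), Neo4j's HTTP interface should answer on the published port once the container finishes starting up:

```shell
# Neo4j serves a JSON discovery document at its HTTP root
curl http://localhost:7474/
```

Because the Compose file sets NEO4J_AUTH to none, no credentials are needed for this request.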

In Summary

Just about every commonly used service will already have a Docker image publicly available. Combining that with the power of Docker Compose can make complicated and tedious bootstrapping of your workstation or project a thing of the past. Even if your company hasn't fully committed to Docker, or the particular application you're working on isn't Dockerized, you can start enjoying these benefits of Docker today.