In this post I’m going to write down some useful Docker snippets I frequently find myself copy/pasting. There won’t be too much substance to this post; it’s mainly a rather self-serving record of my own use cases that I’ll try to update from time to time :)

Run local code without installing anything

In Dockerfile:

FROM golang:1.15-alpine as dev
WORKDIR /scratch

Then, at the command line:

docker build --target dev . -t go
docker run -it -p 80:80 -v ${PWD}:/scratch go sh

Build and run application

First, compile a static binary (e.g., in a build.sh script):

#! /usr/bin/env sh
set -e

CGO_ENABLED=0 GOOS=linux go build -o server .

Then, in Dockerfile:

FROM scratch
COPY server .
CMD ["./server"]

To inject a file containing your env variables, just use --env-file. There’s some nuance here when it comes to docker-compose and passing envs into your compose file; you should read the docs for those details. For plain docker run, --env-file does what you’d expect. Putting it together at the command line: docker run --rm --env-file .env -p 80:80 $(docker build -q .)
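For reference, the .env file is just KEY=value pairs, one per line. A hypothetical example (these names and values are made up for illustration):

```
# .env (hypothetical values)
PORT=80
DB_HOST=mydb
DB_PASSWORD=changeme
```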

Inline Evaluation and Process Substitution

A couple of neat tricks that are often useful when working with the docker CLI are inline evaluation of shell commands and process substitution. For instance, suppose you’re using a command that accepts a docker-compose.yml; you can do something like docker whatever -c <(docker compose config), which effectively pipes the docker compose config into your -c flag. This particular example is handy when you need envs parsed into your compose input for some reason (e.g., if the whatever command doesn’t do it for you). When you do <(cmd), the shell makes the output of cmd readable from a file (typically a /dev/fd entry or a named pipe), and the <(...) expression expands to that file’s path. Similarly, remember that with bash you can do inline evaluation like echo "Today is $(date)".
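A quick non-docker sketch of both tricks (process substitution is a bash/zsh feature, so this won’t work in plain POSIX sh):

```shell
# Process substitution: <(cmd) expands to a file path whose contents are
# cmd's stdout (typically a /dev/fd entry or a named pipe).
echo <(true)                                # prints a path like /dev/fd/63

# Handy for comparing two command outputs without temp files:
diff <(printf 'a\nb\n') <(printf 'a\nb\n')  # no output: identical

# Inline evaluation: $(cmd) substitutes cmd's stdout into the command line.
echo "Today is $(date)"
```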

You can do some cool stuff with these operators and the --format flag (available on a number of docker subcommands) to chain operations together. For example, remove all stopped containers: docker rm $(docker ps -a --format '{{ .ID }}') (add -f to docker rm if you also want to force-remove running ones).

Communicate between containers

This one is pretty trivially solved when you’re using docker-compose, but with just the docker CLI it’s somewhat unintuitive IMO. Suppose you have two containers: one container running your application, and a second running your state (e.g., MySQL, Redis, whatever). You want to communicate between these (namely, connect to your DB over the network). Docker has some default networks: bridge, host, none. By default, containers are run on the bridge network. If you’re familiar with docker-compose, you’ll know that the service name gets its own DNS entry, so you can pretty easily access your state container by name (e.g., you could inject the connection string as an env; it might be something like mydb:5432).

The problem with running standalone containers is that you don’t get this nice name resolution (a.k.a. service discovery). Your containers should still be accessible via their IP address since they’re all on the bridge network by default, but this is quite brittle/error prone since the IPs are likely to change (and it’s just plain annoying to have to look them up with docker network inspect). The trick is to create a network and run your containers on that new network; then you’ll get the name resolution/service discovery.

Run your stateful container:

docker network create foo
docker run --rm -p 6379:6379 --network foo --name redis redis:7.0-alpine

Then also run your application like above on the same network: docker run --rm -p 80:80 --env-file .env --network foo $(docker build -q .)
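To sanity check the service discovery, you can spin up a throwaway container on the same network and reach the redis container by its --name (this assumes the redis container above is still running):

```shell
# `redis` resolves via the foo network's built-in DNS
docker run --rm --network foo redis:7.0-alpine redis-cli -h redis ping
# should reply PONG
```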

When it comes to networking in Docker Swarm mode, there are some subtle differences because you’re communicating between containers across hosts. Docker automatically handles most of the networking to facilitate this, aside from some firewall rules you may have to adjust. Here’s some relevant text from the Docker docs about Swarm mode networking:

  • Overlay networks manage communications among the Docker daemons participating in the swarm. You can create overlay networks, in the same way as user-defined networks for standalone containers. You can attach a service to one or more existing overlay networks as well, to enable service-to-service communication. Overlay networks are Docker networks that use the overlay network driver.

  • The ingress network is a special overlay network that facilitates load balancing among a service’s nodes. When any swarm node receives a request on a published port, it hands that request off to a module called IPVS. IPVS keeps track of all the IP addresses participating in that service, selects one of them, and routes the request to it, over the ingress network.

  • The ingress network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.

  • The docker_gwbridge is a bridge network that connects the overlay networks (including the ingress network) to an individual Docker daemon’s physical network. By default, each container a service is running is connected to its local Docker daemon host’s docker_gwbridge network.

  • The docker_gwbridge network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.

  • You need the following ports open to traffic to and from each Docker host participating on an overlay network:

    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • UDP port 4789 for overlay network traffic

For simple projects, the default networks should be fine. You may need to set some firewall rules; this is easy with ufw (useful DigitalOcean blog post here):

sudo apt install ufw
# vim /etc/default/ufw and set `IPV6=yes` if desired
# reset firewall to default
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp
sudo ufw enable

One cool thing to note about the ingress network is that you can connect to a published port (say, 8000) on any node in the swarm and you’ll get routed to an appropriate task listening on that port.

No certificates in scratch image

This snippet is partly targeted at showing how to do multi-stage builds. If you’re using the scratch Docker image (e.g., if you’re just running some pre-built static binary), then you might run into issues doing SSL verification because the image doesn’t have SSL certs; you’ll probably see an error like x509: certificate signed by unknown authority. To fix this, you can just copy the certs from an image that does have them:

FROM golang:alpine as builder
RUN apk update && apk upgrade && apk add --no-cache ca-certificates
RUN update-ca-certificates

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY server .
CMD ["./server"]

Removing old containers

The old way used to be:

docker rm $(docker ps -q -f status=exited)

But now you can probably just do docker container prune and/or docker system prune. The useful bit from above is that -q returns just the container IDs, and -f lets you filter.
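Both prune commands also accept --filter if you want to be more targeted (the until window here is just an example):

```shell
# remove stopped containers that have been stopped for more than 24h
docker container prune --filter "until=24h"

# remove stopped containers, dangling images, unused networks, and build cache
docker system prune
```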

Docker Swarm mode

Docker Swarm mode is a cluster management system for running small to medium sized projects. It’s a great middle ground for running applications on a cluster without the overhead of K8s; you can develop locally and deploy to your cluster using a docker-compose.yml file. You define services that you can scale to some arbitrary number of tasks, which run as containers on nodes in the swarm. Here are some of the snippets I found useful when working with Docker Swarm mode:

  • This is a great starter: DockerSwarmRocks
  • You may need to host a registry (or use DockerHub). If you want to host your own, you can just do:
    • docker service create --name registry --publish published=5000,target=5000 registry:2
    • docker service rm registry
  • Check its status:
    • docker service ls
  • You can test the app (e.g., you’re developing locally) with compose:
    • docker compose up -d
    • docker compose ps
    • curl your endpoints, etc.
    • docker compose down --volumes
  • Distribute the image to the swarm via the registry:
    • docker compose build
    • docker compose push
  • Deploy the stack to the swarm:
    • docker stack deploy --compose-file docker-compose.yml mystackname
    • docker stack services mystackname
    • curl your endpoints, etc. Note that you can curl localhost or any other node in the swarm on port 8000 and you’ll get routed to your app (assuming you have a service listening on port 8000).
  • Bring things down:
    • docker stack rm mystackname
    • docker service rm registry
    • docker swarm leave
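For reference, a minimal hypothetical docker-compose.yml for the flow above (the image name and ports are assumptions; the registry address matches the local registry service from earlier):

```
version: "3.8"
services:
  app:
    image: 127.0.0.1:5000/myapp   # pushed to the registry so all nodes can pull it
    build: .
    ports:
      - "8000:8000"
    deploy:
      replicas: 2
```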

docker service create is a very rich command you can use to deploy replicated/horizontally scalable services onto your swarm cluster. Here are some related snippets/notes:

  • Example: docker service create --name redis redis:3.0.6
  • If your image is available on a private registry which requires login, then:
    • docker login
    • docker service create --with-registry-auth --name my_service <image>
  • Use the --replicas flag to set the number of replica tasks for a replicated service
    • docker service create --name redis --replicas=5 redis:3.0.6
    • Actual scaling of the service may take some time.
  • For anything more in depth than this, just read the docs; they’re excellent. You can create services with:
    • Secrets
    • Configs
    • Update policy
    • Envs
    • Hostname
    • Metadata
    • Bind mounts, volumes, memory
    • Mode replicated (N services), or mode global (1 service per node)
    • Constraints (to guarantee node properties) and placement preferences to spread over node groups (defined by labels)
    • Specific memory requirements and constraints
    • Maximum replicas per node
    • Attach a service to an existing network
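To illustrate a few of those flags together, here’s a hypothetical service (the service name and zone label are made up; the network assumes one created as above):

```shell
docker service create \
  --name web \
  --replicas 4 \
  --replicas-max-per-node 2 \
  --constraint 'node.role==worker' \
  --placement-pref 'spread=node.labels.zone' \
  --limit-memory 512M \
  --network foo \
  nginx:alpine
```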