Docker II: Many Containers!
Last class we covered how to set up a Docker container: building an image using a Dockerfile and making an app out of it. This time we're going to be looking at how to set up relationships between containers. We'll cover two common ways that containers interact - via the network, and via common files on the host system. Finally, we'll cover how to set up multiple containers as a coherent application using docker-compose.
As last time, you can use our web-105 repository to set up the basic apps you'll need for this class.
Docker Volumes
Docker volumes are a way for containers to share data by creating a shared storage area that is accessible to all containers that are attached to the volume. Using Docker volumes we can make sure that we don't lose data between container restarts, which is important for nearly all web applications - but especially anything with a database involved. They can also be used to share data between containers that are running simultaneously, such as in the case where we want to have local file storage (e.g. a simple file server).
To create a Docker volume, use the `docker volume create` command, followed by a name for the volume. For example, to create a volume named `myvolume`, you would run:

```shell
docker volume create myvolume
```
To attach a volume to a container, use the `-v` flag when starting the container, followed by the name of the volume and the path inside the container where the volume should be mounted. For example, to attach the `myvolume` volume to a container and mount it at `/data`, you would run:

```shell
docker run -v myvolume:/data myimage
```
To use a volume from multiple containers, simply attach the same volume to each container using the `-v` flag. For example, to attach the `myvolume` volume to two containers, you would run:

```shell
docker run -v myvolume:/data myimage1
docker run -v myvolume:/data myimage2
```
To remove a Docker volume, use the `docker volume rm` command, followed by the name of the volume. For example, to remove the `myvolume` volume, you would run:

```shell
docker volume rm myvolume
```
Docker Networks
Docker networks are a way for containers to communicate with each other by creating a virtual network that all attached containers can use to send and receive data. Networks can be used to isolate containers from each other, or to create a shared network that allows containers to communicate with each other.
To create a Docker network, use the `docker network create` command, followed by a name for the network. For example, to create a network named `mynetwork`, you would run:

```shell
docker network create mynetwork
```
To attach a container to a network, use the `--network` flag when starting the container, followed by the name of the network. For example, to attach a container to the `mynetwork` network, you would run:

```shell
docker run --network mynetwork myimage
```
To use a network from multiple containers, simply attach each container to the same network using the `--network` flag. For example, to attach two containers to the `mynetwork` network, you would run:

```shell
docker run --network mynetwork myimage1
docker run --network mynetwork myimage2
```
To remove a Docker network, use the `docker network rm` command, followed by the name of the network. For example, to remove the `mynetwork` network, you would run:

```shell
docker network rm mynetwork
```
Using Multiple Apps
Next, let's use the git repository to set up the reader-writer application with Docker containers that interact via a shared volume and network. We'll be using three Docker containers: one for the writer, one for the reader, and one for the frontend.
What This Demo Does
There are three separate services in this demo. They're all Flask services, and they all run in separate containers, but each has a separate role. In our next class on microservices architecture and deployment approaches, we'll explain why you might want to set up an app this way despite it being more complicated than building a single service.
The frontend service is available to the user on port 5000 of the host machine. It provides a simple interface for the user to make requests with. If you go to `localhost:5000` (i.e. send a `GET` request to the `/` route) you'll see a simple submission form. If you send a `POST` request to the `/submit` route, it'll be forwarded to the writer service. This is similar to a real frontend or to a public-facing API.
The writer service is not available to the user, but it is available to the frontend service via a shared Docker network. When it receives a `POST` request to its `/` route, it writes the request body out to a log file in the `/data/` folder on the shared volume. This is similar to a logging service in a real application.
Finally, the reader service is also not available to the user, but like the writer service it is available on the Docker network. It reads the log file in the `/data/` folder on the volume and prints out the last line that it finds every few seconds. Every few reads, it also writes a message with `{"source": "reader", "text": "checkup"}` as content to the file. This is just to check that the connection between the reader and the writer is intact.
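The file side of this protocol is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the idea, assuming the log holds one JSON object per line; the function names and format are my assumptions, not the actual demo code:

```python
import json

def append_entry(path, source, text):
    """Append one JSON-encoded entry per line (how the writer might record requests)."""
    with open(path, "a") as f:
        f.write(json.dumps({"source": source, "text": text}) + "\n")

def last_line(path):
    """Return the last non-empty line of the log, or None if the file is missing or empty."""
    try:
        with open(path) as f:
            lines = [ln.strip() for ln in f if ln.strip()]
    except FileNotFoundError:
        return None
    return lines[-1] if lines else None

# The reader would call last_line("/data/log.txt") every few seconds, and every
# few reads use append_entry to add a {"source": "reader", "text": "checkup"} line.
```

Because both containers mount the same volume at `/data`, they see the same file, which is exactly what the shared-volume setup below provides.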
Now let's go ahead and set up our demo.
Build the Docker images
First, let's build the Docker images for the reader, writer, and frontend. Enter each of the `reader`, `writer`, and `frontend` subfolders in turn, and run the matching command:

```shell
docker build . -t reader
docker build . -t writer
docker build . -t frontend
```
We'll use a shared volume to allow both the reader and writer containers to access a common data file. Let's create the shared volume using the following command:

```shell
docker volume create demo
```
We'll also create a shared network to allow the containers to communicate with each other. Let's create the shared network using the following command:

```shell
docker network create reader-writer
```
Run the writer container
Now, let's run the writer container.

```shell
docker run -dit -v demo:/data -e DATA_PATH="data/log.txt" --network=reader-writer writer
```
This command creates a new Docker container using the `writer` image. We're using the `-v` flag to mount the `demo` volume at the `/data` directory in the container, and setting the `DATA_PATH` environment variable to `data/log.txt`, which specifies the path to the file that the writer container will write to. Finally, we're using the `--network` flag to attach the container to the `reader-writer` network. Note that we don't need a `-p` flag here: the writer only has to be reachable by other containers on the network, not by the host.
Finally, find the local endpoint (IP address) of the writer by running `docker network inspect reader-writer`.
Run the reader container
Now, let's run the reader container. We'll pass in the IP address of the writer container so that the reader container can communicate with it. Run the following command:

```shell
docker run -it -v demo:/data -e WRITER_ENDPOINT="http://<IP-ADDRESS>:5000" --network=reader-writer reader
```
This command creates a new Docker container using the `reader` image. We're using the `-v` flag to mount the `demo` volume at the `/data` directory in the container. We're also setting the `WRITER_ENDPOINT` environment variable to the IP address and port of the writer container. Finally, we're using the `--network` flag to attach the container to the `reader-writer` network.
Run the frontend container
Finally, let's run the frontend container. We'll again pass in the IP address of the writer container so that the frontend can communicate with it. Run the following command:

```shell
docker run -it -p 5000:5000 -v demo:/data -e WRITER_ENDPOINT="http://<IP-ADDRESS>:5000" --network=reader-writer frontend
```

This has almost the same setup as the reader, except that we also publish port 5000 with `-p 5000:5000` so that the frontend is reachable from the host machine.
Summary
Here's a summary of the steps:

- Create the shared volume: `docker volume create demo`
- Create the shared network: `docker network create reader-writer`
- Build the Docker images for the reader, writer, and frontend: `docker build . -t reader`, `docker build . -t writer`, `docker build . -t frontend`
- Run the writer container: `docker run -dit -v demo:/data -e DATA_PATH="data/log.txt" --network=reader-writer writer`
- Find the local endpoint of the writer by running `docker network inspect reader-writer`
- Run the reader container: `docker run -it -v demo:/data -e WRITER_ENDPOINT="http://<writer_ip>:5000" --network=reader-writer reader`
- Run the frontend container: `docker run -it -p 5000:5000 -v demo:/data -e WRITER_ENDPOINT="http://<writer_ip>:5000" --network=reader-writer frontend`
Docker Compose
You'll notice that the previous section took seven steps and nine separate commands, some of them fairly complex, to execute correctly. But one of our goals in using Docker was to simplify our deployment process! What if we could do all of this with a single command and a config file? Good news: this is exactly what Docker Compose lets us do.
Docker Compose is a tool that allows you to define and run multi-container Docker applications. It uses a YAML file to define the services that make up the application, and manages the containers that run those services. Using Docker Compose can make it much easier to manage complex applications, as it simplifies the process of starting, stopping, and scaling multiple containers.
To use Docker Compose, you first need to define your services in a `docker-compose.yml` file. You can find an example at the top level of the `docker-multicontainer-demo` folder.
In this file, each service is defined with a name (`demo-writer`, `demo-reader`, and `demo-frontend`), a hostname, a build location, a network to connect to, environment variables, and other configuration options.
The `services` section of the file defines the three services that make up the Docker application:

```yaml
services:
  demo-writer:
    ...
  demo-reader:
    ...
  demo-frontend:
    ...
```
Each service has a unique name (`demo-writer`, `demo-reader`, and `demo-frontend`) and is defined with the following properties:

- `hostname`: Specifies the hostname for the container.
- `build`: Specifies the location of the Dockerfile used to build the image for the container.
- `networks`: Specifies the networks that the container should be connected to.
- `volumes`: Specifies the volumes that should be mounted inside the container.
- `environment`: Specifies environment variables to be set inside the container.
- `ports`: Specifies ports to be exposed on the host system.
The `networks` section of the file specifies the networks that should be available for the containers to connect to:

```yaml
networks:
  default:
    driver: bridge
```
In this case, the `default` network is defined as a `bridge` network, which means that the containers on the network can communicate with each other using their hostnames. This avoids issues with container IPs changing every time they're run.
The `volumes` section of the file defines the Docker volume that the services will share:

```yaml
volumes:
  demo: {}
```
In this case, the `demo` volume is created, which will be mounted at `/data` inside the containers. This volume will allow the containers to share data with each other.
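Putting the pieces together, the whole file might look something like this. This is a sketch assembled from the sections above, not the repository's actual file; the exact hostnames, build paths, ports, and environment values there may differ:

```yaml
services:
  demo-writer:
    hostname: demo-writer
    build: ./writer
    networks:
      - default
    volumes:
      - demo:/data
    environment:
      DATA_PATH: "data/log.txt"
  demo-reader:
    hostname: demo-reader
    build: ./reader
    networks:
      - default
    volumes:
      - demo:/data
    environment:
      WRITER_ENDPOINT: "http://demo-writer:5000"
  demo-frontend:
    hostname: demo-frontend
    build: ./frontend
    networks:
      - default
    volumes:
      - demo:/data
    environment:
      WRITER_ENDPOINT: "http://demo-writer:5000"
    ports:
      - "5000:5000"

networks:
  default:
    driver: bridge

volumes:
  demo: {}
```

Notice how `WRITER_ENDPOINT` can use the `demo-writer` hostname instead of an IP address: the bridge network's name resolution replaces the manual `docker network inspect` step from the previous section.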
Once you have your `docker-compose.yml` file defined, you can start your application by running the `docker-compose up` command in the same directory as the file. Docker Compose will create and start all the containers defined in the file, and connect them to the specified networks.

```shell
docker-compose up
```
You can also stop and remove all the containers using the `docker-compose down` command:

```shell
docker-compose down
```
Congratulations! You've successfully set up your first multi-container app using docker-compose.
Further Reading
Assignment
Just like last time, there are several different assignments.
- Experiment with networks and volumes using your docker-compose file.
- Try setting up one of your previous apps (especially one with a frontend & backend component) and putting it up using `docker-compose`.
Next time, we're going to focus on the theory and applications of containers, including how containers can make deployments simpler to execute and when we might want to use them.