1. Introduction

In modern software development, we often encounter a running web application whose stack spans Vue, Java 8, Java 15, Tomcat, Nginx, PHP, MySQL, and Redis. Migrating such an application to a new server means reinstalling every piece of software and reconfiguring environment variables on the new machine, which is a painful process.

Docker solves this problem: containerization lets us package an application together with its runtime environment into an image, achieving “build once, run anywhere.” docker-compose further simplifies the deployment and management of multi-container applications.

2. Environment Installation

2.1 Installing Docker

On Linux systems, you can use the official installation script to quickly install Docker:

# Download and execute Docker installation script
curl -fsSL https://get.docker.com | sh

# Start Docker service
sudo systemctl start docker

# Enable Docker to start on boot
sudo systemctl enable docker

# Add current user to docker group (avoid needing sudo every time)
sudo usermod -aG docker $USER

# Verify installation
docker --version
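
For a fuller end-to-end check, running the hello-world image is a quick smoke test: it pulls a tiny image and prints a confirmation message. Note that the group change made by usermod only takes effect after you log out and back in (or start a new shell with newgrp docker).

# Pull and run a minimal test image; prints a confirmation message on success
docker run --rm hello-world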

2.2 Installing docker-compose

docker-compose is Docker’s official orchestration tool for defining and running multi-container applications:

# Download docker-compose (writing to /usr/local/bin requires root privileges)
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Grant execute permission
sudo chmod +x /usr/local/bin/docker-compose

# Verify installation
docker-compose --version

For Windows and macOS users, it’s recommended to install Docker Desktop, which already includes Docker and docker-compose.

3. Core Concepts

Before diving into Docker, you need to understand several core concepts.

3.1 Image

A Docker image is a special filesystem that provides not only the programs, libraries, resources, and configuration files needed for container runtime, but also some configuration parameters prepared for runtime (such as environment variables, users, etc.).

Image characteristics:

  • Images are read-only; content doesn’t change after building
  • Uses layered storage architecture for easy reuse and customization
  • Can be understood as an application’s “backup” or “snapshot”

Docker builds images on Union FS technology with a layered storage architecture: each layer is immutable once built, and any change made in a later layer happens only within that layer.
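
You can inspect this layering directly: docker history lists every layer of an image together with the instruction that created it and its size. For example, against any locally available image:

# Show the layers of an image and the instruction that produced each one
docker history nginx:latest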

3.2 Container

A container is a running instance of an image. An image is a static definition, while a container is the runtime entity of an image.

Container characteristics:

  • A container is essentially a process, but one that runs in its own isolated namespaces
  • Has its own root filesystem, network configuration, process space, etc.
  • The container storage layer’s lifecycle is the same as the container; data is lost when the container is destroyed

The relationship between images and containers is similar to classes and instances in object-oriented programming - images are classes, containers are instances.
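
The analogy shows up directly on the command line: the same image can back any number of containers, each with its own writable layer. A small illustration using the public nginx image:

# Two independent containers ("instances") created from the same image ("class")
docker run -d --name web1 nginx:latest
docker run -d --name web2 nginx:latest

# Both containers appear in the list, backed by the same image
docker ps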

3.3 Repository

A repository is a collection of related images (usually different versions of the same application), and it lives in a Docker Registry, the service that stores and distributes images. A registry can contain multiple repositories, and each repository can contain multiple tags, with each tag pointing to one image.

Naming convention:

  • Full format: <repository>:<tag>
  • Examples: ubuntu:20.04, nginx:latest
  • When tag is omitted, latest is used by default
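
In practice the tag is simply part of the image reference you pull or run:

# Pull a specific tag
docker pull ubuntu:20.04

# Omitting the tag is equivalent to pulling nginx:latest
docker pull nginx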

4. Creating Images

Docker builds images by reading a Dockerfile: a plain text file made up of instructions, where each instruction builds one layer and describes how that layer should be constructed.

4.1 Writing Configuration Files

Using an open-source project’s backend service as an example, create a Dockerfile in the project root directory:

# Specify base image
FROM tomcat:9.0.41-jdk8-openjdk

# Copy WAR package to Tomcat's webapps directory
COPY ./chat-system-server.war /usr/local/tomcat/webapps/

# Copy configuration file
COPY ./tomcat/conf/server.xml /usr/local/tomcat/conf/server.xml

# Declare runtime port
EXPOSE 8080

Instruction descriptions:

  • FROM specifies the base image
  • COPY copies files into the image
  • EXPOSE declares the service’s runtime port

4.2 Common Instructions

Instruction | Purpose | Example
ADD | Copy files into the image (also supports URLs and auto-extracts local archives) | ADD app.tar.gz /app/
RUN | Execute a shell command during the build | RUN apt-get update
CMD | Default command to run when the container starts | CMD ["java", "-jar", "app.jar"]
ENV | Set environment variables | ENV PATH=/usr/bin:$PATH
WORKDIR | Set the working directory | WORKDIR /app

Note: When a build step needs several shell commands, chain them in a single RUN instruction with && instead of writing multiple RUN instructions, each of which would create an extra image layer:

RUN apt-get update && \
    apt-get install -y gcc && \
    apt-get clean

4.3 Building Images

Open a terminal, navigate to the directory containing the Dockerfile, and execute the build command:

docker build -t chat-system-server:1.0.0 -f Dockerfile .

Parameter descriptions:

  • -t specifies the image name and tag (name:tag)
  • -f specifies the configuration file (optional, defaults to Dockerfile)
  • . represents the current directory, specifying the build context path
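
Once the build finishes, you can confirm the image exists locally and check its size:

# List local images for this repository
docker image ls chat-system-server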

5. Starting Containers

There are two ways to start a container: create a new container from an image and start it, or start a container that’s in a stopped state.

5.1 Create and Start

Use docker run image_name to create and start a container:

# Basic start
docker run chat-system-server:1.0.0

After the container starts, you’ll find that you can’t reach it on port 8080 even though the image declares that port. This is because no port mapping was set up when the container was started; add the -p parameter to the run command:

# Port mapping
docker run -p 127.0.0.1:8080:8080 chat-system-server:1.0.0

5.2 Common Parameters

Run in background:

docker run -d -p 127.0.0.1:8080:8080 chat-system-server:1.0.0

Name the container:

docker run --name local_chat_system_server -d -p 127.0.0.1:8080:8080 chat-system-server:1.0.0

Start a stopped container:

docker container start container_name

5.3 Container Management

# Stop container
docker container stop container_name

# Remove container
docker container rm container_name

# Enter container
docker exec -it container_name bash
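
Two more commands that come up constantly while debugging are listing containers (including stopped ones) and following a container’s logs:

# List all containers, including stopped ones
docker ps -a

# Follow a container's stdout/stderr
docker logs -f container_name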

5.4 Data Mounting

Data written to the container storage layer is lost when the container is removed. To persist data you need to mount storage from outside the container, and there are typically two approaches:

Data volumes:

# Create data volume
docker volume create chat-system-data

# Start container and mount data volume
docker run -d --name local_chat_system_server \
    --mount source=chat-system-data,target=/usr/local/data \
    chat-system-server:1.0.0

Directory mapping (bind mount, recommended):

docker run -d --name local_chat_system_server \
    -v /host/path:/container/path \
    chat-system-server:1.0.0

Directory mapping associates the specified host path with the target path inside the container. Operations on the local host are reflected inside the container, and vice versa.
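
A quick way to see the two-way mapping in action is to create a file on the host and look for it inside the container (using the placeholder paths from the example above):

# On the host: create a file inside the mapped directory
touch /host/path/hello.txt

# Inside the container: the file is visible at the mapped target path
docker exec local_chat_system_server ls /container/path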

6. Container Orchestration

Now let’s return to the scenario mentioned at the beginning of the article. If everything is packaged into one image, future maintenance and expansion will become a nightmare. Generally, we use Docker Compose for such scenarios.

In short, Docker Compose’s role is to combine multiple independent containers, allowing containers to easily access each other, ultimately achieving our requirements.

6.1 Writing Configuration Files

Container orchestration is achieved by writing a docker-compose.yml configuration file, which is typically created in the project’s root directory. The configuration file contains the following main parts:

  • version - Specifies the Docker Compose file format version
  • networks - Used for custom networks
  • services - Defines various services (MySQL, Redis, Nginx, etc.)

6.2 Defining Networks

When deploying services on physical machines, multiple services need to be on the same network to reach each other. The same applies in docker-compose, so we define a network first:

networks:
  app-network:
    name: app-network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.30.0/24
          gateway: 192.168.30.1

Network configuration description:

  • 192.168.30.0/24 represents IP address range from 192.168.30.1 to 192.168.30.254
  • gateway specifies the gateway address as 192.168.30.1
  • driver: bridge specifies the network connection mode as bridge
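
After Compose creates the network, you can check the subnet and see which containers are attached to it:

# List Docker networks
docker network ls

# Show the subnet, gateway, and connected containers
# (adjust the name if Compose prefixed it with the project name)
docker network inspect app-network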

6.3 Defining Services

We can define needed services under the services directive, connecting them to networks, mounting data volumes, setting timezones, defining access ports, etc. Using MySQL as an example:

services:
  mysql:
    image: mysql:5.7.42
    container_name: local_mysql
    volumes:
      - /host/mysql_data:/var/lib/mysql
      - /host/mysql_conf/my.cnf:/etc/my.cnf
    ports:
      - 3306:3306
    networks:
      app-network:
        ipv4_address: 192.168.30.11
    environment:
      - MYSQL_ROOT_PASSWORD=xxxx
      - TZ=Asia/Shanghai

Configuration description:

  • mysql - Service name
  • image - Image name
  • container_name - Container name
  • volumes - Mounted data volumes
  • ports - Port mapping
  • networks - Network the service connects to and assigned IP address
  • environment - Environment variable settings

With these few lines of configuration, we have a MySQL service that other services can access via 192.168.30.11:3306.
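
For example, a backend service on the same network could point its datasource at that address. The variable names and database name below are purely illustrative; use whatever your application actually expects:

services:
  app-backend:
    # ...
    environment:
      # Hypothetical datasource settings for illustration only
      - DB_URL=jdbc:mysql://192.168.30.11:3306/chat_system
      - DB_USER=root
      - DB_PASSWORD=xxxx

On a user-defined network, Compose also provides DNS resolution by service name, so mysql:3306 works just as well and avoids hard-coding the IP address.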

6.4 Complete Example

Here’s a complete configuration example with multiple services:

version: '3.8'

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.30.0/24
          gateway: 192.168.30.1

services:
  mysql:
    image: mysql:5.7.42
    container_name: local_mysql
    volumes:
      - ./mysql_data:/var/lib/mysql
    ports:
      - 3306:3306
    networks:
      app-network:
        ipv4_address: 192.168.30.11
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
      - TZ=Asia/Shanghai

  redis:
    image: redis:6-alpine
    container_name: local_redis
    ports:
      - 6379:6379
    networks:
      app-network:
        ipv4_address: 192.168.30.12
    environment:
      - TZ=Asia/Shanghai

  app-backend:
    image: tomcat:9.0.41-jdk8-openjdk
    container_name: chat_system_server
    ports:
      - 8080:8080
    volumes:
      - ./webapps:/usr/local/tomcat/webapps
      - ./data:/usr/local/data
    environment:
      - TZ=Asia/Shanghai
    networks:
      app-network:
        ipv4_address: 192.168.30.13
    depends_on:
      - mysql
      - redis

  nginx:
    image: nginx:1.18.0
    container_name: local_nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx_config:/etc/nginx
    environment:
      - TZ=Asia/Shanghai
    networks:
      - app-network
    depends_on:
      - app-backend

Note: The configuration above uses the depends_on directive to control startup order, so the database containers are started before the application. Keep in mind that depends_on only waits for the dependency’s container to start, not for the service inside it to be ready.
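
If you need to wait until the database actually accepts connections, one option is to combine the long form of depends_on with a healthcheck. This is a minimal sketch and requires a Compose version that implements the Compose Specification:

services:
  mysql:
    # ... existing mysql configuration ...
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  app-backend:
    # ... existing app-backend configuration ...
    depends_on:
      mysql:
        condition: service_healthy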

6.5 Starting Services

Manage the defined services from the terminal with the following commands:

# Start all services
docker-compose up

# Start in background
docker-compose up -d

# Stop services
docker-compose down
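
A few other day-to-day commands:

# Show the status of all services
docker-compose ps

# Follow logs for all services (or append a service name)
docker-compose logs -f

# Restart a single service
docker-compose restart mysql

# Rebuild images and recreate containers after changes
docker-compose up -d --build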

6.6 Environment Variable Management

In actual use, host paths for volumes are typically injected via environment variables rather than hard-coded:

# Define in .env file
MY_VOLUME_PATH=/path/to/your/volume

# Use in docker-compose.yml
volumes:
  - ${MY_VOLUME_PATH:-/default/path}/webapps:/usr/local/tomcat/webapps

Pass variables when starting:

MY_VOLUME_PATH=/path/to/your/volume docker-compose up
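
To check which values actually get substituted, docker-compose can render the final configuration without starting anything:

# Print the fully resolved configuration with variables substituted
docker-compose config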

7. Reference Resources