Setting up databases and services nowadays can get complex and time-consuming: downloading tools here and there, setting environment variables, dealing with pre-installed versions, upgrading, downgrading, and configuring everything by hand, especially when working on large-scale projects with multiple services.

Docker Compose aims to solve this problem by providing a simple and efficient way to define and manage multi-container applications. In this blog post, we will explore how Docker Compose can boost productivity by simplifying environment setup and making it easier to work with databases and services.

Defining the issue

At the moment, I'm working on a NestJS application (an AI platform), and on the server I have:

  • a NestJS server
  • a Postgres Database
  • an Nginx reverse-proxy
  • a Redis caching service
  • a few Elasticsearch nodes

Installing each of these tools one by one would be a nightmare and could take hours, not counting the bugs and OS-specific customizations that need to be made, especially when I'm hopping between macOS, Windows, and Linux.

Enter Docker Compose

Docker Compose is a tool that lets you define your application's services, networks, and volumes in a single YAML file, making it easy to manage and share your application's configuration. You can spin up your entire application environment with just a few commands, eliminating the manual configuration and setup of individual components.

How does it work?

Simply put, you configure all of your containers in one file called docker-compose.yml.

The structure of the YAML file is fairly standard:

version: '3.7'

services:
  nestjs:
    container_name: nestjs
    build:
      context: .
      target: development
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - "3000:3000"
    command: npm run start:dev
    env_file:
      - .env
    depends_on:
      - postgres
      - redis
      - elasticsearch
    networks:
      - webnet
  postgres:
    container_name: postgres
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      PGDATA: /var/lib/postgresql/data
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - webnet
  redis:
    container_name: redis
    image: redis:6
    networks:
      - webnet
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./certs:/etc/ssl/certs
    depends_on:
      - nestjs
    networks:
      - webnet
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - webnet

networks:
  webnet:

volumes:
  pgdata:
  esdata:

This Docker Compose file defines the following services:

  • nestjs: The NestJS server container, built from the current directory with hot reloading enabled. It depends on the postgres, redis, and elasticsearch containers, and is exposed on port 3000.
  • postgres: The Postgres Database container, using the postgres:13 image. It is exposed on port 5432 and has a volume (pgdata) for persistent data.
  • redis: The Redis caching service container, using the redis:6 image.
  • nginx: The Nginx reverse-proxy container, using the latest Nginx image. It depends on the nestjs container and is exposed on ports 80 and 443. The local ./nginx.conf file is bind-mounted to the container's /etc/nginx/nginx.conf path and ./certs to /etc/ssl/certs.
  • elasticsearch: A single-node Elasticsearch container, using the 7.13.2 image. It is exposed on ports 9200 and 9300 and has a volume (esdata) for persistent data.

The Docker Compose file also defines a webnet network, to which all services are connected. This allows the containers to communicate with each other using their service names as hostnames.
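For example, inside the Compose network the NestJS container reaches Postgres at the hostname postgres rather than localhost. A sketch of what the .env file loaded via env_file might contain (the variable names here are illustrative assumptions, not taken from the file above):

```shell
# Hypothetical .env values — inside the Compose network, the service
# name of each container resolves to that container's IP.
DB_HOST=postgres
DB_PORT=5432
REDIS_HOST=redis
ELASTICSEARCH_NODE=http://elasticsearch:9200
```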

And voilà! This is all you need to set up an environment that runs anywhere (just make sure to choose the correct image if you are running on ARM/AArch64).
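If an image doesn't ship an ARM variant, Compose also lets you pin the architecture per service with the platform key. A minimal sketch (the value shown assumes you want the x86-64 variant run under emulation, e.g. on Apple Silicon):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    # Force the amd64 variant; Docker runs it under emulation on ARM hosts
    platform: linux/amd64
```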

The next step is to run our services. After installing Docker, simply run:

docker compose up

or

docker compose up -d

if you want to run the services detached from the terminal.

You can also add the --build flag to rebuild the images before starting the containers.
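A few other everyday commands worth keeping at hand (all standard Docker Compose CLI, shown here against the services defined above):

```shell
docker compose up -d --build   # rebuild images, then start detached
docker compose logs -f nestjs  # follow the logs of a single service
docker compose ps              # list running services and their ports
docker compose down            # stop and remove containers and networks
docker compose down -v         # ...and also remove named volumes (pgdata, esdata)
```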

Custom Launch Profiles

In addition to the standard Docker Compose functionality, you can further optimize your development workflow with service profiles. By assigning services to one or more profiles, you control which services start and under what conditions, letting you better manage your application's startup behavior in different scenarios.
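A minimal sketch of how this looks: here a hypothetical pgadmin helper service (not part of the stack above) is tagged with a debug profile, so a plain docker compose up leaves it stopped:

```yaml
services:
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - "5050:80"
    profiles:
      - debug   # only started when the "debug" profile is enabled
```

You enable it with docker compose --profile debug up, or by setting COMPOSE_PROFILES=debug in the environment. Services without a profiles key always start.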

Conclusion

Docker Compose punches well above its weight: it simplifies setting up and managing multi-container applications, and it makes working with databases, services, and environment variables easier, ultimately boosting productivity and speeding up the development process.