Assumptions

  • An app deployed via AWS Elastic Beanstalk with an RDS database
  • Deploys are handled via AWS CodePipeline

In my case, the app is a frontend plus a backend running in a multi-container Docker Elastic Beanstalk environment.
The multi-container Docker environment lets you run several containers on the same EC2 host, which makes the most of each host's resources and should save a few bucks compared to the Single Container Docker environment.
If you have money to spare and stronger performance requirements, a Single Container Docker environment could still make sense, since it gives each container a dedicated host.
The Dockerrun.aws.json you need for a multi-container environment looks like this.

Dockerrun.aws.json

{
    "AWSEBDockerrunVersion": 2,
    "volumes": [
        {
            "name": "compulsivecoders-volume",
            "host": {
                "sourcePath": "/var/app/current/compulsivecoders-volume"
            }
        },
        {
            "name": "compulsivecoders-api-volume",
            "host": {
                "sourcePath": "/var/app/current/compulsivecoders-api-volume"
            }
        }
    ],
    "containerDefinitions": [
        {
            "essential": true,
            "image": "xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com/compulsivecoders:latest",
            "memory": 250,
            "name": "compulsivecoders",
            "mountPoints": [
                {
                    "sourceVolume": "compulsivecoders-volume",
                    "containerPath": "/usr/src/app",
                    "sourcePath": "",
                    "readOnly": true
                }
            ],
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 3000
                }
            ]
        },
        {
            "essential": true,
            "image": "xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com/compulsivecoders-api:latest",
            "memory": 250,
            "name": "compulsivecoders-api",
            "mountPoints": [
                {
                    "sourceVolume": "compulsivecoders-api-volume",
                    "containerPath": "/usr/src/app",
                    "sourcePath": "",
                    "readOnly": true
                }
            ],
            "portMappings": [
                {
                    "hostPort": 3001,
                    "containerPort": 3001
                }
            ]
        }
    ]
}

An explanation of some parts of the config file:

  • AWSEBDockerrunVersion should be set to 2 if you want to go with multi-container (for single container you should still use version 1).
  • The volumes config is similar to what you would define in a docker-compose.yml,
    just don't forget to define the mountPoints mapping to the volumes you've defined for each app in the containerDefinitions if you want to use them.
  • In a container definition, essential indicates whether the container is critical: if an essential container crashes, the whole deployment is considered unhealthy, whereas a non-essential container can crash without taking the app down.
  • image is the Docker image location. It's required, and Elastic Beanstalk can't build the Docker image for you: you have to build it somewhere else, for instance in CodePipeline, and push the images to AWS ECR.

For more detailed information, read the doc at https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun_format
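The build-and-push step mentioned above could be handled by a CodeBuild stage in the pipeline. As a rough sketch (the account id is kept as the article's placeholder, and the region and repository names are assumptions matching the Dockerrun.aws.json above), a buildspec.yml could look like this:

```yaml
# Hypothetical buildspec.yml for the CodeBuild stage of the pipeline.
# The registry host, region, and repository names must match your own setup.
version: 0.2
phases:
  pre_build:
    commands:
      # Log Docker in to ECR (requires AWS CLI v2)
      - aws ecr get-login-password --region eu-west-3 | docker login --username AWS --password-stdin xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com
  build:
    commands:
      - docker build -t compulsivecoders-api ./backend
      - docker tag compulsivecoders-api:latest xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com/compulsivecoders-api:latest
  post_build:
    commands:
      - docker push xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com/compulsivecoders-api:latest
artifacts:
  files:
    # Elastic Beanstalk deploys from this file
    - Dockerrun.aws.json
```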

Now we want to run database migrations after each deploy. To do that, you can leverage platform hooks.
You've got an extensive doc here https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platform-hooks.html.

TL;DR of the doc:

Create a folder named .ebextensions at the root of your repo and create a file named 10_run_migrations.config inside it.
In this file, put the following code.

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_run_migrations.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      api_container_id=$(docker ps | grep -e compulsivecoders-api | awk '{print $1}') && docker exec -d "$api_container_id" yarn run typeorm migration:run

This shell script runs on the EC2 host: in my case, it gets the backend container id, then executes the command yarn run typeorm migration:run inside it via docker exec.

Disclaimer: this method doesn't work with Amazon Linux 2 which is currently in beta for Elastic Beanstalk.
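If you want to sanity-check the grep/awk pipeline from the hook without an EC2 host at hand, you can feed it a simulated `docker ps` output (the container ids below are made up for the example):

```shell
# Simulated `docker ps` output: one line per running container,
# starting with the container id (ids here are hypothetical).
sample_ps='abc123def456  xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com/compulsivecoders-api:latest  compulsivecoders-api
fedcba654321  xxxxxxxxxx.dkr.ecr.eu-west-3.amazonaws.com/compulsivecoders:latest  compulsivecoders'

# Same pipeline as the hook: keep the line matching the backend
# container, then print the first column (the container id).
api_container_id=$(echo "$sample_ps" | grep -e compulsivecoders-api | awk '{print $1}')
echo "$api_container_id"
```

Note that grep matches only the line containing compulsivecoders-api, so the plain compulsivecoders frontend container is correctly skipped.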

Bonus:
I have a docker-compose.yml for local development, but my frontend and backend live in the same git repository:
a folder for the frontend and another one for the backend.
My repo looks like this:

  • frontend/
  • backend/
  • docker-compose.yml

One trick I found to keep each Dockerfile clean is to put a Dockerfile in each folder, so the frontend image doesn't have to care about backend requirements and vice versa.
Then, in your docker-compose.yml, you can tell each service which Dockerfile to use with the build keyword, like this.

version: '3.7'
services:
  frontend:
    image: my-frontend
    build: ./frontend
  backend:
    image: my-backend
    build: ./backend
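With that layout, each folder's Dockerfile only deals with its own app. As an illustration, a minimal frontend/Dockerfile under the assumption of a Node app listening on port 3000 (matching the containerPort in the Dockerrun.aws.json above) could look like:

```dockerfile
# Hypothetical frontend/Dockerfile; base image and commands
# depend on your actual frontend stack.
FROM node:12-alpine
WORKDIR /usr/src/app
# Install dependencies first so Docker can cache this layer
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Copy the rest of the frontend sources
COPY . .
EXPOSE 3000
CMD ["yarn", "start"]
```

Because the build context is ./frontend, nothing from the backend folder ever reaches this image.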