Express & MariaDB With Docker Compose

Al Javier
10 min read · Dec 15, 2020

Microservice architecture is on the rise, and if you haven't had the chance to pick up the relevant skills for it as a web developer, fret not, I've got you covered with this short guide. I will assume you already know what Docker does and what a container is. If not, consider those prerequisites before approaching this guide. They're not strictly required, but they will help you understand the flow much better.

To get started, you'll of course need Docker on your operating system. Windows and Mac have Docker Desktop; if you're on Linux, go ahead and check out the handy documentation on how to install the Docker Engine and Docker Compose, which is quick and easy (alternatively, if you'd like a GUI on Linux, there's Dockstation).

Project Setup

We're first going to create a small REST API using the Express.js framework. All it will do is display some text when we send a GET request. Nothing too fancy. After that we'll move on to the Docker side of things.

Normally you'd want to split things up, with app.js bootstrapped from a separate server.js, to make writing test cases easier. But for simplicity's sake, we won't be doing that.

docker-express-api
|-- src
|   |-- app.js
|   `-- db.js
`-- package.json

Inside of app.js we’re just going to create a simple GET route that displays the name of our application.
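Here's a minimal sketch of what app.js can look like; the port number and the response text are placeholders of my own, so refer to the repository linked at the end for the exact code.

// src/app.js - a simple Express app with one GET route
import express from 'express';

const app = express();
const port = 3000; // assumed port for this sketch

// Root route that just displays the name of our application
app.get('/', (req, res) => {
  res.send('docker-express-api');
});

app.listen(port, () => {
  console.log(`docker-express-api listening on port ${port}`);
});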

But before doing anything else, fire up your terminal and run npm init to finish setting up the project. Then go right ahead and install Express and the MariaDB connector.

$ npm install express mariadb

You may have noticed that I'm using import/export statements. Rather than bloat this guide with Babel, let's create our own custom start script inside package.json to make things a little easier for ourselves.

"scripts": {
"start": "node --es-module-specifier-resolution=node src/app.js"
},

I'm using Node version 12.20.0 for this one, so you may not even need those extra flags to run your app if you're on a later version.

And then below the name key, add this:

"type": "module",

That should be about it and everything should run smoothly without additional modifications.

First Run

Go ahead and start your application and check it using cURL, Postman, or just a plain web browser. You should see the name of the app in the response.
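For example, with cURL (assuming the app listens on port 3000 as in the sketch above), you'd see something like:

$ curl http://localhost:3000
docker-express-api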

That’s not a whole lot but that’s really all we needed to see. Let’s quickly move on to setting up our database connector.

Database Connection Pool

Over at db.js we'll import the database driver and define our connection pool object. Normally you wouldn't put something sensitive like database credentials directly in your source code, but to keep things simple, just add them in and you should be done.

This won't do much for now, as the database container doesn't even exist yet. One important detail to note is that in order to connect to your MariaDB container, you must use that service's name as the hostname. That means instead of localhost, you'll use whatever name you define in docker-compose.yml later.
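Here's a sketch of what db.js can look like, assuming the service will be named mariadb and the database food, as set up later in this guide; check the repository for the exact code.

// src/db.js - connection pool for the MariaDB container
import mariadb from 'mariadb';

const pool = mariadb.createPool({
  host: 'mariadb',      // the Compose service name, not localhost
  user: 'root',
  password: 'password', // matches MYSQL_ROOT_PASSWORD on the container
  database: 'food',
  connectionLimit: 5
});

export default pool;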

Let’s go create a route to query some dessert foods:
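A sketch of that route, using the pool exported from db.js (the error handling here is my own addition):

// in src/app.js - query the desserts table and return the rows as JSON
import pool from './db'; // no .js extension needed thanks to the start script flag

app.get('/desserts', async (req, res) => {
  let conn;
  try {
    conn = await pool.getConnection();
    const rows = await conn.query('SELECT name FROM desserts');
    res.json(rows);
  } catch (err) {
    res.status(500).send(err.message);
  } finally {
    if (conn) conn.release(); // return the connection to the pool
  }
});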

Alright, with our app’s code out of the way, let’s dive straight into doing the Docker parts of this project — starting with our database.

Docker Images

Containers are created from images, and you can use existing images as the basis for your own. If you wanted, you could even start with a bare Ubuntu image and build your database container from scratch! To save time, we'll be using the official MariaDB image from Docker Hub.

$ docker pull mariadb:latest

This command pulls the latest version of the mariadb image, which we will use to create the database container.

There's a command we can use to run that container, expose its ports, and even give it a name. We'll run it right now; Docker Compose will pick it up later and attach it to our Node.js app as a dependency.

$ docker run -p 3306:3306 -d --name docker-mariadb -e MYSQL_ROOT_PASSWORD=password mariadb:latest

In this one line, we run a container and publish its internal port 3306 to the same port on the host machine. We name it "docker-mariadb" with --name. Next, we assign an environment variable, the MYSQL_ROOT_PASSWORD portion after -e, which sets the database's root password. Finally, we tell the run command which image to use, in this case mariadb:latest.

There's also the -d flag, which runs the container in detached mode. In simple terms, this means the container runs in the background, detached from your terminal, so closing that terminal won't stop it. By default, docker run starts the container in foreground mode and attaches it to your terminal.

Now that we have a new container running, let’s go confirm that it’s there using the following command:

$ docker ps

Once you see your new database container there, you did everything correctly.

Interacting With Containers

There are multiple ways to run commands directly in your containers, especially a database one. If your OS supports Docker Desktop, you can easily open a terminal for the container from there, or use docker exec from your own terminal.

On a Linux host, you can either use your host's mysql/mariadb client or use docker exec as well.

Host MySQL/MariaDB Client

You first need the IP Address of your MariaDB container. Getting that is simple, just run the command:

$ docker inspect docker-mariadb | grep 'IPAddress'
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.18.0.2",

In this case, it's 172.18.0.2. Next we'll connect using the mysql client right from our terminal.

$ mysql -u root -p -h 172.18.0.2

Once you see the monitor prompt, you've done everything correctly.

Docker Exec

This approach works on any platform and is generally the easier one.

$ docker exec -it docker-mariadb /bin/bash

And that's it, you're inside the container, ready to run some commands. All you have to do now is log in to the MariaDB monitor.

$ mariadb -u root -p

Let’s get to making that food database now.

Creating The Database

For our Express app to have something to query, we first need some data. Let's start with the database itself:

MariaDB> CREATE DATABASE food;

Then we create the desserts table:

MariaDB> CREATE TABLE food.desserts (name VARCHAR(100));

Finally, we’ll populate our new table with some desserts:

MariaDB> INSERT INTO food.desserts VALUES ('churros'), ('gelato'), ('halo-halo'), ('mochi');

You're done. Type quit to leave the MariaDB monitor, then exit to close the container's shell. Time to create our own Docker image to run.

The Dockerfile

In simple terms, the Dockerfile is what allows us to create our image. It uses a lot of the same commands you'd run in your terminal, as you'll see in a bit. Create a new file called, well, Dockerfile in your project root directory and open it up.
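Here's a sketch of what the Dockerfile can look like for this app; the npm start command and the /app directory are assumptions on my part, so compare with the repository linked at the end.

# Base image: the official Node.js image, pinned to the version used in this guide
FROM node:12.20.0

# Working directory inside the container
WORKDIR /app

# Copy package.json first so the dependency install step can be cached
COPY package.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application source
COPY . .

# Default command when the container starts
CMD ["npm", "start"]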

We have a bunch of stuff going on here, so let's break it down one piece at a time. Each of these instructions creates a layer, and they run from top to bottom.

FROM

This specifies the base image you want to build on, which saves you a lot of setup during development. In this example, we're pulling the official Node.js image at version 12.20.0. It has two basic parts: first the name of the image, then, after the colon, the tag (usually a version) of that image. If the tag is not specified, it defaults to latest. You may check out Docker Hub for more information regarding official images.

WORKDIR

The working directory is a path inside the container where the following instructions run; in this instance it sits at the root of the container's filesystem. If you used ~/app instead, it would live under the container's home directory.

COPY

COPY is fairly self-explanatory. The first parameter is the source on your host (your app's files), the second is the destination inside the container.

The reason we copy package.json first and install dependencies before copying the rest of the code is docker build's ability to cache layers it has already built successfully. This greatly speeds up rebuilding your image when you only change application code.

RUN

This allows you to run commands as you would in the terminal, which in this example is used to install dependencies with npm install.

CMD

There can only be one effective CMD instruction in a Dockerfile; it defines the default command the container runs when it starts. Put it at the end of the Dockerfile, below all the other layers. While there are exceptions, a simple app like ours will make do with this.

Building The Dockerfile

In order to have our image ready, we have to build it and thankfully that’s pretty easy.

$ docker build -t alphonso/docker-express-api .

The -t flag denotes the tag, which is essentially the name of this new image. The usual convention is to prefix it with your Docker Hub username, a nickname, or whatever namespace you and your team use. I just used my name followed by the name of the image.

You'll notice the dot at the end; that specifies the build context, the directory where Docker looks for our Dockerfile, which here is the root of the project folder.

Once the build is successful, we’ll spin up a database and our application together using docker-compose to finish our stack. But before that, don’t forget to check out your new image using the following command:

$ docker image ls

You should see a list of image names, including yours.

Docker Compose

In this section we’ll combine both our app’s image and the docker-mariadb container into a single application stack. We’ll also define their ports, networks, and mounted volumes to preserve data.

Create a new file called docker-compose.yml at your project’s root.
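Here's a sketch of the compose file this guide walks through. The app's port (3000) and the volume name are assumptions of mine; adjust them, the image tag, and the credentials to match your own setup.

version: "3"

services:
  node:
    image: alphonso/docker-express-api
    container_name: docker-express-api
    ports:
      - "3000:3000"
    depends_on:
      - mariadb
    networks:
      - docker-service

  mariadb:
    image: mariadb:latest
    container_name: docker-mariadb
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - mariadb-data:/var/lib/mysql
    networks:
      - docker-service

networks:
  docker-service:
    driver: bridge

volumes:
  mariadb-data:
    driver: local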

Once you have this in place, let’s go ahead and discuss the important bits of this file.

Version refers to the Compose file format version; the documentation has a full list of which features each version supports. We're using version 3 in this one.

Services are the complete list of applications that you will run with Docker Compose. Their names are entirely up to you; pick whatever makes the most sense to you and your team. I called them node and mariadb to keep things clear and easy to understand.

Each service needs an image: either the one we built from the Dockerfile or one pulled from Docker Hub. The container_name is what the service will be called as a running container, as seen with docker ps.

Ports map your application's ports to the host: you first define the host port, then the container's port after the colon (HOST:CONTAINER). Without this mapping, you won't be able to reach your application from outside the container.

The depends_on key defines service dependencies across your multi-container application. It makes docker-compose up start your services in a specific order; in this case the mariadb service starts before node.

Then you have environment, which lists the environment variables to set for that service. You may also load them from a file via the env_file key.

Networks

We're going to look at this one in a bit more depth. The other keys are largely self-explanatory, but both networks and volumes need a little more detail to grasp the basics.

You first define a network name; in this case it's docker-service. Next you have a driver; we use the default one, bridge, which is what you'd commonly use on a single host.

Then, on each service, you declare which network to use with the networks key, followed by the network's name.

Volumes

Much like networks, you define a name for your volume and then specify a driver to use; local is the default. Why does our app need a volume? To preserve the data in your database container across restarts. Volumes can either be managed by Compose or marked as external; in this example Compose manages it, which is the default.

To use the volume you created, add a volumes key under the service that needs it, the database in our case, with the volume's name followed by the directory inside the container it should mount to (after the colon).

Running The Containers

That was a lot, but we’re not done yet. We’ll have to run our containers now using the command:

$ docker compose up

If everything goes well, you should end up with both services outputting logs in your terminal.

Testing The Desserts Route

Alright, we should be about done at this point. All you have to do now is perform a GET request at the /desserts route.
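With cURL, that looks something like this; the port and the exact JSON shape depend on your route, but with the sketches above you'd get back the rows from the desserts table:

$ curl http://localhost:3000/desserts
[{"name":"churros"},{"name":"gelato"},{"name":"halo-halo"},{"name":"mochi"}]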

And there you have it! A fully functioning multi-container application with Docker Compose.

Conclusion

Okay, that was a lot to take in, but hopefully you learned something new. If you made any mistakes during the Docker Compose sections, or one of your containers crashed, it's okay to delete that container and re-run docker compose. No need to delete both of them; Docker will automatically attach the existing container to your service.

As always, if you can suggest any improvements, please do so in the comments section. I'd appreciate any feedback.

Also if you’re feeling particularly generous today, why not grab me a cup of coffee?

This would help me a lot and would allow me to keep making more easy-to-follow guides and tutorials for everyone! 😄️

The GitHub repository for this project is right here.

Anyway, thanks for reading, see you on the next one.

Al Javier

Alphonso ‘Al’ Javier is a web developer and rookie entrepreneur that enjoys food, travel, and a good dose of gaming.