
How to organize Docker on Synology

DSM7 Feb 3, 2023

When I post about how to run a particular container on Synology, I sometimes provide two ways of running it: basic and advanced. Besides these differences, I always follow the same overall setup and configuration, which I will outline here.

Docker Compose

I use docker-compose for all my container configurations. Docker Compose is a way of organizing multi-container applications in a single configuration file. Overall, it makes container management easy, instead of having to deal with all those command-line arguments. I will briefly go over the setup I’m using.

The Docker package shipped with Synology comes with the docker-compose command. In a future update, the Compose specification will be included in the docker command itself. When that update is rolled out to Synology, the docker-compose command will be replaced by docker compose, where compose is a subcommand of docker.
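
The difference is only in how the command is invoked; a quick sketch:

```sh
# Today on Synology, Compose is a standalone binary:
docker-compose up -d

# After the update, Compose becomes a subcommand of docker:
docker compose up -d
```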

You can read more about docker-compose and its specification here.

Docker Compose Configuration

For a lot of my container configurations, I make use of a .env file (pronounced: ‘dotenv’). A dotenv (.env) is a special file: when detected by docker-compose, it is loaded automatically. The only requirement for auto-loading is that the .env file is present next to the docker-compose.yml. This allows for two convenient things: we can give docker-compose itself some additional configuration, and we can provide additional or masked variables to our services.
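
As a minimal sketch of the auto-loading (the file contents are illustrative), a variable defined in .env can be substituted anywhere in the compose file:

```yaml
# ./.env — loaded automatically because it sits next to docker-compose.yml:
#   NGINX_TAG=1.25-alpine

# ./docker-compose.yml
services:
  web:
    image: nginx:${NGINX_TAG}   # substituted from .env when compose runs
```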

Frequently used configuration variables

COMPOSE_PROJECT_NAME

This variable sets the project name, which otherwise defaults to the name of the directory holding the docker-compose.yml file.

COMPOSE_FILE

This variable defines the name of the file holding the docker-compose configuration. It can also be used when the configuration is split across multiple files. It normally defaults to docker-compose.yml. Multiple files can be defined with the colon (:) as a separator.

COMPOSE_HTTP_TIMEOUT

Configures the time (in seconds) a request to the Docker daemon is allowed to hang before Compose considers it failed. It defaults to 60 seconds. When using docker on a Synology, where system resources might be limited, I frequently use this variable in my projects to extend the time allowed for docker operations.
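
Put together, a .env configuring docker-compose itself could look like this (the values are illustrative, not a recommendation):

```sh
# .env — configuration for docker-compose itself
COMPOSE_PROJECT_NAME=gitlab
COMPOSE_FILE=docker-compose.yml:docker-compose.override.yml
COMPOSE_HTTP_TIMEOUT=120
```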

Mask Sensitive Information

Sometimes we have to put sensitive information into the docker configuration, for example, passwords or other values which are considered sensitive. I make use of the .env (dotenv) file to store these values.
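
A minimal sketch of this masking (the variable name and value are made up): the secret lives only in .env, and the compose file merely references it:

```yaml
# ./.env — keep this file out of version control and shared backups:
#   DB_ROOT_PASSWORD=change-me-example

# ./docker-compose.yml
services:
  db:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}   # value comes from .env
```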

Organize Storage

I follow the same standard when deploying docker containers to my NAS: everything gets stored in the docker shared folder. Within the docker shared folder, I have a folder for each docker container. I do have a single exception: when a product consists of multiple containers, I have a group folder where everything gets stored. For example, all my GitLab containers get stored in docker/gitlab.

Storage Layout

Furthermore, I always organize the contents of these folders the same way. A folder for a container always contains two items: firstly, a docker-compose.yml file; secondly, a .env (dotenv) file. Optionally, there is a folder containing the persistent data of the container that needs to be preserved between restarts.

  • container_folder
    • data (optional folder for holding persistent data)
    • .env (file holding global configuration and sensitive values)
    • docker-compose.yml (docker-compose configuration)

Persistent Data

Always store the data of your container on your host; your host is your Synology. That may not sound very clear, so let me explain. Docker has two options for storing data: docker volumes and docker mounts (bind mounts). Volumes are managed by Docker and can be created manually beforehand; docker mounts are directories on your Synology (in a shared folder) that are mounted directly into a container, so when a container writes data, it writes that data to the mounted directory on your Synology.

Always use docker mounts!
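
A minimal sketch of a docker mount in a compose file (the image and paths are illustrative); the data folder sits next to the compose file, matching the storage layout above:

```yaml
services:
  app:
    image: nginx:stable
    volumes:
      - ./data:/usr/share/nginx/html   # docker mount: a folder on the Synology
      # - app-data:/usr/share/nginx/html   # a named volume — what I avoid
```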

Personal story data loss

You might wonder why I’m such an advocate for docker mounts. Let me share a story from a few years ago when I experienced data loss. I was running docker on my Synology and using docker volumes for a particular container. It was not vital data, but still, it was horrible.

I had also set up a scheduled task to clean the docker environment once a week. I ran this task because of containers like GitLab, whose runner jobs pull many images and leave many stopped containers behind. This job also cleaned up leftover volumes: it was configured to remove all volumes that were not attached to a container, which made sense, because it would clean up everything left over by the GitLab runner jobs.
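
For reference, such a clean-up task can be as simple as a scheduled prune; a sketch (my exact task was different, the command is illustrative):

```sh
# Weekly clean-up (illustrative). The --volumes flag also removes volumes
# not attached to any container — exactly the behaviour that caused my data loss.
docker system prune --force --volumes
```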

So here is what happened: one of my containers, which used a docker volume to store its persistent data, got an error and exited, and I only noticed it after some time. So I brought the container down to get it fixed. When you bring a container down, it’s removed from the overview, but your volume data remains. And because I used named volumes, there was no problem; I could fix the problem and get it up and running again with the previous data volume. However, while I was working on it, the clean-up task ran. It removed the volume because it was no longer attached to a container.

Always use docker mounts to store the data on the host (your Synology); don’t rely on volumes. Second, when the data is mounted from the host, it also allows for easy edits to configuration files inside the mounted data.

Organize Network

I run some of my larger dockerized applications on separate networks. Docker can define networks, and these networks can be assigned to containers. This allows containers on a specific network to be isolated from other containers, or, for example, to be connected to multiple networks if communication is required between them. Combined with that layer of isolation, it also offers a way of organizing containers. This is one of the reasons why I use docker networks.

So why do I need to create a docker network, and why are some of my guides more extensive? The answer is easy. Some guides and posts I write are intended for people who like to run their services for a long time, whether in a private or business environment. We need to ensure we have a plan in place and documentation. This way, we can enjoy our services for a long time with little maintenance. I’m a firm believer in doing things right the first time. Second, docker networks allow you to use port forwarding without a locally mapped port.
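
As a sketch (the names are illustrative), joining a container to a pre-created network looks like this; containers on the same network can reach each other by service name, without publishing ports on the host:

```yaml
services:
  app:
    image: nginx:stable
    networks:
      - backend

networks:
  backend:
    external: true   # created beforehand with: docker network create backend
```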

You can read my post on how to set up docker networks here.

Logging

Containers can run for a long time, which might cause your disk space to fill up with things like container logs. I do not use the docker UI application in Synology DSM. To avoid filling up disk space, I always use a specific set of configurations in my docker-compose that limits both the number of logfiles and the size of each logfile. Having no logging at all can make it quite difficult to find a problem; therefore, I prefer limiting logfiles over completely disabling them.
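
A sketch of such limits using Docker’s json-file log driver (the exact numbers are illustrative, not my recommendation):

```yaml
services:
  app:
    image: nginx:stable
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once a log file reaches ~10 MB
        max-file: "3"     # keep at most three log files per container
```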

Final Thoughts

I hope this post gives insight into how I organize the containers running on my Synology. Feel free to send me an email or leave a comment.
