
Guide: Automate your processes in a Docker container

Josselin Dionisi · Sep 16, 2024 · Level: Intermediate

Hello to you and thank you for clicking on this back-to-school article! 😁

And yes, back to work and all those tickets you've been putting aside since the start of summer 😛

Come on, since I'm nice, I'm going to write you an article that lets you stay (a bit) on vacation.

When you're a dev, there's one thing you love in particular.

"Drinking coffee?"

Well, yes, that too, but something else as well: automating.

The benefits of automation

From hosting to deployment, and even within programs themselves, nobody would dream of performing certain actions manually on a regular basis. There are several reasons for this:

  • Reduce the risk of errors: We're human, and if we had to do everything manually, especially repetitive tasks, we'd constantly risk "breaking production".
  • Save time: As I said in the introduction to this article, automated tasks are a vacation (or almost 😛). This allows you to concentrate on more creative, value-added activities.
  • Standardize processes: automation ensures that every task is carried out consistently every time. This standardizes operations and ensures consistent end-product quality.
  • Easier scaling: Automation makes it easier to deploy large-scale applications across multiple servers or environments. This enables you to manage complex infrastructures efficiently.
  • Documentation and traceability: Automation scripts and tools can be versioned and documented, offering better traceability of changes and reliable documentation of the processes used.
  • Lower costs: In the long term, automation helps reduce operational costs, by requiring less manual intervention and reducing costly errors.

So, as you can see, there are many advantages to automation, and it makes our day-to-day work a lot easier 😀

"That's great, but how do you go about implementing it?"

Well, it all depends on your technical stack and its configurations, but here I've chosen to deal with a fairly common case, namely an application running on Docker services.

Let's say, for example, that you have a Symfony application and you want to automate the consumption of messages placed in a RabbitMQ instance.

Automation with Supervisor

Supervisor is a popular tool for managing background processes. It automatically restarts processes in the event of failure, which is ideal for message consumers such as Symfony Messenger.

Installing Supervisor (Ubuntu/Debian)

Simply install it with apt:
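```bash
sudo apt update
sudo apt install supervisor
```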

Configuring Supervisor

Create a configuration file for the Symfony Messenger process.
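For example (the file name is just a convention; any .conf file in /etc/supervisor/conf.d/ will be picked up):

```bash
sudo nano /etc/supervisor/conf.d/messenger-consume.conf
```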

Add the following configuration:
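A minimal sketch based on the options detailed just below (the program name messenger-consume and the async transport are placeholders to adapt):

```ini
[program:messenger-consume]
command=/path/to/php /path/to/your/project/bin/console messenger:consume async
autostart=true
autorestart=true
user=www-data
```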

  • command: The command you want to run. Replace /path/to/php with the path to your PHP executable and /path/to/your/project with the path to your Symfony project.
  • autostart=true: The process will start automatically when Supervisor is started.
  • autorestart=true: The process will be restarted automatically in the event of a crash.
  • user=www-data: The user under which the process will run. Replace www-data with the appropriate user.

Starting the process with Supervisor

After creating the configuration file, reload Supervisor and start the process:
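Assuming the program was named messenger-consume as in the example above:

```bash
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start messenger-consume
```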

Automation with systemd

If your server uses systemd, you can create a systemd service to manage your command.

Creating a systemd service

Create a service file:
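For example (the unit name is a placeholder):

```bash
sudo nano /etc/systemd/system/messenger-consume.service
```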

Add the following configuration:
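A minimal sketch, reusing the same placeholder paths as in the Supervisor example:

```ini
[Unit]
Description=Symfony Messenger consumer
After=network.target

[Service]
ExecStart=/path/to/php /path/to/your/project/bin/console messenger:consume async
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
```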

Starting the service with systemd

After creating the service file, start it and make sure it starts on system startup:
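Assuming the unit file was named messenger-consume.service as above:

```bash
sudo systemctl daemon-reload
sudo systemctl start messenger-consume
sudo systemctl enable messenger-consume
```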

Automation with Docker

No, you're not dreaming, if you wish it's perfectly possible to use a Docker container to automate the tasks of a ... Docker container 😀

In fact, as you already know, one container can be "linked" to another, as is the case, for example, between a PHP container and a database container. On the same principle, it's entirely possible to have one container automatically act on another when it starts.

Let's take a closer look:

Docker-compose example

Add a service to your docker-compose.yml file:
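A minimal sketch, assuming your Symfony image is built from the current directory and a rabbitmq service is declared in the same file (all names here are placeholders):

```yaml
services:
  messenger-consumer:
    build: .
    command: php bin/console messenger:consume async
    restart: unless-stopped
    depends_on:
      - rabbitmq
```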

Then start this service with:
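```bash
docker-compose up -d messenger-consumer
```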

With the idea of keeping everything under Docker, this is the solution I chose in my situation, and it works perfectly. But you can also use one of the other methods mentioned above.

Why avoid cronjobs?

"Yes, but if I like doing cronjobs, is that possible or not?"

Yes, it's possible, nothing is stopping you, but it's not the most recommended solution:

Advantages and disadvantages of using Cron

Advantages:

  • Simplicity: A cron job is easy to set up and requires no additional processes.
  • Time control: You can schedule execution at specific times.

Disadvantages:

  • No automatic restart: Unlike solutions such as Supervisor, a cron job will not automatically restart a process that stops unexpectedly.
  • Monitoring: You'll have less visibility and control over the ongoing execution of the consumer.

How can you be alerted in the event of a problem?

"Okay, I see, but your system can have errors and bugs too, can't it?"

Yes, as always in IT, there's no such thing as zero risk. So to compensate for this, we can plan to be alerted in the event of a problem, and here too there are several solutions.

Log-based alerts

You can use tools or scripts to detect errors in logs and send alerts.

Monitoring logs with a bash script

You can write a bash script that looks for specific errors in consumer logs, then sends an email or notification.

Example of a script to monitor errors:
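A minimal sketch, assuming your consumer writes to a log file and that a mail command (e.g. from mailutils) is available; all paths and addresses are placeholders:

```bash
#!/bin/bash
# Placeholder paths and address; adapt them to your setup.
LOG_FILE="/path/to/your/project/var/log/messenger.log"
ALERT_EMAIL="you@example.com"

# If an error shows up in the log, email the last few lines as context.
if grep -q "ERROR" "$LOG_FILE"; then
    tail -n 20 "$LOG_FILE" | mail -s "Messenger consumer error detected" "$ALERT_EMAIL"
fi
```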

You can then schedule this script to run regularly via cron:
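```bash
# Check the logs every 5 minutes (the script path is a placeholder)
*/5 * * * * /path/to/check_messenger_errors.sh
```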

Using a log monitoring service

If you have a log centralization system (such as ELK Stack, Loggly, Splunk, or another SaaS service), you can set up alerts based on specific conditions, such as errors or interruptions in the logs.

Process supervision with Supervisor

If you're using Supervisor, you can set up email alerts in the event of unexpected consumer downtime:

In the supervisor.conf configuration file for your consumer:
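Supervisor doesn't send emails on its own; a common approach is the crashmail event listener from the superlance package (assuming it's installed, e.g. with pip install superlance). A sketch, with a placeholder address:

```ini
[eventlistener:crashmail]
command=crashmail -a -m you@example.com
events=PROCESS_STATE_EXITED
```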

Container monitoring with Docker

If you're using Docker, you can use tools like Prometheus and Alertmanager to monitor container status and trigger alerts.

  • Prometheus collects metrics on container status (e.g. whether a container is down).
  • Alertmanager is used to send alerts by email, Slack, or other notification channels based on conditions defined in Prometheus.
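As an illustration, here's what a hypothetical alerting rule could look like, assuming container metrics are scraped via cAdvisor and the consumer container is named messenger-consumer:

```yaml
groups:
  - name: containers
    rules:
      - alert: MessengerConsumerDown
        # Fires if the container has reported no metrics for 5 minutes.
        expr: absent(container_last_seen{name="messenger-consumer"})
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "The messenger-consumer container is no longer reporting metrics."
```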

Server monitoring with external tools

You can use monitoring services such as New Relic, Datadog, Nagios, or Zabbix to monitor the health of your application, including background processes like Symfony Messenger.

These tools can be configured to:

  • Monitor the state of processes or services (make sure the consumer is running).
  • Monitor resource usage (CPU, memory, etc.).
  • Trigger alerts based on thresholds or specific events (for example, if a process stops or fails).

Setting up alerts with Cron

If you've decided to use Cron anyway 😛 or are forced to by technical constraints, you can configure cron to send an email in the event of an error.

Add MAILTO to the top of your crontab to specify the destination email address:
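For example (the address and command are placeholders; redirecting stdout means only error output will trigger a mail):

```bash
MAILTO=you@example.com
*/5 * * * * /path/to/your/command > /dev/null
```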

With this configuration, cron sends an email to the specified address whenever the command produces output; since stdout is redirected to /dev/null, that only happens when an error is written to stderr.

Conclusion

Now you know how to automate processes in a Docker container and also how to monitor them, i.e. how to keep an eye on them and be alerted in the event of a problem.

You'll save time, gain performance, and keep your configurations cleaner. What's more, you can replicate these processes in your future applications, giving you the same routine on virtually all your projects, depending on their technical stack.

Did I tell you we were going to talk about vacations? 😀

Well, on that note, I'll see you soon for another article. 🙂