Summary
Hello to you and thank you for clicking on this back-to-school article! 😁
And yes, back to work and all those tickets you've been putting aside since the start of summer 😛
Come on, since I'm being nice, I'll write you an article that lets you stay (a little) on vacation.
When you're a dev, there's one thing you love in particular:
"Drinking coffee?"
Well, yes, that too, but something else as well: automating.
From hosting to deployment, and even within programs themselves, it would never occur to anyone to perform certain actions manually on a regular basis. There are several reasons for this:
- it saves time that can be spent on higher-value work;
- it eliminates the risk of human error in repetitive steps;
- it makes processes reproducible from one environment, and one project, to the next.
So, as you can see, there are many advantages to automation, and it makes our day-to-day work a lot easier 😀
"That's great, but how do you go about implementing it?"
Well, it all depends on your technical stack and its configurations, but here I've chosen to deal with a fairly common case, namely an application running on Docker services.
Let's say, for example, that you have a Symfony application and you want to automate the consumption of messages placed in a RabbitMQ instance.
Supervisor is a popular tool for managing background processes. It automatically restarts processes in the event of failure, which is ideal for message consumers such as Symfony Messenger.
Installing it is as simple as an apt command:
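On Debian/Ubuntu, for example:

```shell
sudo apt-get update
sudo apt-get install supervisor
```
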
Create a configuration file for the Symfony Messenger process.
Add the following configuration:
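A minimal sketch of what that file could contain, along the lines of the configuration recommended in the Symfony documentation (the project path `/var/www/app`, the `www-data` user, and the transport name `async` are assumptions to adapt to your setup):

```ini
; /etc/supervisor/conf.d/messenger-worker.conf
[program:messenger-consume]
; --time-limit makes each worker exit and restart periodically,
; which avoids memory leaks in long-running PHP processes.
command=php /var/www/app/bin/console messenger:consume async --time-limit=3600
user=www-data
numprocs=2
startsecs=0
autostart=true
autorestart=true
process_name=%(program_name)s_%(process_num)02d
```

With `autorestart=true`, Supervisor brings the consumer back up on its own whenever it exits.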
After creating the configuration file, reload Supervisor and start the process:
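Assuming the program was named `messenger-consume` as above:

```shell
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start messenger-consume:*
```
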
If your server uses systemd, you can create a systemd service to manage your command.
Create a service file:
Add the following configuration:
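A minimal unit-file sketch (again, the PHP binary path, project path, user, and transport name are assumptions):

```ini
# /etc/systemd/system/messenger-worker.service
[Unit]
Description=Symfony Messenger consumer
After=network.target

[Service]
ExecStart=/usr/bin/php /var/www/app/bin/console messenger:consume async --time-limit=3600
# Restart the consumer automatically whenever it exits.
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
```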
After creating the service file, start it and make sure it starts on system startup:
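With a unit file named `messenger-worker.service`:

```shell
sudo systemctl daemon-reload
sudo systemctl start messenger-worker
sudo systemctl enable messenger-worker
```
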
No, you're not dreaming, if you wish it's perfectly possible to use a Docker container to automate the tasks of a ... Docker container 😀
In fact, as you already know, one container can be "linked" to another, as is the case, for example, between a PHP container and a database container. So it's quite conceivable on this principle to have one container automatically act on another when it's launched.
Let's take a closer look:
Add a service to your docker-compose.yml file:
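A sketch of what that service could look like, assuming a PHP image built from your project and a `rabbitmq` service defined in the same file:

```yaml
services:
  messenger-consumer:
    build: .                  # or image: your-php-image
    restart: unless-stopped   # bring the consumer back up if it crashes
    command: php bin/console messenger:consume async --time-limit=3600
    depends_on:
      - rabbitmq
```

The `restart: unless-stopped` policy is what gives you the same "always running" behavior as Supervisor or systemd, but handled by Docker itself.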
Then start this service with:
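```shell
docker compose up -d messenger-consumer
```

(On older installations the command is `docker-compose` with a hyphen.)
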
With the idea of keeping everything under Docker, this is the solution I chose in my situation, and it works perfectly. But you can also use one of the other methods mentioned above.
"Yes, but if I like doing cronjobs, is that possible or not?"
Yes, it's possible - there's nothing to stop you - but it's not the most recommended solution:
Advantages:
- very simple to set up: one line in the crontab is enough;
- available on virtually any server, with no extra tooling to install.
Disadvantages:
- cron only launches processes on a schedule; it won't restart a consumer that crashes between two runs;
- its finest granularity is one minute, so message consumption is never truly continuous;
- runs can overlap and pile up if a job takes longer than its interval.
"Okay, I see, but your system can have errors and bugs too, can't it?"
Yes, as always in IT, there's no such thing as zero risk. So to compensate for this, we can plan to be alerted in the event of a problem, and here too there are several solutions.
You can use tools or scripts to detect errors in logs and send alerts.
You can write a bash script that looks for specific errors in consumer logs, then sends an email or notification.
Example of a script to monitor errors:
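A minimal sketch; the log path, the error patterns, and the alert address are assumptions to adapt to your environment (and it assumes a working `mail` command on the server):

```shell
#!/bin/bash
# Hypothetical paths and address -- adjust to your setup.
LOG_FILE="/var/log/messenger/consumer.log"
ALERT_EMAIL="alerts@example.com"
PATTERN="error|critical"

# If any matching line is found in the log, mail the last few occurrences.
if grep -qiE "$PATTERN" "$LOG_FILE" 2>/dev/null; then
    grep -iE "$PATTERN" "$LOG_FILE" | tail -n 20 \
        | mail -s "Messenger consumer errors detected" "$ALERT_EMAIL"
fi
```
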
You can then schedule this script to run regularly via cron:
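For instance, to run it every five minutes (the script path is hypothetical):

```cron
*/5 * * * * /usr/local/bin/check_messenger_logs.sh
```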
If you have a log centralization system (such as ELK Stack, Loggly, Splunk, or another SaaS service), you can set up alerts based on specific conditions, such as errors or interruptions in the logs.
If you're using Supervisor, you can set up e-mail alerts in the event of unexpected consumer downtime:
In the supervisor.conf configuration file for your consumer:
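Supervisor doesn't send e-mail on its own; the usual approach is the `crashmail` event listener from the superlance package (`pip install superlance`). A sketch, with a placeholder address:

```ini
[eventlistener:crashmail]
; Mail alerts@example.com whenever a supervised process exits unexpectedly.
command=crashmail -a -m alerts@example.com
events=PROCESS_STATE_EXITED
```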
If you're using Docker, you can use tools like Prometheus and Alertmanager to monitor container status and trigger alerts.
You can use monitoring services such as New Relic, Datadog, Nagios, or Zabbix to monitor the health of your application, including background processes like Symfony Messenger.
These tools can be configured to:
- check that your application and its background processes are up;
- watch resource usage (CPU, memory) and detect abnormal behavior;
- send notifications (e-mail, Slack, etc.) when a threshold is crossed or a process disappears.
If you've decided to use Cron anyway 😛 or are forced to by technical constraints, you can configure cron to send an email in the event of an error.
Add MAILTO to the top of your crontab to specify the destination email address:
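For example (the address and command are placeholders; redirecting standard output to /dev/null means only error output will trigger a mail):

```cron
MAILTO="alerts@example.com"
*/5 * * * * php /var/www/app/bin/console messenger:consume async --time-limit=240 >/dev/null
```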
With this configuration, cron e-mails any output the command produces to the specified address; if you redirect normal output to /dev/null, you'll effectively be notified only when an error occurs.
Now you know how to automate processes in a Docker container and also how to monitor them, i.e. how to keep an eye on them and be alerted in the event of a problem.
You'll gain time, performance and cleanliness in your configurations. What's more, you can duplicate these processes to your future applications, so you can have the same routine on virtually all your projects, depending on their technical stack.
Did I tell you we were going to talk about vacations? 😀
Well, on that note, I'll see you soon for another article. 🙂