Summary
Hello 🙂,
We use the internet every day, and gigantic amounts of data transit over it. All of this is possible thanks to the network: as you read these lines, the content of this article has travelled across the network to reach you.
Sometimes, however, it's possible to abuse this network and carry out what are known as DDoS attacks (distributed denial-of-service attacks); there are, for example, public write-ups of some very large ones.
In this article, we'll try to explain as clearly as possible how this type of attack works, and what you can do to protect yourself.
First of all, there's the OSI model, which divides the network into 7 layers, but we're not going to use it for our explanations. There's a simpler model, the TCP/IP model, which groups the layers of the OSI model into just 4.
Here are the layers of this model:
- Application (HTTP, DNS...)
- Transport (TCP, UDP)
- Network / Internet (IP, ICMP)
- Network access (Ethernet, Wi-Fi...)
Keep these layers in mind, as they will come in handy later on...
I'm covering the network and transport layers together, because the basic principle of these attacks is the same.
Whether it's an attack using ICMP (network layer) or UDP (transport layer), the principle is always the same: send a huge heap of useless data to a target machine in order to saturate its bandwidth and make it unreachable for everyone else. And it's possible to have the attack come from several machines at once: that's what makes it a "distributed" attack.
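From the target's side, this kind of flood simply shows up as inbound traffic climbing toward the link's capacity. As a rough illustration, here's a minimal sketch (assuming a Linux machine, with the interface name "eth0" as a placeholder) that watches the kernel's per-interface counters:

```python
import time

def rx_bytes(interface: str) -> int:
    """Read the received-bytes counter for one interface from /proc/net/dev (Linux)."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(interface + ":"):
                # Format: "iface: rx_bytes rx_packets ..." -> first field after the colon
                return int(line.split(":")[1].split()[0])
    raise ValueError(f"interface {interface!r} not found")

IFACE = "eth0"  # assumption: adjust to your interface name
previous = rx_bytes(IFACE)
while True:
    time.sleep(1)
    current = rx_bytes(IFACE)
    # A sudden, sustained jump toward the link's capacity is the signature of a volumetric flood.
    print(f"inbound: {(current - previous) * 8 / 1_000_000:.1f} Mbit/s")
    previous = current
```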
ICMP, more commonly known as "ping", is a protocol used to test the connection between two machines: one machine sends a request and the other replies, saying "I'm here". Flood the target with requests and, since it answers every single one, its bandwidth gets saturated in both directions.
The owner of the target machine can, for example, mitigate this attack by disabling the ping response, so that at least the outgoing bandwidth isn't saturated. It doesn't solve the whole problem, but since that response is specific to ICMP, it's worth noting.
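On Linux, for instance, answering pings is controlled by a single kernel setting; here's a small sketch to inspect it (assumption: a Linux host, and root rights if you actually want to change the value):

```python
from pathlib import Path

# Kernel toggle for answering ICMP echo requests ("ping") on Linux:
# 0 = reply to pings (the default), 1 = ignore them entirely.
SETTING = Path("/proc/sys/net/ipv4/icmp_echo_ignore_all")

current = SETTING.read_text().strip()
print("icmp_echo_ignore_all =", current)

if current == "0":
    # Writing "1" requires root; equivalent to `sysctl -w net.ipv4.icmp_echo_ignore_all=1`.
    # SETTING.write_text("1")  # uncomment to actually stop replying to pings
    print("Ping replies are enabled; write '1' to this file to disable them.")
```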
UDP is particularly brutal, for several reasons:
Amplification: by design, UDP does no source verification, which means it's perfectly possible to forge a "data packet" with the target's IP address as the reply address, so the target receives data it never asked for.
Case in point:
An attacker with a lot of bandwidth uses the DNS application protocol: from one or more machines, they ask a DNS server for a very heavy piece of information, but in the request they put the target's IP address as the reply address.
Let's imagine their initial request is 10 kilobytes in size and the response is 100 kilobytes: they've just multiplied their bandwidth by 10.
Now, taking the multiplier into account, all you have to do is send millions of DNS requests to vulnerable DNS servers (which have no rate limiting), and it will be these DNS servers that will saturate the target with useless data.
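To put rough numbers on it, here's the example above as a quick back-of-the-envelope calculation (the sizes are the illustrative ones from the text, not real measurements):

```python
# Amplification: the attacker spends `request_size` of their own bandwidth,
# but the reflector (the DNS server) sends `response_size` to the target.
request_size_kb = 10    # size of the forged DNS request (illustrative)
response_size_kb = 100  # size of the DNS response sent to the target (illustrative)

amplification_factor = response_size_kb / request_size_kb
print(f"Amplification factor: x{amplification_factor:.0f}")

# With that factor, an attacker controlling 1 Gbit/s of upload can make
# the reflectors throw roughly 10 Gbit/s of unwanted traffic at the target.
attacker_bandwidth_gbps = 1
print(f"Traffic hitting the target: ~{attacker_bandwidth_gbps * amplification_factor:.0f} Gbit/s")
```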
Let's keep it simple and basic so as not to lose anyone 🙂.
Since this type of attack simply comes down to saturating the target's connection, all that's needed is for the target to have more capacity than the attacker. This is what Cloudflare does, for example: they have data centers all over the world, and their total bandwidth amounts to over 209 terabits per second.
And since their servers sit between the users and the servers actually hosting the application, the attacker would need more available bandwidth than Cloudflare can absorb.
This only applies if the IP address of the server actually hosting the application is well hidden and does not leak.
This consists of sending a large number of SYN requests to a server. A SYN is the first step of a TCP connection, so the server responds with a SYN-ACK; but since the attacker never sends the final ACK (which confirms the opening of the connection on the client side), the server is forced to keep the connection half-open, tying up resources for nothing.
Once the server's table of half-open connections is full, it can no longer accept new ones and stops working as it should.
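From the server's point of view, this shows up as a pile of connections stuck in the SYN_RECV state. Here's a minimal sketch (assuming a Linux host) that counts them by reading /proc/net/tcp:

```python
# Count half-open TCP connections (state SYN_RECV) on a Linux host.
# In /proc/net/tcp the connection state is the 4th column, in hex; SYN_RECV is 0x03.
SYN_RECV = "03"

def count_half_open(path: str = "/proc/net/tcp") -> int:
    count = 0
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) > 3 and fields[3] == SYN_RECV:
                count += 1
    return count

# A handful of these is normal; thousands that never complete is a SYN flood.
print("half-open connections:", count_half_open())
```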
In the case of Cloudflare, once again, since they put themselves between the attacker and your servers, the attacker's requests land in their data centers, and given the number of servers they have, good luck saturating everything.
And on your side, you can configure your server so that once it's saturated with "half-open" connections, it drops the oldest one to make room for a new one, so the attacker can't completely block it.
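The exact mechanism depends on the operating system; on Linux, the closest built-in protections are SYN cookies (which let the kernel answer SYNs without keeping state) and the size of the half-open queue. Here's a small sketch that simply reads those settings (assumption: a Linux host):

```python
from pathlib import Path

# Linux kernel settings related to half-open ("semi-open") TCP connections.
# tcp_syncookies:      1 = answer SYNs statelessly once the queue is full (recommended)
# tcp_max_syn_backlog: how many half-open connections are kept before that kicks in
SETTINGS = [
    "/proc/sys/net/ipv4/tcp_syncookies",
    "/proc/sys/net/ipv4/tcp_max_syn_backlog",
]

for setting in SETTINGS:
    path = Path(setting)
    print(f"{path.name} = {path.read_text().strip()}")
# To change them: `sysctl -w net.ipv4.tcp_syncookies=1` (as root).
```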
There are other solutions, which I won't go into here, as they all have their drawbacks. Even the one I've just suggested has its drawbacks, but it's the "least restrictive".
Here we're not looking to saturate bandwidth or exploit protocols; the idea is to exploit the software or its configuration to bring down a service.
As a first example, let's take PHP-FPM, the PHP process manager often used with NGINX, whose job, as you'd expect, is to execute PHP (for the uninitiated, PHP is a programming language).
In its default configuration, PHP-FPM has a limit on the number of requests it can handle at once: once all its worker processes are busy, it can no longer keep up, and the service offered by the server effectively stops responding.
As you can see, in this case the attacker just needs to send a lot of simple HTTP requests, and after a while, once the PHP-FPM limit is exceeded, the service stops.
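PHP-FPM itself is tuned in its pool configuration (the pm.max_children setting caps the number of worker processes), but the mechanism is easy to picture with a toy sketch: a fixed-size pool that stops serving anyone new once every worker is stuck on a slow request. Everything below is illustrative, not PHP-FPM code:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

MAX_WORKERS = 5  # stand-in for PHP-FPM's pm.max_children

def handle_request(i: int) -> str:
    time.sleep(5)            # a deliberately slow "PHP page"
    return f"response {i}"

pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)

# The "attack": occupy every worker with slow requests.
for i in range(MAX_WORKERS):
    pool.submit(handle_request, i)

# A legitimate request now has no free worker and just waits...
legitimate = pool.submit(handle_request, 999)
try:
    legitimate.result(timeout=1)
except FutureTimeout:
    print("legitimate request timed out: every worker is busy")
```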
In the case of Cloudflare, once again, they've got what it takes: since they sit between the attacker and your servers, when they have doubts about the legitimacy of a request they can simply filter it out by presenting a "captcha" (something that has to be solved by a human). That said, attacks targeting the application can still reach your servers (unlike the protocol attacks described above), so it's up to you to configure your WAF (Web Application Firewall) properly.
If you're not using Cloudflare, you need to pay attention to how your applications are written and how their environments are configured. For example, rather than sending every request to PHP, you can put a caching proxy in front of your back-end, such as Varnish, which simply serves the same stored response to everyone and is therefore very lightweight.
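Varnish is configured in its own language (VCL), so here's just the underlying idea sketched out: generate the expensive page once, then hand the stored copy to everyone. The render_page function and its cost are made up for the illustration:

```python
import time

cache: dict[str, str] = {}  # URL -> already-rendered response

def render_page(url: str) -> str:
    """Stand-in for the expensive back-end work (PHP, database queries...)."""
    time.sleep(0.5)
    return f"<html>content of {url}</html>"

def handle(url: str) -> str:
    # Serve the stored copy if we have one; only the very first request pays the cost.
    if url not in cache:
        cache[url] = render_page(url)
    return cache[url]

start = time.perf_counter()
handle("/home")              # first hit: slow, goes to the back-end
print(f"first request:  {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
handle("/home")              # every later hit: near-instant, served from the cache
print(f"second request: {time.perf_counter() - start:.3f}s")
```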
You can also use a rate-limiting server like NGINX, which caps the number of requests per second per IP; after all, who legitimately makes more than 30 requests per second from the same IP address?
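In NGINX that's what the limit_req_zone / limit_req directives are for; here's the same fixed-window idea sketched conceptually (the threshold and IP address are illustrative):

```python
import time
from collections import defaultdict

MAX_REQUESTS_PER_SECOND = 30  # illustrative threshold from the text

window_start = defaultdict(float)   # per-IP start of the current one-second window
request_count = defaultdict(int)    # per-IP request count inside that window

def allow_request(client_ip: str) -> bool:
    """Return True if this IP is still under the per-second limit."""
    now = time.monotonic()
    if now - window_start[client_ip] >= 1.0:
        window_start[client_ip] = now     # start a new one-second window
        request_count[client_ip] = 0
    request_count[client_ip] += 1
    return request_count[client_ip] <= MAX_REQUESTS_PER_SECOND

# Example: the 31st request within one second from the same IP gets rejected.
for i in range(35):
    if not allow_request("203.0.113.7"):
        print(f"request {i + 1} rejected (rate limit exceeded)")
```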
Finally, you can go even further by mixing approaches: for example, a user's first request is always handled by the Varnish cache, and when that page is visited, a cookie is set in the user's browser; that cookie then becomes the key to accessing the pages actually generated by the back-end. This works because classic DDoS scripts trying to bring down a service are generally incapable of handling cookies: they're just programs that fire off the same simple HTTP request over and over.
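The logic of that cookie gate fits in a few lines. This is only a toy illustration of the idea: the cookie name is made up, and a real setup would do this in the proxy (Varnish's VCL), not in a Python server:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

COOKIE_NAME = "seen_before"  # hypothetical cookie name

class CookieGate(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = self.headers.get("Cookie", "")
        if COOKIE_NAME in cookies:
            # Returning visitor with the cookie: would be forwarded to the real back-end.
            body = b"dynamic page from the back-end"
            self.send_response(200)
        else:
            # First visit (or a dumb flood script): serve the cheap cached page
            # and hand out the cookie that unlocks the back-end.
            body = b"static cached page"
            self.send_response(200)
            self.send_header("Set-Cookie", f"{COOKIE_NAME}=1")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CookieGate).serve_forever()
```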
This article is not intended to be "the reference" on the subject, but I've tried to make it as simple as possible 😁.
Have a great week and see you next time.