The accumulation of logs and temporary file junk can cause the root partition to fill up noticeably. If the storage space is exhausted, the system can no longer install updates, for example, and in some cases this is the first perceptible sign that something has gotten out of hand.
A completely full root partition mainly occurs on systems where system and user data live on separate partitions. If everything is instead stored on one and the same partition, it takes longer for the available space to be exhausted. The underlying cause is usually processes that create more and more log files over time and never remove them again after a reasonable period.
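Whether the root partition is actually running full can be verified at any time with df; a minimal sketch (the 90% warning threshold is an arbitrary example of our own):

```shell
# Show how full the root filesystem is (human-readable sizes)
df -h /

# In scripts, the usage percentage can be extracted from the POSIX
# output format, for example to warn once a threshold is exceeded:
usage=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$usage" -ge 90 ]; then
    echo "root partition is ${usage}% full"
fi
```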
A common culprit is the system journal that systemd maintains.
If the root partition is running full, it is worth checking the journal's disk usage first.
journalctl --disk-usage
The journal's disk usage can easily reach several dozen GB after some time of use. If we have reserved the recommended 30-50 GB for the system partition, it quickly becomes clear why this turns into a problem. We therefore instruct systemd to keep only the most recent logs up to a maximum total size.
sudo journalctl --vacuum-size=500M
In this case, only the most recent 500 MB of logs are kept and the rest is released immediately. Next, we want to automate this and permanently limit the storage space available to the journal.
sudo nano /etc/systemd/journald.conf
In the configuration file, uncomment the following line (i.e. remove the leading '#') and set it so that only the most recent 500 MB are kept:
SystemMaxUse=500M
The value 500M can be freely chosen according to your own requirements.
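Besides SystemMaxUse, journald.conf offers further related limits that can be combined; a sketch with example values of our own choosing:

```
[Journal]
# Cap the total disk usage of the journal files
SystemMaxUse=500M
# Additionally, always leave at least this much space free on the disk
SystemKeepFree=1G
# Optionally discard entries older than one month
MaxRetentionSec=1month
```

The new limits take effect after restarting the service with sudo systemctl restart systemd-journald.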
Identify further consumers
In addition to systemd, there may of course be other services that are similarly wasteful with storage space. A first step in finding these is to analyze which directories consume the most disk space.

sudo du -hsx /* | sort -rh | head -n 40

The same analysis can then be repeated on any conspicuous directory to drill down further:
sudo du -ha /path/to/directory | sort -rh | head -n 30
In this way, we can identify the biggest consumers and initiate appropriate measures.
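The du pipeline can also be wrapped in a small reusable helper; a sketch assuming a POSIX shell (big_dirs is our own, hypothetical name, and paths containing spaces are not handled):

```shell
#!/bin/sh
# List the entries below a given path whose size exceeds a threshold
# in kilobytes. Permission errors from du are suppressed.
big_dirs() {
    path=$1
    limit_kb=$2
    du -sk "$path"/* 2>/dev/null | awk -v lim="$limit_kb" '$1 > lim {print $2}'
}

# Example: show entries under /var larger than roughly 1 GB
big_dirs /var 1048576
```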