@taw00
Last active September 20, 2022 19:38

HowTo Rotate Dash Node Log Files

Dash Masternodes write several log files, in particular debug.log and, in many configurations, sentinel.log. Letting these log files grow indefinitely is not wise. Consider, too, what happens if your Masternode is hit by a DDoS attack: one side effect can be log messages filling up your disk space.

Fortunately, Linux systems give you a really simple method of ensuring those log file sizes don't get out of hand.

Log rotation periodically, based on rules you define, copies a log file aside, compresses the copy for archival, and truncates the original to zero length. It repeats this on a schedule, keeping only a fixed number of archived copies.

Note1: This is somewhat based on a more general HowTo I wrote. That can be found here: https://github.com/taw00/howto/blob/master/howto-logrotate.md

Note2: If you installed your Dash Masternode via the methods and packaging provided at https://github.com/taw00/dashcore-rpm and run dashd as a systemd enabled service, you have no further action to take. Managing your log files has already been done for you.
...for all the rest of you, continue reading...

What you need to configure your system...

  • Your username or the username of the owner of the Dash Core datadir. For example, for me, t0dd
  • Location of your debug.log file, for example /home/t0dd/.dashcore/debug.log
  • If you are redirecting Sentinel logging to a file, what's that location? For example, /home/t0dd/.dashcore/sentinel.log
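If you're unsure where these live, a quick check from the Masternode shell will tell you. This sketch assumes the default ~/.dashcore datadir; adjust the paths for your setup:

```shell
# List the Dash Core log files and their current sizes.
# ~/.dashcore is the default datadir -- yours may differ.
for f in ~/.dashcore/debug.log ~/.dashcore/sentinel.log; do
    if [ -f "$f" ]; then
        du -h "$f"
    else
        echo "not found: $f"
    fi
done
```

If sentinel.log is "not found", you are not redirecting Sentinel logging to a file and can omit it from the configuration below.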

Create logrotate configuration...

Log into your Masternode and, using the configuration values from above (replacing my example values with yours), create /etc/logrotate.d/dashcore (as root) containing...

/home/t0dd/.dashcore/debug.log /home/t0dd/.dashcore/testnet3/debug.log {
    su t0dd t0dd
    rotate 5
    missingok
    notifempty
    compress
    maxsize 50M
    copytruncate
    nodateext
}

/home/t0dd/.dashcore/sentinel.log {
    su t0dd t0dd
    rotate 5
    missingok
    notifempty
    compress
    maxsize 100k
    copytruncate
    nodateext
}

Save that and exit.

That configuration rotates the listed log files, keeping up to 5 compressed archives of each, and rotates a file only once it grows beyond 50MB in size (100k for Sentinel).

Note: 50M and 100k are arbitrary. A typical day yields a debug.log of 3 to 5MB and a sentinel.log of negligible size, so you should see only 1 or 2 rotations per week. 5M, 10M, or even 500M for debug.log is fine, depending on how much history you want to preserve.
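Before relying on the new rules, you can have logrotate rehearse them. The sketch below uses logrotate's standard -d (debug/dry-run) and -f (force) flags and assumes the example paths from the configuration above:

```shell
# Dry run: print what logrotate would do with this config, rotating nothing.
sudo logrotate -d /etc/logrotate.d/dashcore

# Force one rotation right now to confirm the rules work end to end...
sudo logrotate -f /etc/logrotate.d/dashcore

# ...then look for a compressed archive (e.g., debug.log.1.gz) next to the live log.
ls -lh /home/t0dd/.dashcore/debug.log*
```

The dry run is also a handy syntax check: logrotate will complain about any typo in the configuration file without touching your logs.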

Let's check every hour instead of every day...

Logrotate automatically runs by default once per day. Forcing it to run every hour or every 30 minutes will really help if your log files are at risk of exploding in size due to an anomalous event, like, for example, a DDOS attack.

Edit root's crontab...

sudo crontab -e
# ...or if you don't like crontab's default editor...
sudo EDITOR="nano" crontab -e

...add this line; save; and exit...

# Run logrotate every hour, on the hour
# (note: a personal crontab has no user field; that field exists only in /etc/crontab)
0  *  *  *  *   /usr/sbin/logrotate /etc/logrotate.conf
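You can confirm the entry took by listing root's crontab again:

```shell
# List root's crontab and confirm the logrotate entry is present.
sudo crontab -l | grep logrotate || echo "logrotate entry not found"
```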

And you're done!

Every 60 minutes, cron will run logrotate, which checks the size of your log files and rotates them as needed, keeping 5 compressed archives of each.

Please feel free to send feedback or comment to t0dd@protonmail.com

UPDATE!!! (September 2022)

I ran into the problem where the system journal would sometimes explode in size and take down my Dash nodes in the process. This is a simple problem to solve: limit the size of the journal to 100M.

  1. Edit the journald configuration file:
    sudo vi /etc/systemd/journald.conf

  2. Uncomment and set this key=value pair:

SystemMaxUse=100M
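journald only picks up the new cap when it restarts, so restart the service and then check the result (this assumes a systemd system, which is a given if you're editing journald.conf):

```shell
# Apply the new SystemMaxUse cap without rebooting.
sudo systemctl restart systemd-journald

# Report how much disk space the journal occupies now.
journalctl --disk-usage
```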

And if you are in a bind and need to shrink that journal right now, do this:
sudo journalctl --vacuum-size=10M

That should bring your node back from the brink. (You may have to restart some things.)

Good luck! -t
