I use darktable (DT) as my RAW processing tool, and have done so for many years now.
Losing my DT configurations and the database with tags and image metadata is something I don't want to experience.
For that reason I take backups in a couple of ways:
In addition to that, when I log in on my Ubuntu 22.04 desktop computer, a backup of ~/.config/darktable is made and stored on a cloud service.
The bash script that takes the backup first deletes backups older than 10 days and then makes a 'dar' backup of the directory.
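The age check boils down to converting both the cutoff date and the date embedded in each archive name to epoch seconds and comparing the two numbers. A minimal sketch of just that comparison (GNU date assumed; the archive date here is a made-up example):

```shell
#!/bin/bash
# Cutoff: 10 days ago, expressed as seconds since the epoch (GNU date)
cutoff=$(date +%s --date="-10 days")

# Hypothetical date, as it would be parsed out of an archive name
file_date="2020-01-01"
file_secs=$(date +%s --date "$file_date")

# Anything at or before the cutoff is eligible for deletion
if (( cutoff >= file_secs )); then
  echo "would delete"
else
  echo "would keep"
fi
```

Because the comparison is done on plain integers, it is immune to timezone and formatting quirks as long as both dates go through the same `date +%s` conversion.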
Location of the systemd user service file is: $HOME/.config/systemd/user/darktable-backup.service
I use this service description:
[Unit]
Description=Take backup of darktable config dir
[Service]
Type=oneshot
ExecStartPre=/bin/sleep 180
# systemd does not expand $HOME in unit files; %h is the user's home directory
ExecStart=%h/.config/systemd/user/darktable-backup.sh
Restart=no
[Install]
WantedBy=default.target
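For a unit like this to actually run at login, it has to be enabled in the user session. The usual commands are (user units need no sudo; the service name matches the file name above):

```shell
# pick up the new/changed unit file
systemctl --user daemon-reload
# hook it into default.target so it runs at login
systemctl --user enable darktable-backup.service
# optional: run it once right away to test
systemctl --user start darktable-backup.service
```

`systemctl --user status darktable-backup.service` then shows whether the last run succeeded.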
The script starts approx. 3 minutes after login. By that time the connection to my cloud provider should work just fine.
Script location is: $HOME/.config/systemd/user/darktable-backup.sh
#!/bin/bash
CLOUDDIR="$HOME/<cloud-drive>/darktable-config"
AGE_DATE=$(date --date="-10 days" -I)
AGE_SECS=$(date +%s --date "$AGE_DATE")
while IFS= read -r -d "" file
do
    # match on the archive name only, so dates in directory names are ignored
    FILE_DATE=$(echo "$file" | grep -o -E "darktable-config-[0-9]{4}-[0-9]{2}-[0-9]{2}")
    FILE_DATE=$(echo "$FILE_DATE" | grep -o -E "[0-9]{4}-[0-9]{2}-[0-9]{2}")
    FILE_DATE_SECS=$(date +%s --date "$FILE_DATE")
    if (( AGE_SECS >= FILE_DATE_SECS )); then
        echo "cleanup: $file"
        rm -f "$file"
    fi
done < <(find "$CLOUDDIR" -type f -name "darktable-config-*.dar*" -print0)

# take a backup using "dar"
dar -c "$CLOUDDIR/darktable-config-$(date -I)" -R "$HOME/" -g .config/darktable
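A backup is only useful if it can be restored. With dar, extraction is the mirror image of creation: -x names the archive and -R the directory to restore into. A sketch, where the date in the archive name is just an example:

```shell
# dar expects the archive basename, without the ".1.dar" slice suffix
# that dar appends to the files it creates
dar -x "$HOME/<cloud-drive>/darktable-config/darktable-config-2024-01-01" \
    -R "$HOME/"
```

Since the backup was taken with -R "$HOME/" and -g .config/darktable, restoring with -R "$HOME/" puts the files back at ~/.config/darktable.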