This achieves full file-system backups to a remote destination. I use it to back up my entire Raspberry Pi SD cards (with a few minor modifications) daily via crontab.
The rrsync script is required; it can be obtained from https://www.samba.org/ftp/unpacked/rsync/support/rrsync
A sample line for the authorized_keys file is included; it must reside on the destination server that will hold the backups. The path to back up to is given there, and restrictions are in place so that this is the only thing the SSH user can do, and the only place they have access to. A public/private key pair without a passphrase must have been created and the public key shared out to the server first. See https://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/ for background.
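For illustration, a restricted authorized_keys entry along these lines could be used on the destination server (the rrsync location, backup path, key type and key material are all placeholders; adapt them to your setup):

```
command="/usr/bin/rrsync /path/to/backup",no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... backup@client
```

The `command=` option forces every connection with this key through rrsync rooted at the given path, and the `no-*` options disable interactive and forwarding features, so the key can do nothing but rsync into that folder.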
The exclude-list can be extended to skip particular folders or files, although the rsync parameters will already prevent it from crossing file-system boundaries.
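A hypothetical exclude-list might look like the following (these paths are examples only; with one-file-system behaviour enabled some of them would be skipped anyway, but listing them explicitly does no harm):

```
# Example exclude-list entries, one pattern per line
/proc/*
/sys/*
/dev/*
/tmp/*
/run/*
lost+found
```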
The backup.sh file is where the deed is done; it was inspired by https://blog.interlinked.org/tutorials/rsync_time_machine.html (link may now be broken). In it you specify the remote server, your exclude-list and a few other bits. You can look up what the various rsync parameters do, but in essence everything is mirrored or deleted; if a file already exists unchanged in LINKDIR, it is hard-linked on the remote server side instead, so only changed files need to be transferred. Each backup is timestamped, and a symlink called "current" is made to point at the backup just taken. A second rsync call then moves the local log and the new symlink to the remote server too. The first backup will copy everything, but subsequent ones should be faster and incremental. Because rrsync (restricted rsync) is used, the root of the destination folder is actually defined in the authorized_keys file, so LINKDIR is relative to that destination and cannot use . or .. in its path.
The resulting structure will be /path/to/backup/ containing sub-folders such as 2019-04-14T20-45-30, each incrementally hard-linked to the last but appearing complete in its own right. There will also be the "rsync.log" of the last backup, and the "current" symlink to the most recent backup folder. Since all unchanged files are hard-linked, this is very space efficient; however, it may not be obvious which files are shared, so all of the backups should be treated as read-only.
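As an illustration (the path and timestamps here are made up), the destination might end up looking like this:

```
/path/to/backup/
├── 2019-04-12T20-45-31/     # each snapshot appears complete;
├── 2019-04-13T20-45-29/     # unchanged files share inodes with
├── 2019-04-14T20-45-30/     # the previous snapshot
├── current -> 2019-04-14T20-45-30
└── rsync.log
```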
Note that the script does not delete old backups, so eventually you may simply run out of space, particularly if the file systems change a lot.
To address this last issue, the PruneBackups.py script (Python 3) prunes old backups from the parent folder of all of your backups. It is designed to run as root directly on the filesystem containing the backups, so it could run on the machine hosting them, or over an NFS share (without root-squash), etc. Since I back up multiple devices, each to its own folder, it has been designed to run on the parent folder containing all of these; if that is not needed, remove the first inner nested loop and unindent the rest of the block to get it to run directly. In essence, the script visits each sub-folder of the root, which is where the dated backups for each device live, and compares each timestamp to the retention policy defined in the "testretention" function. Any files or folders whose names are not in the exact timestamp format are ignored. You can therefore force a backup to be kept by renaming its folder so it no longer matches the timestamp format exactly (you could literally append "keep-me" to the end of it if you want).