So here we are: only one thing is missing for me to complete my switch from a self-maintained Nextcloud instance to Nextcloud AIO: a differential restore of data to a standby system. A quick recap: the default restore doesn’t work for me, as it performs a full restore of all data, which takes too long. What I need is a differential restore process that only transfers files that have changed from the remote backup server to my standby Nextcloud instance and deletes the files that no longer exist. While there is no official process for this, it’s easier than I first thought.
The Nextcloud AIO (automatic) backup process is actually quite straightforward: it stops all Docker containers and then performs a Borg backup of the volumes used by the containers. These volumes contain all user files, the database, and so on. By default, they are stored in /var/lib/docker/volumes and are named nextcloud_aio_*. In other words, they are easy to find and examine.
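If you want to have a look at them on the host, something like the following lists them (using the default Docker path mentioned above):
ls -d /var/lib/docker/volumes/nextcloud_aio_*
# or ask Docker directly:
docker volume ls --filter name=nextcloud_aio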
Differential Restore
To restore only files that are new or have changed, my idea was to mount the remote Borg backup on the standby Nextcloud instance and then use rsync for each volume. Basically, this boils down to the following two steps:
First: Mount the latest Borg archive. Unfortunately, Borg does not have a native command to mount the latest archive at a deterministic directory, so I put together a short shell script to do the job, see below.
Second: Once mounted on the standby instance, the differential restore is then run per volume as follows. Run it with --dry-run first to preview the changes, then drop that flag to actually copy and delete files:
rsync --dry-run -h --progress --stats -r -tgo -p -l -D --delete-before \
/root/tmp-borg-mount/nextcloud_aio_volumes/nextcloud_aio_database/ \
/var/lib/docker/volumes/nextcloud_aio_database/_data
Repeat this command for each volume (a loop over all volumes is sketched below), and finally unmount the Borg archive again. Done!
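Here is a minimal sketch that runs the same rsync for every AIO volume found in the mounted archive and then unmounts it again. The mountpoint and borg path follow the examples in this post, and --dry-run is kept in on purpose; remove it once the output looks right:
#!/bin/bash
# Sketch: differential restore of every AIO volume, then unmount the archive.
MOUNT="/root/tmp-borg-mount/nextcloud_aio_volumes"
for vol in "$MOUNT"/nextcloud_aio_*; do
    name=$(basename "$vol")
    rsync --dry-run -h --progress --stats -r -tgo -p -l -D --delete-before \
        "$vol/" "/var/lib/docker/volumes/$name/_data"
done
/root/borg/borg umount /root/tmp-borg-mount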
And here’s the shell script that finds the latest Borg archive and mounts it:
#!/bin/bash
# Usage: ./mount_latest_borg.sh /path/to/repo /path/to/mountpoint
REPO="$1"
MOUNTPOINT="$2"
BORG_CMD="/root/borg/borg"   # Adjust if borg is in a different location

if [[ -z "$REPO" || -z "$MOUNTPOINT" ]]; then
    echo "Usage: $0 /path/to/repo /path/to/mountpoint"
    exit 1
fi

# Borg reads the passphrase from the environment, so it must be exported
read -sp "Enter Borg password: " BORG_PASSPHRASE
echo
export BORG_PASSPHRASE

# Find the newest archive using Borg's timestamp sorting
latest_archive=$("$BORG_CMD" list --sort-by timestamp --last 1 --format="{archive}{NL}" "$REPO")

if [[ -z "$latest_archive" ]]; then
    echo "No archives found in repository: $REPO"
    exit 2
fi

BORG_ARCHIVE="$REPO::$latest_archive"
echo "Mounting latest archive: $BORG_ARCHIVE"
"$BORG_CMD" mount "$BORG_ARCHIVE" "$MOUNTPOINT"
Testing the Nextcloud Backup on a Running System
Together with using a different domain name on the standby Nextcloud instance (see previous post), one can easily test whether the restore process has worked, because the active and the standby Nextcloud instance can run at the same time. Excellent! To do this, modify the following two parameters in the configuration.json file on the standby instance and then start its containers:
sudo docker run -it --rm --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config:rw alpine sh -c "apk add --no-cache nano && nano /mnt/docker-aio-config/data/configuration.json"
domain: XXXXXX
AIO_URL: XXXXXXX
Changing both parameters is really important, as otherwise your standby instance will point to the ‘all in one’ (AIO) master container of the production instance. If you then click on the “AIO” link in the Nextcloud admin section of your standby instance, you are directed to the master container of the production instance, which is not what you want.
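To double-check that both values were actually changed, a quick grep over the same volume and file used in the nano command above should do (key names as above; the exact JSON layout may differ slightly):
sudo docker run -it --rm --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config:ro \
    alpine sh -c "grep -E '\"(domain|AIO_URL)\"' /mnt/docker-aio-config/data/configuration.json"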
Oh, yes, and one more thing: restoring the live system on the standby system also restores the backup configuration. If you have periodic backups configured on the live system, the standby system will use the same schedule. This is not really what you want, because at the same time each day, both the live server and the standby server will then try to run a Borg backup. Only one will win, and it’s not really helpful if the standby server wins 🙂 So either remove the backup configuration on the standby instance or shut it down after the restoration test is complete!
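For the shutdown option, one blunt approach is to stop every AIO container on the standby host. This assumes the containers all carry the nextcloud-aio name prefix, so verify with docker ps first:
sudo docker stop $(sudo docker ps --format '{{.Names}}' | grep '^nextcloud-aio')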
Summary
And that’s it: I now have my Nextcloud AIO instance running, and if required, I can spin up my standby instance, differentially restore the latest data to it, and be back in service in just a few minutes. On top of that, I can verify a proper restore by using a different domain name on the standby server and thus effectively run both instances at the same time during the test. Very nice indeed!