Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
BorgBase, with Borgmatic (Borg) as the software. As far as I know, the whole BorgBase service was built by a homelab guy (with our needs in mind).
Also, 3-2-1 rule: three copies of your data, on two different media, with one offsite!
Also team borgmatic here. ;)
Regardless of service, if you don't test your backups, you have none.
Ehhh I would say then you have probabilistic backups. There's some percent chance they're okay, and some percent chance they're useless. (And maybe some percent chance they're in between those extremes.) With the odds probably not in your favor. 😄
Not so much about testing, but one time I really needed to get to my backups and realized I had lost the password to the repository (I'm using restic). Luckily a copy of it was stored in Bitwarden, but the stretch until I remembered that was perhaps one of my worst moments.
Needless to say, please test your backups and store secrets in more than one place.
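For anyone wondering what "testing" can look like in practice with restic, here's a minimal sketch; the repo path, password file, and test file are made up, so adjust to your own layout:

export RESTIC_REPOSITORY=/mnt/backups/restic-repo        # hypothetical repo location
export RESTIC_PASSWORD_FILE=~/.config/restic/password.txt

# Verify the repository structure and read back the actual data blobs
restic check --read-data

# Restore a small known file and compare it against the live copy
restic restore latest --target /tmp/restore-test --include /home/user/Documents/test.txt
diff /home/user/Documents/test.txt /tmp/restore-test/home/user/Documents/test.txt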
I have an Unraid server which hosts a Docker image of Duplicacy. The web interface is paid, though. It backs up to Backblaze B2. I have roughly 175GB backed up, for which I pay $0.87 a month.
This is almost my exact backup workflow, with another location in between. Duplicacy is great, highly recommend.
rsync.net is great if you need something simple and cheap. Backblaze B2 is also decent, but it does have the typical download and API usage costs.
I had never heard of rsync.net until now. I like the idea but it seems more expensive than B2. $15/TB vs $5/TB. Am I doing the math wrong or reading it wrong?
I've never heard of it either, but I came to the same conclusion as you
When I researched what to use for my backup I found rsync.net. They have some nice features nobody else seems to support, like ZFS send/receive: https://www.rsync.net/products/zfsintro.html
But in the end the price made me go with borgbase.com
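For anyone who hasn't seen ZFS send/receive used for backups, a rough sketch of the idea (dataset and host names here are made up, and rsync.net's exact receive-side setup may differ, so check their docs):

# Snapshot locally, then stream only the changes since last time to the remote pool over SSH
zfs snapshot tank/data@backup-2024-01-01
zfs send -i tank/data@backup-2023-12-01 tank/data@backup-2024-01-01 \
  | ssh user@host.example.net zfs receive -F remote/data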
I don't see it on their website right now, but they offer a discount if you're using something like restic/borg and only need scp/sftp access. Their support is also super friendly. I've had an account forever and got moved to the 100+ TB pricing even though I have < 50TB stored. YMMV but it doesn't hurt to ask if they have any additional discounts.
Also keep in mind that B2 charges for bandwidth too. It's $5/TB for storage, but $10/TB to download that same data.
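To put rough numbers on it, using the prices quoted here: 1 TB parked on B2 for a year is 12 × $5 = $60, plus $10 each time you pull the full terabyte back down, while rsync.net at $15/TB/month works out to $180/year with no egress fees. Unless you're downloading your entire dataset something like once a month, B2 comes out cheaper.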
I use Restic + Resticprofile to back up everything and store it on my local HDD.
Then, I use Rclone to sync the local repository to Backblaze B2.
Here's my general setup:
/.config/restic/
├── logs
│   ├── statuses
│   │   ├── restic-status-20230202T020202.json
│   │   └── restic-status-20230101T010101.json
│   ├── restic-check-20230202T020202.log
│   └── restic-backup-20230101T010101.log
├── config
│   ├── profiles.yaml
│   ├── excludes.txt
│   ├── rclone.conf
│   └── password.txt
└── bin
    ├── restic_0.15.2_linux_arm64
    ├── rclone_1.63.1_linux_arm64
    └── resticprofile_0.22.0_linux_arm64
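And here's the profiles.yaml itself: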
version: "1"
# Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events)
{{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }} # Daily at 10PM
{{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }} # Weekly at 4AM on Saturday
{{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }} # Weekly at 9.30PM on Sunday
{{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday
# Directories
{{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }}
{{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }}
{{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }}
{{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }}
{{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }}
{{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }}
{{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }}
{{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }}
{{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }}
{{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }}
{{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }}
# Configs
{{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }}
{{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }}
{{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/config/excludes.txt" }}
global:
  default-command: snapshots    # Run 'snapshots' when no command is specified
  initialize: false             # Do not initialize a repository if none exists
  priority: low                 # Use priority class on Windows and "nice" on Unixes
  min-memory: 100               # Minimum required RAM for Resticprofile to start
  restic-lock-retry-after: 5m   # Retry failed restic command acquisition every 5 minutes
  restic-stale-lock-age: 10h    # Unlock stale lock if age exceeds 10 hours
  restic-binary: '{{ $LOCATION_RESTIC_BINARY }}'  # Location of the Restic binary
default:
  lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}'      # Local lockfile to prevent concurrent profile runs
  force-inactive-lock: true                       # Detect and remove stale locks
  initialize: true                                # Initialize repository if it doesn't exist
  repository: '{{ $LOCATION_RESTIC_REPO }}'       # Path to Restic repository
  password-file: '{{ $CONFIG_RESTIC_PASSWORD }}'  # File containing repository password
  status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json'  # Output status file
  compression: 'max'                              # Maximum compression level
  run-after-fail:                                 # Block syncing if there was a failure. TODO: Add an email
    - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." > {{ $LOCATION_RESTIC_BLOCKED_FILE }}'
  backup:
    run-before:                   # Bring down Docker before backup
      - 'systemctl stop docker.socket'
      - 'systemctl stop docker'
    run-finally:
      - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log'  # Copy the log file, stripping out any unchanged files
      - 'systemctl start docker'  # Bring Docker back online after backup
    one-file-system: false        # Don't limit the backup to a single file system
    no-error-on-warning: true     # Don't consider warnings as backup failures
    source:                       # Directories to back up
      - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}'
    exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns
    exclude-caches: true          # Exclude cache files
    schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}'     # Backup schedule
    schedule-permission: system   # Schedule permission
    schedule-lock-wait: 10m       # Wait time for the lock during schedule
    schedule-log: '{{ tempFile "backup.log" }}'   # Log to /tmp; this contains all information, including unchanged files, which we do not care about
    verbose: 2                    # Log details about processed files
  check:
    schedule: '{{ $SCHEDULE_RESTIC_CHECK }}'  # Verification schedule
    schedule-permission: system               # Schedule permission
    schedule-lock-wait: 10m                   # Wait time for the lock during schedule
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log'  # Log file
    read-data: true                           # Verify data during check

  prune:
    dry-run: true                # Only prune if safe to do so, change manually
    repack-uncompressed: true    # Repack all uncompressed data

  forget:
    dry-run: true                # Only forget if safe to do so, change manually

  rewrite:
    dry-run: true                # Only rewrite if safe to do so, change manually
    forget: true                 # Remove original snapshots after creating new ones
    exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}'  # File containing exclude patterns

  mount:
    allow-other: true            # Allow other users to access the mount point

  rebuild-index:
    read-all-packs: true         # Read all pack files to generate a new index from scratch
# The following shell profiles are simply there to run other shell scripts at a scheduled time.
# We do not actually run the primary Restic commands listed, as we exit the process early.
shell-postgres:  # Profile to run shell scripts only. We exit the current process before Restic can run.
  backup:
    schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}'  # Postgres backup schedule
    schedule-permission: system                  # Schedule permission
    schedule-lock-mode: ignore                   # Ignore locks, if any
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log'  # Log file
    dry-run: true                                # Don't write data
    run-before:                                  # Dump Postgres databases
      - 'chmod 777 /var/run/docker.sock'
      - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
      - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"'
      - 'kill $$'
shell-sync:
  backup:
    schedule: '{{ $SCHEDULE_SYNC_BACKUP }}'  # Sync backup schedule
    schedule-permission: system              # Schedule permission
    schedule-lock-mode: ignore               # Ignore locks, if any
    schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log'  # Log file
    dry-run: true                            # Don't write data
    run-before:                              # Sync the Restic repo, after checking that the repository is in good health
      - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi'
      - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete'
      - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}'  # Remove any unfinished large-file uploads on B2
      - 'kill $$'
Resticprofile doesn't let me run arbitrary shell commands on a schedule, and because I wanted everything in a single configuration, I just created two extra profiles that call the backup command. The shell commands run before Restic does, and the final kill $$ stops the process before Restic actually gets to run, which effectively does what I needed.
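A nice side effect of syncing the raw repository with rclone: restic can talk to the offsite copy directly through its rclone backend, so you can sanity-check the B2 copy without pulling everything down first. A rough sketch using the same remote name as the config above (assumes rclone.conf is in rclone's default location):

# Point restic at the synced B2 copy via rclone and verify it
restic -r rclone:bucket:restic-backup-12345 --password-file /home/deck/Desktop/.config/restic/config/password.txt snapshots
restic -r rclone:bucket:restic-backup-12345 --password-file /home/deck/Desktop/.config/restic/config/password.txt check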
Backblaze B2, borgbase.com. There are also programs like Déjà Dup that will let you back up to popular cloud drives. The alternatives are limitless.
Backblaze.
I use restic to back up my Raspberry Pis to my Synology NAS, and back up my NAS to Backblaze.
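For the Pi-to-NAS leg, restic's SFTP backend is one way to do it; a minimal sketch with made-up hostnames and paths:

# One-time repo setup on the NAS, then scheduled backups (e.g. from cron)
restic -r sftp:backup@nas.local:/volume1/backups/pi init
restic -r sftp:backup@nas.local:/volume1/backups/pi backup /home /etc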
Restic or Kopia, both to Backblaze.
I second restic. I've been using it for a year now and have been generally very happy. I've actually had to use it on a couple of occasions to restore directory contents and even recover a complete workstation drive. I had relatively easy success in both scenarios.
I've always found them pretty similar. How'd you choose one over the other?
I knew Restic before Kopia and made a set of systemd units to run Restic backups on my home server and office workstation (both online 24/7).
Kopia seems much nicer for a regular user, so I use it on my and family laptops. I used to use Duplicati there, but that project seems dead.
As dumb/simple/boring as this may be...? An external hard drive.
....
....what? It doesn't require you to be online 24/7, works at any(tm) PC, and the speed is really great -- even on a potato.
Unless you work at NASA or IBM or similar -- then feel free to call me dum.
Backblaze B2 for automatic syncing of all the little files
Glacier for long-term archiving of old, big files that never change
I use Syncthing to back up our cell phones to my on-prem server, and then use Backblaze Personal Backup for a cloud copy.
Tears... Natural, salty, wet tears...
- restic > backblaze b2, nightly & automatic
- restic > normally unplugged drive, every couple weeks (manual, recurring reminder)
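A minimal sketch of the first leg (restic straight to B2 with its native backend), with placeholder credentials and bucket name:

export B2_ACCOUNT_ID=xxxxxxxx    # application key ID (placeholder)
export B2_ACCOUNT_KEY=yyyyyyyy   # application key (placeholder)
restic -r b2:my-backup-bucket:restic --password-file ~/.restic-pass backup /home
# scheduled nightly, e.g. via a cron entry like: 30 1 * * * /usr/local/bin/restic-b2.sh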
Veeam Backup & Replication at home and at work. At home, one copy goes to a NAS and another currently goes to Backblaze B2.
An external HDD on my Wi-Fi network. It runs Samba, so I can just drag and drop folders and it transfers over Wi-Fi.
Backups and archived files go to my home server, which then backs up to Backblaze B2.
My setup exactly, with the addition of using M-Discs to back up my core important stuff.
rsync.net, and learn to use Borg; they're stupid cheap if you're technically proficient enough to handle the Borg setup yourself. Like, they charge by the gigabyte, but it's 1.5¢/GB at the most expensive, and cheaper in bulk.
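A minimal sketch of that combo, with a hypothetical rsync.net login (the borg-specific pricing mentioned above may require asking for their borg-oriented account type):

# One-time: create an encrypted repo on the remote
borg init --encryption=repokey ssh://user@host.rsync.net/./backups/main

# Recurring: create a dated archive, then prune old ones
borg create --stats ssh://user@host.rsync.net/./backups/main::{hostname}-{now} /home /etc
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://user@host.rsync.net/./backups/main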
I use Duplicati connected to Storj with data volumes that incrementally get backed up once per month. My files don't change very often, so monthly is a good balance. Not counting my Jellyfin library, those backups are around 1 TB. With the Jellyfin library, almost 15 TB.
Earlier this year, I recovered from a 100% data loss scenario, as I didn't (and still don't) have space for physical backups. I have a 25 TB allowance, so my actual cost was €0. If I had to pay, it would have been under €1.
Do you mean 25 TB? The Storj site says 25 GB. Did some promotion give you that much for free?
Definitely 25 TB. I've used the service for a long time, since before they accepted credit cards. I attached my credit card one day and got a bump to 25 TB. Since that happened, I pay basically nothing and my account is still 100% storj token funded.
Edit: I dug up screenshots I sent someone recently
I used to have everything backed up to a 2TB USB drive, which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.
I now have a Synology NAS, with 12TB in a RAID5 array (for a bit of disk redundancy). All my home devices, Proxmox servers etc back up here. The NAS also holds a few TB of media. Attached to it I have a USB hard drive (also 12TB). The NAS gets fully backed up to the USB drive nightly.
I also have a remote Raspberry Pi with a smaller USB drive (4TB) attached to it at my brother's house (in another country), where I backup most of the contents of my home NAS. I don't back up the media, just the important stuff. I might have to upgrade to a larger drive...
> I used to have everything backed up to a 2TB USB drive, which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.
If it's the only copy, it's not a backup. It's the master.
I use Wasabi S3; I back up to it using restic.
To back up my Synology: my first level is an old Synology, the second is Amazon Glacier.
Duplicati, to a friend's home server who lives in another town.
I hate to ask the scary question, but have you tried to restore your backups before? I used Duplicati and discovered that none of my backups were usable and ended up switching to Duplicacy.
+1 for Duplicacy. It just works, truly does. Duplicati on the other hand seems to work, but has a tendency to fail on restore, just as you described.
An important question though.
I have, when I first set it up, and again once when I needed to.
I use OneDrive. Buy the Costco subscription and get like 15 months for around 110 CAD; it gives 6 TB. I create some fake accounts and link the sharing to my main account. I have an encrypted rclone share for some things; for others I GPG-encrypt the tar before sending it up. It's been working fine for a couple of years and I have multiple TB backed up.
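A minimal sketch of the tar-plus-GPG half, with a hypothetical rclone remote named onedrive (symmetric encryption shown, which prompts for a passphrase; the author may use key-based instead):

# Stream a tarball through GPG encryption straight to the remote, no temp file needed
tar czf - /home/user/documents \
  | gpg --symmetric --cipher-algo AES256 -o - \
  | rclone rcat onedrive:backups/documents-$(date +%Y%m%d).tar.gz.gpg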