this post was submitted on 23 Nov 2024
88 points (92.3% liked)

Selfhosted

40347 readers
374 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
 

About a year ago I switched to ZFS for Proxmox so that I wouldn't be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without problems. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean I can't downgrade the kernel, and the performance on my hardware is abysmal: I get only 50-100 MB/s versus the several hundred I would get with btrfs.
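For numbers like these it helps to measure both filesystems the same way. A minimal sketch with fio, assuming a hypothetical mount point `/tank/bench` on the pool or filesystem under test (`--end_fsync=1` is used instead of direct I/O, since older ZFS releases don't support O_DIRECT):

```shell
# Sequential write, 1M blocks, forcing a final fsync so the number
# reflects disk throughput rather than dirty page cache:
fio --name=seqwrite --directory=/tank/bench --rw=write --bs=1M \
    --size=2G --end_fsync=1 --group_reporting

# Sequential read of the file written above:
fio --name=seqread --directory=/tank/bench --rw=read --bs=1M \
    --size=2G --group_reporting
```

Running the identical job file against a btrfs and a ZFS mount makes the comparison apples-to-apples.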

Any reason I shouldn't go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering or using btrfs?

top 50 comments
[–] domi@lemmy.secnd.me 11 points 1 day ago

btrfs has been the default file system for Fedora Workstation since Fedora 33, so there's not much reason not to use it.

[–] nichtburningturtle@feddit.org 7 points 1 day ago (1 children)

Haven't had any btrfs problems yet; in fact, CoW saved me a few times on my desktop.

[–] Heavybell@lemmy.world 2 points 8 hours ago (1 children)

Can you elaborate for the curious among us?

[–] nichtburningturtle@feddit.org 2 points 8 hours ago (1 children)

btrfs + Timeshift saved me multiple times when updates broke random stuff.
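The same safety net works without Timeshift, too. A minimal sketch, assuming a hypothetical `/.snapshots` directory on the same filesystem (paths and names are placeholders, not from the original post):

```shell
# Take a read-only snapshot of the root subvolume before an update:
btrfs subvolume snapshot -r / /.snapshots/root-pre-update

# List subvolumes and snapshots to confirm it exists:
btrfs subvolume list /

# If the update breaks something, files can be copied back out of
# /.snapshots/root-pre-update, or the snapshot promoted to the new root.
```

Snapshots are CoW, so they are near-instant and initially consume almost no extra space.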

[–] Heavybell@lemmy.world 1 points 8 hours ago

I have research to do, I see.

[–] cmnybo@discuss.tchncs.de 54 points 2 days ago (10 children)

Don't use btrfs if you need RAID 5 or 6.

The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
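In practice that means sticking to the mirror profiles. A sketch of the safe options (device names are placeholders):

```shell
# Safe: raid1 for both data and metadata.
mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY

# If you insist on evaluating raid5 data, keep metadata out of raid56
# entirely, e.g. triple-mirrored metadata:
mkfs.btrfs -d raid5 -m raid1c3 /dev/sdX /dev/sdY /dev/sdZ

# Check which profiles an existing filesystem is actually using:
btrfs filesystem df /mnt
```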

[–] lurklurk@lemmy.world 9 points 1 day ago

Or run the raid 5 or 6 separately, with hardware raid or mdadm

Even for simple mirroring there's an argument to be made for running it separately from btrfs using mdadm. You do lose btrfs's ability to automatically pick the valid copy on localised corruption, but the admin tools are easier to use and more proven in the case of a full disk failure, and if you run an encrypted block device you only need to encrypt half as much.
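A minimal sketch of that layering, with hypothetical device names (the LUKS step shows the "encrypt half as much" point: one encrypted device instead of two):

```shell
# Mirror at the block layer with mdadm:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1

# Optional: encrypt the single md device rather than each disk:
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptmd

# Single-device btrfs on top keeps snapshots and checksums,
# while mdadm handles the redundancy:
mkfs.btrfs /dev/mapper/cryptmd
```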

[–] sem@lemmy.blahaj.zone 16 points 1 day ago

Btrfs came as the default on my new Synology, where I have it on top of Synology's RAID config (similar to RAID 1, I think), and I haven't had any problems.

I don't recommend the btrfs drivers for Windows 10, though. I had a drive using them and it would often become unreachable under load, but that's more a Windows problem than a problem with btrfs.

[–] Bookmeat@lemmy.world 49 points 2 days ago (6 children)

A bit off topic; am I the only one that pronounces it "butterface"?

[–] adept@programming.dev 2 points 11 hours ago

Related, and I cannot help but read "bcachefs" as "bitch café"

[–] wrekone@lemmy.dbzer0.com 53 points 2 days ago (1 children)
[–] myersguy@lemmy.simpl.website 35 points 2 days ago

You son of a bitch, I'm in.

[–] downhomechunk@midwest.social 2 points 1 day ago

I call it butter fuss. Yours is better.

[–] uhmbah@lemmy.ca 18 points 2 days ago

Ah feck. Not any more.

[–] prole@lemmy.blahaj.zone 6 points 1 day ago (2 children)

Isn't it meant to be like "better FS"? So you're not too far off.

[–] blackstrat@lemmy.fwgx.uk 1 points 8 hours ago

It was meant to be Better FS, but it corrupted itself to btrfs without noticing.

I call it "butter FS"

[–] avidamoeba@lemmy.ca 23 points 2 days ago (17 children)

You shouldn't have abysmal performance with ZFS. Something must be up.
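A few first things worth checking when ZFS throughput looks this far off (the pool name "tank" is a placeholder):

```shell
zpool status -v tank      # degraded vdev, or a scrub/resilver in progress?
zpool get ashift tank     # 9 on 4K-sector disks kills performance; want 12
zfs get compression,recordsize,sync tank   # sync=always also tanks writes
arcstat 1 5               # ARC hit rate; a starved ARC means constant disk reads
```

Any one of these (wrong ashift, a resilver, `sync=always`, or too little RAM for the ARC) can explain a drop from several hundred MB/s to 50-100.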

[–] vividspecter@lemm.ee 33 points 2 days ago (3 children)

No reason not to. Old reputations die hard, but it's been many many years since I've had an issue.

I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc.

I'll add to avoid RAID5/6 as that is still not considered safe, but you mentioned RAID1 which has no issues.

[–] exu@feditown.com 13 points 1 day ago (1 children)

Did you set the correct block size for your disk? Modern SSDs especially like to pretend they have 512B sectors for compatibility reasons, while the hardware can only do 4K sectors. Make sure to set ashift=12.

Proxmox also uses a very small volblocksize by default. This mostly applies to RAIDZ, but try using a higher value like 64k. (The default on Proxmox is 8k, or 16k on newer versions.)

https://discourse.practicalzfs.com/t/psa-raidz2-proxmox-efficiency-performance/1694
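A sketch of both checks, with hypothetical device and pool names:

```shell
# See what sector sizes the disk reports (PHY-SEC is the one that matters):
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/nvme0n1

# ashift is fixed per vdev at creation time and cannot be changed later:
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1

# Per-zvol block size for a manually created volume:
zfs create -V 32G -o volblocksize=64k tank/vm-100-disk-0

# On Proxmox, new VM disks pick it up from the storage definition instead,
# via the "blocksize" option in /etc/pve/storage.cfg for the zfspool entry.
```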

[–] randombullet@programming.dev 3 points 1 day ago

I'm thinking of bumping mine up to 128k since I do mostly photography and videography, but I've heard that 1M can increase write speeds but decrease read speeds?

I'll have a RAIDZ1 and a RAIDZ2 pool for hot storage and warm storage.
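Note that for plain file storage (photos, video) the relevant knob is the dataset's recordsize rather than volblocksize, and it's a per-dataset setting. A sketch, with a hypothetical dataset name:

```shell
# Large records suit big sequential files; set at creation...
zfs create -o recordsize=1M tank/media

# ...or on an existing dataset. Only newly written files use the
# new value; existing files keep the record size they were written with.
zfs set recordsize=1M tank/media
```

recordsize is also an upper bound, not a fixed block size: small files still get small records, so the read-speed penalty mostly shows up on small random reads inside large files.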

[–] fmstrat@lemmy.nowsci.com 5 points 1 day ago

What kind of disks, and how is your ZFS set up? Something seems amiss here.

[–] interdimensionalmeme@lemmy.ml 0 points 23 hours ago (1 children)

For my JBOD array, I use ext4 on GPT partitions. Fast, efficient, mature.

For anything else I use ext4 on LVM thin pools.
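For anyone unfamiliar with that setup, a minimal sketch (volume group and sizes are placeholders):

```shell
# Create a thin pool inside an existing VG, then thin volumes on demand:
lvcreate --type thin-pool -L 500G -n tp0 vg0
lvcreate --thin -V 100G -n vm-data vg0/tp0
mkfs.ext4 /dev/vg0/vm-data

# Thin snapshots are cheap and need no preallocated space:
lvcreate --snapshot -n vm-data-snap vg0/vm-data
```

Thin volumes can be overprovisioned, so the pool's actual usage needs monitoring (`lvs -a`).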

[–] possiblylinux127@lemmy.zip 3 points 14 hours ago

That doesn't do error detection and correction, nor does it have proper snapshots.

[–] zarenki@lemmy.ml 9 points 2 days ago (2 children)

I've been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (maybe others) similar to separate filesystems without actually being different partitions.
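A sketch of that subvolume layout (the `@`-prefixed names are a common convention, not a requirement):

```shell
# With the top-level filesystem mounted at /mnt:
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home

# /etc/fstab then mounts each subvolume from the same partition:
# /dev/sdX1  /      btrfs  subvol=@,compress=zstd      0 0
# /dev/sdX1  /home  btrfs  subvol=@home,compress=zstd  0 0
```

Each subvolume can be snapshotted independently, which is what makes the "separate filesystems without separate partitions" behaviour work.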

I had used it for my NAS array too, with btrfs raid1 (on top of luks), but migrated that over to ZFS a couple years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed to be in purgatory of never being fixed, so I moved to raidz1 instead.

One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn't do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive was the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.

[–] BrownianMotion@lemmy.world 5 points 1 day ago (3 children)

My setup is different to yours, but not totally different. I run ESXi 8, and I started to use btrfs on some of my VMs.

I had a power failure that lasted longer than the UPS could handle. Most of the systems shut down safely; a few VMs did not. All of the ext4 VMs were easily recovered (including another one that was XFS). Two of the btrfs systems crashed into a non-recoverable state.

Nothing I could do would fix them; they were just toast. I had no choice but to recover from backups. This made me highly aware that btrfs is still not a reliable FS.

I am migrating everything from btrfs to something more stable and reliable like ext4. It's simply not worth the headache.
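For anyone who hits the same wall, these are the escalating recovery steps usually worth trying before reaching for backups (device name is a placeholder; results vary with the damage):

```shell
# 1. Try mounting read-only from an older tree root:
mount -o ro,rescue=usebackuproot /dev/sdX1 /mnt

# 2. Repair the superblock from its backup copies:
btrfs rescue super-recover /dev/sdX1

# 3. Scrape files out without mounting at all:
btrfs restore /dev/sdX1 /recovery/

# 4. Diagnose offline; only add --repair as an absolute last resort,
#    since it can make things worse:
btrfs check /dev/sdX1
```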

[–] blackstrat@lemmy.fwgx.uk 1 points 13 hours ago

I had almost exactly the same thing happen.
