this post was submitted on 03 Nov 2024
30 points (78.8% liked)

Selfhosted


I know how RAID works and prevents data loss from disk failures. What I want to know is whether it's possible, and how easy it is, to recover data from the remaining RAID disks when the array no longer functions because the RAID controller or the whole system failed. Can I simply attach one of the RAID 1 disks to a desktop system and read it like a plain USB disk? I know getting data off the other RAID levels won't be that simple, but is there a way to do it without rebuilding the whole RAID setup again? Thanks.

[–] computergeek125@lemmy.world 9 points 2 weeks ago (6 children)

For recovering hardware RAID: your most guaranteed success is going to be a compatible controller with a similar enough firmware version. You might be able to find software that can stitch disk images back together, but that's a long shot and requires a ton of spare disk space (which you might not have if the failed machine was your biggest server).
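
If you're not sure what a "compatible controller" even means for your box, a minimal sketch of identifying the existing card from a running Linux system (the grep pattern is just an assumption covering common LSI/PERC naming; the exact firmware version is usually easiest to read from the controller BIOS or iDRAC/vendor tools):

```
# List PCI devices and filter for likely RAID controllers, so you know
# which model (and therefore which replacement cards) to look for:
lspci | grep -i -E 'raid|megaraid|lsi|perc'
```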

I've used dozens of LSI-based RAID controllers in Dell servers (both PERC- and LSI-branded) for work and for my homelab. They usually recover the old array onto the new controller pretty well, and they generally have a much lower failure rate than the drives themselves (I find myself replacing the cache battery more often than the controller itself).

Only twice, out of the handful of times I've done this, have I moved an array to a RAID controller from a different generation:

  • The first time was from an R815 with a failed motherboard (PERC H700), physically moving the disks to an R820 (PERC H710, might've been an H710P), and the array foreign-imported easily.
  • The second time, in my homelab, I went from an H710 Mini Mono to a full-size H730P in the same chassis (don't do that, it was a bad idea), but aside from iDRAC being very pissed off, the card ran for years with the same RAID-1 array imported.

As others have pointed out, this is where backups come into play. If you have to replace the server with one from a different generation, you run the risk that the drives won't import. At that point, you'd have to sanitize the superblock of the array, re-initialize it as a new array, and restore from backup. The array might also import just fine and you'd never notice a difference (like my users who had to replace a failed R815 with an R820), but the outcome really tends toward one of two extremes, it works or it fails, with no in between.

Standalone RAID controllers are usually pretty resilient and fail less often than disks, but they are very much NOT infallible, as you correctly assess. The advantage of software systems like mdadm, ZFS, and Ceph is that they remove the precise hardware compatibility requirement, but by no means do they remove the software compatibility requirement: you'll still have to do your research and make sure the new version is compatible with the old on-disk format, or stick with the same version.
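
To make the mdadm case concrete, a minimal sketch of importing an existing array on a new or rescue machine (device names like /dev/sdb1, the md device number, and the mount point are placeholders, not anything specific to your setup):

```
# Read the RAID metadata a member disk carries (level, UUID, member role):
sudo mdadm --examine /dev/sdb1

# Assemble the array from whichever members are present; --run lets a
# degraded array start (e.g. a single surviving RAID-1 half):
sudo mdadm --assemble --run /dev/md0 /dev/sdb1

# Mount read-only and copy the data somewhere safe:
sudo mount -o ro /dev/md0 /mnt/recovery
```

That's also roughly the answer to the RAID-1 part of the original question, at least for software RAID: the metadata lives on the disks rather than on a controller, so a single surviving member can usually be assembled and mounted on any Linux box.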

All that said, I don't trust embedded motherboard RAID to the same degree that I trust standalone controllers. A friend of mine, about 8-10 years ago, ran a RAID-0 on a laptop that got its superblock borked when we tried to firmware-update the SSDs, and the array stopped being detected at all. We did manage to recover the data, but it needed multiple times the raw amount of storage to do so:

  • We made byte-level images of both disks with ddrescue onto a server that had enough spare disk space (roughly as sketched after this list).
  • We found a software package that could stitch images with broken superblocks back together if you knew the order the disks were in (we did), and it wrote a new combined byte image back to the server.
  • We copied the result again and turned it into a KVM VM to network-attach and copy the data off (we could have loop-mounted the disk to an SMB share and been done, but it was more fun and rewarding to boot the recovered OS afterwards as kind of a TAKE THAT LENOVO... we were younger).
  • In total it took a bit over 3TB to recover the 2x500GB disks to a usable state, and about a week of combined machine and human time to engineer and cook, during which my friend opted to rebuild his laptop clean after we had the images captured: one disk Windows, one disk Linux, not RAID-0 this time :P
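
For anyone wondering what the imaging and loop-mounting steps look like in practice, a rough sketch with disk names and paths as placeholders (the stitching step itself depends on whatever tool fits your metadata format, so it isn't shown):

```
# Image each suspect disk to a file; the mapfile lets ddrescue resume
# and keep track of bad sectors across multiple passes:
sudo ddrescue -d /dev/sdb /srv/recovery/disk0.img /srv/recovery/disk0.map
sudo ddrescue -d /dev/sdc /srv/recovery/disk1.img /srv/recovery/disk1.map

# After stitching a combined image together, it can be loop-mounted
# read-only instead of (or before) booting it as a VM:
sudo losetup --find --show --read-only --partscan /srv/recovery/combined.img
sudo mount -o ro /dev/loop0p1 /mnt/recovered   # loop device number may differ
```
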
[–] Cyber 1 points 2 weeks ago (1 children)

I can confirm that moving the disks to a very similar device will work.

We recovered “enough” data from what disks remained of a Dell server that had been dropped (PSU side down) from a crane. The server was destroyed, but most of the disks had shifted further into their caddies, which protected them a little more.

It was fun to struggle with that one for ~1 week

And the noise from the drives...

[–] possiblylinux127@lemmy.zip 1 points 2 weeks ago

At some point you need a clean room
