It can be a super mixed bag - you can get lucky and end up with a drive that has spent its entire life sitting on a shelf as a cold spare and was literally only powered up to be wiped so the recycler can say they wiped all the drives, or you can get a drive that has been running well over its MTBF and will start throwing SMART pre-fail warnings 30 seconds after your warranty expires
WD does something very similar to this for its Red SATA drives 🥲
Is it easy to check that out when you have the drive?
Bought an ultra cheap (classic SATA) 3TB drive for redundancy; haven't hooked it up yet. But what about some magic RAID with a handful of them for low-usage stuff like backups? Maybe a 512GB/1TB SSD as cache on top of it if it's used for more than backups? I don't even know if that exists.
Sorry if it's a stupid question, but I grew up well before 1GB drives hit the market :-)
SMART (the drive's internal self-check/monitoring system, which exposes statistics that software on the host machine can read) provides a "power on hours" counter and a "power cycles" counter - a high count on either of those would indicate a drive that has been heavily used. Also worth looking at the "pending sector count" and "reallocated sector count", as increasing values there are a pretty good early indication of failure
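For example, here's a minimal sketch of pulling those counters with smartmontools (assumes smartctl 7.0+ for JSON output; the device path is a placeholder):

```python
import json
import subprocess

# Query SMART attributes as JSON (requires smartmontools >= 7.0).
# "/dev/sda" is a placeholder - point it at your drive (needs root).
result = subprocess.run(
    ["smartctl", "--json", "-A", "/dev/sda"],
    capture_output=True, text=True,
)
data = json.loads(result.stdout)

# ATA SMART attribute IDs: 9 = Power_On_Hours, 12 = Power_Cycle_Count,
# 5 = Reallocated_Sector_Ct, 197 = Current_Pending_Sector
wanted = {9: "Power-on hours", 12: "Power cycles",
          5: "Reallocated sectors", 197: "Pending sectors"}

for attr in data.get("ata_smart_attributes", {}).get("table", []):
    if attr["id"] in wanted:
        print(f'{wanted[attr["id"]]}: {attr["raw"]["value"]}')
```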
Excellent, thanks!
It is easy to get at this self-reported drive data, but interpreting it can be tricky.
For Windows there are GUI tools like https://crystalmark.info/en/software/crystaldiskinfo/ that make all those values understandable even for people without much of a computer background.
For Linux the story is a bit different. Most current desktop environments carry some functionality to read the S.M.A.R.T. data from the drives and display it to the user. The user then has to find a way to interpret these seemingly random numbers. For things like the amount of data written to a drive (relevant for SSDs) you have to pull out a calculator. Maybe there are also easily understandable GUI tools for Linux, I just haven't found them.
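To illustrate the calculator part, a worked example, assuming the drive reports Total_LBAs_Written in the common 512-byte units (some vendors count in other units, so check your model):

```python
# Convert a raw Total_LBAs_Written SMART value into terabytes written.
# Assumes 512-byte units; some vendors use 32 MiB units or other
# schemes, so verify against your drive's documentation.
lbas_written = 3_906_250_000          # example raw value from SMART
bytes_written = lbas_written * 512
print(f"{bytes_written / 1e12:.2f} TB written")  # -> 2.00 TB
```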
Thanks!
SAS drives are cheaper because the market is smaller. You need SAS hardware to use them, but SATA drives can go basically anywhere.
Hopefully the seller gives some idea of the condition. The ones I've bought have usually been anywhere between 10k and 40k power-on hours. If they had much more than that and were really cheap, I'd just buy spares.
The other big health factor is start-stop counts. Server drives shouldn't have too many of those. If that number is really high, I'd be concerned about how the drive was used.
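As a rough sketch of that kind of screen (the thresholds here are illustrative guesses based on the numbers above, not established cutoffs):

```python
# Hypothetical rule-of-thumb check for a used drive, using the SMART
# counters discussed above. Both thresholds are illustrative only.
def looks_worn(power_on_hours: int, start_stop_count: int) -> bool:
    return power_on_hours > 40_000 or start_stop_count > 10_000

print(looks_worn(power_on_hours=15_000, start_stop_count=200))  # False
print(looks_worn(power_on_hours=55_000, start_stop_count=200))  # True
```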
I'm running 4 Seagate Exos 10TB SAS3 drives. They're fast, quiet, and don't use a ton of power.
You can now get 14 and 16TB drives for the same price.
Honestly, at this point, I'd rather sell the SAS drives and go back to SATA. I don't need an insane amount of storage, and the controller adds power usage.
Hmm, I would only purchase them as a third (or more) layer of redundancy, or maybe for storing things like ripped media that could just be re-ripped (or re-torrented) should the drives fail. I would not trust them for anything important since you have no idea what kind of environment they were in for all those years.
I've been running solely on used drives from eBay. I've only ever had one DOA, which was refunded without issue, and only one die in service