SamSausages

joined 11 months ago
[–] SamSausages@alien.top 1 points 9 months ago (1 children)

What has prompted your interest in data hoarding?

Censorship and Memory-holing

[–] SamSausages@alien.top 1 points 9 months ago

I can't tell you how many channels have disappeared and been memory-holed. Especially since censorship went into overdrive around 2019.

Data hoarders can show you how the world was before all that happened.

[–] SamSausages@alien.top 1 points 9 months ago

Yes and no.

Yes if you have the resources to monitor and update. Companies have entire teams dedicated to this.

No if you don't have the resources/time to keep up with it regularly.

IMO, no need to take this risk when you have services like Tailscale available today.
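
For context, moving a service off the open internet and onto Tailscale is about this simple (a minimal sketch; the port is a placeholder):

```
# join the server to your tailnet (one-time, interactive auth)
sudo tailscale up

# then drop the router port-forward and reach the service over the tailnet instead,
# e.g. http://<server-tailnet-ip>:8096 from any device logged into the same tailnet
```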

[–] SamSausages@alien.top 1 points 9 months ago (2 children)

This is very complicated to just give an answer, because:

- It varies greatly based on the content. Animation compresses vastly differently than an action movie.
- It varies greatly based on the encoder. NVENC vs CPU, etc.
- It varies greatly based on encoder options, i.e. -b:v -minrate -maxrate vs -rc vbr with -qmin, -qmax and -cq values, etc.
- It varies greatly based on who is watching, the TV they use, and their tolerance and experience.

Savings are greater at 4k than at 1080p. But once you start adding HDR into the mix, you're in a whole new world.

Even people with very discerning eyes can't agree on everything related to this topic. I wish I could just tell you "do x"... but you'll have to test various methods and decide what you're happy with.
Or, if you just want some space savings, use some default setting that cuts it in half and forget about it.
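
For illustration, here's roughly what those two styles of options look like in ffmpeg (a minimal sketch; the file names and all the numbers are placeholders to experiment with, not recommendations):

```
# CPU (libx265), bitrate-capped VBR
ffmpeg -i input.mkv -c:v libx265 -b:v 4M -minrate 2M -maxrate 8M -bufsize 16M -c:a copy out_cpu.mkv

# GPU (NVENC HEVC), constant-quality VBR
ffmpeg -i input.mkv -c:v hevc_nvenc -rc vbr -cq 28 -qmin 20 -qmax 35 -c:a copy out_nvenc.mkv
```

Encode a few short clips both ways and compare them on your own TV before committing a whole library.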

[–] SamSausages@alien.top 1 points 9 months ago

I think in general they have lower RPM, run a bit cooler, and use a little less power. That usually comes with a bit less performance.

But I'm hooked on the WD Ultrastar series. Server grade and fast, with low power usage: at full tilt, mine use less than 10w each. I'm running 20 HC530s and they've been rock solid.

[–] SamSausages@alien.top 1 points 10 months ago

Really depends on your use case.

Unraid is wonderful and easy to use. But there are really two reasons to use it:

- The Unraid array fits your file storage strategy (few writes, mainly reads).
- You want an EZ way to get into Docker and use the Unraid app store.

Other than that, you can probably find everything on Debian or Ubuntu. (I prefer Debian for services)

You can add one more:
Proxmox, then run a Debian VM for Docker, for example, and compartmentalize the other things you may want to run.

Also, download a mem test utility and run it overnight to test your hardware.
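
If the machine is already running Linux, memtester is one option (a minimal sketch; the size and pass count are arbitrary, just don't ask for more RAM than is free):

```
# test 8 GB of RAM for 4 passes; leave it running overnight
sudo memtester 8G 4
```

For coverage of all the RAM, including what the OS has reserved, a bootable MemTest86+ USB stick is the more thorough route.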

[–] SamSausages@alien.top 1 points 10 months ago

I started out self hosting everything... 20 years ago or so. Then I got swept up in the "cloud" movement and put so many things into the "cloud".
Today I'm reverting back to how I started, self hosting everything that I can.

Mainly privacy, but also because they keep changing and I don't want to have to worry about them.
I just didn't feel like it was 'my' data anymore.

[–] SamSausages@alien.top 2 points 10 months ago

Can be safer. Can be worse.

A poorly configured self hosted vaultwarden can be a major security issue.

A properly configured one is arguably safer than hosting with a 3rd party. LastPass taught me that one.

If you configure it so it's not exposed to the web and is only accessed through a VPN, like Tailscale, it can be quite robust.
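
As a sketch of what that can look like with Docker (the tailnet IP and host path are placeholders; assumes Tailscale is already up on the host):

```
# bind vaultwarden to the host's tailnet IP only,
# so it's unreachable from the WAN and the local LAN
docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p 100.64.0.5:8080:80 \
  vaultwarden/server:latest
```

You'll likely still want TLS in front of it, since the Bitwarden clients expect HTTPS.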

[–] SamSausages@alien.top 1 points 10 months ago

self hosted git repository.

I set up Gitea on my server and use it to track version changes of all my scripts.

And I use a combination of the wiki and .md (readme) files for how-tos and any inventory I'm keeping, like IP addresses, CPU assignments, etc.

But mainly it's all in .md files formatted with Markdown.
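
For example, an inventory file can be as simple as this (made-up entries, just to show the shape):

```
## Hosts

| Host   | IP           | CPU pin | Notes     |
|--------|--------------|---------|-----------|
| nas    | 192.168.1.10 | 0-7     | ZFS pool  |
| docker | 192.168.1.20 | 8-15    | Debian VM |
```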

[–] SamSausages@alien.top 1 points 10 months ago

I do this at the file system level, not the file level, using ZFS.

Unless the container has a database, I just use ZFS snapshots. If it has a database, my script dumps the database first and then takes a ZFS snapshot. Then that snapshot is sent via syncoid (sanoid's replication tool) to a ZFS disk in a different backup pool.

This is a block level backup, so it only backs up the actual data blocks that changed.
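
The script is roughly this shape (a minimal sketch; the dataset, container, and database names are all placeholders):

```
#!/bin/bash
# dump the database first so the snapshot captures a consistent copy
docker exec my-postgres pg_dump -U postgres mydb > /tank/appdata/my-postgres/dump.sql

# snapshot the dataset (block level, effectively instant)
zfs snapshot tank/appdata@backup-$(date +%Y%m%d)

# replicate to the backup pool; only changed blocks get sent
syncoid tank/appdata backup/appdata
```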

[–] SamSausages@alien.top 1 points 11 months ago (1 children)

I don't use photoprism, but I've experienced something similar with other docker containers. What is most likely happening is that something, like headers/ports, needs to be forwarded by NPM (Nginx Proxy Manager), usually by adding additional config in the "Advanced" tab in NPM.
Sorry, I'm not familiar enough with photoprism to know exactly what needs to be added to the config, but since nobody has replied, I thought this might at least give you a direction to search in.
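
As a starting point, the kind of thing that goes in NPM's "Advanced" tab looks like this (standard nginx directives; whether photoprism needs these exact ones is an assumption to verify):

```
# forward the original host and client details to the upstream app
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```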
