this post was submitted on 16 Jun 2023
62 points (98.4% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


How do you set up a server? Do you do any automation or do you just open up an SSH session and YOLO? Any containers? Is docker-compose enough for you or are you one of those unicorns who had no issues whatsoever with rootless Podman? Do you use any premade scripts or do you hand craft it all? What distro are you building on top of?

I'm currently in the process of "building" my own server, and I'm kinda wondering how "far" most people go, where y'all take shortcuts, and what you spend effort getting just right.

[–] VexCatalyst@lemmy.fmhy.ml 2 points 1 year ago

Generally, it's Proxmox, Debian, then whatever is needed for what I'm spinning up. Usually Docker Compose.

Lately I've been playing some with Ansible, but its use is far from common for me right now.

NixOS instances running Nomad/Vault/Consul. Each service behind Traefik with LE certs. Containers can mount NFS shares from a separate NAS which optionally gets backed up to cloud blob storage.

I use SSH and some CLI commands for deployment, but only because that's faster than CI/CD. For the most part I'm just running `nomad run …`.

The goal was to be resilient to single node failures and align with a stack I might use for production ops work. It’s also nice to be able to remove/add nodes fairly easily without worrying about breaking any home automation or hosting.
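
Not their exact config, but as a rough sketch of the "each service behind Traefik with LE certs" part, a minimal Traefik static configuration could look something like this (the resolver name, email, and the Consul catalog provider are assumptions, not taken from their setup):

```yaml
# traefik.yml (static configuration) -- illustrative values only
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  le:
    acme:
      email: admin@example.com        # placeholder contact address
      storage: /etc/traefik/acme.json # where issued certificates are persisted
      tlsChallenge: {}                # TLS-ALPN-01, no extra ports needed

providers:
  consulCatalog:
    exposedByDefault: false           # only route services that opt in via tags
```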

[–] Saigonauticon@voltage.vn 1 points 1 year ago

A cloud VPS with Debian. Then I fix/update whatever weird or outdated image my VPS provider gave me (over SSH). Then I set up SSH certs instead of passwords. I use tmux a lot. Sometimes I have local scripts with scp to move some files around.

Usually I'm just hosting mosquitto, maybe apache2 webserver and WordPress or Flask. The latter two are only for development and get moved to other servers when done.

I don't usually use containers.

I'm better at hardware development than all this newfangled web stuff, so mostly just give me a command line without abstractions and I'm happy.

[–] EddyBot@feddit.de 1 points 1 year ago

Probably the odd one out here with Arch Linux + Docker Compose and still a lot of manual labor.
Updating the OS every 4 weeks at the latest is enough; containers more often.

[–] master@lem.serkozh.me 1 points 1 year ago

A series of VPSes running AlmaLinux. I have a relatively big Ansible playbook to set everything up after the server goes online. The idea is that I can at any time wipe the server, install an OS, put in all the persistent data (Docker volumes and the /srv partition with all the heavy data), and run a playbook.
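
As a hypothetical sketch of what such a post-install playbook could look like (hosts, packages, and paths below are invented for illustration, not taken from their setup):

```yaml
# site.yml -- a minimal sketch, assuming a dnf-based AlmaLinux host
- hosts: almalinux_vps
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.dnf:
        name: [git, vim, htop]
        state: present

    - name: Create a service user
      ansible.builtin.user:
        name: svc
        shell: /bin/bash

    - name: Drop docker-compose project files onto the /srv data partition
      ansible.builtin.copy:
        src: files/compose/
        dest: /srv/compose/
        owner: svc
```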

Docker Compose for services, last time I checked Podman, podman-compose didn't work properly, and learning a new orchestration tool would take an unjustifiable amount of time.

I try to avoid shell scripts as much as possible because they are hard to write in a way that handles all possible scenarios, they are difficult to debug, and they can make a mess when not done properly. Premade scripts are usually the big offenders here, and they are a nice way to leave you without a single clue how the stuff they set up works.

I don't have a selfhosting addiction.

[–] neo@lemmy.hacktheplanet.be 1 points 1 year ago

I've recently switched my entire self hosted infrastructure to NixOS, but only after a few years of evaluation, because it's quite a paradigm shift but well worth it imho.

Before that I used to stick to a solid base of Debian with some docker containers. There are still a few of those remaining that I have yet to migrate to my NixOS infra (namely mosquitto, gotify, nodered and portainer for managing them).

[–] null@slrpnk.net 1 points 1 year ago

I usually set up SSH keys and disable password login.

Then I git-pull my base docker-compose stack that sets up:

  • Nginx proxy manager
  • Portainer
  • Frontend and backend networks

I have a handful of other docker-compose files that hook into that setup to make it easy to quickly deploy various services wherever in a modular way.
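
Their exact files aren't shown, but a base stack along those lines might look roughly like this (image tags, ports, and volume paths are guesses):

```yaml
# docker-compose.yml -- a rough sketch of such a base stack
version: "3.8"

services:
  proxy:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"                 # NPM admin UI
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt
    networks: [frontend, backend]

  portainer:
    image: portainer/portainer-ce:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer:/data
    networks: [backend]

networks:
  frontend:
  backend:
    internal: true              # backend is not reachable from the outside
```

Other compose projects can then hook into the same networks by declaring them as external, which is presumably what makes the modular "deploy wherever" part work.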

[–] sascamooch@lemmy.sascamooch.com 1 points 1 year ago (2 children)

I'd like to use rootless podman, but since I include zerotier in my containers, they need access to the tunnel device and net_admin, so rootless isn't an option right now.
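
For context, that container-side requirement usually translates into something like the following compose snippet (the image name and paths are assumptions, not the commenter's actual file):

```yaml
# compose snippet for a ZeroTier container -- illustrative only
services:
  zerotier:
    image: zerotier/zerotier:latest     # assumed image name
    devices:
      - /dev/net/tun                    # the tunnel device mentioned above
    cap_add:
      - NET_ADMIN                       # needed to configure the virtual interface
    volumes:
      - ./zerotier:/var/lib/zerotier-one
    restart: unless-stopped
```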

Podman-compose works for me. I'd like to learn how to use Ansible and Kubernetes, but right now, it's just my Lemmy VPS and my Raspberry Pi 4, so I don't have much need for automation at the moment. Maybe some day.

You can add net_admin to the user running Podman; I have added it to the ambient capability mask before, which acts like an inherited override for everything the user runs.

[–] myersguy@lemmy.simpl.website 1 points 1 year ago

Super interesting to me that you switch between Debian and Ubuntu. Is there any rhyme or reason to when you use one over the other?

[–] jlh@lemmy.jlh.name 1 points 1 year ago

Kubernetes.

I deploy all of my container/Kubernetes definitions from Github:

https://github.com/JustinLex/jlh-h5b/tree/main/applications

[–] Laura@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

For me it’s Ubuntu Server as the OS base, swag as reverse proxy and docker-compose for the services. So mostly SSH and yolo but with containers. I’d guess having something like Portainer running would probably be useful, but for me the terminal was enough.

For the folder structure, I just have a services directory with subfolders for each app/service.

[–] thomas@lemmy.douwes.co.uk 1 points 1 year ago

I have a stupid overcomplicated networking script that never works, so every time I set up a new server I need to fix a myriad of weird issues I've never seen before. Usually I set up a server with a keyboard and mouse because SSH needs networking; if it's a cloud machine, it's the QEMU console or hundreds of reboots.

[–] d3lta19@kbin.social 1 points 1 year ago

For years I've done an Ubuntu LTS base with Docker, but I've just recently started using a Debian base. Moved to Debian for my workstation as well.

[–] binwiederhier@discuss.ntfy.sh 1 points 1 year ago
[–] Elbullazul@lem.elbullazul.com 1 points 1 year ago

I run Debian + Docker, and use Portainer to manage the docker stacks

[–] sudneo@lemmy.world 0 points 1 year ago (1 children)

I have a bunch of different stuff: a dedicated server with Debian, 4 Raspberry Pis + 1 micro computer that acts as an LB/router/DHCP/DNS box for the Pis.

In general I would say that my logic is as follows:

  • Every OS change is done through Ansible. This is sometimes a pain: you want to just `apt install X` and instead you might need to create a new playbook for it, but in the long term it has paid off multiple times. I do have a default playbook that does basic config (user, SSH key provisioning, some default packages) and hardening (SSH config, iptables).
  • I then try to keep the OS logic to a minimum and do everything else as code. On my older dedicated server I mostly run docker-compose with systemd + templated docker-compose files dropped by Ansible. The Pis instead run Kubernetes with Flux, and all my applications are either managed directly via Flux or have Helm in between. This means I can destroy a cluster, create another one, point it at my Flux repository, and I am pretty much back where I started (see the sketch below).
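
As a generic illustration of the Flux side of that workflow (names, namespaces, and versions are placeholders, not taken from their repository):

```yaml
# a minimal Flux HelmRelease sketch -- placeholder names throughout
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: some-app
  namespace: apps
spec:
  interval: 10m
  chart:
    spec:
      chart: some-app
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: some-repo
        namespace: flux-system
  values:
    replicaCount: 1
```

Committing manifests like this to the Git repository that Flux watches is what lets a rebuilt cluster be reconciled back to the same state.
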
[–] laculacu@lemmy.world 1 points 1 year ago (1 children)

Sounds cool. Ansible could never convince me, though, because playbook writing is so annoying.

[–] sudneo@lemmy.world 1 points 1 year ago (1 children)

Oh, I am there with you on that. I got used to it in my previous job, where everything was done with Ansible, but I still find myself copy-pasting and changing things most of the time. I actually much prefer a declarative approach à la Terraform.

Overall though there is a lot of community material, and once the playbooks are written it's quite good!

[–] laculacu@lemmy.world 2 points 1 year ago

I guess if I automated my base setups with Ansible, so that I had a good foundation and had learned the tool properly, I would stick with it, but it was one of those cases where I was pushed away right from the start.
