this post was submitted on 03 Aug 2024
62 points (95.6% liked)

Selfhosted


We are changing our version control system. We settled on Git (but are open to alternatives), as long as we can self-host it on our own machines.

Specs

Must have

  • hosted on-premises
  • reliable
  • unlikely to be discontinued in the next 5+ years
  • for a group of at least 20 people

Plus

  • gui / windows integration
[–] chiisana@lemmy.chiisana.net 21 points 3 months ago (3 children)

I’m aware this is the selfhost community, but for a company of 20 engineers, it is probably best to use something commercial in the cloud.

The biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance, since he couldn't do them during business hours while the engineers were working. With a team of at least 20, scheduling downtime gets increasingly difficult.

It also adds an entire system to be audited by the auditors.

The self-host vs. buy-commercial calculus bounces back and forth. For smaller teams of fewer than 5 to 10 engineers, it might be a fun endeavour; but from that point on, until you reach mega-corp scale with a dedicated ops department maintaining your entire infrastructure, it is probably more effective to just pay for a solution from a major vendor in the cloud.

[–] catloaf@lemm.ee 11 points 3 months ago (1 children)

Git should be able to go down during the day. Worst case you just can't push to origin for a little while. You can still work and commit locally.
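The point about local commits surviving an origin outage can be sketched end-to-end; the temp-dir bare repo here is just a stand-in for the central server, and all names and paths are illustrative:

```shell
#!/bin/sh
# Sketch: an unreachable origin doesn't block work -- commits accumulate
# locally and are pushed in one batch once the server is back.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"       # stand-in for the central server
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email dev@example.com
git config user.name dev

echo one > file.txt
git add file.txt
git commit -qm "first change"              # works even if origin were down
echo two >> file.txt
git commit -qam "second change"            # and so does the next one

git push -q origin HEAD                    # later: both commits land at once
echo "pushed $(git rev-list --count HEAD) commits"
```

Nothing in the commit steps touches the network; only `push` (and `fetch`/`pull`) need the server, which is exactly why a short outage mostly just delays integration, not work.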

[–] chiisana@lemmy.chiisana.net 4 points 3 months ago (1 children)

No PRs means no automated tests/CI/CD, which means you'd slow down the release train. It might typically be a quick 2-minute cycle, but the one time it goes down for longer due to a botched update from upstream, you're never going to do that again during business hours.

Eh, we've had our self-hosted GitHub go down for a couple hours in the daytime, and it wasn't a big deal. We have something like 60 engineers spread across the globe, about 15-20 of whom were directly impacted by the outage (the rest were in different timezones). Yeah, it was annoying, but each engineer only creates 1 or 2 PRs in a given day, so they posted their PRs after the outage was resolved while working on something else. PRs were delayed by a couple hours, but the actual flow of work didn't change; we just had more stuff posted all at once after the problems were resolved.

In fact, GitHub would have to be out for 2 days straight before we'd actually impact delivery. An hour or two here and there really isn't an issue, especially if the team has advance notice (most of the hit to productivity is everyone trying to troubleshoot at the same time: is it my VPN? Did the wifi die? etc.).

[–] swooosh@lemmy.world 5 points 3 months ago (3 children)

Nope. Hosting in the cloud isn't possible due to legal reasons.

I don't think that downtimes are a serious issue for us.

[–] chiisana@lemmy.chiisana.net 5 points 3 months ago (1 children)

Must be very unique sector. Good luck with your explorations!

[–] swooosh@lemmy.world 1 points 3 months ago

It is :) thanks!

[–] sugar_in_your_tea@sh.itjust.works 2 points 3 months ago* (last edited 3 months ago)

We have similar (legal is paranoid about our competitors getting our algorithms), so we just put our self-hosted cloud stuff behind our VPN. Nothing we run is on-prem, but almost everything is in our cloud infra.

[–] carl_dungeon@lemmy.world 2 points 3 months ago

In our case cloud is fine, as long as it's within our security boundary, so external SaaS is out, but hosting within our own cloud is fine. I'm still not super excited about the prospect of managing and maintaining it though :/ We're going down this path because AWS is killing CodeCommit and other pipeline stuff, which sucks because even though other tools are better, CodeCommit was FedRAMP-authorized and came from the same vendor.

[–] corsicanguppy@lemmy.ca 5 points 3 months ago (1 children)

> Biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance

This is weird.

Hosts selected for updates will be unavailable from 2100-2110 or so. Then they're up.

They're done by at/cron if they're selected.

There's no manual work if the monitoring system thinks they're okay.
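The at/cron approach described here can be sketched as a cron fragment. This is only an illustration: the package name, log path, schedule, and the use of GitLab's `/-/health` endpoint as the post-upgrade check are assumptions to adapt.

```
# Hypothetical /etc/cron.d/gitlab-upgrade: unattended upgrade in the
# 21:00 Saturday window, then a health probe ten minutes later.
0 21 * * 6   root  apt-get -qq update && apt-get -y install gitlab-ce >> /var/log/gitlab-upgrade.log 2>&1
10 21 * * 6  root  curl -fsS http://localhost/-/health >> /var/log/gitlab-upgrade.log 2>&1 || systemctl restart gitlab-runsvdir
```

If the monitoring system already probes the host, the second line is redundant; the point is that no human needs to stay behind for the window.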

GitLab CE on-prem. Although that may now suck since they're being bought out, and we all know how that went for Red Hat.
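For reference, GitLab CE self-hosts comfortably from a single container. A minimal compose sketch, assuming the official `gitlab/gitlab-ce` image; hostname, host ports, and volume paths are placeholders:

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.internal   # placeholder; use your internal FQDN
    ports:
      - "443:443"
      - "2222:22"                       # git-over-SSH on a non-conflicting host port
    volumes:
      - ./config:/etc/gitlab            # gitlab.rb and secrets
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab          # repositories, database, uploads
    shm_size: "256m"
    restart: unless-stopped
```

Backing up then amounts to snapshotting the three bind mounts (or running the built-in backup task), which keeps the "unlikely to be discontinued" risk manageable: the data stays portable.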

[–] sugar_in_your_tea@sh.itjust.works 1 points 3 months ago* (last edited 3 months ago)

That really depends on who buys them. If it's somebody like Datadog, maybe that's a good thing and they can compete with GitHub better. It's probably not great for self-hosters, but it could be a great thing for the commercial software ecosystem.