this post was submitted on 12 Jun 2023
196 points (100.0% liked)

sh.itjust.works Main Community

Home of the sh.itjust.works instance.

I'm aware that some of you have been getting errors loading this instance. This was caused by a configuration value that needed to be adjusted, which has since been done.

Do be patient if we run into other issues; I'll be continuously working on the back-end with others to improve your overall experience.

Still a lot of capacity available!

edit: For those who are interested in the configuration change: I was using the default Lemmy configuration for the nginx worker_connections value, and this value needed to be raised. Shout out to @ruud@lemmy.world over at lemmy.world for helping me out.
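For context, worker_connections caps how many simultaneous connections each nginx worker process may hold open, and a reverse-proxied request consumes two of them (the client side plus the upstream side), so a busy instance can exhaust the default quickly. A minimal sketch of the kind of change involved; the value shown is an assumption, since the post doesn't say what the instance actually uses:

```nginx
# nginx.conf -- events block. The figure below is illustrative,
# not sh.itjust.works' real setting; stock configs commonly ship
# with something in the 512-1024 range.
events {
    worker_connections 8192;
}
```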

[–] player1@sh.itjust.works 33 points 1 year ago

Funny enough, I'd say Shit Doesn't Just Work - it takes your hard work. Thanks for helping fill the Reddit void @TheDude@sh.itjust.works

[–] Whooping_Seal@sh.itjust.works 16 points 1 year ago (1 children)

Just out of curiosity, would it help if, when posting images, we used services such as Imgur (or alternatives)? I'm assuming that if there are storage issues, those types of posts are the biggest culprit.

Thank you for hosting this server @TheDude :)

[–] TheDude@sh.itjust.works 21 points 1 year ago (3 children)

Yes, 100%. Using an external image hosting service would reduce how much storage is being consumed.

[–] csm10495@sh.itjust.works 6 points 1 year ago (1 children)

Would be cool if there was a plugin or something to link with imgur, etc.

Have you considered disabling image upload?

[–] TheDude@sh.itjust.works 3 points 1 year ago

Yes, this is an option; however, lemmy-ui, which is responsible for displaying this page, would still show the upload option. It could be manually removed, and that might be something I look into at a later time.
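One way to picture that (a sketch of my own, not something the admin described; the /pictrs/image path and the upstream name are assumptions based on a stock Lemmy deployment): uploads could be rejected at the reverse proxy even while the UI button remains visible.

```nginx
# Illustrative only: refuse new image uploads at the proxy while
# still serving existing images. "lemmy-backend" is a hypothetical
# upstream name, not the instance's real config.
location /pictrs/image {
    limit_except GET {      # GET (and HEAD) pass through;
        deny all;           # POST uploads are rejected
    }
    proxy_pass http://lemmy-backend;
}
```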

[–] PCChipsM922U@sh.itjust.works 3 points 1 year ago* (last edited 1 year ago)

As someone who is still hosting a forum, I would not suggest this. I have also tried to reduce a forum's footprint using external image hosting services, and it has always ended badly (images get lost as a hosting service changes policy, perhaps disabling direct linking, or simply closes its doors, whatever).

This is one of the reasons why I'm not tempted to spin up a Lemmy server, even though my hosting plan allows 3 subdomains. Used space will rise quickly, even if it's just images... and if videos are allowed as well, then all hell will break loose, even if they are processed and reduced in quality and resolution server-side. I've seen it happen before; it's a nightmare to revert things afterwards (users complain), not to mention you can't undo the damage - that space is taken, and that's that.

My estimate is that, if only images are allowed and they are converted to WebP, it'll take about a week for a quite busy instance to reach the 1 GB mark... probably a lot faster if it's an NSFW instance (a few days). Think about this from a migration perspective after 1 year - it will be a nightmare.
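As a rough sanity check on that figure, using assumed numbers of my own (roughly 150 KB per WebP image and about 1,000 uploads per day for a busy instance):

```latex
150\,\text{KB} \times 1000\,\tfrac{\text{uploads}}{\text{day}} \times 7\,\text{days} \approx 1.05\,\text{GB}
```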

[–] this@sh.itjust.works 15 points 1 year ago

Congratulations, you have now beaten reddit itself for uptime, lmfao.

[–] _haha_oh_wow_@sh.itjust.works 15 points 1 year ago

Thank you for hosting this and keeping us informed!

[–] jord@sh.itjust.works 13 points 1 year ago

Thanks for addressing it so quickly!

[–] can@sh.itjust.works 12 points 1 year ago (1 children)

Are things scaling well then? Is it still alright to recommend new people come here or would you rather hold off for a bit now?

[–] TheDude@sh.itjust.works 14 points 1 year ago (16 children)

I figured out the storage issue last night. The instance is only at about 20% utilization, so we should be good to take on a good amount more. We'll probably need to do some more tweaking as we grow, but for now it's looking pretty good!

[–] VinS@sh.itjust.works 7 points 1 year ago

In Montreal, do we need to send you some hard drives?

[–] falkerie71@sh.itjust.works 12 points 1 year ago (1 children)

Thanks for the update! Any plans for a separate channel outside of sh.itjust.works, like Discord or Mastodon, just in case, to give people a heads-up if the server goes down or is under maintenance?

[–] PolDelta@sh.itjust.works 15 points 1 year ago (1 children)

I like the idea of a status account on Mastodon. That’s probably a little more accessible than Discord.

[–] borari@sh.itjust.works 12 points 1 year ago

Matrix would also be a viable alternative to Discord.

[–] TheDude@sh.itjust.works 12 points 1 year ago (1 children)

If anyone starts getting any weird errors, please do let me know, but everything still seems to be running smoothly from my side.

[–] csm10495@sh.itjust.works 3 points 1 year ago (3 children)

I tried to create a new community a couple of times a bit ago and it hung. I'll try again tomorrow.

[–] larktreblig@sh.itjust.works 11 points 1 year ago

Thanks for the update, I really like the transparency.

[–] Barbarian@sh.itjust.works 10 points 1 year ago

Great to hear, and thanks for the update!

[–] starrox@sh.itjust.works 9 points 1 year ago

Thanks dude. The performance here is really excellent in my opinion. Had a couple of page errors but nothing that couldn't be fixed by reloading.

[–] Echolot@sh.itjust.works 9 points 1 year ago (1 children)

Would love to hear some more details on the misconfiguration if you want to share @TheDude@sh.itjust.works

[–] TheDude@sh.itjust.works 9 points 1 year ago (1 children)

Just updated the thread post to include more information for your curious, beautiful mind.

[–] taladar@sh.itjust.works 8 points 1 year ago

Not sure if you've already done this or the default config does it, but if you only have one backend in an nginx reverse proxy, it may also make sense to configure the max_fails (and possibly fail_timeout) options so nginx won't consider your backend down for a few seconds every time it receives a TCP error connecting to that (single) backend. max_fails=0 in the named upstream section (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) is what you want here.
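A minimal sketch of that setting (the upstream name and address are assumptions; 8536 is merely Lemmy's usual backend port):

```nginx
# Single-backend upstream with failure accounting disabled, per
# the nginx docs linked above. Names and ports are illustrative.
upstream lemmy-backend {
    # max_fails=0 stops nginx from ever marking this lone backend
    # as unavailable after a transient connection error.
    server 127.0.0.1:8536 max_fails=0;
}
```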

[–] manifex@sh.itjust.works 8 points 1 year ago

Thanks @TheDude! Scaling systems like this is always a challenge and you really get to learn the performance quirks of the code. Thanks for all your work.

[–] Weird_With_A_Beard@sh.itjust.works 8 points 1 year ago (1 children)

And through it all, the Dude abides.

Thanks for hosting, thanks for the expertise, thanks for the response time!

[–] agreenbhm@sh.itjust.works 7 points 1 year ago* (last edited 1 year ago) (1 children)

@TheDude@sh.itjust.works does Lemmy support a distributed configuration with multiple database and app servers, or are you limited to a single instance of everything?

[–] TheDude@sh.itjust.works 13 points 1 year ago (3 children)

The officially supported deployments are single-instance based; that being said, things are already broken up into separate Docker containers, so it should be pretty easy to do. I would need to do some testing beforehand. If this instance continues growing this way, I'll need to look into scaling horizontally instead of vertically.
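To illustrate what horizontal scaling could look like at the proxy layer (purely a sketch; the hostnames are made up and nothing here reflects the instance's actual setup), nginx can spread traffic across several app containers:

```nginx
# Hypothetical horizontal scaling: two app servers behind one
# upstream, load-balanced round-robin by default.
upstream lemmy-backend {
    server app1.internal:8536;
    server app2.internal:8536;
}
```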

[–] httpjames@sh.itjust.works 5 points 1 year ago* (last edited 1 year ago)

Check out CockroachDB, a distributed SQL database compatible with PostgreSQL clients.

[–] can@sh.itjust.works 4 points 1 year ago (1 children)

> If this instance continues growing this way I'll need to look into scaling horizontally instead of vertically

Could you elaborate on what this means?

[–] isildun@sh.itjust.works 6 points 1 year ago* (last edited 1 year ago) (1 children)

It's better to have more servers, as opposed to one ultra-powerful server, because the ultra-powerful server tends to be more expensive than an equivalent-strength collection of weaker servers. Also, you pay for all that power even during the times you don't need it, whereas you can remove or add weaker servers as necessary.

[–] agreenbhm@sh.itjust.works 3 points 1 year ago

Thanks for the info! Keep up the great work man.

[–] quazen@sh.itjust.works 6 points 1 year ago

First comment here after coming from Reddit; this looks really clean and simple, so good job :)

[–] ImFresh3x@sh.itjust.works 5 points 1 year ago

Thanks. Glad I’m here.

[–] PCChipsM922U@sh.itjust.works 5 points 1 year ago

worker_connections basically means how many concurrent connections can be established, right?

[–] Provenscroll@sh.itjust.works 4 points 1 year ago

Glad to have an update on this, thanks!

[–] scarrexx@sh.itjust.works 4 points 1 year ago

Thanks for the fix @TheDude@sh.itjust.works.

[–] Mstraa@sh.itjust.works 4 points 1 year ago

Thank you for this great server!

[–] oxideSeven@sh.itjust.works 4 points 1 year ago (1 children)

My all/hot feed on here hasn't updated in about 10 hours. Is that something with our instance?

Checked on a different instance and it seems to work fine.

[–] HoochIsCrazy@sh.itjust.works 3 points 1 year ago

What a dude. Thanks for your work

[–] mrmanager@lemmy.today 3 points 1 year ago (1 children)

> Users should spread out across instances - that is the entire point of being distributed and federated.

Absolutely. At the same time, when you have a bunch of users migrating from a centralized platform who don't quite get the fediverse stuff yet, they're likely going to go to the well-populated instances.

Hopefully, as things progress, messaging around this point will get through to new folks (like me!).

[–] Poot@sh.itjust.works 3 points 1 year ago

Really glad to have a home here. Thank you for your work!!!!

[–] dzaffaires@sh.itjust.works 3 points 1 year ago

Thanks for the great server! I've read your sticky post saying that storage was the resource growing most unexpectedly. Can you share what kind of storage space you're talking about? I could be interested in spinning up an instance in the future, and I haven't seen any documentation or info on that.

[–] Contentedness@sh.itjust.works 3 points 1 year ago

Nice work dude
