this post was submitted on 31 Jul 2023
192 points (89.0% liked)


Ugh.

[–] RoundSparrow@lemmy.ml -3 points 1 year ago (2 children)
[–] favrion@lemmy.ml 7 points 1 year ago (2 children)

This is what happens when people don't understand federation.

[–] 1984@lemmy.today 1 points 1 year ago

Yep. Sitting on Lemmy.today browsing Lemmy.world posts right now... so I don't know. I really advise people not to have just one account. :)

[–] RoundSparrow@lemmy.ml -3 points 1 year ago (1 children)

Do you know of the site_aggregates federation TRIGGER issue lemmy.ca exposed?

[–] favrion@lemmy.ml 5 points 1 year ago (1 children)

No. Care to explain please?

[–] RoundSparrow@lemmy.ml -2 points 1 year ago* (last edited 1 year ago) (3 children)

On Saturday July 22, 2023... the SysOp of Lemmy.ca got so frustrated with the constant overload crashes that they cloned their PostgreSQL database and ran auto_explain on it. They found 1675 rows being written to disk (massive I/O, PostgreSQL WAL activity) for every single UPDATE statement on a comment or post. They shared the details on GitHub, and the PostgreSQL TRIGGER used by Lemmy 0.18.2 and earlier came under scrutiny.
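For anyone who wants to reproduce that kind of measurement, the session-level auto_explain settings below are a minimal sketch of how per-statement and per-trigger row counts get surfaced; the exact settings lemmy.ca used aren't quoted in this thread.

```sql
-- Minimal auto_explain setup (illustrative; not the exact settings lemmy.ca used).
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log the plan for every statement
SET auto_explain.log_analyze = on;            -- include actual row counts (rows=...)
SET auto_explain.log_triggers = on;           -- include time spent in trigger functions
SET auto_explain.log_nested_statements = on;  -- log statements executed inside triggers

-- With this in place, a single-comment UPDATE that fires a runaway trigger
-- shows up in the PostgreSQL log with plan lines like rows=1675 instead of rows=1.
```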

[–] sabreW4K3@lemmy.tf 3 points 1 year ago (1 children)

You've become fixated on this issue, but if you look at the original bug, phiresky says it's fixed in 0.18.3.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

The issue isn't who fixed it; the issue is the lack of testing to find these bugs. It was there for years before anyone noticed it was hammering PostgreSQL on every new comment and post to update data that the code never read back.

There have been multiple data overrun situations, wasting server resources.

[–] sabreW4K3@lemmy.tf 2 points 1 year ago

But now Lemmy has you and Phiresky looking over the database and optimizing, so things like this should be found a lot quicker. I think you probably underestimate your value and the gratitude people feel for your insight and input.

[–] favrion@lemmy.ml 2 points 1 year ago (2 children)
[–] fiat_lux@kbin.social 3 points 1 year ago (2 children)

Every time you perform an action like commenting, you expect it to update a few things. The post increases its comment count so that gets updated, your comment is added to the list so those links are created, your comment is written to the database itself, etc. Each action has a cost; let's say every update costs a dollar. Then each comment would cost $3, $1 for each action.

What if, instead of doing 3 things each time you posted a comment, it did 1300 things? And it did the same for everyone else posting a comment. Each comment now costs $1300. You would run out of cash pretty quickly unless you were a billionaire. Using computing power is like spending cash, and lemmy.world are not billionaires.

[–] RoundSparrow@lemmy.ml 2 points 1 year ago

What if, instead of doing 3 things each time you posted a comment, it did 1300 things? And it did the same for everyone else posting a comment.

Yes, that is what was happening in Lemmy before lemmy.ca called it out with PostgreSQL auto_explain on Saturday, 8 days ago.

[–] RoundSparrow@lemmy.ml 0 points 1 year ago (1 children)

rows=1675 was the actual number on Saturday in July 2023.

rows=1675 from lemmy.ca here: https://github.com/LemmyNet/lemmy/issues/3165#issuecomment-1646673946

[–] fiat_lux@kbin.social 2 points 1 year ago

Brutal. This is why I don't go near databases unless I have to.

[–] RoundSparrow@lemmy.ml -3 points 1 year ago* (last edited 1 year ago)

What are you asking for? lemmy.ml is the official developers' server, and it crashes constantly; it has been erroring out every 10 minutes for 65 days in a row.

[–] r00ty@kbin.life 1 points 1 year ago* (last edited 1 year ago) (1 children)

I don't know that it's a DB design flaw if we're talking about federation messages to other instances' inboxes (creating rows of that magnitude per update does sound like outbound federation messages to me). Those need to be added somewhere. On kbin, if installed using the instructions as-is, we're using RabbitMQ (but there is an option to write to the DB). But failures still end up hitting SQL, and RabbitMQ still stores the messages on the drive. So unless you have a dedicated, separate RabbitMQ server, it makes little difference in terms of hits to storage.

It's hard to avoid storing them somewhere: you need to know when they've been sent, and if there are temporary errors, store them until they can be sent. There needs to be a way to recover from a crash/reboot/restart of services and to handle other instances being offline for a short time.
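Whatever backend holds the queue, that bookkeeping boils down to something like the hypothetical table below. This is a generic sketch, not kbin's or Lemmy's actual schema, and every name in it is invented for illustration.

```sql
-- Hypothetical outbound-delivery queue; not kbin's or Lemmy's real schema.
CREATE TABLE outbound_activity (
    id           bigserial PRIMARY KEY,
    inbox_url    text        NOT NULL,           -- target instance's inbox
    payload      jsonb       NOT NULL,           -- the federation message
    attempts     integer     NOT NULL DEFAULT 0, -- retry counter
    next_retry   timestamptz NOT NULL DEFAULT now(),
    delivered_at timestamptz                     -- NULL until confirmed sent
);

-- After a crash or restart, a delivery worker re-reads whatever is still
-- undelivered and due for retry:
SELECT id, inbox_url, payload
  FROM outbound_activity
 WHERE delivered_at IS NULL
   AND next_retry <= now()
 ORDER BY id
 LIMIT 100;
```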

EDIT: Just read the issue (it's linked a few comments down). It actually looks like a weird pgsql reaction to a trigger, not based on the number of connected instances like I thought.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

(creating rows of that magnitude per update does sound like outbound federation messages to me)

rows=1675 from lemmy.ca here: https://github.com/LemmyNet/lemmy/issues/3165#issuecomment-1646673946

It was not about outbound federation messages. It was about counting the number of comments and posts for the sidebar on the right of lemmy-ui, which shows statistics about the content. site_aggregates is about counting.

[–] r00ty@kbin.life 1 points 1 year ago

Yep, I read through it in the end. It looks like the trigger was applying changes to every row in the table instead of just one. The first part of my comment was based on reading comments here; I'd not seen the link to the issue at that stage, hence the edit I made.
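For anyone skimming, the difference is roughly the one sketched below. This is a simplified illustration of the failure mode, not the actual trigger code from Lemmy 0.18.2 or the 0.18.3 fix; see the linked issue for the real statements.

```sql
-- Simplified sketch of the failure mode; not Lemmy's actual trigger code.

-- Problematic shape: no WHERE clause, so one new comment rewrites every row
-- in site_aggregates (~1675 rows, one per known site).
UPDATE site_aggregates
   SET comments = comments + 1;

-- Intended shape: only the local site's counter row is touched.
UPDATE site_aggregates
   SET comments = comments + 1
 WHERE site_id = 1;  -- the local site's id; hard-coded here for illustration
```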

[–] RoundSparrow@lemmy.ml -2 points 1 year ago (1 children)

Latest, at the time of this comment: still over 4 SECONDS

[–] RoundSparrow@lemmy.ml -2 points 1 year ago

Fresh as of comment time: