this post was submitted on 12 Jun 2023
341 points (98.6% liked)
Lemmy.World Announcements
Any progress on this? I've been thinking about it too. A couple of ideas (a quick way to check the first two is sketched below):
- Too many indexes needing to be updated when an insert occurs?
- Are there any triggers running on insert?
- Unlikely, but is there a disk write bottleneck? It might be worth running some benchmarks from the VM shell.
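For the first two points, something like this would list what PostgreSQL has to touch on every insert. It's only a rough sketch: it assumes Python with psycopg2, that posts live in a table named `post`, and the connection string is a placeholder.

```python
# Rough diagnostic sketch, not an official Lemmy tool. Assumes the posts
# table is named "post"; adjust if the schema differs.
import psycopg2

DSN = "dbname=lemmy user=lemmy host=localhost"  # placeholder connection string
TABLE = "post"  # assumed table name

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # Indexes PostgreSQL must update on every insert into this table.
        cur.execute(
            "SELECT indexname, indexdef FROM pg_indexes WHERE tablename = %s",
            (TABLE,),
        )
        print("Indexes:")
        for name, definition in cur.fetchall():
            print(f"  {name}: {definition}")

        # Triggers that fire around inserts on this table.
        cur.execute(
            """
            SELECT trigger_name, action_timing, event_manipulation
            FROM information_schema.triggers
            WHERE event_object_table = %s
            """,
            (TABLE,),
        )
        print("Triggers:")
        for name, timing, event in cur.fetchall():
            print(f"  {name}: {timing} {event}")
```

For the disk question, running `pg_test_fsync` or `fio` from the VM shell would give a baseline for write latency.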
I was thinking that as well. It's like the post gets "checked" somehow and that hits a 20-second timeout. It could be the API or the database, but my spidey sense says this could well be in the code: some extra calls to filter things, maybe? A call to an external server? Or even the propagation to the other instances? (I don't know how federation connects to the other servers; it could be just that, with another server as the bottleneck.) I just find the 20 seconds suspicious, given that it's a common default timeout.
Didn't know about the timeout, but that makes sense. It would be easy to test by changing the nginx timeout and timing a post submission, something like the probe below.
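A rough way to time it (purely a sketch, not an official test procedure: the endpoint path, payload fields, and JWT are assumptions about the Lemmy API and would need checking against the version actually running):

```python
# Timing probe: submit a post via the HTTP API and report how long it took.
# Endpoint, payload, and credentials below are assumptions/placeholders.
import time
import requests

API_URL = "https://lemmy.world/api/v3/post"  # assumed endpoint path
PAYLOAD = {
    "name": "timeout probe",  # post title
    "community_id": 1,        # placeholder community id
    "auth": "YOUR_JWT_HERE",  # placeholder credential
}

start = time.monotonic()
try:
    resp = requests.post(API_URL, json=PAYLOAD, timeout=60)
    elapsed = time.monotonic() - start
    print(f"HTTP {resp.status_code} after {elapsed:.1f}s")
except requests.RequestException as exc:
    elapsed = time.monotonic() - start
    print(f"Request failed after {elapsed:.1f}s: {exc}")
```

If failures consistently land right at ~20 s and the cutoff moves when nginx's proxy timeout is raised, nginx is the layer enforcing it; if requests still stall at 20 s, the limit is further down in the app or database.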
Another thought: how many DB connections do you have? Could inserts be starved because so many selects are happening and they have to wait for those to finish first?
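One way to check (again just a sketch, assuming psycopg2 and a role allowed to read `pg_stat_activity`; the connection string is a placeholder):

```python
# Check connection states and what active queries are waiting on.
import psycopg2

DSN = "dbname=lemmy user=lemmy host=localhost"  # placeholder connection string

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # How many connections are in each state (active, idle,
        # idle in transaction, ...).
        cur.execute(
            "SELECT state, count(*) FROM pg_stat_activity GROUP BY state"
        )
        for state, count in cur.fetchall():
            print(f"{state}: {count}")

        # Active queries currently waiting, and what they wait on.
        cur.execute(
            """
            SELECT pid, wait_event_type, wait_event,
                   now() - query_start AS waited, query
            FROM pg_stat_activity
            WHERE wait_event IS NOT NULL AND state = 'active'
            ORDER BY waited DESC
            """
        )
        for pid, wtype, wevent, waited, query in cur.fetchall():
            print(f"pid {pid} waiting {waited} on {wtype}/{wevent}: {(query or '')[:80]}")
```

Comparing the per-state counts against PostgreSQL's `max_connections` and the pool size configured for Lemmy would show whether selects are actually crowding out the inserts.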