this post was submitted on 21 Oct 2024
873 points (98.8% liked)

Technology
[–] chalupapocalypse@lemmy.world 17 points 1 month ago (2 children)

They would have to hire a shitload of people to police it all, along with the rest of the questionable shit on there, like jailbait or whatever else they turned a blind eye to until it showed up on the news.

Not saying it's right, but from a business standpoint it makes sense.

[–] brucethemoose@lemmy.world 5 points 1 month ago* (last edited 1 month ago) (1 children)

Don't they flag stuff automatically?

Not sure what they're using on the backend, but open-source LLMs that take image inputs are good now. Like, they can read garbled text from a meme and interpret it in context, easily. And this is apparently a field that's been refined over years, due to the legal need for CSAM detection anyway.
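For illustration, here's a minimal sketch of how you might hand an image to a locally hosted open-source multimodal model, assuming it exposes an OpenAI-compatible chat endpoint (as llama.cpp and vLLM do). The model name, prompt, and endpoint are all hypothetical; this only builds the request payload, it doesn't send it:

```python
import base64
import json

def build_flag_request(image_bytes: bytes, model: str = "llava-example") -> dict:
    """Build an OpenAI-compatible vision chat payload asking a model to
    flag an image. Model name and prompt are illustrative, not real config."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Does this image violate the content policy? Answer FLAG or OK."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
        "max_tokens": 8,
    }

# This dict is what you'd POST to the server's /v1/chat/completions route.
payload = build_flag_request(b"\x89PNG(fake image bytes)")
print(json.dumps(payload)[:40])
```

The `image_url`-with-data-URI shape is the standard way these servers accept inline images, so the same payload works against different backends.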

[–] T156@lemmy.world 2 points 1 month ago (1 children)

They do, but they'd still need someone to go through the flags and check. Reddit gets away with it the same way Facebook groups do: by offloading the moderation to users, with the admins only being roped in for ostensibly big things like ban evasion/site-wide bans, or lately, if the moderators don't toe the company line exactly.

I doubt that they would use an LLM for that. That's very expensive and slow, especially for the volume of images that they would need to process. Existing CSAM detectors aren't as expensive, and are faster. They basically compute a hash for the image, and compare it to known hashes for CSAM.
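A toy sketch of that hash-and-compare idea: real systems like PhotoDNA use robust perceptual hashes and curated databases, but the shape of the check is a cheap hash plus a distance comparison. The average hash, the thresholds, and the known-hash set below are all made up for illustration:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash: threshold each pixel of a small grayscale
    grid against the mean brightness, packing the bits into an integer."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (1 if p >= mean else 0)
    return h

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_match(img_hash: int, known_hashes: set[int], max_dist: int = 4) -> bool:
    """Flag if the hash is within a small Hamming distance of any known hash."""
    return any(hamming(img_hash, k) <= max_dist for k in known_hashes)

# Two nearly identical 4x4 "images" hash close together and match.
img = [[10, 200, 10, 200]] * 4
near_copy = [[12, 198, 10, 200]] * 4
known = {average_hash(img)}
print(is_known_match(average_hash(near_copy), known))  # prints True
```

The point of the distance threshold is that re-encoded or slightly edited copies still land near the original hash, which is why this stays far cheaper than running a model over every upload.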

[–] brucethemoose@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Small LLMs are quite fast these days, even the multimodal ones. Same with small models explicitly used to filter diffusion output.

[–] ripcord@lemmy.world 0 points 1 month ago

A shitload of people, like as many as 10!