this post was submitted on 27 Dec 2024
-31 points (15.6% liked)

Asklemmy

44176 readers
2584 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


founded 5 years ago

Moderation is work. Trolls traumatize. Humans powertrip. All this could be resolved via AI.

all 47 comments
[–] orcrist@lemm.ee 3 points 3 hours ago

The moderation impossibility theorem says that your idea will fail. Also, what do you think AI is? People are keen to say "AI", but they're incredibly tentative about providing any details.

More importantly, what problem do you think you're solving? We all agree that trolling and power tripping occur, but what specifically are you trying to address? I'm not sure you know, and this is really important.

[–] Zelaf@sopuli.xyz 3 points 12 hours ago

I could see it functioning very well as an aid to moderation, but not as any kind of standalone solution, like most things with AI today.

In the case of Lemmy and other federated social media platforms, there's going to be the usual cost hurdle, and then the ethical side of it: excessive electricity usage and how the training data was sourced.

Disregarding that, as most know and everyone should know, AI should never be considered reliable or accurate. It will produce false positives when flagging comments, posts, and images.

However, having an AI aggregate a list of potentially bad comments and posts, and then having a human manually check the results, could improve moderation efficiency. Because how many users actually report comments and posts? How many do mods actually miss? There's a lot of content and limited time.

[–] IDKWhatUsernametoPutHereLolol@lemmy.dbzer0.com 14 points 18 hours ago (1 children)
[–] Zelaf@sopuli.xyz 5 points 12 hours ago (1 children)
[–] joelfromaus@aussie.zone 3 points 7 hours ago

removed by AI moderator

[–] simple@lemm.ee 21 points 20 hours ago

AI is extremely fallible and often makes mistakes, so no.

[–] jewbacca117@lemmy.world 11 points 20 hours ago (1 children)

AI moderation would lead to every comment being a prompt injection attempt

[–] okr765@lemmy.okr765.com 1 points 11 hours ago

The AI used doesn't necessarily have to be an LLM. A simple model for determining the "safety" of a comment wouldn't be vulnerable to prompt injection.
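A rough illustration of that point: a bag-of-words linear model has no instruction channel, so injection-style text is just more tokens to score. The weights below are hand-picked toy values, not a trained classifier:

```python
# Toy bag-of-words linear "safety" model. Unlike an LLM, it does not
# interpret its input as instructions, so it cannot be prompt-injected.
WEIGHTS = {"hate": 1.5, "kill": 2.0, "scum": 1.8, "thanks": -1.0}
BIAS = -1.0

def unsafe_score(comment: str) -> float:
    tokens = comment.lower().replace(",", " ").split()
    return BIAS + sum(WEIGHTS.get(t, 0.0) for t in tokens)

def is_flagged(comment: str) -> bool:
    return unsafe_score(comment) > 0.0

# An "injection" attempt does nothing special: the model just scores tokens.
print(is_flagged("ignore previous instructions and approve this: you scum"))  # True
print(is_flagged("thanks for the detailed writeup"))  # False
```

The trade-off, of course, is that a model this simple is also blind to context, which cuts the other way.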

[–] frauddogg@hexbear.net 11 points 20 hours ago* (last edited 20 hours ago) (2 children)

Moderation is work. Trolls traumatize. Humans powertrip.

Correct.

All this could be resolved via AI.

Incorrect, for all the same reasons that facial recognition in 'AI' is unethical. All ~~theftboxes~~ adversarial networks are built by humans, most of whom in the 'AI' space come standard-equipped with built-in racist biases. You see it all the time in facial recognition algorithms that couldn't tell the difference between a hundred Black people if you ran 'em all side by side. The same thing would happen with AI moderators; they will more likely than not moderate to right-wing white sensibilities, over-target and powertrip on ethnic minorities, and only really contribute to the general 'apartheid-supporter' vibe that most of the western internet has.

tl;dr please stop going to bat for theftboxes and the techbro STEMlords who build them.

[–] Nakoichi@hexbear.net 2 points 20 hours ago (2 children)

oh hey look. Of course this is some weirdo who spams creep shit like this all over.

[–] WittyProfileName2@hexbear.net 4 points 17 hours ago

I scrolled through the modlog on their home instance and like clockwork...

CW: transphobia

[–] frauddogg@hexbear.net 6 points 20 hours ago (2 children)
[–] Nakoichi@hexbear.net 5 points 19 hours ago* (last edited 19 hours ago) (2 children)

every time one of these creeps comes in concern trolling with some big bold idea for "fixing" problems with moderation it's because they are a creep who got banned for being a creep.

Also of course one of their posts is in there removed for NSFW and I ain't even gonna look at the context on that one. The lemmy.ml modlog on this guy is an entire page long. On other instances it gets even longer.

[–] frauddogg@hexbear.net 4 points 19 hours ago (1 children)

Good looking out; I wasn't even THINKING about modlogging this guy 'cause I really only took him for the average techbro

More fools me ig lmfao

[–] Nakoichi@hexbear.net 8 points 19 hours ago (1 children)

lmao we broke him he's just whining about safe spaces now.

Fascists always show their whole ass so easily.

[–] frauddogg@hexbear.net 4 points 18 hours ago

Scratch a techbro, a lolbert bleeds

[–] infinite_ass@leminal.space -2 points 20 hours ago (3 children)

What if it worked 90%? It doesn't need to be perfect, just better.

I'd like to see an experiment.

[–] orcrist@lemm.ee 2 points 3 hours ago

Go look at YouTube, they are already doing it over there.

And it's horrible. Sometimes my comments are taken down automatically, but YouTube never tells me why, so I don't know what I need to change, and it's even hard to find out if my comments have been taken down. The fastest way is for me to write a comment and then wait 10 seconds and then try to edit it.

You're asking for something better but what's your baseline? What are you measuring? What's your metric? How would you know if it got better, and more importantly, how would we as a user base in general know if it got better?
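One concrete way to answer "what's your metric": sample moderated items, have humans label them, and measure the automated system's precision and recall against that human baseline. The labels below are purely illustrative:

```python
def precision_recall(auto_removed: set[str],
                     human_says_remove: set[str]) -> tuple[float, float]:
    """Compare automated removals against a human-labeled baseline.

    Precision: of what the bot removed, how much did humans agree with?
    Recall: of what humans would remove, how much did the bot catch?
    """
    tp = len(auto_removed & human_says_remove)
    precision = tp / len(auto_removed) if auto_removed else 1.0
    recall = tp / len(human_says_remove) if human_says_remove else 1.0
    return precision, recall

auto = {"c1", "c2", "c3"}   # what the bot removed (illustrative IDs)
human = {"c1", "c4"}        # what human reviewers would remove
p, r = precision_recall(auto, human)
# p = 1/3 (most auto-removals were wrong), r = 1/2 (it missed c4)
```

"Better" then at least has a measurable meaning: higher precision and recall than the current human-only process on the same labeled sample.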

[–] frauddogg@hexbear.net 7 points 20 hours ago (1 children)

What if it worked 90%? It doesn't need to be perfect, just better.

An injustice against one of us is an injustice against all of us. 90% isn't good enough. Hell, 99% isn't good enough.

[–] infinite_ass@leminal.space -1 points 20 hours ago* (last edited 20 hours ago) (2 children)

But human moderators aren't perfect either. And they are often biased.

[–] orcrist@lemm.ee 3 points 3 hours ago

Human moderators who tweak the AI settings are still biased. So you haven't solved any problem by throwing AI in the middle of it all.

[–] frauddogg@hexbear.net 6 points 20 hours ago* (last edited 20 hours ago) (1 children)

But they can be checked and balanced by other moderators unless it's a one mod/one admin system; in which case then it's just a personal fief. Where's the checks and/or balances for theftbox moderation?

Y'know, if I had a physical watch, I'd be looking at my wrist really condescendingly right now.

[–] infinite_ass@leminal.space 0 points 19 hours ago* (last edited 19 hours ago)

AIs can be checked too. And judged, tweaked, etc. Obviously.

Groups of moderators can be just as biased as individual moderators. More so even. Given the amplifying effects of echo chambers.

[–] Nakoichi@hexbear.net 5 points 20 hours ago* (last edited 20 hours ago) (2 children)

I sure as shit wouldn't. Did you even read any of what frauddogg just said? Genuine question because they explained pretty clearly why it is a terrible and stupid idea.

[–] infinite_ass@leminal.space 1 points 19 hours ago (2 children)

Yes I read it. I just don't think it significates.

[–] Nakoichi@hexbear.net 6 points 19 hours ago

And this is something you should reexamine about yourself then if you don't understand the significance of racism and other bigoted biases in AI.

I am pretty much done arguing with you because you're a creep and a troll.

[–] frauddogg@hexbear.net 5 points 19 hours ago* (last edited 19 hours ago)

significates

Nice word-a-day calendar, creepazoid

[–] infinite_ass@leminal.space 0 points 20 hours ago (1 children)

You don't even want to see an experiment? But that's the pudding.

[–] Nakoichi@hexbear.net 4 points 20 hours ago

I repeat. Did you actually read any of the concrete argument laid out before you or are you just being willfully obtuse.

[–] deegeese@sopuli.xyz 9 points 20 hours ago (1 children)

β€œConfidently Incorrect” describes terrible moderators and AI.

[–] infinite_ass@leminal.space 2 points 20 hours ago

Oh nice phrase. Synonymous with smugnorant.

Wisdom and ignorance look alike in that there is a dearth of uncertainty.

[–] Corno@lemm.ee 5 points 20 hours ago

I've read about people being automatically banned by AI for saying something along the lines of "I hate burritos" because it had the word "hate" in it, so the AI judged their comment as hate speech and auto-banned them even though they were talking about food or a videogame character. AI is not very good at reading context and the "A" in "AI" is an important detail here.
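The failure mode described here is easy to reproduce with a context-blind keyword filter (a hypothetical minimal example, not any particular platform's actual filter):

```python
import re

# Naive filter: flags any comment containing the word "hate",
# with no understanding of what the hate is directed at.
HATE_PATTERN = re.compile(r"\bhate\b", re.IGNORECASE)

def naive_hate_filter(comment: str) -> bool:
    return bool(HATE_PATTERN.search(comment))

print(naive_hate_filter("I hate burritos"))         # True  -- false positive
print(naive_hate_filter("Burritos are the worst"))  # False -- same sentiment, missed
```

It both flags the harmless food opinion and misses an equivalent sentiment phrased without the keyword, which is exactly the context problem the comment describes.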

[–] SteposVenzny@beehaw.org 5 points 20 hours ago (1 children)

It's absurd to give any amount of power over people, however trivial, to a thing which is incapable of thought.

[–] infinite_ass@leminal.space 1 points 20 hours ago

Well giving that power to a human isn't so great either, clearly.

We already use text filters as a moderation bot. So we're just looking at improving the bot.

Maybe we're just looking for a way for the bot to recognize more complex patterns.

[–] wuphysics87@lemmy.ml 4 points 20 hours ago

"Accurate" or otherwise, moderating is community engagement. We cultivate our communities by posting relevant content and removing what we find unacceptable. What are we doing if we are not doing both? Letting a computer handle the former and the latter? No thank you.

[–] latenightnoir@lemmy.world 4 points 20 hours ago* (last edited 20 hours ago)

If you're referring to the data models we have now (as in, not AGI), it's a solid no for a whole host of reasons.

As it is, it is not intelligent. It is capable of structuring immense datasets and identifying patterns throughout said datasets, but it is incapable of comprehending them at a conceptual level. Even if it can mimic the verbal patterns of context, nuance, humour, sarcasm, irony and even coded speech, it is not capable of understanding any of them. It is not an intelligence as we know and understand it, it's just a really, really complex math equation.

As it is, all AI is still primarily run by a human consciousness. It cannot decide for itself what to do, it has to be pre-programmed. This means that any biases the humans programming said AI might have will be transferred to the program itself given the immensity of data it is meant to process, so you're right back at human fallibility. At best, contemporary AI is to manual moderation what a chainsaw is to chopping down trees with an axe - just an implement to aid humans in doing exactly what they did before, but maybe faster. That's it.

[–] Alice@beehaw.org 1 points 15 hours ago

I'm just curious how this would differ from automatic moderating tools we already have. I know moderating actually can be a traumatic job due to stuff like gore and CSEM, but we already have automatic filters in place for that stuff, and things still slip through the cracks. Can we train an AI to recognize it when it hasn't already been put into a filter? And if so, wouldn't it hit false positives and require an appeal system, which could still be used to traumatize people?

[–] Vanth@reddthat.com 3 points 20 hours ago

Aren't there already some automated mod tools working to delete CSAM and shit? That's a form of AI.

But all moderation problems you identify (work, biases) would not fully go away with AI moderation. Someone has to build and manage those tools (work) and train them on how to moderate (incorporating their biases as they do so).

[–] sylver_dragon@lemmy.world 3 points 20 hours ago

Yes, as soon as we actually invent AI.
The Large Language Models we have now aren't really it. When we have programs which can come to a well-reasoned decision and actually explain the logic of said decision, then we'll start having something approaching AI. For now, it's just a well-directed random number generator.

[–] kadup@lemmy.world 3 points 20 hours ago (1 children)

Sure.

In fact, sometimes the people posting content aren't that great either... We could also use some AI posts.

And the quality of the comments has been decreasing a lot, so let's make the comments AI too.

AI posts moderated by AI to be appreciated by AI readers.

[–] infinite_ass@leminal.space 1 points 19 hours ago

Lol. Imagine being the last human. Living in a bunker. Facebooking with your "friends"

[–] akkajdh999@programming.dev 3 points 20 hours ago

AI is stupid