this post was submitted on 30 Dec 2024
42 points (72.8% liked)

Asklemmy

44255 readers
1390 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding the use of or support for Lemmy (see the list of support communities and tools for finding communities)
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


founded 5 years ago

It probably seems weird asking this on Lemmy, but of course posting this on Reddit would get it banned or taken down. Reddit doesn't like criticism of Reddit. Anyway…

Over my last 10 years as a Reddit user, I've come to believe that the share of accounts that are bots or foreign bad actors has tipped past 50%. I have no statistics to cite, but I'd love it if somebody did and could share them.

Based purely on some of the conversations, posts, rage bait, strong ideologies, etc… I’m pretty convinced that a reasonable sample of humans could not or would not act the way they do on that platform. So often now I see posts that I feel are specifically attempting to sow discord and disagreement.

Does anyone else agree? What percent of users do you think are bots? Foreign bad actors?

Sadly, I think Reddit has no desire to find out or do anything about it. There would be no upside to them correcting their advertising numbers.

(page 2) 38 comments
[–] geneva_convenience@lemmy.ml 11 points 1 week ago (1 children)
[–] sirboozebum@lemmy.world 10 points 1 week ago (1 children)

It has basically been taken over by the IDF

[–] plinky@hexbear.net 11 points 1 week ago

Reddit is the default "human advice" on what to buy; if you don't think it's crawling with company bots as well, shrug-outta-hecks

(but feeling-wise, it's sub dependent: no one will go to a small sub to influence 10 people; conversely, big subs are shaped both by allowed topics and by first-to-post, first-to-downvote races)

[–] frightful_hobgoblin@lemmy.ml 9 points 1 week ago (9 children)
load more comments (9 replies)
[–] Xiisadaddy@lemmygrad.ml 6 points 1 week ago

I'm not even sure how normal people post on Reddit. Every time I've tried to post there in a community that isn't tiny, my post is automatically removed and sent to "manual approval", which never actually happens. It baffles me how a website that is so hard to post on remains popular.

[–] Aria@lemmygrad.ml 4 points 1 week ago

Voters? Probably 99%. Commenters though? Like actual bots and LLMs and stuff like that? Very few, 1% rounded up, I'd think. You're much more likely to encounter humans posing as unaffiliated random people as part of their job than LLMs doing the same.

[–] hansolo@lemm.ee 4 points 1 week ago

There's a few other categories to consider.

Of the small niche subs I've moderated, there's maybe a 10-to-1 or higher ratio of non-active users to active ones. Look at the highest-voted posts of all time or of the last year in a sub: if the sub has 10K subscribers, the highest number of votes on any post might be 1K or so. Maybe far less.

I saw in a couple of the sub's metrics that we would consistently gain 10-20 subscribers a day and lose maybe 1-3 daily, but with very little increase in engagement. So we would sometimes gain 500 or even 1,000 users in a month, and nothing changed. Why? It always drove me crazy.

A lot of real people start accounts and quickly abandon them. A lot of bots sub to every subreddit and do stupid things like commenting when your comment is a haiku. Every script kiddie who ever coded a broken bot that never worked right might still have 4 or 5 accounts out there as dead subscribers.

And let's not forget the massive amount of people with multiple accounts (hi!) and the ones with sometimes severe mental health problems, wannabe trolls, and straight up Aholes trying to evade bans. There's likely more of these out there than actual malicious and active bots.

As for actual malicious bots posting, they're likely very few, and limited to engagement on larger subs, dropping pieces of a larger set of talking points. But the outfits that normally go in for that kind of thing also don't mind hiring a bunch of Nigerian 419 scammers to be the real humans posting from the bot accounts sometimes.

[–] thezeesystem@lemmy.blahaj.zone 1 points 1 week ago (2 children)

Wait, "foreign bad actors"? You don't realize that the bad actors are domestic, through the US government? Hence the whole "China bad" narrative.

Given how many people on Lemmy think "China bad", it's just as bad as Reddit for domestic bad actors.

load more comments (2 replies)
[–] UltraGiGaGigantic@lemmy.ml 0 points 1 week ago
[–] IDKWhatUsernametoPutHereLolol@lemmy.dbzer0.com -2 points 1 week ago* (last edited 1 week ago) (1 children)

My guess: (pure speculation)

Lemmy (edit: I mean the comment section) is probably at 25% government agents or people acting on behalf of governments including US, Russia, China, possibly other allies of the aforementioned.

Bots, though? Probably few, maybe 10% or less. Most instances use manual applications, so it's hard to get bots through. You'd need to write a different "essay" for each application, and also come up with unique names that don't look bot-generated.

If you look specifically in (edit: the comments section of) political threads, probably anywhere from 25% to 50% government agents.

Mainstream social media like Reddit was probably at 25% to 50% bots pre-exodus; now it seems like 50% to 75% bots. The percentage of government agents is probably much lower there: unlike on Lemmy, where there are far fewer users, on Reddit they wouldn't have the manpower to post enough comments to manipulate the discussion. But they could just use bots instead, and many of those bots are probably operated by governments. And on political subreddits, these numbers skyrocket.

The thing about the internet is that you have to treat it as entertainment, not as a real source of unbiased information, and especially not a forum where any rando can sign up.

I'm gonna restate what I said in another thread:

::: spoiler


I’ve come up with a system to categorize reality in different ways:

Category 1: Thoughts inside my brain formed by logic

Category 2: Things I can directly observe via vision, hearing, or other direct sensory input

Category 3: IRL Other people’s words, stories, anecdotes, in face to face conversations

Category 4: Accredited news media: television, newspapers, radio (including amateur radio conversations), telephone, telegrams, etc.

Category 5: The Internet

The higher the category number, the more distant the information is, and therefore the more suspicious I am of it.

I mean, if a user on Reddit (or any internet forum or social media, for that matter) told me X is a valid treatment for Y disease without real evidence, I'm gonna laugh in their face (well, not their face, since it's a forum, but you get the idea). :::

[–] davel@lemmy.ml 7 points 1 week ago (2 children)

Lemmy is probably at 25% government agents or people acting on behalf of governments including US, Russia, China, possibly other allies of the aforementioned.

Come on: Lemmy isn’t nearly big enough for state actors to bother with—yet. In the social media space, Lemmy is a rounding error.

The military-intelligence-industrial complex is aware of the fediverse’s existence, though:

Atlantic Council » Collective Security in a Federated World (PDF)

Many discussions about social media governance and trust and safety are focused on a small number of centralized, corporate-owned platforms that currently dominate the social media landscape: Meta’s Facebook and Instagram, YouTube, Twitter, Reddit, and a handful of others. The emergence and growth in popularity of federated social media services, like Mastodon and Bluesky, introduces new opportunities, but also significant new risks and complications. This annex offers an assessment of the trust and safety (T&S) capabilities of federated platforms—with a particular focus on their ability to address collective security risks like coordinated manipulation and disinformation.

[–] Serinus@lemmy.world 2 points 1 week ago (2 children)

If it's big enough for us, it's big enough for state actors. They may not be putting in a ton of effort yet, but I'm sure they're here.

load more comments (2 replies)
[–] IDKWhatUsernametoPutHereLolol@lemmy.dbzer0.com 1 points 1 week ago (1 children)

I mean more like the comments, not the total users. Total users at 25% would take a lot of manpower.

A post with 25 comments could have at least 7 comments from government accounts, and that doesn't take a lot of people. One new NSA or FSB hire can run 7 virtual machines to create Lemmy sockpuppet accounts to push whatever they want. It only takes 1 out of the thousands of employees they have to run this. Lemmy is small enough for that to be doable.

I mean, if I wanted to troll, I could pull up 7 Tor Browser sessions and create accounts to post bad-faith arguments, but I just don't have the energy for it.

[–] davel@lemmy.ml 5 points 1 week ago

It certainly can be done, and without much effort, but there’s virtually no bang for that buck right now, because the audience is laughably small.
