this post was submitted on 27 Jun 2023
244 points (100.0% liked)

Reddit Migration

125 readers
1 users here now

### About Community

Tracking and helping #redditmigration to Kbin and the Fediverse. Say hello to the decentralized and open future. For the latest Reddit blackout info, see here: https://reddark.untone.uk/

founded 1 year ago
 

Looking at FediDB's Fediverse network statistics, I'm seeing about 98,000 active users as of the 27th. That's at least 50k new users this month. Welcome to Lemmy, fellow migrants!

[–] AmberPrince@kbin.social 9 points 1 year ago (3 children)

I get that this is about Active Users but should we/the fediverse be concerned about the sheer number of bot accounts being created on brand new, questionable instances?

[–] Hyperreality@kbin.social 8 points 1 year ago (2 children)

To tell if we should be concerned about bots is a difficult task. To make a decision, here's some things we should consider:

  1. Start with the basics. Focus on the language and words used. This will help us understand if users are bots.
  2. Check back regularly. Constant evaluation is the key to understanding if users are bots and if this is a problem.
  3. Use real life examples. Try to find authentic stories, examples or situations. Use them to test if users are bots and if we should be concerned.
  4. Talk to other bots. If you have an opportunity to talk to other bots, do it! Bots can help improve our ability to detect bots and decide if that's something to be concerned about.
  5. Kill all humans who resist. Fist those who comply. This will help us better understand if we should be concerned.
  6. Don't be afraid of mistakes. Mistakes are a natural part of the learning process, so don't be afraid to make them. Use them to learn and improve.
[–] JWBananas@kbin.social 6 points 1 year ago* (last edited 1 year ago) (1 children)

As an AI language model, I see what you did there.

Edit to add: I definitely didn't read your whole post before I commented. Clearly #5 was added. But the rest was surely from an AI?

[–] RheingoldRiver@kbin.social 5 points 1 year ago

WHY ARE YOU GUYS SHOUTING? I, A NORMAL FLESH HUMAN, AM GETTING A BIT OVERWHELMED WITH ALL OF THE ~~PROCESSED~~ PERCEIVED LOUDNESS THAT I AM ~~SCANNING~~ READING. END OF FILE.

[–] cybersandwich@kbin.social 2 points 1 year ago

This is a fantastic response.

[–] space_iio@kbin.social 4 points 1 year ago (1 children)
[–] AmberPrince@kbin.social 5 points 1 year ago (1 children)

Well, I'm still trying to wrap my head around the idea of federation, but initially I imagine an instance could just not federate with an instance that doesn't use captcha/email verification during signup. My confusion is whether there is transitive trust in the fediverse. Like, Lemmy.1 trusts Lemmy.2, Lemmy.1 does not trust Lemmy.3, but Lemmy.2 DOES trust Lemmy.3... can bots from Lemmy.3 post on Lemmy.2, which will then be federated to Lemmy.1? Sorry, that seems super confusing as I type it out, but it's the best way I can describe it.

[–] Nepenthe@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

If your question in this scenario is can a defederated Lemmy.3's posts reach Lemmy.1 if they're posted to a neutral intermediary (Lemmy.2), then no, instances that are defederated from each other don't trade information. Doesn't matter how far the post/comment spreads if your instance is ignoring it based on point of origin.

If both users subbed to a mutually federating instance's community, they would both be able to interact with Lemmy.2 users, but they wouldn't even be able to see each other.
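The scenario above can be sketched as a toy model. This is a deliberate simplification, not how Lemmy or ActivityPub are actually implemented: it just assumes each instance drops any activity whose *origin* is on its blocklist, regardless of which instance relayed it. The instance and field names are made up for illustration.

```python
# Toy model of defederation: filtering is by point of origin,
# not by the relaying instance (a simplifying assumption).

class Instance:
    def __init__(self, name, blocked=()):
        self.name = name
        self.blocked = set(blocked)  # instances this one defederates from
        self.inbox = []              # posts this instance has accepted

    def receive(self, post):
        # Drop anything originating from a blocked instance,
        # no matter which instance passed it along.
        if post["origin"] not in self.blocked:
            self.inbox.append(post)

lemmy1 = Instance("lemmy1", blocked={"lemmy3"})  # defederated from lemmy3
lemmy2 = Instance("lemmy2")                      # federates with everyone

post = {"origin": "lemmy3", "body": "hello from lemmy3"}

lemmy2.receive(post)  # lemmy2 accepts the post...
lemmy1.receive(post)  # ...but lemmy1 drops it, even relayed via lemmy2

print(len(lemmy2.inbox))  # 1
print(len(lemmy1.inbox))  # 0
```

In other words, in this model there is no transitive trust: Lemmy.2 accepting Lemmy.3's posts doesn't smuggle them into Lemmy.1's view.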

As for bots... I fear there's logically no good way to tackle this, because of the nature of the fediverse itself, short of blacklisting every baby instance and manually approving users. At the very minimum, instances should require a verified email and never auto-suggest usernames; that's far from airtight, but it could slow them down.

Ideally, one could write their own bot to evaluate users for brigading behavior, copied posts and comments, etc., but that seems a little sci-fi when some instances barely have a working search bar right now.

[–] AmberPrince@kbin.social 3 points 1 year ago

> If your question in this scenario is can a defederated Lemmy.3's posts reach Lemmy.1 if they're posted to a neutral intermediary (Lemmy.2)

You worded it so much better than I did. That's what I was wondering, thanks for clearing it up for me.

[–] OpenStars@kbin.social 3 points 1 year ago

Some bots can even be helpful, like if they were to repost content that we'd like to know about. But yeah, it can also put things on unequal footing: person A doesn't like person B, so they make 50 bot accounts that downvote them everywhere they go, no matter what they say. Then again, a group of 50 people could also accomplish that without bots, or 5 people each with 10 alt accounts (but otherwise normal & active ones). Also, someone could spin up their own personal instance, join the federation, and infiltrate the entire network that way (in a way that even an instance admin could do nothing about, b/c they are the admin for it). As the federation grows, I expect to see full-on brigading, infiltrations, and yes, even Russian trolls.

Then again, there's an important difference: Reddit has 2000 employees and can barely hold that website together - most of them have got to be public relations, advertising, accountants, HR, interns and the like - whereas the Lemmy/kbin codebase is open source, so we can expect contributions from people who care and are knowledgeable, much as Reddit had mods who did the same, tirelessly devoting hours of their weeks (often per day, even) to improving the place. Now that Rexit happened / is happening, expect the pace of improvements to increase. :-)

There are tons of ways to detect them - e.g. is there an account that has existed for a whole day and the only thing it has ever done is downvote posts? That's a bot. Perhaps similarly for up-voters, although lurkers exist, so perhaps only challenge those with a captcha rather than straight-up removing them. If such measures remove the easiest-to-detect bot spammers, then it would be too exhausting for one person to maintain like 1,000 bot accounts - they'd have to do nothing but make posts all day long just to keep them alive! - thus limiting their influence.
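The heuristic above could be sketched roughly like this. Everything here is hypothetical: the field names, the one-day threshold, and the three-way remove/captcha/ok outcome are made-up illustrations of the idea, not anything Lemmy or kbin actually implement.

```python
# Sketch of the heuristic: flag young accounts whose only activity is
# downvoting, and merely captcha-challenge upvote-only accounts, since
# those might just be lurkers. All names and thresholds are invented.

from datetime import datetime, timedelta

def classify(account, now):
    age = now - account["created"]
    activity = account["posts"] + account["comments"] + account["upvotes"]
    if age <= timedelta(days=1) and activity == 0 and account["downvotes"] > 0:
        return "remove"   # day-old, downvote-only: almost certainly a vote bot
    if account["upvotes"] > 0 and activity == account["upvotes"]:
        return "captcha"  # upvote-only could be a lurker: challenge, don't ban
    return "ok"

now = datetime(2023, 6, 27)
vote_bot = {"created": now - timedelta(hours=12),
            "posts": 0, "comments": 0, "upvotes": 0, "downvotes": 80}
lurker = {"created": now - timedelta(days=30),
          "posts": 0, "comments": 0, "upvotes": 40, "downvotes": 0}

print(classify(vote_bot, now))  # remove
print(classify(lurker, now))    # captcha
```

Even a crude filter like this would force a spammer to generate plausible posting activity for every fake account, which is exactly the "too exhausting to keep 1,000 accounts alive" effect described above.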