this post was submitted on 29 Jun 2023
Reddit Migration
### About Community
Tracking and helping #redditmigration to Kbin and the Fediverse. Say hello to the decentralized and open future. To see the latest Reddit blackout info, see here: https://reddark.untone.uk/
you are viewing a single comment's thread
Seriously asking, what makes you think the fediverse is immune to that? Eventually they'll get good enough that they'll be almost indistinguishable from normal users, so how can we keep the bots out?
There are a number of options, including a chain of trust where you only see comments from someone who's been verified by someone who's been verified by someone, and so on, back to an actual real human you've met in person. We could also charge per post, which would rapidly drive up the cost of a botnet (as well as trim down the number of two-word derails).
I'm not sure how reliable chains of trust would be. There's a pretty obvious financial incentive for someone to simply lie and vouch for a bot etc. But in general, I think some kind of network of trustworthiness or verification as a real human will eventually be necessary. I could see PGP etc being useful.
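To make the chain-of-trust idea concrete, here's a minimal sketch of how an instance might check it, assuming each user keeps a simple list of who they've vouched for. The names, the depth limit, and the data layout are all illustrative assumptions, not a real fediverse API:

```python
from collections import deque

def is_trusted(vouches, roots, user, max_depth=4):
    """Return True if `user` is reachable from an in-person-verified
    root within `max_depth` vouching hops (breadth-first search)."""
    queue = deque((root, 0) for root in roots)
    seen = set(roots)
    while queue:
        current, depth = queue.popleft()
        if current == user:
            return True
        if depth == max_depth:
            continue  # chain would get too long; stop extending it
        for vouched in vouches.get(current, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append((vouched, depth + 1))
    return False

# Hypothetical vouch graph: alice was verified in person.
vouches = {"alice": ["bob"], "bob": ["carol"], "carol": ["dave"]}
print(is_trusted(vouches, {"alice"}, "dave"))     # True: 3 hops away
print(is_trusted(vouches, {"alice"}, "mallory"))  # False: nobody vouched
```

Capping the depth is one way to limit the damage from a single dishonest voucher: a bought vouch only taints the subtree below it, and shortening the allowed chain shrinks that subtree.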
"charge per post"
That part kind of worries me, are you proposing charging users to participate in the fediverse? Seems like it would also exclude a lot of people who can't afford to spend money on social media...
Listen here, you! I paid good money for this here comment so you're gonna read it, alright‽
<Brought to you by FUBAR, a corporation with huge pockets that can afford to sway opinion with lots of carefully placed bot comments>
The obvious question is then "how are they helping pay for the servers they're using?".
It's not that I don't see your point; everyone should be able to take part in a community without having to spend money. But I do find it annoying that whenever the topic of money comes up, we end up debating the hypothetical of someone with zero cents spare in their budget.
Charging for membership worked well for Something Awful, and they only charge something like $20 for lifetime membership anyway, plus an additional fee for extra functionality. But you don't get the money back if you get banned. Corporations would still be able to spend their way into the conversation, but it would be harder to create massive networks that just flood the real users.
The nice thing about federated media is that there doesn't need to be one instance that carries most of the traffic. The cost gets distributed among many servers and instances, and they can choose how to fund the server independently (many instance owners spend their own money to a point, then bridge the gap with donations from users).
I'm just not sure that's the best way to cut down bots, IMHO.
It's not immune but until the fediverse reaches a critical mass, we're safe... probably.
After that, it will be the same whac-a-mole game we're used to and somehow I don't think we'll win.
Right now, we can already recognize lower-quality bots in conversation. AI-generated "art" is already distinct enough that almost nobody mistakes it for human work.
Language is a human instinct. Our minds create it, we can use it in all sorts of ways, bend it to our will however we want.
By the time bots become good enough to be indistinguishable online, they'll either be actually worth talking to, or they will simply be another corporate shill.
I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?
I don’t love that the focus right now is on eliminating or silencing the voice of bots, because as you point out, they’re going to be indistinguishable from human voices soon - if they aren’t already. In the education space, we’re already dealing with plagiarism platforms incorrectly claiming real student work is written by ChatGPT. Reading a viewpoint you disagree with and immediately jumping to “bot!” only serves to create echo chambers.
I think it’s better and safer long term to educate people to think critically, assume good intent, know their boundaries online (i.e., don’t argue when you can’t be coherent about it and have to devolve to name-calling, etc.), and focus on the content and argument of the post, not who created it - unless it’s very clear from a look at their profile that they’re arguing in bad faith or astroturfing. A shitty argument won’t hold up to scrutiny, and you don’t have the risk of silencing good conversation from a human with an opposing viewpoint. Common agreement on community rules such as “no hate speech,” or limiting self-promotion/reviews/ads to certain spaces and times, is still the best and safest way to combat this; from there it’s a matter of mods enforcing the boundaries on content, not on who they think you are.
Because bots don't think. They exist solely to push an agenda on behalf of someone.
If the people involved in the conversation are there because they are intending to have a conversation with people, yes, it's automatically bad. If I want to have a conversation with a chatbot, I can happily and intentionally head over to ChatGPT etc.
Bots are not inherently bad, but I think it's imperative that our interactions with them are transparent and consensual.
Part of the problem is that bots unfairly empower the speech of those with the resources to dominate and dictate the conversation space; even used in good faith, they disempower everyone else. Even the act of seeing the same ideas over and over can sway whole zeitgeists. Now imagine what bots can do by dictating the bulk of what's even talked about at all.
A FOAF (friend-of-a-friend) network for validation. At PGP signing parties, people could even ask for ID.
Nothing is immune, but at least on the fediverse it's unlikely API access will be revoked on tools used to detect said bots.