I created an account on mastodon.social a few days ago. A day after creation, my account was suspended. My appeal was denied and no reason was given. Assuming mastodon.social was not accepting new accounts, I moved over to mastodon.online and created an account there. Today that account was suspended as well, again without reason. I didn't post anything from either account. My only actions were to follow a few people within tech.
Looking at previous posts here, people are laughing at complaints about the difficulty of joining mastodon and dismissing it as a simple task. I have now attempted to join two of the most-suggested mastodon servers and been suspended from both. I am uninterested in shotgunning servers until I find one which doesn't suspend me without reason.
How is the onboarding process of mastodon supposed to work if the top suggested servers are suspending new accounts without warning or reason?
If I were a bot farm owner, I would likely just generate more "realistic" person usernames. Generating a unique username which doesn't look like random letters is trivial, and I don't really think that creating that obstacle is a real hindrance to anyone.
Yes, but when creating a new system you can't just defend against new attacks. You have to defend against all the old attack vectors too.
I just don't see how the username is an attack vector. The sign-up has email verification and CAPTCHA. Requiring the username to be something sensible seems excessive.
But honestly, I don't know. Maybe this stops a lot more bot farms than I'd expect.
Captchas and email verifications can be easily bypassed.
Emails, sure. Captchas require a fair bit of elbow grease. Generating a random username which looks fine is nothing in the landscape of bot protection.
Bot farmers could find an exploit in reCAPTCHA. Or they could train up a neural network to accurately defeat them (I saw someone demonstrating a GPT-4 prompt that could handle it quickly and flawlessly with just a little bit of prompt engineering). When (not if) they find a way to defeat captcha, those lower-level protections become way more important and relevant.
It's an ever-moving set of problems; fixing it today is no guarantee that it'll still be fixed tomorrow, so everything has to stay in place until it's proven to no longer be effective or to cause more problems than it fixes.
It just seems like the perspective is off. You'd implement some script which captures the image of the CAPTCHA on the website, sends it to some AI solution which succeeds some percentage of the time, and wire that into something which can interact with the website (not sure if you'd need to act indirectly through something like Selenium or if you can make direct web calls), while also ensuring that the CAPTCHA doesn't receive other suspicious data.
If you go through that trouble, I would be amazed if combining 2 or 3 words from a dictionary into a username would be the kryptonite of your bot farm.
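To illustrate how cheap that last step is, here's a minimal sketch of dictionary-based username generation. The word list and function name are made up for illustration; a real script would read a full dictionary file instead:

```python
import random

# Illustrative word list; a real script would load thousands of words
# from a dictionary file (e.g. /usr/share/dict/words on many systems).
WORDS = ["silver", "fox", "quiet", "river", "pixel", "mellow", "cedar", "drift"]

def plausible_username(rng: random.Random) -> str:
    """Join 2-3 distinct dictionary words, sometimes with a short
    numeric suffix, producing a name that doesn't look like random letters."""
    parts = rng.sample(WORDS, k=rng.choice([2, 3]))
    suffix = str(rng.randint(1, 99)) if rng.random() < 0.5 else ""
    return "".join(parts) + suffix

rng = random.Random()
print(plausible_username(rng))  # e.g. something like "cedarfox7"
```

A dozen lines, and every generated name passes a "does this look like a person?" filter, which is the point: the filter costs the bot farmer almost nothing to defeat.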
Again, I don't know, and it might be far more preventative than I understand, but it feels like strange security through obscurity.
You're not wrong, but it's also one of those things where you don't want to make things easier for the bad actor, especially since most people aren't going to be signing up with random strings.
That doesn't mean accounts with suspicious random usernames aren't spam, though. They generally are spam accounts.
The most recent spam I received, five days ago, was from @oyPhFrxPx0@mastodon.social.