this post was submitted on 09 Jan 2025
45 points (97.9% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Now that we know AI bots will ignore robots.txt and churn residential IP addresses to scrape websites, does anyone know of a method to block them that doesn't entail handing over your website to Cloudflare?

[–] dudeami0@lemmy.dudeami.win 5 points 1 day ago (2 children)

The only way I can think of is to require users to authenticate themselves, but that isn't much of a hurdle.
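
For illustration, the gate can be as simple as HTTP Basic Auth in front of every page, so anonymous scrapers get a 401 instead of content. A rough stdlib-Python sketch (the account, password, port, and realm are made up for the example):

```python
# Minimal sketch: refuse to serve anything without valid Basic Auth credentials.
# The USERS table, port, and realm are illustrative only.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {"alice": "correct horse battery staple"}  # hypothetical account

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self._authorized(self.headers.get("Authorization", "")):
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="members only"')
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello, authenticated reader.\n")

    def _authorized(self, header: str) -> bool:
        if not header.startswith("Basic "):
            return False
        try:
            user, _, password = base64.b64decode(header[6:]).decode().partition(":")
        except ValueError:  # malformed base64 or non-UTF-8 payload
            return False
        return USERS.get(user) == password

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AuthHandler).serve_forever()
```

Any framework's login system does the same job; the point is just that unauthenticated requests never see the content.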

To get into the details of it: what do you define as an AI bot? Are you worried about scrapers grabbing the contents of your website? What are the activities of an "AI bot"? Are you worried about AI bots registering and using your platform?

The real answer is that not even Cloudflare will fully defend you from this. If anything, Cloudflare is just making sure it gets paid when AI scrapers access your website. As someone who has worked around bot protections (albeit in a different context than web scraping), it's a game of cat and mouse. If you, or some company you hire, aren't actively working against automated access, you lose, because the other side is.

Just think about your point that they are using residential IP addresses. How do they get those addresses? They provide browser addons/extensions that offer some service (generally a free VPN) in exchange for access to your PC, and therefore your internet connection, in the contract you agree to. Any addon can pull the same trick: if it has permission to read every website, it can scrape those sites through legitimate users for whatever purpose it wants. The recent exposure of the Honey scam highlights this, since it's very easy to get users to install addons by selling them on the idea that they might save a small amount of money (or make money through other programs). There will always be users compromised by addons/extensions, or even just plain viruses, whose sessions can be used to extract the data you are trying to protect.

[–] DaGeek247@fedia.io 2 points 15 hours ago (1 children)

Just think of your point that they are using residential IP addresses. How do they get these addresses?

You can scan every IPv4 address in under an hour. If all you're looking for is publicly available words written by people, you only have to poke port 80, and suddenly you have practically every small self-hosted website out there.
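
For a sense of scale: mass scanners like masscan and zmap reportedly cover one port across the entire public IPv4 space in minutes to under an hour. A toy version of the "poke port 80 and grab whatever's there" idea in plain Python (the address range is a reserved documentation block, chosen so the example doesn't actually knock on anyone's door):

```python
# Toy sketch: connect to port 80 on a range of addresses and grab the first
# line of whatever the default page returns. The /28 below is TEST-NET-3
# (documentation-only space), used here purely as a placeholder.
import socket
from ipaddress import ip_network
from typing import Optional

def grab_banner(addr: str, timeout: float = 1.0) -> Optional[str]:
    try:
        with socket.create_connection((addr, 80), timeout=timeout) as sock:
            sock.sendall(b"GET / HTTP/1.0\r\nHost: " + addr.encode() + b"\r\n\r\n")
            return sock.recv(4096).decode(errors="replace")
    except OSError:  # closed, filtered, or timed out
        return None

for host in ip_network("203.0.113.0/28").hosts():
    page = grab_banner(str(host))
    if page:
        print(host, page.split("\r\n", 1)[0])
```

A real scanner sends probes asynchronously instead of waiting on each connection, which is where the whole-internet-in-an-hour speed comes from.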

[–] dudeami0@lemmy.dudeami.win 2 points 13 hours ago* (last edited 13 hours ago)

When I say residential IP addresses, I mostly mean proxies using residential IPs, which allow scrapers to mask themselves as organic traffic.

Edit: Your point stands that there are a lot of services without these protections in place, but plenty of services do protect against scraping.

[–] ctag@lemmy.sdf.org 1 points 20 hours ago

Thank you for the detailed response. It's disheartening to consider that the traffic is coming from 'real' browsers/IPs, but that actually makes a lot of sense.

I'm coming at this from the angle of AI bots ingesting a website over and over to obsessively look for new content.

My understanding is there are two reasons to try blocking this: to protect bandwidth from aggressive crawling, or to protect the page contents from AI ingestion. I think the former is doable and the latter is an unwinnable task. My personal reason is that I'm an AI curmudgeon; I'd rather spend CPU resources blocking bots than serve any content to them.
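
For what it's worth, the cheap end of that trade-off is simply refusing requests from crawlers that still identify themselves. A rough sketch as WSGI middleware (the user-agent substrings are examples that would need maintaining, and, as discussed above, it does nothing against scrapers hiding behind residential proxies and browser user agents):

```python
# Sketch: return 403 for any request whose User-Agent admits to being an AI
# crawler. The substring list is illustrative and incomplete; self-identified
# bots are the only ones this catches.
BLOCKED_UA_SUBSTRINGS = ("gptbot", "ccbot", "claudebot", "bytespider", "perplexitybot")

class BlockAICrawlers:
    """Wrap any WSGI app and short-circuit requests from listed crawlers."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(bot in ua for bot in BLOCKED_UA_SUBSTRINGS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling not permitted.\n"]
        return self.app(environ, start_response)

# Usage with a hypothetical app object:
# application = BlockAICrawlers(my_wsgi_app)
```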