this post was submitted on 23 Sep 2024
153 points (97.5% liked)

Technology

top 14 comments
[–] gaylord_fartmaster@lemmy.world 50 points 3 weeks ago (1 children)

They're already ignoring robots.txt, so I'm not sure why anyone would think they won't just ignore this too. All they have to do is get a new IP and change their user agent.
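To illustrate how little effort that takes: a scraper that wants to look like an ordinary desktop browser only has to set one header. A minimal sketch with Python's standard library (the target URL is a placeholder):

```python
from urllib.request import Request

# A scraper masquerading as a desktop browser: one header is all it takes.
# Nothing on the server side can tell this apart from a real browser's claim.
req = Request(
    "https://example.com/articles",  # placeholder URL, for illustration only
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
)
print(req.get_header("User-agent"))
```

Rotating the source IP (proxies, cloud instances) defeats the other half of naive blocking, which is the commenter's point.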

Cloudflare is protecting a lot of sites from scraping with their PoW (proof-of-work) captchas. They could allow access to those who pay.
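For context, a proof-of-work challenge forces the client to burn CPU before each request, which is cheap for one human but expensive at scraping scale. A minimal sketch of the idea (not Cloudflare's actual scheme; all names here are illustrative):

```python
import hashlib

def solve_pow(challenge: str, difficulty: int = 3) -> int:
    """Find a nonce so sha256(challenge + nonce) starts with `difficulty`
    hex zeros. The server hands out the challenge; the client grinds."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_pow(challenge: str, nonce: int, difficulty: int = 3) -> bool:
    """Verification is a single hash, so it costs the server almost nothing."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: solving is brute force, verifying is one hash, and raising `difficulty` makes bulk scraping exponentially more expensive.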

[–] scarabine@lemmynsfw.com 25 points 3 weeks ago

I have an idea. Why don’t I put a bunch of my website stuff in one place, say a pdf, and you screw heads just buy that? We’ll call it a “book”

[–] umami_wasbi@lemmy.ml 19 points 3 weeks ago (1 children)

How can I do this without Cloudflare?

[–] rikudou@lemmings.world 22 points 3 weeks ago (1 children)

Put a page on your website saying that scraping it costs [insert amount] and block the bots otherwise.

[–] gravitas_deficiency@sh.itjust.works 15 points 3 weeks ago (1 children)

The hard part is reliably detecting the bots

[–] melroy@kbin.melroy.org 5 points 3 weeks ago (1 children)

Also you don't want to block legit search engines that are not scraping your data for AI.

[–] gravitas_deficiency@sh.itjust.works 7 points 3 weeks ago (1 children)

Again: it's hard to differentiate all those different bots, because you have to trust that they are what they claim to be, and they often are not.
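There is one check a bot can't fake just by editing its user agent: forward-confirmed reverse DNS, which Google documents as the way to verify real Googlebot traffic. A sketch of that check, with the resolver functions injectable so it can be tested without network access:

```python
import socket

def is_verified_googlebot(ip: str,
                          reverse=socket.gethostbyaddr,
                          forward=socket.gethostbyname) -> bool:
    """Forward-confirmed reverse DNS: the IP must reverse-resolve to a
    Google-owned hostname, and that hostname must resolve back to the
    same IP. Spoofing the User-Agent header doesn't pass this."""
    try:
        hostname = reverse(ip)[0]
    except OSError:
        return False
    # Genuine Googlebot hosts live under googlebot.com or google.com.
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return forward(hostname) == ip
    except OSError:
        return False
```

The catch, as the comment says, is that only a handful of big crawlers publish a verification scheme like this; for everything else you're back to guessing.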

[–] melroy@kbin.melroy.org 5 points 3 weeks ago (1 children)

Instead of blocking bots by user agent, I'm blocking full IP ranges: https://gitlab.melroy.org/-/snippets/619
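The core of a range-based block list is just CIDR membership tests. A minimal sketch with Python's standard library (the ranges below are examples for illustration, not the linked snippet's actual list):

```python
import ipaddress

# Hypothetical block list in CIDR form; in practice you'd load the
# published ranges of known scraper ASNs.
BLOCKED_RANGES = [
    ipaddress.ip_network("47.76.0.0/14"),    # example IPv4 range
    ipaddress.ip_network("2600:1f00::/24"),  # example IPv6 range
]

def is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    # `in` returns False for mismatched IPv4/IPv6 versions, so mixing is safe.
    return any(addr in net for net in BLOCKED_RANGES)
```

The trade-off versus user-agent filtering: ranges are harder to spoof, but you risk blocking every legitimate user who happens to share the range.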

[–] vinnymac@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

It certainly can be a cat and mouse game, but scraping at scale tends to stay ahead of the security teams. Some examples:

https://brightdata.com/

https://oxylabs.io/

Preventing access by requiring an account with strict access rules can curb the vast majority of scraping; then your only bad actors are the rich venture capitalists.

[–] magic_smoke@links.hackliberty.org 19 points 3 weeks ago (2 children)

As someone who uses Invidious daily, I've always been of the belief that if you don't want something scraped, maybe don't upload it to a public web page/server.

[–] General_Effort@lemmy.world 5 points 3 weeks ago

There's probably not many people here who understand the connection between Invidious and scraping.

[–] Justas@sh.itjust.works 0 points 2 weeks ago (1 children)

Imagine a company that sells a lot of products online. Now imagine a scraping bot coming at peak sales hours and hitting each product list and page separately. Now realise that some genuine users will have a worse buying experience because of that.
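A common mitigation for exactly that load problem is per-client rate limiting, so one bot can't starve real shoppers at peak hours. A minimal token-bucket sketch (class and parameter names are illustrative, with the clock injectable for testing):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity       # start with a full bucket
        self.clock = clock           # injectable for testing
        self.last = clock()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

One bucket per client IP (or account) lets ordinary browsing through while a bot hammering every product page quickly runs dry.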

Yeah, there are way easier ways to combat that without trying to prevent scraping.

Maybe don't ship 20 units to the same address.