this post was submitted on 02 Nov 2023
49 points (86.6% liked)

Technology


tl;dr: Use the robots meta element (or the equivalent HTTP response header) to declare that a page's content should not be used for machine learning, in case some actors make their search crawler's user agent indistinguishable from their machine-learning crawling.
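To make the tl;dr concrete, here is a minimal sketch (Python, standard library only) of how a well-behaved crawler could check for the proposed opt-out before using a page for training. Note the assumptions: 'noml' is the hypothetical directive from this post, not an existing standard, and the function names are invented for illustration; `X-Robots-Tag` is, however, the real response header search engines already use for robots directives.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attr = {k: (v or "") for k, v in attrs}
        if tag == "meta" and attr.get("name", "").lower() == "robots":
            self.directives |= {d.strip().lower()
                                for d in attr.get("content", "").split(",")}

def may_use_for_ml(html: str, headers: dict) -> bool:
    """Return False if the page opts out via the proposed 'noml' directive,
    delivered either as a robots meta tag or an X-Robots-Tag header."""
    parser = RobotsMetaParser()
    parser.feed(html)
    header = headers.get("X-Robots-Tag", "")
    header_directives = {d.strip().lower() for d in header.split(",") if d.strip()}
    return "noml" not in (parser.directives | header_directives)
```

As with robots.txt itself, nothing here is enforcement; it only gives honourable crawlers something unambiguous to honour.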

top 14 comments
[–] smileyhead@discuss.tchncs.de 18 points 1 year ago

It would work about as well as the "Do Not Track" HTTP header did.

[–] db2@sopuli.xyz 16 points 1 year ago (1 children)

This doesn't prevent anything.

[–] hownowbrowncow@lemmy.ml 3 points 1 year ago (2 children)

Any credible large-scale AI has to explain where it got its information from, and that data is typically fetched by a known user agent. E.g., OpenAI's models were initially trained on Common Crawl.

Lots of other orgs use Common Crawl but not necessarily for AI.

[–] TheHobbyist@lemmy.zip 3 points 1 year ago (1 children)

Did OpenAI ever detail what GPT-4 was trained on?

[–] hownowbrowncow@lemmy.ml 1 points 1 year ago (2 children)

Exactly: no one knows, and no one had a choice about whether their content was included.

[–] TheHobbyist@lemmy.zip 3 points 1 year ago (1 children)

And how should your proposal change that?

[–] hownowbrowncow@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

The point is that mainstream, for lack of a better word, user agents will discern themselves.

Perhaps it's important to differentiate here between known user agents and general scrapers.

Googlebot, Bingbot, and any honourable crawler will identify themselves with a specific user agent string and publish a robots page telling you why they're fetching a page. They pretty much always provide a way to reverse-DNS verify that a request claiming their user agent is coming from a genuine IP.
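That verification step can be sketched as a forward-confirmed reverse DNS (FCrDNS) check: look up the PTR name for the claiming IP, check that it falls under the crawler's published domain, then resolve that name forward and confirm it maps back to the same IP. A minimal Python sketch follows; the function name and parameters are invented here, and the resolver functions are injectable so the logic can be exercised without network access.

```python
import socket

def verify_crawler_ip(ip, allowed_suffixes,
                      reverse=socket.gethostbyaddr,
                      forward=socket.getaddrinfo):
    """Forward-confirmed reverse DNS: the IP's PTR name must fall under an
    official crawler domain AND resolve back to the same IP."""
    try:
        hostname = reverse(ip)[0]   # e.g. crawl-66-249-66-1.googlebot.com
    except OSError:
        return False
    if not hostname.endswith(tuple(allowed_suffixes)):
        return False                # PTR name is outside the crawler's domains
    try:
        addrs = {info[4][0] for info in forward(hostname, None)}
    except OSError:
        return False
    return ip in addrs              # forward lookup must confirm the IP
```

The forward confirmation matters: anyone can set an arbitrary PTR record on an IP they control, but they can't make the official crawler domain resolve back to it.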

Wrt general scrapers, that's just an issue beyond AI. That's just scrapers scraping.

If honourable user agents can honour a site owner's content, then a 'noml' tag can instruct them to not use the page for machine learning.

This is as much about protecting content IP as drawing a line in the sand, IMO. Perhaps it also protects brands from misinformation that would be presented by an AI.

Yes, people will continue to steal content; that has happened since the start of the web. But there is a distinction here: this is about not using content to train AI models that'll steal clicks from content creators.

[–] TheHobbyist@lemmy.zip 2 points 1 year ago (1 children)

Yes, people will continue to steal content,

I fail to see how this will solve anything. Why would stealing for AI be treated any differently from scraping for other purposes? If someone doesn't care about the rules for scraping, they won't care about them for AI either. Especially as they don't even have to disclose that the content was used for AI (see my point about OpenAI above). There is no accountability. Previous versions of GPT language models have been trained on heaps of copyrighted material. Unless some law is enacted, that is unlikely to change.

Does the robots file carry any legal weight? I don't think it does, and without that, this feels more like wishful thinking. I don't mean to say I don't care about it being done, but it is realistically unlikely to change anything in practice.

Perhaps if robots files had legal weight (if they don't already), in the sense of legally constraining crawlers and scrapers, similarly to how LinkedIn was recently forced to abide by "do not track" requests in Germany, then I'd welcome this with open arms!

[–] hownowbrowncow@lemmy.ml 1 points 1 year ago

solve anything.

As I say, honourable UAs will honour robots.txt and its protocol; this proposal is an extension of that.

Google has been proposing something similar, presumably for different reasons: https://services.google.com/fh/files/misc/public_comment_thought_starters_oct23.pdf

There is no accountability.

On small scales perhaps not, but as said this has always been the case with scraping.

Unless some law is enacted,

The robots.txt protocol has never been law, but it has been honoured, so it's worth hanging on to. It's still the definition of 'good bots' vs 'bad bots' on one level, and that's about as good as site owners get, versus playing whack-a-mole with UA and IP variations.
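For what it's worth, Google has already shipped one mechanism along these lines: the Google-Extended product token, which a site can target in robots.txt to opt out of having its content used for AI training without dropping out of Google Search. A 'noml' meta directive would extend the same idea from the site level to the page level:

```
User-agent: Google-Extended
Disallow: /
```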

[–] ripcord@kbin.social 3 points 1 year ago (1 children)

But I think their point was that if they didn't have to provide this detail, why do you think others would "have to"?

[–] themurphy@lemmy.world 1 points 1 year ago (1 children)

This is a fair point, but I think this will become a new standard for AI. GPT-4 was possible because there were no regulations, but it won't be the same for GPT-5 or 6.

So it's more about future-proofing than backtracking.

[–] ripcord@kbin.social 2 points 1 year ago (1 children)

What evidence is there of that? What regulations have been added?

[–] themurphy@lemmy.world 1 points 1 year ago

I'm referring to this post. It's not official regulations.

[–] db2@sopuli.xyz 3 points 1 year ago

Any credible large scale AI has to explain where they got their information from

To whom?