this post was submitted on 01 Aug 2024
110 points (100.0% liked)

Technology

37712 readers
166 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
top 36 comments
[–] theangriestbird@beehaw.org 59 points 3 months ago (1 children)

The beef between Microsoft and Reddit came to light after I published a story revealing that Reddit is currently blocking every crawler from every search engine except Google, which earlier this year agreed to pay Reddit $60 million a year to scrap the site for its generative AI products.

I know the author meant "scrape", but sometimes it really does feel like AI is just scrapping the old internet for parts.

[–] cybermass@lemmy.ca 15 points 3 months ago (1 children)

Yeah, aren't like over half of reddit comments/posts by bots these days?

[–] originalucifer@moist.catsweat.com 13 points 3 months ago (1 children)

yep, and the longer that happens the less value the dataset has. it's becoming aged.

[–] RiikkaTheIcePrincess@pawb.social 13 points 3 months ago* (last edited 3 months ago) (1 children)

[Joke] See, Reddit's doing a nice thing here! They're making sure nobody ends up toxifying their own dataset by using Reddit's garbage heap of bot posts!

[–] originalucifer@moist.catsweat.com 5 points 3 months ago (2 children)

google needs a checkbox of 'ignore reddit' im sick of having to manually add -reddit

[–] Cube6392@beehaw.org 13 points 3 months ago (1 children)

Hey good news. Turns out you can use bing and not get back Reddit results

yeah but then i get back bing results. no one needs that

There's a browser extension for that. It also works on Pinterest and other useless sites. https://iorate.github.io/ublacklist/docs

[–] doctortofu@reddthat.com 44 points 3 months ago

I can see why spez is upset about scrapers and search engines - imagine a company profiting from people creating lots of data, just hoarding it and using it for free, and not paying those people a cent. Preposterous, right? :)

[–] Moonrise2473@feddit.it 28 points 3 months ago* (last edited 3 months ago) (4 children)

A search engine shouldn't have to pay a website for the honor of bringing it visits and ad views.

Fuck reddit, get delisted, no problem.

Weird that google is ignoring their robots.txt though.

Even if they pay them for being able to say that glue is perfect on pizza, having

User-agent: *
Disallow: /

should block Googlebot too. That means Google programmed an exception into Googlebot to ignore robots.txt on that domain, and that shouldn't be done. What's the purpose of that file then?

Because robots.txt is completely honor-based (there's no need to pretend to be another bot; a crawler could just ignore it), it should be

User-agent: Googlebot
Disallow:
User-agent: *
Disallow: /

[–] MrSoup@lemmy.zip 28 points 3 months ago (2 children)

I doubt Google respects any robots.txt

[–] DaGeek247@fedia.io 27 points 3 months ago (3 children)

My robots.txt has been respected by every bot that visited it in the past three months. I know this because I wrote a page that IP bans anything that visits it, and I also put it as a disallowed path in the robots.txt file.

I've only gotten like, 20 visits in the past three months though, so, very small sample size.
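A minimal sketch of the trap described above, with a hypothetical trap path (in a real deployment the ban would be enforced by the web server or firewall, not an in-memory set): robots.txt disallows a path no human would ever visit, so the only clients that request it are bots ignoring robots.txt.

```python
# robots.txt would contain something like (path is made up for illustration):
#   User-agent: *
#   Disallow: /do-not-crawl
TRAP_PATH = "/do-not-crawl"
banned_ips = set()

def handle_request(client_ip: str, path: str) -> int:
    """Return an HTTP status code; ban any IP that requests the trap path."""
    if client_ip in banned_ips:
        return 403  # previously banned for ignoring robots.txt
    if path == TRAP_PATH:
        banned_ips.add(client_ip)  # only robots.txt violators ever reach this
        return 403
    return 200
```

Well-behaved crawlers never request the trap, so they are unaffected; anything that does request it is banned from then on.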

[–] mozz@mbin.grits.dev 13 points 3 months ago (1 children)

I know this because I wrote a page that IP bans anything that visits it, and I also put it as a disallowed path in the robots.txt file.

This is fuckin GENIUS

[–] Moonrise2473@feddit.it 8 points 3 months ago (2 children)

only if you don't want any visits except from yourself, because this removes your site from any search engine

They should write a "disallow: /juicy-content" rule and then block anything that tries to access that page (only bad bots would follow that path)

[–] Miaou@jlai.lu 23 points 3 months ago (1 children)

That's exactly what was described..?

[–] Moonrise2473@feddit.it 3 points 3 months ago (1 children)

Oops. As a non-native English speaker I misunderstood what he meant. I wrongly thought he'd set the server to ban anything that asked for robots.txt

[–] Zoop@beehaw.org 2 points 3 months ago

Just in case it makes you feel any better: I'm a native English speaker who always aced the reading comprehension tests back in school, and I read it the exact same way. Lol! I'm glad I wasn't the only one. :)

[–] mozz@mbin.grits.dev 5 points 3 months ago

You need to read the description again more carefully. Imagine, for example, that by "a page" the person means a page called /juicy-content or something.

[–] MrSoup@lemmy.zip 2 points 3 months ago

Thank you for sharing

[–] thingsiplay@beehaw.org 2 points 3 months ago* (last edited 3 months ago)

Interesting way of testing this. Another would be to query the search engines with site:your.domain added (Edit: Typo corrected. Of course without the - in -site:, otherwise you exclude the site instead of limiting results to it.) to show results from your site only. Not an exhaustive check, but another tool to test this behavior.

[–] Moonrise2473@feddit.it 10 points 3 months ago

for regular webmasters they respect it, and they even warn a webmaster who submits a sitemap containing paths disallowed in robots.txt

[–] skullgiver@popplesburger.hilciferous.nl 15 points 3 months ago (1 children)

I think Reddit serves Googlebot a different robots.txt to prevent issues. For instance, check Google's cached version of robots.txt: it only blocks stuff that you'd expect to be blocked.
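If Reddit does do this, the mechanism could be as simple as branching on the User-Agent header when serving robots.txt. A speculative sketch (the rule bodies below are made up, not Reddit's actual files):

```python
PUBLIC_ROBOTS = "User-agent: *\nDisallow: /\n"          # everyone else: blocked
GOOGLEBOT_ROBOTS = "User-agent: *\nDisallow: /login\n"  # Googlebot: mostly open

def robots_txt_for(user_agent: str) -> str:
    """Serve a permissive robots.txt to Googlebot and a blocking one to the rest."""
    if "Googlebot" in user_agent:
        return GOOGLEBOT_ROBOTS
    return PUBLIC_ROBOTS
```

A real implementation would also verify the crawler's IP range, since any client can claim to be Googlebot in its User-Agent string.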

[–] Zoop@beehaw.org 2 points 3 months ago

User-Agent: bender

Disallow: /my_shiny_metal_ass

Ha!

[–] tal@lemmy.today 4 points 3 months ago* (last edited 3 months ago)

I guessed in a previous comment that, given their new partnership, Reddit is probably feeding their comment database to Google directly, which reduces load for both of them and permits Google to have real-time updates of the whole kit and caboodle rather than polling individual pages. Both Google and Reddit are better off doing that, and for Google it'd make sense to special-case any site that's large enough and valuable enough to warrant the effort.

I know that Reddit built functionality for that before, used it for pushshift.io and I believe bots.

I doubt that Google is actually using Googlebot on Reddit at all today.

I would bet against either Google violating robots.txt or Reddit serving different robots.txt files to different clients (why? It's just unnecessary complication).

[–] jarfil@beehaw.org 3 points 3 months ago

Google is paying for the use of Reddit's API, not for scraping the site.

That's new Reddit's business model: if you want "their" (users') content, you pay for API access.

[–] Ilandar@aussie.zone 28 points 3 months ago

“This was Microsoft's choice, not ours,” Reddit spokesperson Tim Rathschmidt told me in an email. “We are and have been open to agreements with companies who are open about their intentions and commit to treat us and our users fairly. If Bing or others want access within our policies, without training, without summarization, and without selling it to others, we are and have always been open to that. If they want to build a business selling Reddit data or using the data for training, we could be open to that, but it’s a commercial conversation.”

Mojeek, the search engine that initially told me that Reddit was blocking all search engines but Google, and which was unable to get in touch with Reddit at the time, told me Reddit got in touch after that story was published. Mojeek said it was unable to share any details about the deal because of an NDA, but confirmed that Reddit wanted to get paid for letting Mojeek crawl the site, even though Mojeek does not have any AI products.

This doesn't add up and it makes me wonder what else Google and reddit agreed upon. This situation benefits no one except Google, as far as I can tell. If reddit wants to milk search engines, and Microsoft is willing and able to pay (which I assume they are), there is no reason for the deal to not go ahead like it did with Google. Kinda makes my brain start going down the conspiracy path, but then again it's hardly unbelievable that Google would pursue anti-competitive business strategies, particularly when it comes to generative AI.

[–] ssm@lemmy.sdf.org 21 points 3 months ago (2 children)

I hope all big corporate SEO trash follows suit; once they've all filtered themselves out for profit we can hopefully get some semblance of an unshittified search experience.

[–] tal@lemmy.today 7 points 3 months ago* (last edited 3 months ago) (2 children)

The reason that robots.txt generally worked was because nobody was trying to really leverage it against bot operators. I'm not sure that this might not just kill robots.txt. Historically, search engines wanted to index stuff and websites wanted to be indexed. Their interests were aligned, so the convention worked. This no longer holds if things like the Google-Reddit partnership become common.

Reddit can also try to detect and block crawlers; robots.txt isn't the only tool in their toolbox.

Microsoft, unlike most companies, does actually have a technical counter that Reddit probably cannot stop, if it comes to that and Microsoft wants to do a "hostile index" of Reddit.

Microsoft's browser, Edge, is used by a bunch of people, and Microsoft can probably rig it up to send content of Reddit pages requested by their browser's users sufficient to build their index. Reddit can't stop that without blocking Edge users. I expect that that'd probably be exploring a lot of unexplored legal territory under the laws of many countries. It also wouldn't be as good as Google's (I assume real-time) access to the comments, but they'd get to them.

Browsers do report the host-referrer, which would permit Reddit to detect that a given user has arrived from Bing and block them:

https://en.wikipedia.org/wiki/HTTP_referer

In HTTP, "Referer" (a misspelling of "Referrer"[1]) is an optional HTTP header field that identifies the address of the web page (i.e., the URI or IRI), from which the resource has been requested. By checking the referrer, the server providing the new web page can see where the request originated.

In the most common situation, this means that when a user clicks a hyperlink in a web browser, causing the browser to send a request to the server holding the destination web page, the request may include the Referer field, which indicates the last page the user was on (the one where they clicked the link).

Web sites and web servers log the content of the received Referer field to identify the web page from which the user followed a link, for promotional or statistical purposes.[2] This entails a loss of privacy for the user and may introduce a security risk.[3] To mitigate security risks, browsers have been steadily reducing the amount of information sent in Referer. As of March 2021, by default Chrome,[4] Chromium-based Edge, Firefox,[5] Safari[6] default to sending only the origin in cross-origin requests, stripping out everything but the domain name.

Reddit could block browsers with a host-referrer off bing.com, killing the ability of Bing to link to them. I don't know if there's a way for a linking site to ask a browser to not give or forge the host-referrer. For Edge users -- not all Bing users -- Microsoft could modify the browser to do so, forcing Reddit to decide whether to block all Edge users or not.
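Server-side, that referrer check is only a few lines. A sketch (the blocklist is hypothetical, and as the quoted article notes, modern browsers usually send only the origin on cross-site requests, which is still enough to match the domain):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of search-engine referrer hosts.
BLOCKED_REFERRER_HOSTS = {"bing.com", "www.bing.com"}

def should_block(headers: dict) -> bool:
    """Block requests whose Referer header points at a blocked search engine."""
    referer = headers.get("Referer", "")
    host = urlparse(referer).netloc.lower()
    return host in BLOCKED_REFERRER_HOSTS
```

Requests with no Referer at all (direct visits, privacy-conscious browsers) pass through, which is why this kind of blocking is easy to evade.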

[–] AceSLS@ani.social 2 points 3 months ago

They can try to block crawlers all they want

They will not succeed without restricting access to Reddit to an unusable degree, since crawlers can be coded to imitate real users close enough. Combine that with enough proxies and they can't do jack shit

Also you could get around the Referer header quite easily via redirects (unless Reddit went ahead and used a whitelist for those, which again would be a very stupid decision) and some more methods
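The redirect trick works because a search engine can bounce clicks through an interstitial response that asks the browser to drop the header. Referrer-Policy is a real standard header; the response shape below is just a sketch, not any particular framework's API:

```python
def interstitial_redirect(target_url: str) -> dict:
    """Redirect to the target while telling the browser to send no Referer."""
    return {
        "status": 302,
        "headers": {
            "Location": target_url,
            "Referrer-Policy": "no-referrer",  # browser omits Referer on the hop
        },
    }
```

The destination site then sees a request with no Referer at all, indistinguishable from a direct visit.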

[–] CanadaPlus@lemmy.sdf.org 2 points 3 months ago

Man, wouldn't that be nice. There's too much money in appearing on searches for me to ever expect that to happen, though.

[–] TehPers@beehaw.org 11 points 3 months ago (1 children)

Joke's on Reddit. I've been blocking their results in the search engine I use for months!

I wonder if this will end up being pursued as an antitrust case. If anything, it'll reduce traffic to Reddit from non-Google users, so hopefully that kills them off just a little faster.

[–] AVincentInSpace@pawb.social 10 points 3 months ago (1 children)

Come on. Be realistic. Chrome has 70% browser market share and people are already used to tacking "Reddit" onto the end of their search queries to find useful information. If anything this will have no effect besides steering people towards Google.

[–] TehPers@beehaw.org 5 points 3 months ago (2 children)

People on Chrome adding Reddit to their Google searches already use Google. People not using Google who don't search "Reddit" are going to see fewer Reddit results.

No, this won't kill Reddit, but it certainly isn't helping them get more traffic.

[–] Cube6392@beehaw.org 2 points 3 months ago

They don't care about traffic. They care about the existing barrel of data for the data models

[–] lemmyvore@feddit.nl 2 points 3 months ago

...I thought that was the whole point of Spez blocking other spiders.