Sites scramble to block ChatGPT web crawler after instructions emerge
(arstechnica.com)
You think Google thought about robots.txt before they developed their search engine? Nah, it's all public Internet, and they scraped away. A non-zero percentage of web sites will bother to follow these instructions, but it might as well be zero.
Yeah, I always assumed robots.txt only told them to hide it from search results, but Google still scrapes everything they can from you. It's the illusion that they skipped over you.
If you look in the server logs, you can see what their spiders are grabbing.
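If you want to do that kind of log check yourself, a one-liner is enough. This is a sketch with a sample log inlined for illustration; on a real server you'd point the pipeline at your actual access log (e.g. an nginx or Apache file in common log format, whose path will vary):

```shell
# Sample access log in common log format (inlined so the sketch is
# self-contained; substitute your real log file, e.g. /var/log/nginx/access.log)
cat > /tmp/access.sample <<'EOF'
66.249.66.1 - - [10/Aug/2023:10:00:00 +0000] "GET /robots.txt HTTP/1.1" 200 68 "-" "Googlebot/2.1"
66.249.66.1 - - [10/Aug/2023:10:00:01 +0000] "GET /post/1 HTTP/1.1" 200 5120 "-" "Googlebot/2.1"
203.0.113.9 - - [10/Aug/2023:10:00:02 +0000] "GET /post/1 HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
EOF

# Count which URLs Google's spider fetched: filter by user agent,
# take the request path (field 7 in common log format), tally.
grep "Googlebot" /tmp/access.sample | awk '{print $7}' | sort | uniq -c | sort -rn
```

This only catches crawlers that identify themselves honestly in the User-Agent header, of course.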
No, you've got it backwards.
Robots.txt absolutely stops Google from scraping your site.
But they can still learn enough by scraping other sites that link to yours to build a concrete picture of the contents of your website and they will use that info to populate search results that link to you.
If you don't want to appear in search results, then you need to tell Google which pages to hide, and to tell them that, you have to allow them to crawl your site.
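For anyone who wants to check how a crawler that does honor robots.txt would read a given file, Python's standard library ships a parser. A minimal sketch, using a hypothetical robots.txt that blocks Googlebot from one directory while leaving everything else open:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block Googlebot from /private/, allow everyone else everywhere
robots_txt = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved Googlebot may not fetch the private path...
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
# ...but is free to fetch the rest of the site.
print(rp.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
```

Whether a given crawler actually obeys the file is, as this thread shows, a separate question.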
Very early on, at least, their spiders respected robots.txt.
I know there are folks who block all of the Big G in their robots.txt files on principle; might want to ask them whether it works or not.
I do, and I can confirm there are no requests (except for robots.txt and the odd /favicon.ico), so Google sorta respects robots.txt. They do have a weird gotcha though: they still put the URLs in search, they just appear with a useless description. Their suggestion to avoid that can be summarized as: don't block us, let us crawl, and just tell us not to use the result. Just trust us! They could very easily change that behavior to make more sense; not a single damn person with Google blocked in robots.txt wants to be indexed. Their advice about password-protecting pages kind of makes sense, but my concern isn't security, it's that I don't like them (or Bing or Yandex).
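For context on that gotcha: the mechanism Google documents for keeping a page out of results is a noindex directive, which only takes effect if the crawler is allowed to fetch the page and see it, which is why a robots.txt block alone leaves the bare URL in the index. A minimal sketch of the page-level version:

```html
<!-- Tells indexers to drop this page from results. Only works if
     robots.txt does NOT block the crawler from fetching the page,
     since the directive has to be crawled to be seen. -->
<meta name="robots" content="noindex">
```

The same directive can be sent for non-HTML resources as an `X-Robots-Tag: noindex` HTTP response header.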
Another gotcha I've seen linked is that their ad-targeting bot for Google AdSense (a different crawler) doesn't respect a * exclusion, but that kind of makes sense, since it will only ever visit your site if you place AdSense ads on it. And I suppose they'll train Bard on all the data they scraped, because of course they will. Probably no way to opt out of that without opting out of Google Search as well.
Now that's a dirty trick.