Yeah, I’ve been curious whether you could explicitly disallow a page in robots.txt, hide an invisible link to that same page in your footer, and then treat any request for it as grounds for an immediate IP ban.
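The trap would look something like this minimal sketch. The path name and the in-memory ban set are made up for illustration; a real setup would hook into the web server or firewall instead:

```python
# Honeypot sketch: a path that is both Disallow'ed in robots.txt and
# linked invisibly in the footer. Any client requesting it gets banned.

BANNED_IPS = set()
TRAP_PATH = "/secret-trap"  # hypothetical; must match the robots.txt Disallow line

ROBOTS_TXT = f"User-agent: *\nDisallow: {TRAP_PATH}\n"

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code for a (ip, path) pair."""
    if ip in BANNED_IPS:
        return 403                 # already banned: refuse everything
    if path == TRAP_PATH:
        BANNED_IPS.add(ip)         # well-behaved crawlers never request this path
        return 403
    return 200
```

Since the link is invisible to humans and the path is disallowed, only a crawler that parses the page but ignores robots.txt should ever trip it.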
I guess it depends too much on the nature of the crawler. Does it actually extract links from robots.txt or is it merely ignoring them? If the crawlers are distributed, do page hits come from the same IP that the robots.txt was hit from?
It gets harder and harder to get away from CDNs and captchas, which are not exactly good things from an open source POV for the most part.
Are there any good log monitoring programs that will automatically blacklist the IP of any crawler that ignores robots.txt?
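fail2ban can do this with a custom filter plus a ban action, but the core of it is just log scanning. A rough sketch, with the disallowed paths and combined-log format assumed:

```python
import re

# Scan combined-format access log lines for hits on robots.txt-disallowed
# paths and collect the offending IPs (what a fail2ban filter would match).

DISALLOWED = ("/secret-trap", "/admin-old")   # hypothetical paths from robots.txt

LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

def ips_to_ban(log_lines):
    banned = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, path = m.group(1), m.group(2)
        if any(path.startswith(p) for p in DISALLOWED):
            banned.add(ip)    # candidate for a firewall rule, e.g. an iptables DROP
    return banned
```

The caveat from above still applies: with a distributed crawler, banning the one IP that hit the trap may not slow down the rest of the fleet.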