Spider Trap

A spider trap (or crawler trap) is a set of web pages that may, intentionally or unintentionally, cause a web crawler or search bot to make an infinite number of requests or cause a poorly constructed crawler to crash. Web crawlers are also called web spiders, from which the name is derived. Spider traps may be created deliberately to "catch" spambots or other crawlers that waste a website's bandwidth. They may also be created unintentionally, for example by calendars that use dynamic pages with links that continually point to the next day or year.
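
A calendar trap can arise with no malicious intent at all: if the "next day" link is computed dynamically, there is always one more page to crawl. A minimal sketch of how this happens, assuming a Flask app whose route and markup are purely illustrative:

    # Unintentional calendar trap: every page links to the next day,
    # so a crawler that follows links walks forward through dates
    # effectively without bound. Route and markup are illustrative.
    from datetime import date, timedelta

    from flask import Flask

    app = Flask(__name__)

    @app.route("/calendar/<int:year>/<int:month>/<int:day>")
    def calendar(year, month, day):
        current = date(year, month, day)
        nxt = current + timedelta(days=1)
        return (
            f"<h1>Events on {current.isoformat()}</h1>"
            f'<a href="/calendar/{nxt.year}/{nxt.month}/{nxt.day}">next day</a>'
        )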

Common techniques used are:

  • creation of indefinitely deep directory structures like
    http://foo.com/bar/foo/bar/foo/bar/foo/bar/.....
    (see the sketch after this list)
  • dynamic pages that produce an unbounded number of documents for a web crawler to follow, such as calendars and algorithmically generated language poetry[1]
  • documents filled with a very large number of characters, crashing the lexical analyzer that parses the document
  • documents with session IDs (derived from required cookies) embedded in their URLs, so that a crawler which does not retain cookies sees the same pages under endlessly new URLs
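
As a concrete sketch of the first technique, a trap can be served by a handler that answers every request with a link one level deeper; the host, port, and path fragments below are illustrative:

    # Indefinitely deep directory trap using only the standard library.
    # Every response links one level deeper, so link-following never
    # reaches a leaf: /bar -> /bar/foo/bar -> /bar/foo/bar/foo/bar -> ...
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TrapHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            deeper = self.path.rstrip("/") + "/foo/bar"
            body = f'<a href="{deeper}">deeper</a>'.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), TrapHandler).serve_forever()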

There is no algorithm to detect all spider traps. Some classes of traps can be detected automatically, but new, unrecognized traps arise quickly.
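Crawlers therefore rely on heuristics. A minimal sketch of two cheap checks that catch the directory-repetition class of trap; the thresholds are illustrative assumptions, and real crawlers combine many more signals:

    # Flag URLs whose paths are suspiciously deep or dominated by a
    # repeating segment, as in /bar/foo/bar/foo/bar/...
    # Thresholds are illustrative, not standard values.
    from urllib.parse import urlparse

    MAX_DEPTH = 12
    MAX_REPEATS = 3

    def looks_like_trap(url: str) -> bool:
        segments = [s for s in urlparse(url).path.split("/") if s]
        if len(segments) > MAX_DEPTH:
            return True
        for seg in set(segments):
            if segments.count(seg) > MAX_REPEATS:
                return True
        return False

    assert looks_like_trap("http://foo.com/bar/foo/bar/foo/bar/foo/bar/")
    assert not looks_like_trap("http://foo.com/2014/03/spider-trap.html")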

A spider trap causes a web crawler to enter something like an infinite loop, which wastes the spider's resources, lowers its productivity, and, in the case of a poorly written crawler, can crash the program. Polite spiders alternate requests between different hosts and do not request documents from the same server more than once every several seconds, so a "polite" web crawler is affected to a much lesser degree than an "impolite" one.
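
The core of that politeness is a per-host rate limit. A minimal sketch, where the class name and the five-second delay are illustrative choices rather than a standard:

    # Remember the last request time per host and sleep until a
    # minimum delay has elapsed before hitting that host again.
    import time
    from urllib.parse import urlparse

    class PolitenessLimiter:
        def __init__(self, min_delay: float = 5.0):
            self.min_delay = min_delay
            self.last_hit: dict[str, float] = {}

        def wait(self, url: str) -> None:
            host = urlparse(url).netloc
            elapsed = time.monotonic() - self.last_hit.get(host, float("-inf"))
            if elapsed < self.min_delay:
                time.sleep(self.min_delay - elapsed)
            self.last_hit[host] = time.monotonic()

Even inside a trap, a crawler that calls wait() before every fetch takes at most one trap page every few seconds from the offending host, so the trap wastes a trickle of resources rather than a flood.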

In addition, sites with spider traps usually have a robots.txt file telling bots to stay out of the trap, so a legitimate "polite" bot would not fall into it, whereas an "impolite" bot that disregards the robots.txt settings would be affected.
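
Python's standard library ships a robots.txt parser, so the check a polite bot performs before each fetch is short; the user agent string and trap URL below are illustrative:

    # Fetch and parse robots.txt, then ask whether a URL may be
    # crawled. If the site lists "Disallow: /bar/", the trap is skipped.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("http://foo.com/robots.txt")
    rp.read()  # downloads and parses the file

    if rp.can_fetch("MyCrawler/1.0", "http://foo.com/bar/foo/bar/"):
        print("allowed to crawl")
    else:
        print("disallowed; skipping the spider trap")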

Notes:

An infinite-recursion website that lures web crawlers into an infinite indexing loop.

Folksonomies: computer science hacking

Spider trap
Electronic/World Wide Web > Wiki: Various (2014), Spider trap, Wikipedia. Retrieved 2014-03-17.
  • Source Material [en.wikipedia.org]
  • Folksonomies: software hacking