🕷 CrawlerDetect is a Python class for detecting bots/crawlers/spiders via the user agent

moskrc, updated 2022-08-30 20:45:16

About CrawlerDetect

This is a Python wrapper for CrawlerDetect, the web crawler detection library. It detects bots/crawlers/spiders via the user agent and other HTTP headers, and currently recognizes more than 1,000 bots/spiders/crawlers.

Installation

Run `pip install crawlerdetect`

Usage

Variant 1

```Python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# True if crawler user agent detected
```

Variant 2

```Python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect(user_agent='Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit (KHTML, like Gecko) Mobile (compatible; Yahoo Ad monitoring; https://help.yahoo.com/kb/yahoo-ad-monitoring-SLN24857.html)')
crawler_detect.isCrawler()
# True if crawler user agent detected
```

Variant 3

```Python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect(headers={
    'DOCUMENT_ROOT': '/home/test/public_html',
    'GATEWAY_INTERFACE': 'CGI/1.1',
    'HTTP_ACCEPT': '/',
    'HTTP_ACCEPT_ENCODING': 'gzip, deflate',
    'HTTP_CACHE_CONTROL': 'no-cache',
    'HTTP_CONNECTION': 'Keep-Alive',
    'HTTP_FROM': 'googlebot(at)googlebot.com',
    'HTTP_HOST': 'www.test.com',
    'HTTP_PRAGMA': 'no-cache',
    'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36',
    'PATH': '/bin:/usr/bin',
    'QUERY_STRING': 'order=closingDate',
    'REDIRECT_STATUS': '200',
    'REMOTE_ADDR': '127.0.0.1',
    'REMOTE_PORT': '3360',
    'REQUEST_METHOD': 'GET',
    'REQUEST_URI': '/?test=testing',
    'SCRIPT_FILENAME': '/home/test/public_html/index.php',
    'SCRIPT_NAME': '/index.php',
    'SERVER_ADDR': '127.0.0.1',
    'SERVER_ADMIN': '[email protected]',
    'SERVER_NAME': 'www.test.com',
    'SERVER_PORT': '80',
    'SERVER_PROTOCOL': 'HTTP/1.1',
    'SERVER_SIGNATURE': '',
    'SERVER_SOFTWARE': 'Apache',
    'UNIQUE_ID': 'Vx6MENRxerBUSDEQgFLAAAAAS',
    'PHP_SELF': '/index.php',
    'REQUEST_TIME_FLOAT': 1461619728.0705,
    'REQUEST_TIME': 1461619728,
})
crawler_detect.isCrawler()
# True if crawler user agent detected
```

Output the name of the bot that matched (if any)

```Python
from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# True if crawler user agent detected
crawler_detect.getMatches()
# 'Sosospider'
```

Contributing

If you find a bot/spider/crawler user agent that CrawlerDetect fails to detect, please submit a pull request with the regex pattern added to the array in providers/crawlers.py and add the failing user agent to tests/crawlers.txt.

Failing that, just create an issue with the user agent you have found, and we'll take it from there :)
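Before opening a pull request, it is worth sanity-checking that the new pattern actually matches the failing user agent. A minimal sketch using Python's `re` module (both `ExampleBot` and the user agent string below are made-up placeholders, not real entries from `providers/crawlers.py`):

```Python
import re

# Hypothetical pattern for a made-up crawler; real entries in
# providers/crawlers.py follow the same one-regex-per-entry shape.
pattern = r"ExampleBot"

# The user agent that went undetected, as it would be appended
# to tests/crawlers.txt.
user_agent = "Mozilla/5.0 (compatible; ExampleBot/1.0; +http://example.com/bot)"

# Matching is case-insensitive against the full user agent string.
match = re.search(pattern, user_agent, re.IGNORECASE)
print(match.group(0) if match else "no match")
```

If the pattern fails to match here, it will also fail in the library's test suite, so this quick check saves a review round-trip.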

ES6 Library

To use this library with Node.js or any ES6-based application, check out es6-crawler-detect.

.NET Library

To use this library in a .NET Standard (including .NET Core) project, check out NetCrawlerDetect.

Nette Extension

To use this library with the Nette framework, check out NetteCrawlerDetect.

Ruby Gem

To use this library with Ruby on Rails or any Ruby-based application, check out the crawler_detect gem.

Parts of this class are based on the brilliant MobileDetect.

Issues

[Question] About blocking selenium,puppeteer,playwright, scrapy bot

opened on 2022-11-01 16:22:18 by thomasmuus

Hi, it seems that when I wrap this in an API to detect automated scrapers such as Selenium, Puppeteer, Playwright, and Scrapy, it is unable to detect them.

How can we make it detect scrapers driven by Selenium, Puppeteer, Playwright, Scrapy, or any other bot, especially ones that scrape information?

Releases

0.1.5 2022-08-30 20:35:43

Sync with https://github.com/JayBizzle/Crawler-Detect/releases/tag/v1.2.111

v0.1.3 2019-08-16 07:17:50

Vitalii

Full Stack Web Developer (Python, Django, Vue.js, AWS)
