Mirror of https://github.com/ai-robots-txt/ai.robots.txt.git (synced 2025-06-19 10:00:52 +00:00)

Compare commits: 138 commits
4ed17b8e4a
5326c202b5
a31ae1e6d0
7535893aec
eb05f2f527
26a46c409d
2b68568ac2
b05f2fee00
e53d81c66d
20e327e74e
8f17718e76
d760f9216f
842e2256e8
229ea20426
14d68f05ba
cf598b6b71
3759a6bf14
7867c3e26c
e21f6ae1b6
ac7ed17e71
81747e6772
528d77bf07
77393df5aa
75ea75a95b
2fca1ddcf1
9c28c63a0c
395c013eea
4568d69b0e
03831a7eb5
2b5a59a303
3efabc603d
b35f9a31d7
8f75f4a2f5
080946c360
7eec033cad
3187fd8a32
d239e7e5ad
9dbf34010a
87016d1504
899ce01c55
4af776f0a0
1dd66b6969
814df6b9a0
268922f8f2
4259b25ccc
d22b9ec51a
3e8edd083e
093ab81d78
7bf7f9164d
fedb658cc0
851eabe059
7c5389f4a0
af597586b6
b1d9a60a38
1c2acd75b7
202d3c3b9a
0a78fe1e76
8b151b2cdc
8a8001cbec
fe1267e290
9297c7dfa3
7a2e6cba52
dd1ed174b7
89c0fbaf86
ca918a963f
5fba0b746d
16d1de7094
73f6f67adf
498aa50760
1c470babbe
84d63916d2
0c56b96fd9
28e69e631b
9539256cb3
9659c88b0c
c66d180295
9a9b1b41c0
b4610a725c
36a52a88d8
678380727e
fb8188c49d
ec995cd686
1310dbae46
91a88e2fa8
a4a9f2ac2b
66da70905f
50e739dd73
c6c7f1748f
934ac7b318
4654e14e9c
9bf31fbca8
9d846ced45
8d25a424d9
bbec639c14
422cf9e29b
33c5ce1326
774b1ddf52
b1856e6988
d05ede8fe1
fd41de8522
4a6f37d727
e0cdb278fb
a96e330989
156e6baa09
d9f882a9b2
305188b2e7
4a764bba18
a891ad7213
b65f45e408
49e58b1573
c6f308cbd0
5f5a89c38c
6b0349f37d
8dc36aa2e2
ae8f74c10c
5b8650b99b
c249de99a3
ec18af7624
6851413c52
dba03d809c
68d1d93714
1183187be9
7c3b5a2cb2
4f3f4cd0dd
5a312c5f4d
da85207314
6ecfcdfcbf
5e7c3c432f
9f41d4c11c
8a74896333
1d55a205e4
8494a7fcaa
c7c1e7b96f
17b826a6d3
0bd3fa63b8
a884a2afb9
c0d418cd87
abfd6dfcd1
21 changed files with 830 additions and 86 deletions
7 .github/workflows/ai_robots_update.yml (vendored)

```diff
@@ -20,7 +20,12 @@ jobs:
           echo "... done."
           git --no-pager diff
           git add -A
-          git diff --quiet && git diff --staged --quiet || (git commit -m "Update from Dark Visitors" && git push)
+          if ! git diff --cached --quiet; then
+            git commit -m "Update from Dark Visitors"
+            git push
+          else
+            echo "No changes to commit."
+          fi
         shell: bash
   convert:
     name: convert
```
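The rewritten step hinges on `git diff --cached --quiet`, which exits non-zero exactly when staged changes exist (the old one-liner checked both the work tree and the index and was easy to misread). A quick sketch to observe the exit codes, assuming a throwaway `/tmp/demo` directory and a configured git identity:

```bash
# Observe the exit codes that drive the branch above.
git init /tmp/demo && cd /tmp/demo
echo hi > file.txt && git add file.txt
git diff --cached --quiet; echo "exit=$?"   # exit=1: staged changes exist
git commit -q -m "init"
git diff --cached --quiet; echo "exit=$?"   # exit=0: nothing staged
```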
28 .github/workflows/run-tests.yml (vendored, new file)

```yaml
on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main

jobs:
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - name: Install dependencies
        run: |
          pip install -U requests beautifulsoup4
      - name: Run tests
        run: |
          code/tests.py
  lint-json:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      - name: JQ Json Lint
        run: jq . robots.json
```
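The same checks can be reproduced locally, assuming Python 3 and `jq` are available; these commands mirror the workflow steps above:

```console
pip install -U requests beautifulsoup4
code/tests.py      # runs the unittest suite (the file is executable)
jq . robots.json   # exits non-zero with a parse error if the JSON is invalid
```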
.htaccess

```diff
@@ -1,3 +1,3 @@
 RewriteEngine On
-RewriteCond %{HTTP_USER_AGENT} ^.*(AI2Bot|Ai2Bot-Dolma|Amazonbot|anthropic-ai|Applebot|Applebot-Extended|Brightbot\ 1.0|Bytespider|CCBot|ChatGPT-User|Claude-Web|ClaudeBot|cohere-ai|cohere-training-data-crawler|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|FriendlyCrawler|Google-Extended|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta-ExternalAgent|Meta-ExternalFetcher|OAI-SearchBot|omgili|omgilibot|PanguBot|PerplexityBot|PetalBot|Scrapy|SemrushBot-OCOB|SemrushBot-SWA|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio-Extended|YouBot).*$ [NC]
-RewriteRule .* - [F,L]
+RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot) [NC]
+RewriteRule !^/?robots\.txt$ - [F,L]
```
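The new `RewriteRule !^/?robots\.txt$ - [F,L]` forbids every URL except `robots.txt`, so blocked crawlers can still read the exclusion rules. One way to sanity-check this, assuming the `.htaccess` is deployed on a host you control (example.com is a placeholder):

```console
curl -s -o /dev/null -w "%{http_code}\n" -A "GPTBot" https://example.com/            # 403: blocked agent
curl -s -o /dev/null -w "%{http_code}\n" -A "GPTBot" https://example.com/robots.txt  # 200: robots.txt exempted
curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0" https://example.com/       # 200: normal browser
```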
3 Caddyfile (new file)

```
@aibots {
    header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)"
}
```
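The matcher only names the bots; a site block still has to act on it. A minimal sketch of wiring it up (the site address and snippet path are placeholders, not part of the repository):

```
example.com {
    # Textually include the file containing the @aibots matcher above;
    # the path is an assumption, adjust to where you keep it.
    import /etc/caddy/ai-bots-matcher
    abort @aibots
    file_server
}
```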
8 FAQ.md

```diff
@@ -55,3 +55,11 @@ That depends on your stack.
 ## How can I contribute?
 
 Open a pull request. It will be reviewed and acted upon appropriately. **We really appreciate contributions** — this is a community effort.
+
+## I'd like to donate money
+
+That's kind of you, but we don't need your money. If you insist, we'd love you to make a donation to the [American Civil Liberties Union](https://www.aclu.org/), the [Disasters Emergency Committee](https://www.dec.org.uk/), or a similar organisation.
+
+## Can my company sponsor ai.robots.txt?
+
+No, thank you. We do not accept sponsorship of any kind. We prefer to maintain our independence. Our costs are negligible as we are entirely volunteer-based and community-driven.
```
33 README.md

````diff
@@ -2,7 +2,7 @@
 
 <img src="/assets/images/noai-logo.png" width="100" />
 
-This is an open list of web crawlers associated with AI companies and the training of LLMs to block. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).
+This list contains AI-related crawlers of all types, regardless of purpose. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).
 
 A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com) and we appreciate the ongoing effort they put in to track these crawlers.
 
@@ -13,16 +13,45 @@ If you'd like to add information about a crawler to the list, please make a pull
 This repository provides the following files:
 - `robots.txt`
 - `.htaccess`
+- `nginx-block-ai-bots.conf`
+- `Caddyfile`
+- `haproxy-block-ai-bots.txt`
 
 `robots.txt` implements the Robots Exclusion Protocol ([RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html)).
 
 `.htaccess` may be used to configure web servers such as [Apache httpd](https://httpd.apache.org/) to return an error page when one of the listed AI crawlers sends a request to the web server.
 Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/current/howto/htaccess.html), more performant methods than an `.htaccess` file exist.
 
+`nginx-block-ai-bots.conf` implements an Nginx configuration snippet that can be included in any virtual host `server {}` block via the `include` directive.
+
+`Caddyfile` includes a Header Regex matcher group you can copy or import into your Caddyfile; the rejection can then be handled with `abort @aibots`.
+
+`haproxy-block-ai-bots.txt` may be used to configure HAProxy to block AI bots. To implement it:
+
+1. Add the file to the config directory of HAProxy.
+2. Add the following lines in the `frontend` section:
+
+```
+acl ai_robot hdr_sub(user-agent) -i -f /etc/haproxy/haproxy-block-ai-bots.txt
+http-request deny if ai_robot
+```
+
+(Note that the path of the `haproxy-block-ai-bots.txt` may be different in your environment.)
+
+[Bing uses the data it crawls for AI and training, you may opt out by adding a `meta` tag to the `head` of your site.](./docs/additional-steps/bing.md)
+
+### Related
+
+- [Robots.txt Traefik plugin](https://plugins.traefik.io/plugins/681b2f3fba3486128fc34fae/robots-txt-plugin): middleware plugin for [Traefik](https://traefik.io/traefik/) to automatically add rules of the [robots.txt](./robots.txt) file on-the-fly.
+
 ## Contributing
 
-A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, and `.htaccess`.
+A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, `.htaccess` and `nginx-block-ai-bots.conf`.
+
+You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3 and issuing:
+
+```console
+code/tests.py
+```
 
 ## Subscribe to updates
````
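For the Nginx snippet, a minimal sketch of how it might be wired into a virtual host; the server name, root, and snippet path are placeholders, not something the repository prescribes:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder

    # Pull in the generated bot-blocking rules; the path is an assumption,
    # use wherever you keep the file.
    include /etc/nginx/snippets/nginx-block-ai-bots.conf;

    root /var/www/html;
}
```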
83 code/robots.py (Normal file → Executable file)

```diff
@@ -1,8 +1,11 @@
-import json
-from pathlib import Path
+#!/usr/bin/env python3
 
+import json
+import re
 import requests
+
 from bs4 import BeautifulSoup
+from pathlib import Path
 
 
 def load_robots_json():
@@ -27,6 +30,7 @@ def updated_robots_json(soup):
     """Update AI scraper information with data from darkvisitors."""
     existing_content = load_robots_json()
     to_include = [
+        "AI Agents",
         "AI Assistants",
         "AI Data Scrapers",
         "AI Search Crawlers",
@@ -47,6 +51,7 @@ def updated_robots_json(soup):
             continue
         for agent in section.find_all("a", href=True):
             name = agent.find("div", {"class": "agent-name"}).get_text().strip()
+            name = clean_robot_name(name)
             desc = agent.find("p").get_text().strip()
 
             default_values = {
@@ -98,8 +103,24 @@ def updated_robots_json(soup):
     return sorted_robots
 
 
-def ingest_darkvisitors():
+def clean_robot_name(name):
+    """ Clean the robot name by removing some characters that were mangled by html software once. """
+    # This was specifically spotted in "Perplexity-User"
+    # Looks like a non-breaking hyphen introduced by the HTML rendering software
+    # Reading the source page for Perplexity: https://docs.perplexity.ai/guides/bots
+    # You can see the bot is listed several times as "Perplexity-User" with a normal hyphen,
+    # and it's only the Row-Heading that has the special hyphen
+    #
+    # Technically, there's no reason there wouldn't someday be a bot that
+    # actually uses a non-breaking hyphen, but that seems unlikely,
+    # so this solution should be fine for now.
+    result = re.sub(r"\u2011", "-", name)
+    if result != name:
+        print(f"\tCleaned '{name}' to '{result}' - unicode/html mangled chars normalized.")
+    return result
+
+
+def ingest_darkvisitors():
     old_robots_json = load_robots_json()
     soup = get_agent_soup()
     if soup:
@@ -121,30 +142,55 @@ def json_to_txt(robots_json):
     return robots_txt
 
 
+def escape_md(s):
+    return re.sub(r"([]*\\|`(){}<>#+-.!_[])", r"\\\1", s)
+
+
 def json_to_table(robots_json):
     """Compose a markdown table with the information in robots.json"""
     table = "| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |\n"
-    table += "|-----|----------|-----------------------|----------|------------------|-------------|\n"
+    table += "|------|----------|-----------------------|----------|------------------|-------------|\n"
 
     for name, robot in robots_json.items():
-        table += f'| {name} | {robot["operator"]} | {robot["respect"]} | {robot["function"]} | {robot["frequency"]} | {robot["description"]} |\n'
+        table += f'| {escape_md(name)} | {robot["operator"]} | {robot["respect"]} | {robot["function"]} | {robot["frequency"]} | {robot["description"]} |\n'
 
     return table
 
 
+def list_to_pcre(lst):
+    # Python re is not 100% identical to PCRE which is used by Apache, but it
+    # should probably be close enough in the real world for re.escape to work.
+    formatted = "|".join(map(re.escape, lst))
+    return f"({formatted})"
+
+
 def json_to_htaccess(robot_json):
     # Creates a .htaccess filter file. It uses a regular expression to filter out
     # User agents that contain any of the blocked values.
     htaccess = "RewriteEngine On\n"
-    htaccess += "RewriteCond %{HTTP_USER_AGENT} ^.*("
-    # Escape spaces in each User Agent to build the regular expression
-    robots = map(lambda el: el.replace(" ", "\\ "), robot_json.keys())
-    htaccess += "|".join(robots)
-    htaccess += ").*$ [NC]\n"
-    htaccess += "RewriteRule .* - [F,L]"
+    htaccess += f"RewriteCond %{{HTTP_USER_AGENT}} {list_to_pcre(robot_json.keys())} [NC]\n"
+    htaccess += "RewriteRule !^/?robots\\.txt$ - [F,L]\n"
     return htaccess
 
 
+def json_to_nginx(robot_json):
+    # Creates an Nginx config file. This config snippet can be included in
+    # nginx server{} blocks to block AI bots.
+    config = f"if ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n    return 403;\n}}"
+    return config
+
+
+def json_to_caddy(robot_json):
+    caddyfile = "@aibots {\n "
+    caddyfile += f'   header_regexp User-Agent "{list_to_pcre(robot_json.keys())}"'
+    caddyfile += "\n}"
+    return caddyfile
+
+
+def json_to_haproxy(robots_json):
+    # Creates a source file for HAProxy. Follow instructions in the README to implement it.
+    txt = "\n".join(f"{k}" for k in robots_json.keys())
+    return txt
+
+
 def update_file_if_changed(file_name, converter):
     """Update files if newer content is available and log the (in)actions."""
@@ -171,6 +217,19 @@ def conversions():
         file_name="./.htaccess",
         converter=json_to_htaccess,
     )
+    update_file_if_changed(
+        file_name="./nginx-block-ai-bots.conf",
+        converter=json_to_nginx,
+    )
+    update_file_if_changed(
+        file_name="./Caddyfile",
+        converter=json_to_caddy,
+    )
+
+    update_file_if_changed(
+        file_name="./haproxy-block-ai-bots.txt",
+        converter=json_to_haproxy,
+    )
 
 
 if __name__ == "__main__":
```
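The two new helpers do the escaping that the old hand-rolled code only partially handled. A quick interactive sketch of what they produce, run from the `code/` directory so `robots` is importable (expected output shown as comments):

```python
from robots import escape_md, list_to_pcre

print(escape_md("Ai2Bot-Dolma"))
# -> Ai2Bot\-Dolma   (hyphen escaped so markdown tables render it literally)

print(list_to_pcre(["Kangaroo Bot", "iaskspider/2.0", "curl|sudo bash"]))
# -> (Kangaroo\ Bot|iaskspider/2\.0|curl\|sudo\ bash)
#    re.escape handles spaces, dots and pipes, ready for Apache/Nginx/Caddy.
```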
code/test_files/.htaccess

```diff
@@ -1,3 +1,3 @@
 RewriteEngine On
-RewriteCond %{HTTP_USER_AGENT} ^.*(AI2Bot|Ai2Bot-Dolma|Amazonbot|anthropic-ai|Applebot|Applebot-Extended|Bytespider|CCBot|ChatGPT-User|Claude-Web|ClaudeBot|cohere-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google-Extended|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta-ExternalAgent|Meta-ExternalFetcher|OAI-SearchBot|omgili|omgilibot|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio-Extended|YouBot).*$ [NC]
-RewriteRule .* - [F,L]
+RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash) [NC]
+RewriteRule !^/?robots\.txt$ - [F,L]
```
3 code/test_files/Caddyfile (new file)

```
@aibots {
    header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)"
}
```
47 code/test_files/haproxy-block-ai-bots.txt (new file)

```
AI2Bot
Ai2Bot-Dolma
Amazonbot
anthropic-ai
Applebot
Applebot-Extended
Bytespider
CCBot
ChatGPT-User
Claude-Web
ClaudeBot
cohere-ai
Diffbot
FacebookBot
facebookexternalhit
FriendlyCrawler
Google-Extended
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iaskspider/2.0
ICC-Crawler
ImagesiftBot
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
Meta-ExternalAgent
Meta-ExternalFetcher
OAI-SearchBot
omgili
omgilibot
Perplexity-User
PerplexityBot
PetalBot
Scrapy
Sidetrade indexer bot
Timpibot
VelenPublicWebCrawler
Webzio-Extended
YouBot
crawler.with.dots
star***crawler
Is this a crawler?
a[mazing]{42}(robot)
2^32$
curl|sudo bash
```
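Unlike the regex-based outputs, this file is a plain substring list consumed by HAProxy's `hdr_sub` matcher. A sketch of a complete `frontend` using it; the bind address, backend name, and file path are assumptions, adjust to your environment:

```
frontend web
    bind :80
    # Deny any request whose User-Agent contains one of the listed substrings,
    # case-insensitively (-i); the path below is an assumption.
    acl ai_robot hdr_sub(user-agent) -i -f /etc/haproxy/haproxy-block-ai-bots.txt
    http-request deny if ai_robot
    default_backend app
```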
3 code/test_files/nginx-block-ai-bots.conf (new file)

```nginx
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)") {
    return 403;
}
```
code/test_files/robots.json

```diff
@@ -223,6 +223,13 @@
         "operator": "[Webz.io](https://webz.io/)",
         "respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
     },
+    "Perplexity-User": {
+        "operator": "[Perplexity](https://www.perplexity.ai/)",
+        "respect": "[No](https://docs.perplexity.ai/guides/bots)",
+        "function": "Used to answer queries at the request of users.",
+        "frequency": "Only when prompted by a user.",
+        "description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
+    },
     "PerplexityBot": {
         "operator": "[Perplexity](https://www.perplexity.ai/)",
         "respect": "[No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/)",
@@ -278,5 +285,47 @@
         "function": "Scrapes data for search engine and LLMs.",
         "frequency": "No information.",
         "description": "Retrieves data used for You.com web search engine and LLMs."
+    },
+    "crawler.with.dots": {
+        "operator": "Test suite",
+        "respect": "No",
+        "function": "To ensure the code works correctly.",
+        "frequency": "No information.",
+        "description": "When used in the .htaccess regular expression dots need to be escaped."
+    },
+    "star***crawler": {
+        "operator": "Test suite",
+        "respect": "No",
+        "function": "To ensure the code works correctly.",
+        "frequency": "No information.",
+        "description": "When used in the .htaccess regular expression stars need to be escaped."
+    },
+    "Is this a crawler?": {
+        "operator": "Test suite",
+        "respect": "No",
+        "function": "To ensure the code works correctly.",
+        "frequency": "No information.",
+        "description": "When used in the .htaccess regular expression spaces and question marks need to be escaped."
+    },
+    "a[mazing]{42}(robot)": {
+        "operator": "Test suite",
+        "respect": "No",
+        "function": "To ensure the code works correctly.",
+        "frequency": "No information.",
+        "description": "When used in the .htaccess regular expression parentheses, braces, etc. need to be escaped."
+    },
+    "2^32$": {
+        "operator": "Test suite",
+        "respect": "No",
+        "function": "To ensure the code works correctly.",
+        "frequency": "No information.",
+        "description": "When used in the .htaccess regular expression RE anchor characters need to be escaped."
+    },
+    "curl|sudo bash": {
+        "operator": "Test suite",
+        "respect": "No",
+        "function": "To ensure the code works correctly.",
+        "frequency": "No information.",
+        "description": "When used in the .htaccess regular expression pipes need to be escaped."
     }
 }
```
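Since `robots.json` is the single source of truth for every generated file, a couple of `jq` one-liners are handy for inspecting it locally (paths relative to the repo root; `jq` is the same tool the lint workflow uses):

```console
jq . robots.json             # lint: exits non-zero on invalid JSON, as in CI
jq -r 'keys[]' robots.json   # list every user-agent name
jq 'length' robots.json      # count the entries
```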
code/test_files/robots.txt

```diff
@@ -30,6 +30,7 @@ User-agent: Meta-ExternalFetcher
 User-agent: OAI-SearchBot
 User-agent: omgili
 User-agent: omgilibot
+User-agent: Perplexity-User
 User-agent: PerplexityBot
 User-agent: PetalBot
 User-agent: Scrapy
@@ -38,4 +39,10 @@ User-agent: Timpibot
 User-agent: VelenPublicWebCrawler
 User-agent: Webzio-Extended
 User-agent: YouBot
+User-agent: crawler.with.dots
+User-agent: star***crawler
+User-agent: Is this a crawler?
+User-agent: a[mazing]{42}(robot)
+User-agent: 2^32$
+User-agent: curl|sudo bash
 Disallow: /
```
code/test_files/table-of-bot-metrics.md

```diff
@@ -1,42 +1,49 @@
 | Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
-|-----|----------|-----------------------|----------|------------------|-------------|
+|------|----------|-----------------------|----------|------------------|-------------|
 | AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
-| Ai2Bot-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
+| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
 | Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
-| anthropic-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
+| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
 | Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
-| Applebot-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
+| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
 | Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
 | CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
-| ChatGPT-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
+| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
-| Claude-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
+| Claude\-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
 | ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
-| cohere-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
+| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
 | Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
 | FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
 | facebookexternalhit | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | No information. | Unclear at this time. | Unclear at this time. |
 | FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
-| Google-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
+| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
 | GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
-| GoogleOther-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
+| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
-| GoogleOther-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
+| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
 | GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. |
-| iaskspider/2.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
+| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
-| ICC-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
+| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
 | ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
 | img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
 | ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
 | Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
-| Meta-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
+| Meta\-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
-| Meta-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
+| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
-| OAI-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
+| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
 | omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
 | omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io. |
+| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visit web pages to help provide an accurate answer and include links to the page in Perplexity response. |
 | PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/) | Used to answer queries at the request of users. | Takes action based on user prompts. | Operated by Perplexity to obtain results in response to user queries. |
 | PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
 | Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
 | Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
 | Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
 | VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
-| Webzio-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
+| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
 | YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
+| crawler\.with\.dots | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression dots need to be escaped. |
+| star\*\*\*crawler | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression stars need to be escaped. |
+| Is this a crawler? | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression spaces and question marks need to be escaped. |
+| a\[mazing\]\{42\}\(robot\) | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression parentheses, braces, etc. need to be escaped. |
+| 2^32$ | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression RE anchor characters need to be escaped. |
+| curl\|sudo bash | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression pipes need to be escaped. |
```
103
code/tests.py
Normal file → Executable file
103
code/tests.py
Normal file → Executable file
|
@@ -1,27 +1,94 @@
-"""These tests can be run with pytest.
-
-This requires pytest: pip install pytest
-
-cd to the `code` directory and run `pytest`
-"""
+#!/usr/bin/env python3
+"""To run these tests just execute this script."""
 
 import json
-from pathlib import Path
+import unittest
 
-from robots import json_to_txt, json_to_table, json_to_htaccess
+from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx, json_to_haproxy, json_to_caddy
 
 
-def test_robots_txt_creation():
-    robots_json = json.loads(Path("test_files/robots.json").read_text())
-    robots_txt = json_to_txt(robots_json)
-    assert Path("test_files/robots.txt").read_text() == robots_txt
+class RobotsUnittestExtensions:
+    def loadJson(self, pathname):
+        with open(pathname, "rt") as f:
+            return json.load(f)
+
+    def assertEqualsFile(self, f, s):
+        with open(f, "rt") as f:
+            f_contents = f.read()
+
+        return self.assertMultiLineEqual(f_contents, s)
 
 
-def test_table_of_bot_metrices_md():
-    robots_json = json.loads(Path("test_files/robots.json").read_text())
-    robots_table = json_to_table(robots_json)
-    assert Path("test_files/table-of-bot-metrics.md").read_text() == robots_table
+class TestRobotsTXTGeneration(unittest.TestCase, RobotsUnittestExtensions):
+    maxDiff = 8192
+
+    def setUp(self):
+        self.robots_dict = self.loadJson("test_files/robots.json")
+
+    def test_robots_txt_generation(self):
+        robots_txt = json_to_txt(self.robots_dict)
+        self.assertEqualsFile("test_files/robots.txt", robots_txt)
 
 
-def test_htaccess_creation():
-    robots_json = json.loads(Path("test_files/robots.json").read_text())
-    robots_htaccess = json_to_htaccess(robots_json)
-    assert Path("test_files/.htaccess").read_text() == robots_htaccess
+class TestTableMetricsGeneration(unittest.TestCase, RobotsUnittestExtensions):
+    maxDiff = 32768
+
+    def setUp(self):
+        self.robots_dict = self.loadJson("test_files/robots.json")
+
+    def test_table_generation(self):
+        robots_table = json_to_table(self.robots_dict)
+        self.assertEqualsFile("test_files/table-of-bot-metrics.md", robots_table)
+
+
+class TestHtaccessGeneration(unittest.TestCase, RobotsUnittestExtensions):
+    maxDiff = 8192
+
+    def setUp(self):
+        self.robots_dict = self.loadJson("test_files/robots.json")
+
+    def test_htaccess_generation(self):
+        robots_htaccess = json_to_htaccess(self.robots_dict)
+        self.assertEqualsFile("test_files/.htaccess", robots_htaccess)
+
+
+class TestNginxConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
+    maxDiff = 8192
+
+    def setUp(self):
+        self.robots_dict = self.loadJson("test_files/robots.json")
+
+    def test_nginx_generation(self):
+        robots_nginx = json_to_nginx(self.robots_dict)
+        self.assertEqualsFile("test_files/nginx-block-ai-bots.conf", robots_nginx)
+
+
+class TestHaproxyConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
+    maxDiff = 8192
+
+    def setUp(self):
+        self.robots_dict = self.loadJson("test_files/robots.json")
+
+    def test_haproxy_generation(self):
+        robots_haproxy = json_to_haproxy(self.robots_dict)
+        self.assertEqualsFile("test_files/haproxy-block-ai-bots.txt", robots_haproxy)
+
+
+class TestRobotsNameCleaning(unittest.TestCase):
+    def test_clean_name(self):
+        from robots import clean_robot_name
+
+        self.assertEqual(clean_robot_name("Perplexity‑User"), "Perplexity-User")
+
+
+class TestCaddyfileGeneration(unittest.TestCase, RobotsUnittestExtensions):
+    maxDiff = 8192
+
+    def setUp(self):
+        self.robots_dict = self.loadJson("test_files/robots.json")
+
+    def test_caddyfile_generation(self):
+        robots_caddyfile = json_to_caddy(self.robots_dict)
+        self.assertEqualsFile("test_files/Caddyfile", robots_caddyfile)
+
+
+if __name__ == "__main__":
+    import os
+    os.chdir(os.path.dirname(__file__))
+
+    unittest.main(verbosity=2)
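With the switch from pytest to `unittest`, the suite is self-contained; from a checkout it can be run directly (a minimal example, assuming a POSIX shell):

```plaintext
cd code
./tests.py        # or: python3 tests.py
```

Because the script changes into its own directory before running, the relative `test_files/` paths resolve regardless of where it is invoked from.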
40
docs/additional-steps/bing.md
Normal file
@@ -0,0 +1,40 @@
# Bing (bingbot)

It's not well publicised, but Bing uses the data it crawls for AI and training.

However, the current thinking is that blocking a search engine of this size via `robots.txt` is a rather drastic approach: Bing is second only to Google, and blocking it could significantly impact your website's visibility in search results.

Additionally, Bing powers a number of search engines such as Yahoo and AOL, and its search results are also used in DuckDuckGo, amongst others.

Fortunately, Bing supports a relatively simple opt-out method, though it requires an additional step.

## How to opt out of AI training

You must add a metatag in the `<head>` of your webpage or set the [X-Robots-Tag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Robots-Tag) HTTP header in your response. This needs to be added to every page or response on your website.

If using the metatag, the line you need to add is:

```plaintext
<meta name="robots" content="noarchive">
```

Or include the HTTP response header:

```plaintext
X-Robots-Tag: noarchive
```

By adding this line or header, you are signifying to Bing: "Do not use the content for training Microsoft's generative AI foundation models."

## Will my site be negatively affected?

Simple answer: no.

The original use of "noarchive" has been retired by all search engines; Google retired its use in 2024.

The metatag will not impact your site in search engines or in any other meaningful way if you add it to your page(s).

It is now used solely by a handful of crawlers, such as Bingbot and Amazonbot, to signal that your data should not be used for AI/training.

## Resources

Bing Blog AI opt-out announcement: https://blogs.bing.com/webmaster/september-2023/Announcing-new-options-for-webmasters-to-control-usage-of-their-content-in-Bing-Chat

Bing metatag information, including AI opt-out: https://www.bing.com/webmasters/help/which-robots-metatags-does-bing-support-5198d240
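If editing every page template is impractical, the header can also be set once at the web-server level. A minimal sketch for Apache (given for illustration; it assumes `mod_headers` is enabled):

```apache
# Send the AI-training opt-out signal with every response
Header set X-Robots-Tag "noarchive"
```

The nginx equivalent would be `add_header X-Robots-Tag "noarchive";` in the relevant `server` block.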
80
haproxy-block-ai-bots.txt
Normal file
@@ -0,0 +1,80 @@
AI2Bot
Ai2Bot-Dolma
aiHitBot
Amazonbot
Andibot
anthropic-ai
Applebot
Applebot-Extended
bedrockbot
Brightbot 1.0
Bytespider
CCBot
ChatGPT-User
Claude-SearchBot
Claude-User
Claude-Web
ClaudeBot
cohere-ai
cohere-training-data-crawler
Cotoyogi
Crawlspace
Diffbot
DuckAssistBot
EchoboxBot
FacebookBot
facebookexternalhit
Factset_spyderbot
FirecrawlAgent
FriendlyCrawler
Google-CloudVertexBot
Google-Extended
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iaskspider/2.0
ICC-Crawler
ImagesiftBot
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
meta-externalagent
Meta-ExternalAgent
meta-externalfetcher
Meta-ExternalFetcher
MistralAI-User/1.0
MyCentralAIScraperBot
NovaAct
OAI-SearchBot
omgili
omgilibot
Operator
PanguBot
Panscient
panscient.com
Perplexity-User
PerplexityBot
PetalBot
PhindBot
Poseidon Research Crawler
QualifiedBot
QuillBot
quillbot.com
SBIntuitionsBot
Scrapy
SemrushBot
SemrushBot-BA
SemrushBot-CT
SemrushBot-OCOB
SemrushBot-SI
SemrushBot-SWA
Sidetrade indexer bot
TikTokSpider
Timpibot
VelenPublicWebCrawler
Webzio-Extended
wpbot
YandexAdditional
YandexAdditionalBot
YouBot
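This file is a plain list of User-Agent substrings, one per line, intended to be loaded by HAProxy. A minimal sketch of how it might be wired into a frontend (the file path, ACL name, and backend name are assumptions; adjust to your deployment):

```plaintext
frontend web
    bind :80
    # Deny any request whose User-Agent contains a listed crawler name (case-insensitive)
    acl ai_robot req.hdr(User-Agent) -i -m sub -f /etc/haproxy/haproxy-block-ai-bots.txt
    http-request deny if ai_robot
    default_backend app
```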
3
nginx-block-ai-bots.conf
Normal file
@@ -0,0 +1,3 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)") {
    return 403;
}
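A minimal sketch of how the generated snippet might be pulled into a site configuration (the paths and server name are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;

    # Return 403 to any request whose User-Agent matches a listed AI crawler
    include /etc/nginx/nginx-block-ai-bots.conf;

    location / {
        root /var/www/html;
    }
}
```

Because the `if` block sits at `server` scope, matching crawlers are rejected before any `location` is selected.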
260
robots.json
@@ -13,6 +13,13 @@
         "operator": "[Ai2](https://allenai.org/crawler)",
         "respect": "Yes"
     },
+    "aiHitBot": {
+        "operator": "[aiHit](https://www.aihitdata.com/about)",
+        "respect": "Yes",
+        "function": "A massive, artificial intelligence/machine learning, automated system.",
+        "frequency": "No information provided.",
+        "description": "Scrapes data for AI systems."
+    },
     "Amazonbot": {
         "operator": "Amazon",
         "respect": "Yes",
@@ -20,6 +27,13 @@
         "frequency": "No information provided.",
         "description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
     },
+    "Andibot": {
+        "operator": "[Andi](https://andisearch.com/)",
+        "respect": "Unclear at this time",
+        "function": "Search engine using generative AI, AI Search Assistant",
+        "frequency": "No information provided.",
+        "description": "Scrapes website and provides AI summary."
+    },
     "anthropic-ai": {
         "operator": "[Anthropic](https://www.anthropic.com)",
         "respect": "Unclear at this time.",
@@ -41,6 +55,13 @@
         "frequency": "Unclear at this time.",
         "description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
     },
+    "bedrockbot": {
+        "operator": "[Amazon](https://amazon.com)",
+        "respect": "[Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector)",
+        "function": "Data scraping for custom AI applications.",
+        "frequency": "Unclear at this time.",
+        "description": "Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application."
+    },
     "Brightbot 1.0": {
         "operator": "Browsing.ai",
         "respect": "Unclear at this time.",
@@ -69,12 +90,26 @@
         "frequency": "Only when prompted by a user.",
         "description": "Used by plugins in ChatGPT to answer queries based on user input."
     },
-    "Claude-Web": {
+    "Claude-SearchBot": {
         "operator": "[Anthropic](https://www.anthropic.com)",
-        "respect": "Unclear at this time.",
-        "function": "Scrapes data to train Anthropic's AI products.",
+        "respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
+        "function": "Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses.",
         "frequency": "No information provided.",
-        "description": "Scrapes data to train LLMs and AI products offered by Anthropic."
+        "description": "Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses."
+    },
+    "Claude-User": {
+        "operator": "[Anthropic](https://www.anthropic.com)",
+        "respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
+        "function": "Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent.",
+        "frequency": "No information provided.",
+        "description": "Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent."
+    },
+    "Claude-Web": {
+        "operator": "Anthropic",
+        "respect": "Unclear at this time.",
+        "function": "Undocumented AI Agents",
+        "frequency": "Unclear at this time.",
+        "description": "Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web"
     },
     "ClaudeBot": {
         "operator": "[Anthropic](https://www.anthropic.com)",
@@ -97,6 +132,13 @@
         "frequency": "Unclear at this time.",
         "description": "cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler"
     },
+    "Cotoyogi": {
+        "operator": "[ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/)",
+        "respect": "Yes",
+        "function": "AI LLM Scraper.",
+        "frequency": "No information provided.",
+        "description": "Scrapes data for AI training in Japanese language."
+    },
     "Crawlspace": {
         "operator": "[Crawlspace](https://crawlspace.dev)",
         "respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
@@ -118,6 +160,13 @@
         "frequency": "Unclear at this time.",
         "description": "DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot"
     },
+    "EchoboxBot": {
+        "operator": "[Echobox](https://echobox.com)",
+        "respect": "Unclear at this time.",
+        "function": "Data collection to support AI-powered products.",
+        "frequency": "Unclear at this time.",
+        "description": "Supports company's AI-powered social and email management products."
+    },
     "FacebookBot": {
         "operator": "Meta/Facebook",
         "respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)",
@@ -125,6 +174,27 @@
         "frequency": "Up to 1 page per second",
         "description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
     },
+    "facebookexternalhit": {
+        "operator": "Meta/Facebook",
+        "respect": "[No](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313)",
+        "function": "Ostensibly only for sharing, but likely used as an AI crawler as well",
+        "frequency": "Unclear at this time.",
+        "description": "Note that excluding FacebookExternalHit will block incorporating OpenGraph data when sharing in social media, including rich links in Apple's Messages app. [According to Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/), its purpose is \"to crawl the content of an app or website that was shared on one of Meta\u2019s family of apps\u2026\". However, see discussions [here](https://github.com/ai-robots-txt/ai.robots.txt/pull/21) and [here](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) for evidence to the contrary."
+    },
+    "Factset_spyderbot": {
+        "operator": "[Factset](https://www.factset.com/ai)",
+        "respect": "Unclear at this time.",
+        "function": "AI model training.",
+        "frequency": "No information provided.",
+        "description": "Scrapes data for AI training."
+    },
+    "FirecrawlAgent": {
+        "operator": "[Firecrawl](https://www.firecrawl.dev/)",
+        "respect": "Yes",
+        "function": "AI scraper and LLM training",
+        "frequency": "No information provided.",
+        "description": "Scrapes data for AI systems and LLM training."
+    },
     "FriendlyCrawler": {
         "description": "Unclear who the operator is; but data is used for training/machine learning.",
         "frequency": "Unclear at this time.",
@@ -132,6 +202,13 @@
         "operator": "Unknown",
         "respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
     },
+    "Google-CloudVertexBot": {
+        "operator": "Google",
+        "respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
+        "function": "Build and manage AI models for businesses employing Vertex AI",
+        "frequency": "No information.",
+        "description": "Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents."
+    },
     "Google-Extended": {
         "operator": "Google",
         "respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
@@ -209,13 +286,27 @@
         "frequency": "Unclear at this time.",
         "description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
     },
-    "Meta-ExternalAgent": {
+    "meta-externalagent": {
         "operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
-        "respect": "Yes.",
+        "respect": "Yes",
         "function": "Used to train models and improve products.",
         "frequency": "No information.",
         "description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
     },
+    "Meta-ExternalAgent": {
+        "operator": "Unclear at this time.",
+        "respect": "Unclear at this time.",
+        "function": "AI Data Scrapers",
+        "frequency": "Unclear at this time.",
+        "description": "Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent"
+    },
+    "meta-externalfetcher": {
+        "operator": "Unclear at this time.",
+        "respect": "Unclear at this time.",
+        "function": "AI Assistants",
+        "frequency": "Unclear at this time.",
+        "description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
+    },
     "Meta-ExternalFetcher": {
         "operator": "Unclear at this time.",
         "respect": "Unclear at this time.",
@@ -223,6 +314,27 @@
         "frequency": "Unclear at this time.",
         "description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
     },
+    "MistralAI-User/1.0": {
+        "operator": "Mistral AI",
+        "function": "Takes action based on user prompts.",
+        "frequency": "Only when prompted by a user.",
+        "description": "MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response.",
+        "respect": "Yes"
+    },
+    "MyCentralAIScraperBot": {
+        "operator": "Unclear at this time.",
+        "respect": "Unclear at this time.",
+        "function": "AI data scraper",
+        "frequency": "Unclear at this time.",
+        "description": "Operator and data use is unclear at this time."
+    },
+    "NovaAct": {
+        "operator": "Unclear at this time.",
+        "respect": "Unclear at this time.",
+        "function": "AI Agents",
+        "frequency": "Unclear at this time.",
+        "description": "Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact"
+    },
     "OAI-SearchBot": {
         "operator": "[OpenAI](https://openai.com)",
         "respect": "[Yes](https://platform.openai.com/docs/bots)",
@@ -244,6 +356,13 @@
         "operator": "[Webz.io](https://webz.io/)",
         "respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
     },
+    "Operator": {
+        "operator": "Unclear at this time.",
+        "respect": "Unclear at this time.",
+        "function": "AI Agents",
+        "frequency": "Unclear at this time.",
+        "description": "Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator"
+    },
     "PanguBot": {
         "operator": "the Chinese company Huawei",
         "respect": "Unclear at this time.",
@@ -251,12 +370,33 @@
         "frequency": "Unclear at this time.",
         "description": "PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot"
     },
+    "Panscient": {
+        "operator": "[Panscient](https://panscient.com)",
+        "respect": "[Yes](https://panscient.com/faq.htm)",
+        "function": "Data collection and analysis using machine learning and AI.",
+        "frequency": "The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address.",
+        "description": "Compiles data on businesses and business professionals that is structured using AI and machine learning."
+    },
+    "panscient.com": {
+        "operator": "[Panscient](https://panscient.com)",
+        "respect": "[Yes](https://panscient.com/faq.htm)",
+        "function": "Data collection and analysis using machine learning and AI.",
+        "frequency": "The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address.",
+        "description": "Compiles data on businesses and business professionals that is structured using AI and machine learning."
+    },
+    "Perplexity-User": {
+        "operator": "[Perplexity](https://www.perplexity.ai/)",
+        "respect": "[No](https://docs.perplexity.ai/guides/bots)",
+        "function": "Used to answer queries at the request of users.",
+        "frequency": "Only when prompted by a user.",
+        "description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
+    },
     "PerplexityBot": {
         "operator": "[Perplexity](https://www.perplexity.ai/)",
-        "respect": "[No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/)",
-        "function": "Used to answer queries at the request of users.",
-        "frequency": "Takes action based on user prompts.",
-        "description": "Operated by Perplexity to obtain results in response to user queries."
+        "respect": "[Yes](https://docs.perplexity.ai/guides/bots)",
+        "function": "Search result generation.",
+        "frequency": "No information.",
+        "description": "Crawls sites to surface as results in Perplexity."
     },
     "PetalBot": {
         "description": "Operated by Huawei to provide search and AI assistant services.",
@@ -265,6 +405,48 @@
         "operator": "[Huawei](https://huawei.com/)",
         "respect": "Yes"
     },
+    "PhindBot": {
+        "description": "Company offers an AI agent that uses AI and generate extra web query on the fly",
+        "frequency": "No explicit frequency provided.",
+        "function": "AI-enhanced search engine.",
+        "operator": "[phind](https://www.phind.com/)",
+        "respect": "Unclear at this time."
+    },
+    "Poseidon Research Crawler": {
+        "operator": "[Poseidon Research](https://www.poseidonresearch.com)",
+        "description": "Lab focused on scaling the interpretability research necessary to make better AI systems possible.",
+        "frequency": "No explicit frequency provided.",
+        "function": "AI research crawler",
+        "respect": "Unclear at this time."
+    },
+    "QualifiedBot": {
+        "description": "Operated by Qualified as part of their suite of AI product offerings.",
+        "frequency": "No explicit frequency provided.",
+        "function": "Company offers AI agents and other related products; usage can be assumed to support said products.",
+        "operator": "[Qualified](https://www.qualified.com)",
+        "respect": "Unclear at this time."
+    },
+    "QuillBot": {
+        "description": "Operated by QuillBot as part of their suite of AI product offerings.",
+        "frequency": "No explicit frequency provided.",
+        "function": "Company offers AI detection, writing tools and other services.",
+        "operator": "[Quillbot](https://quillbot.com)",
+        "respect": "Unclear at this time."
+    },
+    "quillbot.com": {
+        "description": "Operated by QuillBot as part of their suite of AI product offerings.",
+        "frequency": "No explicit frequency provided.",
+        "function": "Company offers AI detection, writing tools and other services.",
+        "operator": "[Quillbot](https://quillbot.com)",
+        "respect": "Unclear at this time."
+    },
+    "SBIntuitionsBot": {
+        "description": "AI development and information analysis",
+        "respect": "[Yes](https://www.sbintuitions.co.jp/en/bot/)",
+        "frequency": "No information.",
+        "function": "Uses data gathered in AI development and information analysis.",
+        "operator": "[SB Intuitions](https://www.sbintuitions.co.jp/en/)"
+    },
     "Scrapy": {
         "description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
         "frequency": "No information.",
@@ -272,6 +454,27 @@
         "operator": "[Zyte](https://www.zyte.com)",
         "respect": "Unclear at this time."
     },
+    "SemrushBot": {
+        "operator": "[Semrush](https://www.semrush.com/)",
+        "respect": "[Yes](https://www.semrush.com/bot/)",
+        "function": "Crawls your site for ContentShake AI tool.",
+        "frequency": "Roughly once every 10 seconds.",
+        "description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
+    },
+    "SemrushBot-BA": {
+        "operator": "[Semrush](https://www.semrush.com/)",
+        "respect": "[Yes](https://www.semrush.com/bot/)",
+        "function": "Crawls your site for ContentShake AI tool.",
+        "frequency": "Roughly once every 10 seconds.",
+        "description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
+    },
+    "SemrushBot-CT": {
+        "operator": "[Semrush](https://www.semrush.com/)",
+        "respect": "[Yes](https://www.semrush.com/bot/)",
+        "function": "Crawls your site for ContentShake AI tool.",
+        "frequency": "Roughly once every 10 seconds.",
+        "description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
+    },
     "SemrushBot-OCOB": {
         "operator": "[Semrush](https://www.semrush.com/)",
         "respect": "[Yes](https://www.semrush.com/bot/)",
@@ -279,6 +482,13 @@
         "frequency": "Roughly once every 10 seconds.",
         "description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
     },
+    "SemrushBot-SI": {
+        "operator": "[Semrush](https://www.semrush.com/)",
+        "respect": "[Yes](https://www.semrush.com/bot/)",
+        "function": "Crawls your site for ContentShake AI tool.",
+        "frequency": "Roughly once every 10 seconds.",
+        "description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
+    },
     "SemrushBot-SWA": {
         "operator": "[Semrush](https://www.semrush.com/)",
         "respect": "[Yes](https://www.semrush.com/bot/)",
@@ -293,6 +503,13 @@
         "operator": "[Sidetrade](https://www.sidetrade.com)",
         "respect": "Unclear at this time."
     },
+    "TikTokSpider": {
+        "operator": "ByteDance",
+        "respect": "Unclear at this time.",
+        "function": "LLM training.",
+        "frequency": "Unclear at this time.",
+        "description": "Downloads data to train LLMS, as per Bytespider."
+    },
     "Timpibot": {
         "operator": "[Timpi](https://timpi.io)",
         "respect": "Unclear at this time.",
@@ -314,6 +531,27 @@
         "frequency": "Unclear at this time.",
         "description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
     },
+    "wpbot": {
+        "operator": "[QuantumCloud](https://www.quantumcloud.com)",
+        "respect": "Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9)",
+        "function": "Live chat support and lead generation.",
+        "frequency": "Unclear at this time.",
+        "description": "wpbot is a used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of customer models, data collection and customer support."
+    },
+    "YandexAdditional": {
+        "operator": "[Yandex](https://yandex.ru)",
+        "respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
+        "function": "Scrapes/analyzes data for the YandexGPT LLM.",
+        "frequency": "No information.",
+        "description": "Retrieves data used for YandexGPT quick answers features."
+    },
+    "YandexAdditionalBot": {
+        "operator": "[Yandex](https://yandex.ru)",
+        "respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
+        "function": "Scrapes/analyzes data for the YandexGPT LLM.",
+        "frequency": "No information.",
+        "description": "Retrieves data used for YandexGPT quick answers features."
+    },
     "YouBot": {
         "operator": "[You](https://about.you.com/youchat/)",
         "respect": "[Yes](https://about.you.com/youbot/)",
@@ -321,4 +559,4 @@
         "frequency": "No information.",
         "description": "Retrieves data used for You.com web search engine and LLMs."
     }
 }
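All of the generated artifacts in this change are derived from this single JSON map of bot name to metadata. For orientation, here is a minimal sketch of the `robots.txt` conversion, written for illustration rather than taken from the repository's `robots.py` (the function name is an assumption):

```python
import json

def json_to_txt_sketch(robots: dict) -> str:
    # One "User-agent" line per bot, followed by a single blanket Disallow rule.
    lines = [f"User-agent: {name}" for name in robots]
    lines.append("Disallow: /")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    with open("robots.json", "rt") as f:
        print(json_to_txt_sketch(json.load(f)), end="")
```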
34
robots.txt
@@ -1,22 +1,33 @@
 User-agent: AI2Bot
 User-agent: Ai2Bot-Dolma
+User-agent: aiHitBot
 User-agent: Amazonbot
+User-agent: Andibot
 User-agent: anthropic-ai
 User-agent: Applebot
 User-agent: Applebot-Extended
+User-agent: bedrockbot
 User-agent: Brightbot 1.0
 User-agent: Bytespider
 User-agent: CCBot
 User-agent: ChatGPT-User
+User-agent: Claude-SearchBot
+User-agent: Claude-User
 User-agent: Claude-Web
 User-agent: ClaudeBot
 User-agent: cohere-ai
 User-agent: cohere-training-data-crawler
+User-agent: Cotoyogi
 User-agent: Crawlspace
 User-agent: Diffbot
 User-agent: DuckAssistBot
+User-agent: EchoboxBot
 User-agent: FacebookBot
+User-agent: facebookexternalhit
+User-agent: Factset_spyderbot
+User-agent: FirecrawlAgent
 User-agent: FriendlyCrawler
+User-agent: Google-CloudVertexBot
 User-agent: Google-Extended
 User-agent: GoogleOther
 User-agent: GoogleOther-Image
@@ -28,20 +39,43 @@ User-agent: ImagesiftBot
 User-agent: img2dataset
 User-agent: ISSCyberRiskCrawler
 User-agent: Kangaroo Bot
+User-agent: meta-externalagent
 User-agent: Meta-ExternalAgent
+User-agent: meta-externalfetcher
 User-agent: Meta-ExternalFetcher
+User-agent: MistralAI-User/1.0
+User-agent: MyCentralAIScraperBot
+User-agent: NovaAct
 User-agent: OAI-SearchBot
 User-agent: omgili
 User-agent: omgilibot
+User-agent: Operator
 User-agent: PanguBot
+User-agent: Panscient
+User-agent: panscient.com
+User-agent: Perplexity-User
 User-agent: PerplexityBot
 User-agent: PetalBot
+User-agent: PhindBot
+User-agent: Poseidon Research Crawler
+User-agent: QualifiedBot
+User-agent: QuillBot
+User-agent: quillbot.com
+User-agent: SBIntuitionsBot
 User-agent: Scrapy
+User-agent: SemrushBot
+User-agent: SemrushBot-BA
+User-agent: SemrushBot-CT
 User-agent: SemrushBot-OCOB
+User-agent: SemrushBot-SI
 User-agent: SemrushBot-SWA
 User-agent: Sidetrade indexer bot
+User-agent: TikTokSpider
 User-agent: Timpibot
 User-agent: VelenPublicWebCrawler
 User-agent: Webzio-Extended
+User-agent: wpbot
+User-agent: YandexAdditional
+User-agent: YandexAdditionalBot
 User-agent: YouBot
 Disallow: /
table-of-bot-metrics.md
|
@ -1,48 +1,82 @@
|
||||||
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|
||||||
|-----|----------|-----------------------|----------|------------------|-------------|
|
|------|----------|-----------------------|----------|------------------|-------------|
|
||||||
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
|
||||||
| Ai2Bot-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
|
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
|
||||||
|
| aiHitBot | [aiHit](https://www.aihitdata.com/about) | Yes | A massive, artificial intelligence/machine learning, automated system. | No information provided. | Scrapes data for AI systems. |
|
||||||
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
|
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
|
||||||
| anthropic-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|
| Andibot | [Andi](https://andisearch.com/) | Unclear at this time | Search engine using generative AI, AI Search Assistant | No information provided. | Scrapes website and provides AI summary. |
|
||||||
|
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|
||||||
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
|
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
|
||||||
| Applebot-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
|
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
|
||||||
| Brightbot 1.0 | Browsing.ai | Unclear at this time. | LLM/AI training. | Unclear at this time. | Scrapes data to train LLMs and AI products focused on website customer support. |
|
| bedrockbot | [Amazon](https://amazon.com) | [Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector) | Data scraping for custom AI applications. | Unclear at this time. | Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application. |
|
||||||
|
| Brightbot 1\.0 | Browsing.ai | Unclear at this time. | LLM/AI training. | Unclear at this time. | Scrapes data to train LLMs and AI products focused on website customer support. |
|
||||||
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMS, including ChatGPT competitors. |
|
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMS, including ChatGPT competitors. |
|
||||||
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
|
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
|
||||||
| ChatGPT-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
|
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
|
||||||
| Claude-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|
| Claude\-SearchBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. | No information provided. | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. |
|
||||||
|
| Claude\-User | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. | No information provided. | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. |
|
||||||
|
| Claude\-Web | Anthropic | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web |
|
||||||
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|
||||||
| cohere-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
|
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
|
||||||
| cohere-training-data-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
|
| cohere\-training\-data\-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
|
||||||
|
| Cotoyogi | [ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/) | Yes | AI LLM Scraper. | No information provided. | Scrapes data for AI training in Japanese language. |
|
||||||
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
|
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
|
||||||
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
|
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
|
||||||
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
|
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
|
||||||
|
| EchoboxBot | [Echobox](https://echobox.com) | Unclear at this time. | Data collection to support AI-powered products. | Unclear at this time. | Supports company's AI-powered social and email management products. |
|
||||||
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
|
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
|
||||||
|
| facebookexternalhit | Meta/Facebook | [No](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) | Ostensibly only for sharing, but likely used as an AI crawler as well | Unclear at this time. | Note that excluding FacebookExternalHit will block incorporating OpenGraph data when sharing in social media, including rich links in Apple's Messages app. [According to Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/), its purpose is "to crawl the content of an app or website that was shared on one of Meta’s family of apps…". However, see discussions [here](https://github.com/ai-robots-txt/ai.robots.txt/pull/21) and [here](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) for evidence to the contrary. |
|
||||||
|
| Factset\_spyderbot | [Factset](https://www.factset.com/ai) | Unclear at this time. | AI model training. | No information provided. | Scrapes data for AI training. |
|
||||||
|
| FirecrawlAgent | [Firecrawl](https://www.firecrawl.dev/) | Yes | AI scraper and LLM training | No information provided. | Scrapes data for AI systems and LLM training. |
|
||||||
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
|
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
|
||||||
| Google\-CloudVertexBot | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Build and manage AI models for businesses employing Vertex AI | No information. | Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models; removes paywalled data, PII, and data that violates the company's policies. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Uses the collected data for artificial intelligence technologies; provides data to third parties, including commercial companies, which can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning-based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| meta\-externalagent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalAgent | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent |
| meta\-externalfetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| MistralAI\-User/1\.0 | Mistral AI | Yes | Takes action based on user prompts. | Only when prompted by a user. | MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. |
| MyCentralAIScraperBot | Unclear at this time. | Unclear at this time. | AI data scraper | Unclear at this time. | Operator and data use is unclear at this time. |
| NovaAct | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for the Omgili search engine. Unknown if still used; the `omgili` agent is still used by Webz.io. |
| Operator | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator |
| PanguBot | the Chinese company Huawei | Unclear at this time. | AI Data Scrapers | Unclear at this time. | PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot |
| panscient\.com | [Panscient](https://panscient.com) | [Yes](https://panscient.com/faq.htm) | Data collection and analysis using machine learning and AI. | The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address. | Compiles data on businesses and business professionals that is structured using AI and machine learning. |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visits web pages to help provide an accurate answer and includes links to the page in the Perplexity response. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| PhindBot | [phind](https://www.phind.com/) | Unclear at this time. | AI-enhanced search engine. | No explicit frequency provided. | Company offers an AI agent that generates extra web queries on the fly. |
| Poseidon Research Crawler | [Poseidon Research](https://www.poseidonresearch.com) | Unclear at this time. | AI research crawler | No explicit frequency provided. | Lab focused on scaling the interpretability research necessary to make better AI systems possible. |
| QualifiedBot | [Qualified](https://www.qualified.com) | Unclear at this time. | Company offers AI agents and other related products; usage can be assumed to support said products. | No explicit frequency provided. | Operated by Qualified as part of their suite of AI product offerings. |
| QuillBot | [Quillbot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
| quillbot\.com | [Quillbot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
| SBIntuitionsBot | [SB Intuitions](https://www.sbintuitions.co.jp/en/) | [Yes](https://www.sbintuitions.co.jp/en/bot/) | Uses data gathered in AI development and information analysis. | No information. | AI development and information analysis |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| SemrushBot | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-BA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-CT | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-OCOB | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-SI | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for SWA tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| TikTokSpider | ByteDance | Unclear at this time. | LLM training. | Unclear at this time. | Downloads data to train LLMs, as per Bytespider. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| wpbot | [QuantumCloud](https://www.quantumcloud.com) | Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9) | Live chat support and lead generation. | Unclear at this time. | wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of custom models, data collection and customer support. |
| YandexAdditional | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YandexAdditionalBot | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
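
Whether a `robots.txt` entry is sufficient depends on the "Respects robots.txt" column above: it only restrains agents that honor the protocol. As a minimal sketch, the fragment below opts a site out of a few of the cooperative crawlers listed in this table; the agent subset is illustrative only, not the repository's full generated list:

```
# Illustrative robots.txt fragment: opt out of a few robots.txt-respecting
# agents from the table above. Extend the list to match your policy.
User-agent: GPTBot
User-agent: Google-Extended
User-agent: OAI-SearchBot
User-agent: PerplexityBot
Disallow: /
```

Grouping several `User-agent` lines over a single `Disallow: /` rule is valid per RFC 9309 and keeps the file compact as the agent list grows.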
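
For agents marked "No" or "Unclear at this time." in that column, a server-side block is the usual fallback. Below is a hedged sketch for nginx, assuming these user-agent strings appear verbatim in request headers; the subset is again illustrative, hand-picked from the table rather than taken from any maintained config:

```
# Inside a server block: refuse a few agents from the table that do not
# clearly honor robots.txt. Illustrative subset only.
if ($http_user_agent ~* "(img2dataset|ISSCyberRiskCrawler|iaskspider|TikTokSpider)") {
    return 403;
}
```

The case-insensitive match (`~*`) is deliberate, since casing varies across agents in the wild (e.g. `meta-externalagent` vs. `Meta-ExternalAgent` above).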