Compare commits

...

183 commits
v1.21 ... main

Author SHA1 Message Date
dark-visitors
4ed17b8e4a Update from Dark Visitors
2025-06-17 01:00:21 +00:00
ai.robots.txt
5326c202b5 Merge pull request #154 from paulrudy/main
re-add facebookexternalhit
2025-06-16 15:12:42 +00:00
a31ae1e6d0
Merge pull request #154 from paulrudy/main
re-add facebookexternalhit
2025-06-16 08:12:31 -07:00
paulrudy
7535893aec re-add facebookexternalhit 2025-06-15 16:49:07 -07:00
ai.robots.txt
eb05f2f527 Merge pull request #153 from sergiospagnuolo/Poseidon
Update robots.json with new crawler
2025-06-14 14:04:03 +00:00
26a46c409d
Merge pull request #153 from sergiospagnuolo/Poseidon
Update robots.json with new crawler
2025-06-14 07:03:52 -07:00
dark-visitors
2b68568ac2 Update from Dark Visitors
2025-06-14 00:58:11 +00:00
Sérgio Spagnuolo
b05f2fee00
Update robots.json with new crawler
Update with Poseidon Research Crawler as found in nytimes.com/robots.txt
2025-06-13 17:15:13 -03:00
ai.robots.txt
e53d81c66d Merge pull request #152 from ai-robots-txt/MyCentralAIScraperBot
chore(robots.json): adds MyCentralAIScraperBot
2025-06-13 09:28:41 +00:00
Glyn Normington
20e327e74e
Merge pull request #152 from ai-robots-txt/MyCentralAIScraperBot
chore(robots.json): adds MyCentralAIScraperBot
2025-06-13 10:28:32 +01:00
Glyn Normington
8f17718e76
Fix typo 2025-06-13 10:28:12 +01:00
d760f9216f
chore(robots.json): adds MyCentralAIScraperBot 2025-06-12 13:08:29 -07:00
ai.robots.txt
842e2256e8 Merge pull request #150 from ai-robots-txt/semrush-bots
chore(robots.json): adds additional SemrushBot user agents
2025-06-12 07:12:00 +00:00
Glyn Normington
229ea20426
Merge pull request #150 from ai-robots-txt/semrush-bots
chore(robots.json): adds additional SemrushBot user agents
2025-06-12 08:11:51 +01:00
14d68f05ba
chore(robots.json): adds additional SemrushBot user agents 2025-06-11 13:50:53 -07:00
dark-visitors
cf598b6b71 Update from Dark Visitors
2025-06-10 01:00:37 +00:00
ai.robots.txt
3759a6bf14 chore(robots.json): adds EchoboxBot (#148)
2025-06-09 15:44:36 +00:00
7867c3e26c
chore(robots.json): adds EchoboxBot (#148) 2025-06-09 16:44:25 +01:00
dark-visitors
e21f6ae1b6 Update from Dark Visitors
2025-06-06 00:59:25 +00:00
ai.robots.txt
ac7ed17e71 Merge pull request #145 from ai-robots-txt/aws-bedrockbot
chore(robots.json): adds bedrockbot
2025-06-05 16:51:17 +00:00
Glyn Normington
81747e6772
Merge pull request #145 from ai-robots-txt/aws-bedrockbot
chore(robots.json): adds bedrockbot
2025-06-05 17:51:03 +01:00
528d77bf07
chore(robots.json): adds bedrockbot 2025-06-05 09:14:23 -07:00
dark-visitors
77393df5aa Update from Dark Visitors
2025-06-05 00:59:28 +00:00
ai.robots.txt
75ea75a95b Merge pull request #143 from ai-robots-txt/panscient
chore(robots.json): adds Panscient
2025-06-04 18:04:06 +00:00
Glyn Normington
2fca1ddcf1
Merge pull request #143 from ai-robots-txt/panscient
chore(robots.json): adds Panscient
2025-06-04 19:03:53 +01:00
ai.robots.txt
9c28c63a0c Merge pull request #142 from ai-robots-txt/quillbot
chore(robots.json): adds Quillbot
2025-06-04 17:54:57 +00:00
395c013eea
Merge pull request #142 from ai-robots-txt/quillbot
chore(robots.json): adds Quillbot
2025-06-04 10:54:46 -07:00
4568d69b0e
chore(robots.json): adds Panscient 2025-06-04 10:54:14 -07:00
03831a7eb5
chore(robots.json): adds Quillbot 2025-06-04 10:46:58 -07:00
dark-visitors
2b5a59a303 Update from Dark Visitors
2025-06-04 01:00:07 +00:00
ai.robots.txt
3efabc603d Merge pull request #141 from Ivan-Chupin/patch-1
Add SBIntuitionsBot
2025-06-03 23:28:48 +00:00
b35f9a31d7
Merge pull request #141 from Ivan-Chupin/patch-1
Add SBIntuitionsBot
2025-06-03 16:28:36 -07:00
Ivan Chupin
8f75f4a2f5
Add SBIntuitionsBot 2025-06-04 03:48:42 +05:00
ai.robots.txt
080946c360 Merge pull request #140 from ai-robots-txt/yandex-bots
chore(robots.json): adds YandexAdditional crawlers
2025-06-03 19:51:25 +00:00
Glyn Normington
7eec033cad
Merge pull request #140 from ai-robots-txt/yandex-bots
chore(robots.json): adds YandexAdditional crawlers
2025-06-03 20:51:14 +01:00
3187fd8a32
chore(robots.json): adds YandexAdditional crawlers 2025-06-03 12:41:57 -07:00
ai.robots.txt
d239e7e5ad Merge pull request #139 from ai-robots-txt/workflow-fix
chore(ai_robots_update.yml): correct workflow by revising git flags + adding guard
2025-06-03 01:52:35 +00:00
Glyn Normington
9dbf34010a
Merge pull request #139 from ai-robots-txt/workflow-fix
chore(ai_robots_update.yml): correct workflow by revising git flags + adding guard
2025-06-03 02:52:23 +01:00
dark-visitors
87016d1504 Update from Dark Visitors 2025-06-03 01:00:29 +00:00
899ce01c55
chore(ai_robots_update.yml): correct workflow by revising git flags + adding guard 2025-06-02 14:56:09 -07:00
Glyn Normington
4af776f0a0
Merge pull request #136 from ai-robots-txt/imgproxy-revert
chore(robots.json): revert "adds imgproxy crawler"
2025-06-02 20:21:10 +01:00
1dd66b6969
Revert "chore(robots.json): adds imgproxy crawler"
This reverts commit b65f45e408.
2025-06-02 11:53:06 -07:00
814df6b9a0
Merge pull request #134 from not-not-the-imp/patch-1
Add AndiBot and PhindBot
2025-05-31 16:03:16 -07:00
268922f8f2
Update robots.json 2025-05-31 16:02:05 -07:00
4259b25ccc
Update robots.json 2025-05-31 16:01:09 -07:00
d22b9ec51a
Update robots.json 2025-05-31 16:00:13 -07:00
imp
3e8edd083e
Add AndiBot and PhindBot
Fixes #75
2025-05-23 13:03:49 +01:00
ai.robots.txt
093ab81d78 Update from Dark Visitors
2025-05-23 00:58:57 +00:00
dark-visitors
7bf7f9164d Update from Dark Visitors
2025-05-22 00:58:45 +00:00
ai.robots.txt
fedb658cc0 Merge pull request #133 from ai-robots-txt/wpbot
chore(robots.json): adds wpbot
2025-05-21 21:06:05 +00:00
Glyn Normington
851eabe059
Merge pull request #133 from ai-robots-txt/wpbot
chore(robots.json): adds wpbot
2025-05-21 22:05:51 +01:00
ai.robots.txt
7c5389f4a0 Merge pull request #98 from kylebuckingham/main
Updating Claude Bots
2025-05-21 19:00:23 +00:00
af597586b6
Merge pull request #98 from kylebuckingham/main
Updating Claude Bots
2025-05-21 12:00:11 -07:00
b1d9a60a38
chore(robots.json): adds wpbot 2025-05-21 11:40:33 -07:00
ai.robots.txt
1c2acd75b7 Merge pull request #126 from ai-robots-txt/mistral-bot
chore(robots.json): adds MistralAI-User/1.0 crawler
2025-05-21 15:27:26 +00:00
Glyn Normington
202d3c3b9a
Merge pull request #126 from ai-robots-txt/mistral-bot
chore(robots.json): adds MistralAI-User/1.0 crawler
2025-05-21 16:27:14 +01:00
Glyn Normington
0a78fe1e76
Merge pull request #132 from ai-robots-txt/crawler-policy-update
chore(README): updates the opening line of our README to clarify the types of agents we block
2025-05-21 15:13:35 +01:00
8b151b2cdc
Update README.md
Co-authored-by: Glyn Normington <glyn.normington@gmail.com>
2025-05-21 06:52:36 -07:00
8a8001cbec
chore(README): updates the opening line of our README to clarify the types of agents we block 2025-05-20 13:55:25 -07:00
Glyn Normington
fe1267e290
Merge pull request #131 from Mihitoko/mention-x-robots-tag-for-bing
Mention X-Robots-Tag header as alternative for bing
2025-05-20 07:52:32 +01:00
Mihitoko
9297c7dfa3
Mention X-Robots-Tag header as alternative for bing 2025-05-20 00:10:05 +02:00
dark-visitors
7a2e6cba52 Update from Dark Visitors
2025-05-17 00:57:28 +00:00
ai.robots.txt
dd1ed174b7 Merge pull request #129 from ai-robots-txt/google-cloudvertexbot
chore(robots.json): adds Google-CloudVertexBot
2025-05-16 11:35:15 +00:00
Glyn Normington
89c0fbaf86
Merge pull request #129 from ai-robots-txt/google-cloudvertexbot
chore(robots.json): adds Google-CloudVertexBot
2025-05-16 12:35:04 +01:00
ca918a963f
chore(robots.json): adds Google-CloudVertexBot 2025-05-15 21:16:49 -07:00
5fba0b746d
chore(robots.json): adds MistralAI-User/1.0 crawler 2025-05-15 20:45:20 -07:00
dark-visitors
16d1de7094 Update from Dark Visitors
2025-05-16 00:59:08 +00:00
Glyn Normington
73f6f67adf
Merge pull request #125 from holysoles/lint_robots_json
lint robots.json during pull requests
2025-05-15 17:26:15 +01:00
Patrick Evans
498aa50760 lint robots.json during pull requests 2025-05-15 11:15:25 -05:00
ai.robots.txt
1c470babbe Merge pull request #123 from joehoyle/patch-1
Fix JSON syntax error
2025-05-15 16:12:30 +00:00
Adam Newbold
84d63916d2
Merge pull request #123 from joehoyle/patch-1
Fix JSON syntax error
2025-05-15 12:12:21 -04:00
Joe Hoyle
0c56b96fd9
Fix JSON syntax error 2025-05-15 11:26:47 -04:00
28e69e631b
Merge pull request #122 from ai-robots-txt/qualified-bot
chore(robots.json): adds QualifiedBot crawler
2025-05-15 07:17:51 -07:00
9539256cb3
chore(robots.json): adds QualifiedBot crawler 2025-05-15 07:16:07 -07:00
9659c88b0c
Merge pull request #121 from solution-libre/add-traefik-plugin
Add Traefik plugin to the README.md file
2025-05-14 16:45:34 -07:00
Florent Poinsaut
c66d180295
Merge branch 'main' into add-traefik-plugin 2025-05-14 22:06:56 +02:00
Glyn Normington
9a9b1b41c0
Merge pull request #119 from ai-robots-txt/bing-ai-opt-out-instructions
Bing AI opt-out instructions
2025-05-14 19:18:20 +01:00
Florent Poinsaut
b4610a725c Add Traefik plugin 2025-05-14 14:11:56 +02:00
36a52a88d8
Bing AI opt-out instructions 2025-05-12 20:20:18 -07:00
ai.robots.txt
678380727e Merge pull request #115 from glyn/syntax
Fix Python syntax error
2025-05-01 10:29:06 +00:00
Glyn Normington
fb8188c49d
Merge pull request #115 from glyn/syntax
Fix Python syntax error
2025-05-01 11:28:54 +01:00
Glyn Normington
ec995cd686 Fix Python syntax error 2025-05-01 11:27:40 +01:00
Crazyroostereye
1310dbae46
Added a Caddyfile converter (#110)
Co-authored-by: Julian Beittel <julian@beittel.net>
Co-authored-by: Glyn Normington <work@underlap.org>
2025-05-01 11:21:32 +01:00
Glyn Normington
91a88e2fa8
Merge pull request #113 from rwijnen-um/feature/haproxy
HAProxy converter added.
2025-04-28 09:00:16 +01:00
Rik Wijnen
a4a9f2ac2b Tests for HAProxy file added. 2025-04-28 09:30:26 +02:00
Rik Wijnen
66da70905f Fixed incorrect English sentence. 2025-04-28 09:09:40 +02:00
Rik Wijnen
50e739dd73 HAProxy converter added. 2025-04-28 08:51:02 +02:00
ai.robots.txt
c6c7f1748f Update from Dark Visitors
2025-04-26 00:55:12 +00:00
dark-visitors
934ac7b318 Update from Dark Visitors
2025-04-25 00:56:57 +00:00
ai.robots.txt
4654e14e9c Merge pull request #112 from maiavixen/main
Fixed meta-external* being titlecase, and removed period for consistency
2025-04-24 07:00:34 +00:00
Glyn Normington
9bf31fbca8
Merge pull request #112 from maiavixen/main
Fixed meta-external* being titlecase, and removed period for consistency
2025-04-24 08:00:24 +01:00
maia
9d846ced45
Update robots.json
Lowercase meta-external* as that was not technically the UA for the bots, also removed a period in the "respect" for consistency
2025-04-24 04:08:20 +02:00
dark-visitors
8d25a424d9 Update from Dark Visitors
2025-04-23 00:56:52 +00:00
ai.robots.txt
bbec639c14 Merge pull request #109 from dennislee1/patch-1
AI bots to consider adding
2025-04-22 14:50:26 +00:00
422cf9e29b
Merge pull request #109 from dennislee1/patch-1
AI bots to consider adding
2025-04-22 07:50:14 -07:00
Dennis Lee
33c5ce1326
Update robots.json
Updated robots list with five new proposed AI bots:

aiHitBot
Cotoyogi
Factset_spyderbot
FirecrawlAgent
TikTokSpider
2025-04-21 18:55:11 +01:00
774b1ddf52
Merge pull request #107 from glyn/sponsorship
Clarify our position on sponsorship
2025-04-18 11:40:06 -07:00
Glyn Normington
b1856e6988 Donations 2025-04-18 18:40:44 +01:00
Glyn Normington
d05ede8fe1 Clarify our position on sponsorship
Some firms, including those with .ai domains, have
offered to sponsor this project. So make our position
clear.
2025-04-18 17:46:56 +01:00
Kyle Buckingham
fd41de8522
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-04-16 16:43:03 -07:00
Kyle Buckingham
4a6f37d727
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-04-16 16:42:58 -07:00
ai.robots.txt
e0cdb278fb Update from Dark Visitors
2025-04-16 00:57:11 +00:00
dark-visitors
a96e330989 Update from Dark Visitors
2025-04-15 00:57:01 +00:00
156e6baa09
Merge pull request #105 from jsheard/patch-1
Include "AI Agents" from Dark Visitors
2025-04-14 10:08:38 -07:00
Joshua Sheard
d9f882a9b2
Include "AI Agents" from Dark Visitors 2025-04-14 15:46:01 +01:00
dark-visitors
305188b2e7 Update from Dark Visitors
2025-04-11 00:55:52 +00:00
ai.robots.txt
4a764bba18 Merge pull request #102 from ai-robots-txt/imgproxy-bot
chore(robots.json): adds imgproxy crawler
2025-04-10 19:22:34 +00:00
a891ad7213
Merge pull request #102 from ai-robots-txt/imgproxy-bot
chore(robots.json): adds imgproxy crawler
2025-04-10 12:22:23 -07:00
b65f45e408
chore(robots.json): adds imgproxy crawler 2025-04-10 10:12:51 -07:00
Glyn Normington
49e58b1573
Merge pull request #100 from fbartho/fb/fix-perplexity-users
Fix html-mangled hyphen in 'Perplexity-Users' bot name
2025-04-05 17:32:19 +01:00
Frederic Barthelemy
c6f308cbd0
PR Feedback: log special-case, comment consistency 2025-04-05 09:01:52 -07:00
Frederic Barthelemy
5f5a89c38c
Fix html-mangled hyphen in Perplexity-Users
Fixes: #99
2025-04-04 17:34:14 -07:00
Frederic Barthelemy
6b0349f37d
fix python complaining about f-string syntax
```
python code/tests.py
Traceback (most recent call last):
  File "/Users/fbarthelemy/Code/ai.robots.txt/code/tests.py", line 7, in <module>
    from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx
  File "/Users/fbarthelemy/Code/ai.robots.txt/code/robots.py", line 144
    return f"({"|".join(map(re.escape, lst))})"
                ^
SyntaxError: f-string: expecting '}'
```
2025-04-04 15:20:30 -07:00
Kyle Buckingham
8dc36aa2e2
Update robots.txt 2025-04-01 15:23:28 -07:00
Kyle Buckingham
ae8f74c10c
Update robots.json 2025-04-01 15:22:04 -07:00
ai.robots.txt
5b8650b99b Update from Dark Visitors
2025-03-29 00:54:10 +00:00
dark-visitors
c249de99a3 Update from Dark Visitors 2025-03-28 00:54:28 +00:00
ec18af7624
Revert "Merge pull request #91 from deyigifts/perplexity-user"
This reverts commit 68d1d93714.
2025-03-27 12:51:22 -07:00
ai.robots.txt
6851413c52 Merge pull request #94 from ThomasLeister/feature/implement-nginx-configuration-snippet-export
Implement Nginx configuration snippet export
2025-03-27 19:49:15 +00:00
Glyn Normington
dba03d809c
Merge pull request #94 from ThomasLeister/feature/implement-nginx-configuration-snippet-export
Implement Nginx configuration snippet export
2025-03-27 19:49:05 +00:00
ai.robots.txt
68d1d93714 Merge pull request #91 from deyigifts/perplexity-user
Update perplexity bots
2025-03-27 19:29:30 +00:00
1183187be9
Merge pull request #91 from deyigifts/perplexity-user
Update perplexity bots
2025-03-27 12:29:21 -07:00
Thomas Leister
7c3b5a2cb2
Add tests for Nginx config generator 2025-03-27 18:28:21 +01:00
Thomas Leister
4f3f4cd0dd
Add assembled version of nginx-block-ai-bots.conf file 2025-03-27 12:43:36 +01:00
Thomas Leister
5a312c5f4d
Mention Nginx config feature in README 2025-03-27 12:43:29 +01:00
Thomas Leister
da85207314
Implement new function "json_to_nginx" which outputs an Nginx
configuration snippet
2025-03-27 12:27:09 +01:00
deyigifts
6ecfcdfcbf
Update perplexity bot
Update based on perplexity bot docs
2025-03-24 14:16:57 +08:00
5e7c3c432f
Merge pull request #83 from glyn/81-doc-testing
Document testing in README
2025-02-19 09:19:44 -08:00
Glyn Normington
9f41d4c11c
Merge pull request #84 from sideeffect42/tests-workflow
Add run-tests workflow
2025-02-18 19:42:55 +00:00
Dennis Camera
8a74896333 Add workflow to run tests on pull request or push to main 2025-02-18 20:30:27 +01:00
Glyn Normington
1d55a205e4 Document testing in README
Fixes: https://github.com/ai-robots-txt/ai.robots.txt/issues/81
2025-02-18 16:49:08 +00:00
Glyn Normington
8494a7fcaa
Merge pull request #80 from sideeffect42/htaccess-allow-robots_txt
.htaccess: Allow robots access to `/robots.txt`
2025-02-18 16:42:36 +00:00
Dennis Camera
c7c1e7b96f robots.py: Make executable 2025-02-18 12:55:17 +01:00
Dennis Camera
17b826a6d3 Update tests and convert to stock unittest
For these simple tests Python's built-in unittest framework is more than enough.
No additional dependencies are required.

Added some more test cases with "special" characters to test the escaping code
better.
2025-02-18 12:55:15 +01:00
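
A minimal sketch of a stock-`unittest` test in the spirit of this commit. The test class and cases are illustrative assumptions, not the repository's actual `code/tests.py`; the helper is adapted from the `list_to_pcre` function added to `code/robots.py` later in this diff:

```
# Illustrative sketch only; the real tests live in code/tests.py.
import re
import unittest


def list_to_pcre(lst):
    # Join the names into one alternation group, escaping regex metacharacters.
    return f"({'|'.join(map(re.escape, lst))})"


class TestEscaping(unittest.TestCase):
    def test_special_characters_are_escaped(self):
        # "Special" characters such as * and | must not leak into the pattern unescaped.
        pattern = list_to_pcre(["star***crawler", "curl|sudo bash"])
        self.assertEqual(pattern, r"(star\*\*\*crawler|curl\|sudo\ bash)")


if __name__ == "__main__":
    unittest.main()
```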
Dennis Camera
0bd3fa63b8 table-of-bot-metrics.md: Escape robot names for Markdown table
Some characters which could occur in a crawler's name have a special meaning in
Markdown. They are escaped to prevent them from having unintended side effects.

The escaping is only applied to the first (Name) column of the table. The rest
of the columns is expected to already be Markdown encoded in robots.json.
2025-02-18 12:53:27 +01:00
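
Concretely, the escaping described here corresponds to the `escape_md` helper added to `code/robots.py` later in this diff; a short sketch of its effect (the print calls are illustrative):

```
import re


def escape_md(s):
    # Backslash-escape characters that carry meaning in Markdown tables.
    return re.sub(r"([]*\\|`(){}<>#+-.!_[])", r"\\\1", s)


print(escape_md("Ai2Bot-Dolma"))          # Ai2Bot\-Dolma
print(escape_md("a[mazing]{42}(robot)"))  # a\[mazing\]\{42\}\(robot\)
```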
Dennis Camera
a884a2afb9 .htaccess: Make regex in RewriteCond safe
Improve the regular expression by removing unneeded anchors and
escaping special characters (not just space) to prevent false positives
or a misbehaving rewrite rule.
2025-02-18 12:53:22 +01:00
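
A small illustration of why the unneeded anchors and unescaped characters mattered; the user-agent string below is hypothetical, and Python's `re` stands in for Apache's PCRE:

```
import re

ua = "Mozilla/5.0 (compatible; 2^32$; +http://example.com/bot)"  # hypothetical UA

# Unescaped, ^ and $ act as anchors mid-pattern, so the crawler is never matched.
print(re.search("(2^32$)", ua))           # None
# Escaped, the name matches literally, as in the generated .htaccess.
print(re.search(re.escape("2^32$"), ua))  # <re.Match ...> (matches literally)
```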
Dennis Camera
c0d418cd87 .htaccess: Allow robots access to /robots.txt 2025-02-18 12:49:29 +01:00
dark-visitors
abfd6dfcd1 Update from Dark Visitors 2025-02-17 00:53:32 +00:00
ai.robots.txt
693289bb29 chore: add Brightbot 1.0 2025-02-16 21:37:52 +00:00
a9ec4ffa6f
chore: add Brightbot 1.0 2025-02-16 13:36:39 -08:00
Glyn Normington
03aa829913
Merge pull request #79 from always-be-testing/main
List of AI bots Cloudflare considers "Verified"
2025-02-16 04:33:40 +00:00
always-be-testing
5b13c2e504
add more concise message about verified bots
Co-authored-by: Glyn Normington <work@underlap.org>
2025-02-15 11:22:10 -05:00
always-be-testing
af87b85d7f include return after heading 2025-02-14 12:39:08 -05:00
always-be-testing
f99339922f grammar update and include syntax for verified bot condition 2025-02-14 12:36:33 -05:00
always-be-testing
e396a2ec78 forgot to include heading 2025-02-14 12:31:20 -05:00
always-be-testing
261a2b83b9 update README to include list of ai bots Cloudflare considers verified 2025-02-14 12:26:19 -05:00
dark-visitors
bebffccc0c Update from Dark Visitors 2025-02-02 00:52:50 +00:00
ai.robots.txt
89d4c6e5ca Merge pull request #73 from nisbet-hubbard/patch-8
Actually block Semrush’s AI tools
2025-02-01 10:51:01 +00:00
Glyn Normington
f9e2c5810b
Merge pull request #73 from nisbet-hubbard/patch-8
Actually block Semrush’s AI tools
2025-02-01 10:50:50 +00:00
nisbet-hubbard
05b79b8a58
Update robots.json 2025-01-27 19:41:03 +08:00
dark-visitors
9c060dee1c Update from Dark Visitors 2025-01-21 00:49:22 +00:00
ai.robots.txt
6c552a3daa Merge pull request #71 from jsheard/patch-1
Add Crawlspace
2025-01-20 17:45:42 +00:00
Glyn Normington
f621fb4852
Merge pull request #71 from jsheard/patch-1
Add Crawlspace
2025-01-20 17:45:29 +00:00
Joshua Sheard
7427d96bac
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 10:59:02 +00:00
Glyn Normington
81cc81b35e
Merge pull request #68 from MassiminoilTrace/main
Implementing automatic htaccess generation
2025-01-20 07:33:54 +00:00
Massimo Gismondi
4f03818280 Removed if condition and added a little comments 2025-01-20 06:51:06 +01:00
Massimo Gismondi
a9956f7825 Removed additional sections 2025-01-20 06:50:48 +01:00
Massimo Gismondi
33c38ee70b
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:28:32 +01:00
Massimo Gismondi
52241bdca6
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:27:56 +01:00
Massimo Gismondi
013b7abfa1
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:27:02 +01:00
Massimo Gismondi
70fd6c0fb1
Add mention of htaccess in readme
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:25:07 +01:00
Joshua Sheard
5aa08bc002
Add Crawlspace 2025-01-19 22:03:50 +00:00
Massimo Gismondi
d65128d10a
Removed paragraph in favour of future FAQ.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:41:09 +01:00
Massimo Gismondi
1cc4b59dfc
Shortened htaccess instructions
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:40:03 +01:00
Massimo Gismondi
8aee2f24bb
Fixed space in comment
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:39:07 +01:00
Massimo Gismondi
b455af66e7 Adding clarification about performance and code comment 2025-01-17 21:42:08 +01:00
Massimo Gismondi
189e75bbfd Adding usage instructions 2025-01-17 21:25:23 +01:00
Massimo Gismondi
933aa6159d Implementing htaccess generation 2025-01-07 11:02:29 +01:00
Glyn Normington
b7f908e305
Merge pull request #66 from fabianegli/patch-1
Allow Action to succeed even if no changes were made
2025-01-07 03:54:40 +00:00
ai.robots.txt
ec454b71d3 Merge pull request #67 from Nightfirecat/semrushbot
Block SemrushBot
2025-01-06 20:51:56 +00:00
565dca3dc0
Merge pull request #67 from Nightfirecat/semrushbot
Block SemrushBot
2025-01-06 12:51:43 -08:00
Jordan Atwood
143f8f2285
Block SemrushBot 2025-01-06 12:34:38 -08:00
8e98cc6049
Merge pull request #61 from glyn/improve-naming
Rename Python code
2025-01-06 08:10:47 -08:00
Fabian Egli
30ee957011
bail when NO changes are staged 2025-01-06 12:05:42 +01:00
Fabian Egli
83cd546470
allow Action to succeed even if no changes were made
Before, the Action would fail in case there were no changes made to any files by the converter.
2025-01-06 11:39:41 +01:00
ai.robots.txt
ca8620e28b Merge pull request #63 from glyn/push-paths
Convert robots.json more frequently
2025-01-05 05:05:20 +00:00
Glyn Normington
b9df958b39
Merge pull request #63 from glyn/push-paths
Convert robots.json more frequently
2025-01-05 05:05:01 +00:00
Glyn Normington
c01a684036 Convert robots.json more frequently
Specifically, when github workflows or code
is changed as either of these can affect the
conversion results.

Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/60
2025-01-05 05:03:50 +00:00
Glyn Normington
d2be15447c
Merge pull request #62 from ai-robots-txt/missing-dependency
Ensure dependency installed
2025-01-05 01:46:27 +00:00
Glyn Normington
9e372d0696 Ensure dependency installed
Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/60#issuecomment-2571437913
Ref: https://stackoverflow.com/questions/11783875/importerror-no-module-named-bs4-beautifulsoup
2025-01-05 01:45:33 +00:00
Glyn Normington
996b9c678c Improve job name
The purpose of the job is to convert the JSON file
to the other files.
2025-01-04 05:28:41 +00:00
Glyn Normington
e4c12ee2f8 Rename in test code 2025-01-04 05:03:48 +00:00
Glyn Normington
3a43714908 Rename Python code
The name dark_visitors.py gives the impression that the code is entirely
related to the dark visitors website, whereas the update command relates
to dark visitors and the convert command is unrelated to dark visitors.
2025-01-04 04:55:34 +00:00
22 changed files with 909 additions and 71 deletions

.github/workflows/ai_robots_update.yml

@@ -16,13 +16,19 @@ jobs:
git config --global user.name "dark-visitors"
git config --global user.email "dark-visitors@users.noreply.github.com"
echo "Updating robots.json with data from darkvisitor.com ..."
python code/dark_visitors.py --update
python code/robots.py --update
echo "... done."
git --no-pager diff
git add -A
git diff --quiet && git diff --staged --quiet || (git commit -m "Update from Dark Visitors" && git push)
if ! git diff --cached --quiet; then
git commit -m "Update from Dark Visitors"
git push
else
echo "No changes to commit."
fi
shell: bash
call-main:
convert:
name: convert
needs: dark-visitors
uses: ./.github/workflows/main.yml
secrets: inherit

.github/workflows/main.yml

@@ -8,6 +8,8 @@ on:
push:
paths:
- 'robots.json'
- '.github/workflows/**'
- 'code/**'
branches:
- "main"
@@ -20,15 +22,23 @@
with:
fetch-depth: 2
- run: |
pip install beautifulsoup4
git config --global user.name "ai.robots.txt"
git config --global user.email "ai.robots.txt@users.noreply.github.com"
git log -1
git status
echo "Updating robots.txt and table-of-bot-metrics.md if necessary ..."
python code/dark_visitors.py --convert
python code/robots.py --convert
echo "... done."
git --no-pager diff
git add -A
if [ -z "$(git diff --staged)" ]; then
# To have the action run successfully, if no changes are staged, we
# manually skip the later commits because they fail with exit code 1
# and this would then display as a failure for the Action.
echo "No staged changes to commit. Skipping commit and push."
exit 0
fi
if [ -n "${{ inputs.message }}" ]; then
git commit -m "${{ inputs.message }}"
else

.github/workflows/run-tests.yml (new file)

@@ -0,0 +1,28 @@
on:
pull_request:
branches:
- main
push:
branches:
- main
jobs:
run-tests:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Install dependencies
run: |
pip install -U requests beautifulsoup4
- name: Run tests
run: |
code/tests.py
lint-json:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
- name: JQ Json Lint
run: jq . robots.json

.htaccess (new file)

@@ -0,0 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot) [NC]
RewriteRule !^/?robots\.txt$ - [F,L]

Caddyfile (new file)

@@ -0,0 +1,3 @@
@aibots {
header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)"
}

FAQ.md

@@ -55,3 +55,11 @@ That depends on your stack.
## How can I contribute?
Open a pull request. It will be reviewed and acted upon appropriately. **We really appreciate contributions** — this is a community effort.
## I'd like to donate money
That's kind of you, but we don't need your money. If you insist, we'd love you to make a donation to the [American Civil Liberties Union](https://www.aclu.org/), the [Disasters Emergency Committee](https://www.dec.org.uk/), or a similar organisation.
## Can my company sponsor ai.robots.txt?
No, thank you. We do not accept sponsorship of any kind. We prefer to maintain our independence. Our costs are negligible as we are entirely volunteer-based and community-driven.

README.md

@@ -2,15 +2,56 @@
<img src="/assets/images/noai-logo.png" width="100" />
This is an open list of web crawlers associated with AI companies and the training of LLMs to block. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).
This list contains AI-related crawlers of all types, regardless of purpose. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).
A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com) and we appreciate the ongoing effort they put in to track these crawlers.
If you'd like to add information about a crawler to the list, please make a pull request with the bot name added to `robots.txt`, `ai.txt`, and any relevant details in `table-of-bot-metrics.md` to help people understand what's crawling.
## Usage
This repository provides the following files:
- `robots.txt`
- `.htaccess`
- `nginx-block-ai-bots.conf`
- `Caddyfile`
- `haproxy-block-ai-bots.txt`
`robots.txt` implements the Robots Exclusion Protocol ([RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html)).
`.htaccess` may be used to configure web servers such as [Apache httpd](https://httpd.apache.org/) to return an error page when one of the listed AI crawlers sends a request to the web server.
Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/current/howto/htaccess.html), more performant methods than an `.htaccess` file exist.
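
As a hedged sketch of such a more performant alternative, the same two directives can live directly in the server configuration; the vhost, host name, paths, and abbreviated bot list below are assumptions for illustration:

```
# Sketch only: abbreviated pattern; the generated .htaccess carries the full list.
# example.com and /var/www/html are hypothetical.
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} (GPTBot|ClaudeBot|CCBot) [NC]
    RewriteRule !^/?robots\.txt$ - [F,L]
</VirtualHost>
```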
`nginx-block-ai-bots.conf` implements a Nginx configuration snippet that can be included in any virtual host `server {}` block via the `include` directive.
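
For example, a minimal server block pulling the snippet in via `include` might look like this (the host name, root, and deploy path are assumptions):

```
server {
    listen 80;
    server_name example.com;                              # hypothetical host
    include /etc/nginx/snippets/nginx-block-ai-bots.conf; # assumed location
    root /var/www/html;
}
```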
`Caddyfile` includes a header-regexp matcher group you can copy or import into your Caddyfile; the rejection can then be handled with `abort @aibots`, as sketched below.
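
A minimal sketch of that usage, assuming the repository's snippet has been saved next to your site's Caddyfile under an illustrative name:

```
example.com {
    # assumed: the repository's Caddyfile snippet saved as ai-bots.caddy
    import ai-bots.caddy
    abort @aibots
    file_server
}
```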
`haproxy-block-ai-bots.txt` may be used to configure HAProxy to block AI bots. To implement it:
1. Add the file to the config directory of HAProxy.
2. Add the following lines in the `frontend` section:
```
acl ai_robot hdr_sub(user-agent) -i -f /etc/haproxy/haproxy-block-ai-bots.txt
http-request deny if ai_robot
```
(Note that the path of the `haproxy-block-ai-bots.txt` may be different in your environment.)
[Bing uses the data it crawls for AI and training; you may opt out by adding a `meta` tag to the `head` of your site.](./docs/additional-steps/bing.md)
### Related
- [Robots.txt Traefik plugin](https://plugins.traefik.io/plugins/681b2f3fba3486128fc34fae/robots-txt-plugin):
middleware plugin for [Traefik](https://traefik.io/traefik/) that automatically adds the rules of the
[robots.txt](./robots.txt) file on the fly.
## Contributing
A note about contributing: updates should be added/made to `robots.json`. A GitHub action, courtesy of [Adam](https://github.com/newbold), will then generate the updated `robots.txt` and `table-of-bot-metrics.md`.
A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, `.htaccess` and `nginx-block-ai-bots.conf`.
You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3 and issuing:
```console
code/tests.py
```
## Subscribe to updates
@@ -27,7 +68,7 @@ Alternatively, you can also subscribe to new releases with your GitHub account b
## Report abusive crawlers
If you use [Cloudflare's hard block](https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click) alongside this list, you can report abusive crawlers that don't respect `robots.txt` [here](https://docs.google.com/forms/d/e/1FAIpQLScbUZ2vlNSdcsb8LyTeSF7uLzQI96s0BKGoJ6wQ6ocUFNOKEg/viewform).
But even if you don't use Cloudflare's hard block, their list of [verified bots](https://radar.cloudflare.com/traffic/verified-bots) may come in handy.
## Additional resources
- [Blocking Bots with Nginx](https://rknight.me/blog/blocking-bots-with-nginx/) by Robb Knight

code/dark_visitors.py → code/robots.py (renamed; now executable)

@@ -1,8 +1,11 @@
import json
from pathlib import Path
#!/usr/bin/env python3
import json
import re
import requests
from bs4 import BeautifulSoup
from pathlib import Path
def load_robots_json():
@@ -27,6 +30,7 @@ def updated_robots_json(soup):
"""Update AI scraper information with data from darkvisitors."""
existing_content = load_robots_json()
to_include = [
"AI Agents",
"AI Assistants",
"AI Data Scrapers",
"AI Search Crawlers",
@@ -47,6 +51,7 @@ def updated_robots_json(soup):
continue
for agent in section.find_all("a", href=True):
name = agent.find("div", {"class": "agent-name"}).get_text().strip()
name = clean_robot_name(name)
desc = agent.find("p").get_text().strip()
default_values = {
@@ -98,8 +103,24 @@ def updated_robots_json(soup):
return sorted_robots
def ingest_darkvisitors():
def clean_robot_name(name):
""" Clean the robot name by removing some characters that were mangled by html software once. """
# This was specifically spotted in "Perplexity-User"
# Looks like a non-breaking hyphen introduced by the HTML rendering software
# Reading the source page for Perplexity: https://docs.perplexity.ai/guides/bots
# You can see the bot is listed several times as "Perplexity-User" with a normal hyphen,
# and it's only the Row-Heading that has the special hyphen
#
# Technically, there's no reason there wouldn't someday be a bot that
# actually uses a non-breaking hyphen, but that seems unlikely,
# so this solution should be fine for now.
result = re.sub(r"\u2011", "-", name)
if result != name:
print(f"\tCleaned '{name}' to '{result}' - unicode/html mangled chars normalized.")
return result
def ingest_darkvisitors():
old_robots_json = load_robots_json()
soup = get_agent_soup()
if soup:
@@ -121,21 +142,63 @@ def json_to_txt(robots_json):
return robots_txt
def escape_md(s):
return re.sub(r"([]*\\|`(){}<>#+-.!_[])", r"\\\1", s)
def json_to_table(robots_json):
"""Compose a markdown table with the information in robots.json"""
table = "| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |\n"
table += "|-----|----------|-----------------------|----------|------------------|-------------|\n"
table += "|------|----------|-----------------------|----------|------------------|-------------|\n"
for name, robot in robots_json.items():
table += f'| {name} | {robot["operator"]} | {robot["respect"]} | {robot["function"]} | {robot["frequency"]} | {robot["description"]} |\n'
table += f'| {escape_md(name)} | {robot["operator"]} | {robot["respect"]} | {robot["function"]} | {robot["frequency"]} | {robot["description"]} |\n'
return table
def list_to_pcre(lst):
# Python re is not 100% identical to PCRE which is used by Apache, but it
# should probably be close enough in the real world for re.escape to work.
formatted = "|".join(map(re.escape, lst))
return f"({formatted})"
def json_to_htaccess(robot_json):
# Creates a .htaccess filter file. It uses a regular expression to filter out
# User agents that contain any of the blocked values.
htaccess = "RewriteEngine On\n"
htaccess += f"RewriteCond %{{HTTP_USER_AGENT}} {list_to_pcre(robot_json.keys())} [NC]\n"
htaccess += "RewriteRule !^/?robots\\.txt$ - [F,L]\n"
return htaccess
def json_to_nginx(robot_json):
# Creates an Nginx config file. This config snippet can be included in
# nginx server{} blocks to block AI bots.
config = f"if ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n return 403;\n}}"
return config
def json_to_caddy(robot_json):
caddyfile = "@aibots {\n "
caddyfile += f' header_regexp User-Agent "{list_to_pcre(robot_json.keys())}"'
caddyfile += "\n}"
return caddyfile
def json_to_haproxy(robots_json):
# Creates a source file for HAProxy. Follow instructions in the README to implement it.
txt = "\n".join(f"{k}" for k in robots_json.keys())
return txt
def update_file_if_changed(file_name, converter):
"""Update files if newer content is available and log the (in)actions."""
new_content = converter(load_robots_json())
old_content = Path(file_name).read_text(encoding="utf-8")
filepath = Path(file_name)
# "touch" will create the file if it doesn't exist yet
filepath.touch()
old_content = filepath.read_text(encoding="utf-8")
if old_content == new_content:
print(f"{file_name} is already up to date.")
else:
@@ -150,6 +213,23 @@ def conversions():
file_name="./table-of-bot-metrics.md",
converter=json_to_table,
)
update_file_if_changed(
file_name="./.htaccess",
converter=json_to_htaccess,
)
update_file_if_changed(
file_name="./nginx-block-ai-bots.conf",
converter=json_to_nginx,
)
update_file_if_changed(
file_name="./Caddyfile",
converter=json_to_caddy,
)
update_file_if_changed(
file_name="./haproxy-block-ai-bots.txt",
converter=json_to_haproxy,
)
if __name__ == "__main__":

.htaccess (test fixture, new file)

@@ -0,0 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash) [NC]
RewriteRule !^/?robots\.txt$ - [F,L]

Caddyfile (test fixture, new file)

@@ -0,0 +1,3 @@
@aibots {
header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)"
}

haproxy-block-ai-bots.txt (test fixture, new file)

@@ -0,0 +1,47 @@
AI2Bot
Ai2Bot-Dolma
Amazonbot
anthropic-ai
Applebot
Applebot-Extended
Bytespider
CCBot
ChatGPT-User
Claude-Web
ClaudeBot
cohere-ai
Diffbot
FacebookBot
facebookexternalhit
FriendlyCrawler
Google-Extended
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iaskspider/2.0
ICC-Crawler
ImagesiftBot
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
Meta-ExternalAgent
Meta-ExternalFetcher
OAI-SearchBot
omgili
omgilibot
Perplexity-User
PerplexityBot
PetalBot
Scrapy
Sidetrade indexer bot
Timpibot
VelenPublicWebCrawler
Webzio-Extended
YouBot
crawler.with.dots
star***crawler
Is this a crawler?
a[mazing]{42}(robot)
2^32$
curl|sudo bash

nginx-block-ai-bots.conf (test fixture, new file)

@@ -0,0 +1,3 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)") {
return 403;
}

robots.json (test fixture)

@@ -223,6 +223,13 @@
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"Perplexity-User": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://docs.perplexity.ai/guides/bots)",
"function": "Used to answer queries at the request of users.",
"frequency": "Only when prompted by a user.",
"description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
},
"PerplexityBot": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/)",
@@ -278,5 +285,47 @@
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
},
"crawler.with.dots": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression dots need to be escaped."
},
"star***crawler": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression stars need to be escaped."
},
"Is this a crawler?": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression spaces and question marks need to be escaped."
},
"a[mazing]{42}(robot)": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression parantheses, braces, etc. need to be escaped."
},
"2^32$": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression RE anchor characters need to be escaped."
},
"curl|sudo bash": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression pipes need to be escaped."
}
}

robots.txt (test fixture)

@@ -30,6 +30,7 @@ User-agent: Meta-ExternalFetcher
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
@@ -38,4 +39,10 @@ User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: YouBot
User-agent: crawler.with.dots
User-agent: star***crawler
User-agent: Is this a crawler?
User-agent: a[mazing]{42}(robot)
User-agent: 2^32$
User-agent: curl|sudo bash
Disallow: /

table-of-bot-metrics.md (test fixture)

@@ -1,42 +1,49 @@
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|-----|----------|-----------------------|----------|------------------|-------------|
|------|----------|-----------------------|----------|------------------|-------------|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| anthropic-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMS, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| ChatGPT-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| cohere-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
| facebookexternalhit | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | No information. | Unclear at this time. | Unclear at this time. |
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
| Google-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. |
| iaskspider/2.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| Meta-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| OAI-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| Meta\-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for the Omgili search engine. Unknown if still used; the `omgili` agent is still used by Webz.io. |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visits web pages to help provide an accurate answer and includes links to the page in Perplexity's response. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/) | Used to answer queries at the request of users. | Takes action based on user prompts. | Operated by Perplexity to obtain results in response to user queries. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
| crawler\.with\.dots | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, dots need to be escaped. |
| star\*\*\*crawler | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, stars need to be escaped. |
| Is this a crawler? | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, spaces and question marks need to be escaped. |
| a\[mazing\]\{42\}\(robot\) | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, parentheses, braces, etc. need to be escaped. |
| 2^32$ | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, RE anchor characters need to be escaped. |
| curl\|sudo bash | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, pipes need to be escaped. |
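These test-suite entries exercise the escaping that the generated `.htaccess` regular expression depends on. Python's `re.escape` performs this same kind of escaping; the sketch below is illustrative only and is not necessarily how the project's generator is implemented:

```python
import re

# Names taken from the test-suite rows above.
names = [
    "crawler.with.dots",
    "star***crawler",
    "Is this a crawler?",
    "a[mazing]{42}(robot)",
    "2^32$",
    "curl|sudo bash",
]

# re.escape backslash-escapes regex metacharacters (dots, stars, spaces,
# question marks, brackets, braces, parentheses, anchors, pipes) so each
# name matches literally inside a regex alternation.
pattern = "|".join(re.escape(name) for name in names)
print(pattern)
```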

code/tests.py Normal file → Executable file

@@ -1,21 +1,94 @@
"""These tests can be run with pytest.
This requires pytest: pip install pytest
cd to the `code` directory and run `pytest`
"""
#!/usr/bin/env python3
"""To run these tests just execute this script."""

import json
from pathlib import Path
import unittest

from dark_visitors import json_to_txt, json_to_table
from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx, json_to_haproxy, json_to_caddy


class RobotsUnittestExtensions:
    def loadJson(self, pathname):
        with open(pathname, "rt") as f:
            return json.load(f)

    def assertEqualsFile(self, f, s):
        with open(f, "rt") as f:
            f_contents = f.read()

        return self.assertMultiLineEqual(f_contents, s)


def test_robots_txt_creation():
    robots_json = json.loads(Path("test_files/robots.json").read_text())
    robots_txt = json_to_txt(robots_json)
    assert Path("test_files/robots.txt").read_text() == robots_txt


class TestRobotsTXTGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_robots_txt_generation(self):
        robots_txt = json_to_txt(self.robots_dict)
        self.assertEqualsFile("test_files/robots.txt", robots_txt)


def test_table_of_bot_metrices_md():
    robots_json = json.loads(Path("test_files/robots.json").read_text())
    robots_table = json_to_table(robots_json)
    assert Path("test_files/table-of-bot-metrics.md").read_text() == robots_table


class TestTableMetricsGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 32768

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_table_generation(self):
        robots_table = json_to_table(self.robots_dict)
        self.assertEqualsFile("test_files/table-of-bot-metrics.md", robots_table)


class TestHtaccessGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_htaccess_generation(self):
        robots_htaccess = json_to_htaccess(self.robots_dict)
        self.assertEqualsFile("test_files/.htaccess", robots_htaccess)


class TestNginxConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_nginx_generation(self):
        robots_nginx = json_to_nginx(self.robots_dict)
        self.assertEqualsFile("test_files/nginx-block-ai-bots.conf", robots_nginx)


class TestHaproxyConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_haproxy_generation(self):
        robots_haproxy = json_to_haproxy(self.robots_dict)
        self.assertEqualsFile("test_files/haproxy-block-ai-bots.txt", robots_haproxy)


class TestRobotsNameCleaning(unittest.TestCase):
    def test_clean_name(self):
        from robots import clean_robot_name

        self.assertEqual(clean_robot_name("PerplexityUser"), "Perplexity-User")


class TestCaddyfileGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_caddyfile_generation(self):
        robots_caddyfile = json_to_caddy(self.robots_dict)
        self.assertEqualsFile("test_files/Caddyfile", robots_caddyfile)


if __name__ == "__main__":
    import os
    os.chdir(os.path.dirname(__file__))

    unittest.main(verbosity=2)


@@ -0,0 +1,40 @@
# Bing (bingbot)

It's not well publicised, but Bing uses the data it crawls for AI and training.

However, blocking a search engine of this size using `robots.txt` is a drastic approach: Bing is second only to Google, and blocking it could significantly impact your website's visibility in search results.

Additionally, Bing powers a number of search engines such as Yahoo and AOL, and its search results are also used in DuckDuckGo, amongst others.

Fortunately, Bing supports a relatively simple opt-out method, though it requires an additional step.

## How to opt out of AI training

You must add a meta tag to the `<head>` of your webpage or set the [X-Robots-Tag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Robots-Tag) HTTP header on your response. It needs to be added to every page or response on your website.
If using the meta tag, the line you need to add is:
```plaintext
<meta name="robots" content="noarchive">
```
Or include the HTTP response header:
```plaintext
X-Robots-Tag: noarchive
```
By adding this line or header, you are signifying to Bing: "Do not use the content for training Microsoft's generative AI foundation models."
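To confirm a page is sending the signal, a minimal Python sketch like the one below can check for both mechanisms (the URL is a placeholder; replace it with one of your own pages):

```python
import urllib.request

PAGE = "https://example.com/"  # placeholder - use one of your own pages

with urllib.request.urlopen(PAGE) as resp:
    header = resp.headers.get("X-Robots-Tag", "")
    body = resp.read().decode("utf-8", errors="replace")

has_header = "noarchive" in header.lower()
has_meta = '<meta name="robots" content="noarchive"' in body.lower()
print(f"X-Robots-Tag opt-out: {has_header}, meta tag opt-out: {has_meta}")
```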
## Will my site be negatively affected?

Simple answer: no.

The original use of "noarchive" has been retired by all search engines; Google retired its use in 2024.

The use of this meta tag will not impact your site in search engines or in any other meaningful way if you add it to your page(s).

It is now used solely by a handful of crawlers, such as Bingbot and Amazonbot, to signal that your data should not be used for AI/training.
## Resources
Bing Blog AI opt-out announcement: https://blogs.bing.com/webmaster/september-2023/Announcing-new-options-for-webmasters-to-control-usage-of-their-content-in-Bing-Chat
Bing meta tag information, including AI opt-out: https://www.bing.com/webmasters/help/which-robots-metatags-does-bing-support-5198d240

haproxy-block-ai-bots.txt Normal file

@@ -0,0 +1,80 @@
AI2Bot
Ai2Bot-Dolma
aiHitBot
Amazonbot
Andibot
anthropic-ai
Applebot
Applebot-Extended
bedrockbot
Brightbot 1.0
Bytespider
CCBot
ChatGPT-User
Claude-SearchBot
Claude-User
Claude-Web
ClaudeBot
cohere-ai
cohere-training-data-crawler
Cotoyogi
Crawlspace
Diffbot
DuckAssistBot
EchoboxBot
FacebookBot
facebookexternalhit
Factset_spyderbot
FirecrawlAgent
FriendlyCrawler
Google-CloudVertexBot
Google-Extended
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iaskspider/2.0
ICC-Crawler
ImagesiftBot
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
meta-externalagent
Meta-ExternalAgent
meta-externalfetcher
Meta-ExternalFetcher
MistralAI-User/1.0
MyCentralAIScraperBot
NovaAct
OAI-SearchBot
omgili
omgilibot
Operator
PanguBot
Panscient
panscient.com
Perplexity-User
PerplexityBot
PetalBot
PhindBot
Poseidon Research Crawler
QualifiedBot
QuillBot
quillbot.com
SBIntuitionsBot
Scrapy
SemrushBot
SemrushBot-BA
SemrushBot-CT
SemrushBot-OCOB
SemrushBot-SI
SemrushBot-SWA
Sidetrade indexer bot
TikTokSpider
Timpibot
VelenPublicWebCrawler
Webzio-Extended
wpbot
YandexAdditional
YandexAdditionalBot
YouBot
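The file is a flat list of User-Agent substrings, one per line. The sketch below is a rough Python equivalent of the case-insensitive substring match an HAProxy ACL such as `hdr_sub(user-agent) -i -f haproxy-block-ai-bots.txt` would perform; that ACL wiring is an assumption about how the list is used in a frontend, not something this diff specifies:

```python
from pathlib import Path

# One User-Agent substring per line, as in haproxy-block-ai-bots.txt above.
blocklist = [
    line.strip().lower()
    for line in Path("haproxy-block-ai-bots.txt").read_text().splitlines()
    if line.strip()
]

def is_blocked(user_agent: str) -> bool:
    """Case-insensitive substring match against the blocklist."""
    ua = user_agent.lower()
    return any(bot in ua for bot in blocklist)

print(is_blocked("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(is_blocked("Mozilla/5.0 (X11; Linux x86_64)"))       # False
```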

nginx-block-ai-bots.conf Normal file

@@ -0,0 +1,3 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)") {
return 403;
}
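To verify the rule is live, send a request with one of the listed User-Agents and confirm the 403. A minimal sketch (the URL is a placeholder for your own site):

```python
import urllib.request
from urllib.error import HTTPError

req = urllib.request.Request(
    "https://example.com/",  # placeholder - your own site
    headers={"User-Agent": "GPTBot"},  # any agent from the pattern above
)
try:
    urllib.request.urlopen(req)
    print("request allowed - the nginx rule does not appear to be active")
except HTTPError as err:
    print(f"blocked with HTTP {err.code}")  # expect 403
```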


@@ -13,6 +13,13 @@
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes"
},
"aiHitBot": {
"operator": "[aiHit](https://www.aihitdata.com/about)",
"respect": "Yes",
"function": "A massive, artificial intelligence/machine learning, automated system.",
"frequency": "No information provided.",
"description": "Scrapes data for AI systems."
},
"Amazonbot": {
"operator": "Amazon",
"respect": "Yes",
@@ -20,6 +27,13 @@
"frequency": "No information provided.",
"description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
},
"Andibot": {
"operator": "[Andi](https://andisearch.com/)",
"respect": "Unclear at this time",
"function": "Search engine using generative AI, AI Search Assistant",
"frequency": "No information provided.",
"description": "Scrapes website and provides AI summary."
},
"anthropic-ai": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
@@ -41,6 +55,20 @@
"frequency": "Unclear at this time.",
"description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
},
"bedrockbot": {
"operator": "[Amazon](https://amazon.com)",
"respect": "[Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector)",
"function": "Data scraping for custom AI applications.",
"frequency": "Unclear at this time.",
"description": "Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application."
},
"Brightbot 1.0": {
"operator": "Browsing.ai",
"respect": "Unclear at this time.",
"function": "LLM/AI training.",
"frequency": "Unclear at this time.",
"description": "Scrapes data to train LLMs and AI products focused on website customer support."
},
"Bytespider": {
"operator": "ByteDance",
"respect": "No",
@@ -62,12 +90,26 @@
"frequency": "Only when prompted by a user.",
"description": "Used by plugins in ChatGPT to answer queries based on user input."
},
"Claude-Web": {
"Claude-SearchBot": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
"function": "Scrapes data to train Anthropic's AI products.",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
"description": "Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses."
},
"Claude-User": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent.",
"frequency": "No information provided.",
"description": "Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent."
},
"Claude-Web": {
"operator": "Anthropic",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web"
},
"ClaudeBot": {
"operator": "[Anthropic](https://www.anthropic.com)",
@@ -90,6 +132,20 @@
"frequency": "Unclear at this time.",
"description": "cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler"
},
"Cotoyogi": {
"operator": "[ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/)",
"respect": "Yes",
"function": "AI LLM Scraper.",
"frequency": "No information provided.",
"description": "Scrapes data for AI training in Japanese language."
},
"Crawlspace": {
"operator": "[Crawlspace](https://crawlspace.dev)",
"respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
"function": "Scrapes data",
"frequency": "Unclear at this time.",
"description": "Provides crawling services for any purpose, probably including AI model training."
},
"Diffbot": {
"operator": "[Diffbot](https://www.diffbot.com/)",
"respect": "At the discretion of Diffbot users.",
@@ -104,6 +160,13 @@
"frequency": "Unclear at this time.",
"description": "DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot"
},
"EchoboxBot": {
"operator": "[Echobox](https://echobox.com)",
"respect": "Unclear at this time.",
"function": "Data collection to support AI-powered products.",
"frequency": "Unclear at this time.",
"description": "Supports company's AI-powered social and email management products."
},
"FacebookBot": {
"operator": "Meta/Facebook",
"respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)",
@@ -111,6 +174,27 @@
"frequency": "Up to 1 page per second",
"description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
},
"facebookexternalhit": {
"operator": "Meta/Facebook",
"respect": "[No](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313)",
"function": "Ostensibly only for sharing, but likely used as an AI crawler as well",
"frequency": "Unclear at this time.",
"description": "Note that excluding FacebookExternalHit will block incorporating OpenGraph data when sharing in social media, including rich links in Apple's Messages app. [According to Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/), its purpose is \"to crawl the content of an app or website that was shared on one of Meta\u2019s family of apps\u2026\". However, see discussions [here](https://github.com/ai-robots-txt/ai.robots.txt/pull/21) and [here](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) for evidence to the contrary."
},
"Factset_spyderbot": {
"operator": "[Factset](https://www.factset.com/ai)",
"respect": "Unclear at this time.",
"function": "AI model training.",
"frequency": "No information provided.",
"description": "Scrapes data for AI training."
},
"FirecrawlAgent": {
"operator": "[Firecrawl](https://www.firecrawl.dev/)",
"respect": "Yes",
"function": "AI scraper and LLM training",
"frequency": "No information provided.",
"description": "Scrapes data for AI systems and LLM training."
},
"FriendlyCrawler": {
"description": "Unclear who the operator is; but data is used for training/machine learning.",
"frequency": "Unclear at this time.",
@@ -118,6 +202,13 @@
"operator": "Unknown",
"respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
},
"Google-CloudVertexBot": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
"function": "Build and manage AI models for businesses employing Vertex AI",
"frequency": "No information.",
"description": "Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents."
},
"Google-Extended": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
@@ -195,13 +286,27 @@
"frequency": "Unclear at this time.",
"description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
},
"Meta-ExternalAgent": {
"meta-externalagent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes.",
"respect": "Yes",
"function": "Used to train models and improve products.",
"frequency": "No information.",
"description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
},
"Meta-ExternalAgent": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent"
},
"meta-externalfetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"Meta-ExternalFetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
@ -209,6 +314,27 @@
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"MistralAI-User/1.0": {
"operator": "Mistral AI",
"function": "Takes action based on user prompts.",
"frequency": "Only when prompted by a user.",
"description": "MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response.",
"respect": "Yes"
},
"MyCentralAIScraperBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI data scraper",
"frequency": "Unclear at this time.",
"description": "Operator and data use is unclear at this time."
},
"NovaAct": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact"
},
"OAI-SearchBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "[Yes](https://platform.openai.com/docs/bots)",
@@ -230,6 +356,13 @@
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"Operator": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator"
},
"PanguBot": {
"operator": "the Chinese company Huawei",
"respect": "Unclear at this time.",
@@ -237,12 +370,33 @@
"frequency": "Unclear at this time.",
"description": "PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot"
},
"Panscient": {
"operator": "[Panscient](https://panscient.com)",
"respect": "[Yes](https://panscient.com/faq.htm)",
"function": "Data collection and analysis using machine learning and AI.",
"frequency": "The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address.",
"description": "Compiles data on businesses and business professionals that is structured using AI and machine learning."
},
"panscient.com": {
"operator": "[Panscient](https://panscient.com)",
"respect": "[Yes](https://panscient.com/faq.htm)",
"function": "Data collection and analysis using machine learning and AI.",
"frequency": "The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address.",
"description": "Compiles data on businesses and business professionals that is structured using AI and machine learning."
},
"Perplexity-User": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://docs.perplexity.ai/guides/bots)",
"function": "Used to answer queries at the request of users.",
"frequency": "Only when prompted by a user.",
"description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
},
"PerplexityBot": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/)",
"function": "Used to answer queries at the request of users.",
"frequency": "Takes action based on user prompts.",
"description": "Operated by Perplexity to obtain results in response to user queries."
"respect": "[Yes](https://docs.perplexity.ai/guides/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in Perplexity."
},
"PetalBot": {
"description": "Operated by Huawei to provide search and AI assistant services.",
@@ -251,6 +405,48 @@
"operator": "[Huawei](https://huawei.com/)",
"respect": "Yes"
},
"PhindBot": {
"description": "Company offers an AI agent that uses AI and generate extra web query on the fly",
"frequency": "No explicit frequency provided.",
"function": "AI-enhanced search engine.",
"operator": "[phind](https://www.phind.com/)",
"respect": "Unclear at this time."
},
"Poseidon Research Crawler": {
"operator": "[Poseidon Research](https://www.poseidonresearch.com)",
"description": "Lab focused on scaling the interpretability research necessary to make better AI systems possible.",
"frequency": "No explicit frequency provided.",
"function": "AI research crawler",
"respect": "Unclear at this time."
},
"QualifiedBot": {
"description": "Operated by Qualified as part of their suite of AI product offerings.",
"frequency": "No explicit frequency provided.",
"function": "Company offers AI agents and other related products; usage can be assumed to support said products.",
"operator": "[Qualified](https://www.qualified.com)",
"respect": "Unclear at this time."
},
"QuillBot": {
"description": "Operated by QuillBot as part of their suite of AI product offerings.",
"frequency": "No explicit frequency provided.",
"function": "Company offers AI detection, writing tools and other services.",
"operator": "[Quillbot](https://quillbot.com)",
"respect": "Unclear at this time."
},
"quillbot.com": {
"description": "Operated by QuillBot as part of their suite of AI product offerings.",
"frequency": "No explicit frequency provided.",
"function": "Company offers AI detection, writing tools and other services.",
"operator": "[Quillbot](https://quillbot.com)",
"respect": "Unclear at this time."
},
"SBIntuitionsBot": {
"description": "AI development and information analysis",
"respect": "[Yes](https://www.sbintuitions.co.jp/en/bot/)",
"frequency": "No information.",
"function": "Uses data gathered in AI development and information analysis.",
"operator": "[SB Intuitions](https://www.sbintuitions.co.jp/en/)"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
"frequency": "No information.",
@@ -258,6 +454,48 @@
"operator": "[Zyte](https://www.zyte.com)",
"respect": "Unclear at this time."
},
"SemrushBot": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-BA": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-CT": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-OCOB": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-SI": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-SWA": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Checks URLs on your site for SWA tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"Sidetrade indexer bot": {
"description": "AI product training.",
"frequency": "No information.",
@@ -265,6 +503,13 @@
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"TikTokSpider": {
"operator": "ByteDance",
"respect": "Unclear at this time.",
"function": "LLM training.",
"frequency": "Unclear at this time.",
"description": "Downloads data to train LLMS, as per Bytespider."
},
"Timpibot": {
"operator": "[Timpi](https://timpi.io)",
"respect": "Unclear at this time.",
@@ -286,6 +531,27 @@
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"wpbot": {
"operator": "[QuantumCloud](https://www.quantumcloud.com)",
"respect": "Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9)",
"function": "Live chat support and lead generation.",
"frequency": "Unclear at this time.",
"description": "wpbot is a used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of customer models, data collection and customer support."
},
"YandexAdditional": {
"operator": "[Yandex](https://yandex.ru)",
"respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
"function": "Scrapes/analyzes data for the YandexGPT LLM.",
"frequency": "No information.",
"description": "Retrieves data used for YandexGPT quick answers features."
},
"YandexAdditionalBot": {
"operator": "[Yandex](https://yandex.ru)",
"respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
"function": "Scrapes/analyzes data for the YandexGPT LLM.",
"frequency": "No information.",
"description": "Retrieves data used for YandexGPT quick answers features."
},
"YouBot": {
"operator": "[You](https://about.you.com/youchat/)",
"respect": "[Yes](https://about.you.com/youbot/)",


@@ -1,19 +1,33 @@
User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: Amazonbot
User-agent: Andibot
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: bedrockbot
User-agent: Brightbot 1.0
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: Claude-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawlspace
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: EchoboxBot
User-agent: FacebookBot
User-agent: facebookexternalhit
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
User-agent: FriendlyCrawler
User-agent: Google-CloudVertexBot
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
@@ -25,18 +39,43 @@ User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
User-agent: Meta-ExternalFetcher
User-agent: MistralAI-User/1.0
User-agent: MyCentralAIScraperBot
User-agent: NovaAct
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: Operator
User-agent: PanguBot
User-agent: Panscient
User-agent: panscient.com
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
User-agent: quillbot.com
User-agent: SBIntuitionsBot
User-agent: Scrapy
User-agent: SemrushBot
User-agent: SemrushBot-BA
User-agent: SemrushBot-CT
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SI
User-agent: SemrushBot-SWA
User-agent: Sidetrade indexer bot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: wpbot
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
Disallow: /
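Because every agent in this group shares the single `Disallow: /` rule, the standard-library `urllib.robotparser` can confirm that a listed crawler is excluded while others remain allowed (this assumes you run it from a checkout containing the generated robots.txt):

```python
from pathlib import Path
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse(Path("robots.txt").read_text().splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/page"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))  # True
```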


@@ -1,43 +1,82 @@
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|-----|----------|-----------------------|----------|------------------|-------------|
|------|----------|-----------------------|----------|------------------|-------------|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| aiHitBot | [aiHit](https://www.aihitdata.com/about) | Yes | A massive, artificial intelligence/machine learning, automated system. | No information provided. | Scrapes data for AI systems. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| anthropic-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Andibot | [Andi](https://andisearch.com/) | Unclear at this time | Search engine using generative AI, AI Search Assistant | No information provided. | Scrapes website and provides AI summary. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| bedrockbot | [Amazon](https://amazon.com) | [Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector) | Data scraping for custom AI applications. | Unclear at this time. | Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application. |
| Brightbot 1\.0 | Browsing.ai | Unclear at this time. | LLM/AI training. | Unclear at this time. | Scrapes data to train LLMs and AI products focused on website customer support. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| ChatGPT-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-SearchBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. | No information provided. | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. |
| Claude\-User | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. | No information provided. | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. |
| Claude\-Web | Anthropic | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| cohere-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-training\-data\-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
| Cotoyogi | [ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/) | Yes | AI LLM Scraper. | No information provided. | Scrapes data for AI training in Japanese language. |
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
| EchoboxBot | [Echobox](https://echobox.com) | Unclear at this time. | Data collection to support AI-powered products. | Unclear at this time. | Supports company's AI-powered social and email management products. |
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
| facebookexternalhit | Meta/Facebook | [No](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) | Ostensibly only for sharing, but likely used as an AI crawler as well | Unclear at this time. | Note that excluding FacebookExternalHit will block incorporating OpenGraph data when sharing in social media, including rich links in Apple's Messages app. [According to Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/), its purpose is "to crawl the content of an app or website that was shared on one of Meta's family of apps…". However, see discussions [here](https://github.com/ai-robots-txt/ai.robots.txt/pull/21) and [here](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) for evidence to the contrary. |
| Factset\_spyderbot | [Factset](https://www.factset.com/ai) | Unclear at this time. | AI model training. | No information provided. | Scrapes data for AI training. |
| FirecrawlAgent | [Firecrawl](https://www.firecrawl.dev/) | Yes | AI scraper and LLM training | No information provided. | Scrapes data for AI systems and LLM training. |
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
| Google-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| Google\-CloudVertexBot | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Build and manage AI models for businesses employing Vertex AI | No information. | Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models; paywalled data, PII, and data that violates the company's policies are removed. |
| iaskspider/2.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Uses the collected data for artificial intelligence technologies; provides data to third parties, including commercial companies, which can use the data for their own business. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Uses the collected data for artificial intelligence technologies; provides data to third parties, including commercial companies, which can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| Meta-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| OAI-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| meta\-externalagent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalAgent | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent |
| meta\-externalfetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| MistralAI\-User/1\.0 | Mistral AI | Yes | Takes action based on user prompts. | Only when prompted by a user. | MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. |
| MyCentralAIScraperBot | Unclear at this time. | Unclear at this time. | AI data scraper | Unclear at this time. | Operator and data use is unclear at this time. |
| NovaAct | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for the Omgili search engine. Unknown if still used; the `omgili` agent is still used by Webz.io. |
| Operator | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator |
| PanguBot | [Huawei](https://huawei.com/) | Unclear at this time. | AI Data Scrapers | Unclear at this time. | PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot |
| Panscient | [Panscient](https://panscient.com) | [Yes](https://panscient.com/faq.htm) | Data collection and analysis using machine learning and AI. | The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address. | Compiles data on businesses and business professionals that is structured using AI and machine learning. |
| panscient\.com | [Panscient](https://panscient.com) | [Yes](https://panscient.com/faq.htm) | Data collection and analysis using machine learning and AI. | The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address. | Compiles data on businesses and business professionals that is structured using AI and machine learning. |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visits web pages to help provide an accurate answer and includes links to the page in Perplexity responses. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| PhindBot | [phind](https://www.phind.com/) | Unclear at this time. | AI-enhanced search engine. | No explicit frequency provided. | Company offers an AI agent that generates extra web queries on the fly. |
| Poseidon Research Crawler | [Poseidon Research](https://www.poseidonresearch.com) | Unclear at this time. | AI research crawler | No explicit frequency provided. | Lab focused on scaling the interpretability research necessary to make better AI systems possible. |
| QualifiedBot | [Qualified](https://www.qualified.com) | Unclear at this time. | Company offers AI agents and other related products; usage can be assumed to support said products. | No explicit frequency provided. | Operated by Qualified as part of their suite of AI product offerings. |
| QuillBot | [QuillBot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
| quillbot\.com | [QuillBot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
| SBIntuitionsBot | [SB Intuitions](https://www.sbintuitions.co.jp/en/) | [Yes](https://www.sbintuitions.co.jp/en/bot/) | Uses data gathered in AI development and information analysis. | No information. | AI development and information analysis. |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| SemrushBot | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for the ContentShake AI tool. | Roughly once every 10 seconds. | On-demand: you enter a text or URL and the tool makes suggestions on it (the tool uses AI but does not actively crawl the web; you must manually submit each text or URL). |
| SemrushBot\-BA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for the ContentShake AI tool. | Roughly once every 10 seconds. | On-demand: you enter a text or URL and the tool makes suggestions on it (the tool uses AI but does not actively crawl the web; you must manually submit each text or URL). |
| SemrushBot\-CT | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for the ContentShake AI tool. | Roughly once every 10 seconds. | On-demand: you enter a text or URL and the tool makes suggestions on it (the tool uses AI but does not actively crawl the web; you must manually submit each text or URL). |
| SemrushBot\-OCOB | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for the ContentShake AI tool. | Roughly once every 10 seconds. | On-demand: you enter a text or URL and the tool makes suggestions on it (the tool uses AI but does not actively crawl the web; you must manually submit each text or URL). |
| SemrushBot\-SI | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for the ContentShake AI tool. | Roughly once every 10 seconds. | On-demand: you enter a text or URL and the tool makes suggestions on it (the tool uses AI but does not actively crawl the web; you must manually submit each text or URL). |
| SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for the SWA tool. | Roughly once every 10 seconds. | On-demand: you enter a text or URL and the tool makes suggestions on it (the tool uses AI but does not actively crawl the web; you must manually submit each text or URL). |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| TikTokSpider | ByteDance | Unclear at this time. | LLM training. | Unclear at this time. | Downloads data to train LLMs, as does Bytespider. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| wpbot | [QuantumCloud](https://www.quantumcloud.com) | Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9) | Live chat support and lead generation. | Unclear at this time. | wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of custom models, data collection, and customer support. |
| YandexAdditional | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT's quick answers feature. |
| YandexAdditionalBot | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT's quick answers feature. |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
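
To block any of the crawlers above, list its user agent string in your `robots.txt`. A minimal sketch follows; the agents shown are an illustrative selection from this table, not a recommendation. Per [RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html), consecutive `User-agent` lines form one group that shares the rules below them:

```txt
# Illustrative robots.txt excerpt: each User-agent line names a crawler
# from the table above; the single Disallow applies to all of them.
User-agent: Meta-ExternalAgent
User-agent: OAI-SearchBot
User-agent: PerplexityBot
User-agent: Timpibot
User-agent: YouBot
Disallow: /
```

RFC 9309 requires case-insensitive matching of the product token, but the table lists `meta-externalagent` and `Meta-ExternalAgent` (and the corresponding fetcher variants) as separate entries, so including both spellings is a reasonable precaution against crawlers that match case-sensitively.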