Compare commits

...

392 commits
v1.8 ... main

Author SHA1 Message Date
dark-visitors
4ed17b8e4a Update from Dark Visitors
2025-06-17 01:00:21 +00:00
ai.robots.txt
5326c202b5 Merge pull request #154 from paulrudy/main
re-add facebookexternalhit
2025-06-16 15:12:42 +00:00
a31ae1e6d0
Merge pull request #154 from paulrudy/main
re-add facebookexternalhit
2025-06-16 08:12:31 -07:00
paulrudy
7535893aec re-add facebookexternalhit 2025-06-15 16:49:07 -07:00
ai.robots.txt
eb05f2f527 Merge pull request #153 from sergiospagnuolo/Poseidon
Update robots.json with new crawler
2025-06-14 14:04:03 +00:00
26a46c409d
Merge pull request #153 from sergiospagnuolo/Poseidon
Update robots.json with new crawler
2025-06-14 07:03:52 -07:00
dark-visitors
2b68568ac2 Update from Dark Visitors
2025-06-14 00:58:11 +00:00
Sérgio Spagnuolo
b05f2fee00
Update robots.json with new crawler
Update with Poseidon Research Crawler as found in nytimes.com/robots.txt
2025-06-13 17:15:13 -03:00
ai.robots.txt
e53d81c66d Merge pull request #152 from ai-robots-txt/MyCentralAIScraperBot
chore(robots.json): adds MyCentralAIScraperBot
2025-06-13 09:28:41 +00:00
Glyn Normington
20e327e74e
Merge pull request #152 from ai-robots-txt/MyCentralAIScraperBot
chore(robots.json): adds MyCentralAIScraperBot
2025-06-13 10:28:32 +01:00
Glyn Normington
8f17718e76
Fix typo 2025-06-13 10:28:12 +01:00
d760f9216f
chore(robots.json): adds MyCentralAIScraperBot 2025-06-12 13:08:29 -07:00
ai.robots.txt
842e2256e8 Merge pull request #150 from ai-robots-txt/semrush-bots
chore(robots.json): adds additional SemrushBot user agents
2025-06-12 07:12:00 +00:00
Glyn Normington
229ea20426
Merge pull request #150 from ai-robots-txt/semrush-bots
chore(robots.json): adds additional SemrushBot user agents
2025-06-12 08:11:51 +01:00
14d68f05ba
chore(robots.json): adds additional SemrushBot user agents 2025-06-11 13:50:53 -07:00
dark-visitors
cf598b6b71 Update from Dark Visitors
2025-06-10 01:00:37 +00:00
ai.robots.txt
3759a6bf14 chore(robots.json): adds EchoboxBot (#148)
2025-06-09 15:44:36 +00:00
7867c3e26c
chore(robots.json): adds EchoboxBot (#148) 2025-06-09 16:44:25 +01:00
dark-visitors
e21f6ae1b6 Update from Dark Visitors
2025-06-06 00:59:25 +00:00
ai.robots.txt
ac7ed17e71 Merge pull request #145 from ai-robots-txt/aws-bedrockbot
chore(robots.json): adds bedrockbot
2025-06-05 16:51:17 +00:00
Glyn Normington
81747e6772
Merge pull request #145 from ai-robots-txt/aws-bedrockbot
chore(robots.json): adds bedrockbot
2025-06-05 17:51:03 +01:00
528d77bf07
chore(robots.json): adds bedrockbot 2025-06-05 09:14:23 -07:00
dark-visitors
77393df5aa Update from Dark Visitors
2025-06-05 00:59:28 +00:00
ai.robots.txt
75ea75a95b Merge pull request #143 from ai-robots-txt/panscient
chore(robots.json): adds Panscient
2025-06-04 18:04:06 +00:00
Glyn Normington
2fca1ddcf1
Merge pull request #143 from ai-robots-txt/panscient
chore(robots.json): adds Panscient
2025-06-04 19:03:53 +01:00
ai.robots.txt
9c28c63a0c Merge pull request #142 from ai-robots-txt/quillbot
chore(robots.json): adds Quillbot
2025-06-04 17:54:57 +00:00
395c013eea
Merge pull request #142 from ai-robots-txt/quillbot
chore(robots.json): adds Quillbot
2025-06-04 10:54:46 -07:00
4568d69b0e
chore(robots.json): adds Panscient 2025-06-04 10:54:14 -07:00
03831a7eb5
chore(robots.json): adds Quillbot 2025-06-04 10:46:58 -07:00
dark-visitors
2b5a59a303 Update from Dark Visitors
2025-06-04 01:00:07 +00:00
ai.robots.txt
3efabc603d Merge pull request #141 from Ivan-Chupin/patch-1
Add SBIntuitionsBot
2025-06-03 23:28:48 +00:00
b35f9a31d7
Merge pull request #141 from Ivan-Chupin/patch-1
Add SBIntuitionsBot
2025-06-03 16:28:36 -07:00
Ivan Chupin
8f75f4a2f5
Add SBIntuitionsBot 2025-06-04 03:48:42 +05:00
ai.robots.txt
080946c360 Merge pull request #140 from ai-robots-txt/yandex-bots
chore(robots.json): adds YandexAdditional crawlers
2025-06-03 19:51:25 +00:00
Glyn Normington
7eec033cad
Merge pull request #140 from ai-robots-txt/yandex-bots
chore(robots.json): adds YandexAdditional crawlers
2025-06-03 20:51:14 +01:00
3187fd8a32
chore(robots.json): adds YandexAdditional crawlers 2025-06-03 12:41:57 -07:00
ai.robots.txt
d239e7e5ad Merge pull request #139 from ai-robots-txt/workflow-fix
chore(ai_robots_update.yml): correct workflow by revising git flags + adding guard
2025-06-03 01:52:35 +00:00
Glyn Normington
9dbf34010a
Merge pull request #139 from ai-robots-txt/workflow-fix
chore(ai_robots_update.yml): correct workflow by revising git flags + adding guard
2025-06-03 02:52:23 +01:00
dark-visitors
87016d1504 Update from Dark Visitors 2025-06-03 01:00:29 +00:00
899ce01c55
chore(ai_robots_update.yml): correct workflow by revising git flags + adding guard 2025-06-02 14:56:09 -07:00
Glyn Normington
4af776f0a0
Merge pull request #136 from ai-robots-txt/imgproxy-revert
chore(robots.json): revert "adds imgproxy crawler"
2025-06-02 20:21:10 +01:00
1dd66b6969
Revert "chore(robots.json): adds imgproxy crawler"
This reverts commit b65f45e408.
2025-06-02 11:53:06 -07:00
814df6b9a0
Merge pull request #134 from not-not-the-imp/patch-1
Add AndiBot and PhindBot
2025-05-31 16:03:16 -07:00
268922f8f2
Update robots.json 2025-05-31 16:02:05 -07:00
4259b25ccc
Update robots.json 2025-05-31 16:01:09 -07:00
d22b9ec51a
Update robots.json 2025-05-31 16:00:13 -07:00
imp
3e8edd083e
Add AndiBot and PhindBot
Fixes #75
2025-05-23 13:03:49 +01:00
ai.robots.txt
093ab81d78 Update from Dark Visitors
2025-05-23 00:58:57 +00:00
dark-visitors
7bf7f9164d Update from Dark Visitors
2025-05-22 00:58:45 +00:00
ai.robots.txt
fedb658cc0 Merge pull request #133 from ai-robots-txt/wpbot
chore(robots.json): adds wpbot
2025-05-21 21:06:05 +00:00
Glyn Normington
851eabe059
Merge pull request #133 from ai-robots-txt/wpbot
chore(robots.json): adds wpbot
2025-05-21 22:05:51 +01:00
ai.robots.txt
7c5389f4a0 Merge pull request #98 from kylebuckingham/main
Updating Claude Bots
2025-05-21 19:00:23 +00:00
af597586b6
Merge pull request #98 from kylebuckingham/main
Updating Claude Bots
2025-05-21 12:00:11 -07:00
b1d9a60a38
chore(robots.json): adds wpbot 2025-05-21 11:40:33 -07:00
ai.robots.txt
1c2acd75b7 Merge pull request #126 from ai-robots-txt/mistral-bot
chore(robots.json): adds MistralAI-User/1.0 crawler
2025-05-21 15:27:26 +00:00
Glyn Normington
202d3c3b9a
Merge pull request #126 from ai-robots-txt/mistral-bot
chore(robots.json): adds MistralAI-User/1.0 crawler
2025-05-21 16:27:14 +01:00
Glyn Normington
0a78fe1e76
Merge pull request #132 from ai-robots-txt/crawler-policy-update
chore(README): updates the opening line of our README to clarify the types of agents we block
2025-05-21 15:13:35 +01:00
8b151b2cdc
Update README.md
Co-authored-by: Glyn Normington <glyn.normington@gmail.com>
2025-05-21 06:52:36 -07:00
8a8001cbec
chore(README): updates the opening line of our README to clarify the types of agents we block 2025-05-20 13:55:25 -07:00
Glyn Normington
fe1267e290
Merge pull request #131 from Mihitoko/mention-x-robots-tag-for-bing
Mention X-Robots-Tag header as alternative for bing
2025-05-20 07:52:32 +01:00
Mihitoko
9297c7dfa3
Mention X-Robots-Tag header as alternative for bing 2025-05-20 00:10:05 +02:00
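The PR above points at the X-Robots-Tag response header for sites that cannot rely on robots.txt alone. A hypothetical Nginx form (the `add_header` directive is standard Nginx; the exact tag value the README recommends for Bing is assumed here):

```nginx
# Hypothetical example: send X-Robots-Tag on every response so Bing
# limits reuse of the content (the "noarchive" value is an assumption)
add_header X-Robots-Tag "noarchive";
```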
dark-visitors
7a2e6cba52 Update from Dark Visitors
2025-05-17 00:57:28 +00:00
ai.robots.txt
dd1ed174b7 Merge pull request #129 from ai-robots-txt/google-cloudvertexbot
chore(robots.json): adds Google-CloudVertexBot
2025-05-16 11:35:15 +00:00
Glyn Normington
89c0fbaf86
Merge pull request #129 from ai-robots-txt/google-cloudvertexbot
chore(robots.json): adds Google-CloudVertexBot
2025-05-16 12:35:04 +01:00
ca918a963f
chore(robots.json): adds Google-CloudVertexBot 2025-05-15 21:16:49 -07:00
5fba0b746d
chore(robots.json): adds MistralAI-User/1.0 crawler 2025-05-15 20:45:20 -07:00
dark-visitors
16d1de7094 Update from Dark Visitors
2025-05-16 00:59:08 +00:00
Glyn Normington
73f6f67adf
Merge pull request #125 from holysoles/lint_robots_json
lint robots.json during pull requests
2025-05-15 17:26:15 +01:00
Patrick Evans
498aa50760 lint robots.json during pull requests 2025-05-15 11:15:25 -05:00
ai.robots.txt
1c470babbe Merge pull request #123 from joehoyle/patch-1
Fix JSON syntax error
2025-05-15 16:12:30 +00:00
Adam Newbold
84d63916d2
Merge pull request #123 from joehoyle/patch-1
Fix JSON syntax error
2025-05-15 12:12:21 -04:00
Joe Hoyle
0c56b96fd9
Fix JSON syntax error 2025-05-15 11:26:47 -04:00
28e69e631b
Merge pull request #122 from ai-robots-txt/qualified-bot
chore(robots.json): adds QualifiedBot crawler
2025-05-15 07:17:51 -07:00
9539256cb3
chore(robots.json): adds QualifiedBot crawler 2025-05-15 07:16:07 -07:00
9659c88b0c
Merge pull request #121 from solution-libre/add-traefik-plugin
Add Traefik plugin to the README.md file
2025-05-14 16:45:34 -07:00
Florent Poinsaut
c66d180295
Merge branch 'main' into add-traefik-plugin 2025-05-14 22:06:56 +02:00
Glyn Normington
9a9b1b41c0
Merge pull request #119 from ai-robots-txt/bing-ai-opt-out-instructions
Bing AI opt-out instructions
2025-05-14 19:18:20 +01:00
Florent Poinsaut
b4610a725c Add Traefik plugin 2025-05-14 14:11:56 +02:00
36a52a88d8
Bing AI opt-out instructions 2025-05-12 20:20:18 -07:00
ai.robots.txt
678380727e Merge pull request #115 from glyn/syntax
Fix Python syntax error
2025-05-01 10:29:06 +00:00
Glyn Normington
fb8188c49d
Merge pull request #115 from glyn/syntax
Fix Python syntax error
2025-05-01 11:28:54 +01:00
Glyn Normington
ec995cd686 Fix Python syntax error 2025-05-01 11:27:40 +01:00
Crazyroostereye
1310dbae46
Added a Caddyfile converter (#110)
Co-authored-by: Julian Beittel <julian@beittel.net>
Co-authored-by: Glyn Normington <work@underlap.org>
2025-05-01 11:21:32 +01:00
Glyn Normington
91a88e2fa8
Merge pull request #113 from rwijnen-um/feature/haproxy
HAProxy converter added.
2025-04-28 09:00:16 +01:00
Rik Wijnen
a4a9f2ac2b Tests for HAProxy file added. 2025-04-28 09:30:26 +02:00
Rik Wijnen
66da70905f Fixed incorrect English sentence. 2025-04-28 09:09:40 +02:00
Rik Wijnen
50e739dd73 HAProxy converter added. 2025-04-28 08:51:02 +02:00
ai.robots.txt
c6c7f1748f Update from Dark Visitors
2025-04-26 00:55:12 +00:00
dark-visitors
934ac7b318 Update from Dark Visitors
2025-04-25 00:56:57 +00:00
ai.robots.txt
4654e14e9c Merge pull request #112 from maiavixen/main
Fixed meta-external* being titlecase, and removed period for consistency
2025-04-24 07:00:34 +00:00
Glyn Normington
9bf31fbca8
Merge pull request #112 from maiavixen/main
Fixed meta-external* being titlecase, and removed period for consistency
2025-04-24 08:00:24 +01:00
maia
9d846ced45
Update robots.json
Lowercase meta-external* as that was not technically the UA for the bots, also removed a period in the "respect" for consistency
2025-04-24 04:08:20 +02:00
dark-visitors
8d25a424d9 Update from Dark Visitors
2025-04-23 00:56:52 +00:00
ai.robots.txt
bbec639c14 Merge pull request #109 from dennislee1/patch-1
AI bots to consider adding
2025-04-22 14:50:26 +00:00
422cf9e29b
Merge pull request #109 from dennislee1/patch-1
AI bots to consider adding
2025-04-22 07:50:14 -07:00
Dennis Lee
33c5ce1326
Update robots.json
Updated robots list with five new proposed AI bots:

aiHitBot
Cotoyogi
Factset_spyderbot
FirecrawlAgent
TikTokSpider
2025-04-21 18:55:11 +01:00
774b1ddf52
Merge pull request #107 from glyn/sponsorship
Clarify our position on sponsorship
2025-04-18 11:40:06 -07:00
Glyn Normington
b1856e6988 Donations 2025-04-18 18:40:44 +01:00
Glyn Normington
d05ede8fe1 Clarify our position on sponsorship
Some firms, including those with .ai domains, have
offered to sponsor this project. So make our position
clear.
2025-04-18 17:46:56 +01:00
Kyle Buckingham
fd41de8522
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-04-16 16:43:03 -07:00
Kyle Buckingham
4a6f37d727
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-04-16 16:42:58 -07:00
ai.robots.txt
e0cdb278fb Update from Dark Visitors
2025-04-16 00:57:11 +00:00
dark-visitors
a96e330989 Update from Dark Visitors
2025-04-15 00:57:01 +00:00
156e6baa09
Merge pull request #105 from jsheard/patch-1
Include "AI Agents" from Dark Visitors
2025-04-14 10:08:38 -07:00
Joshua Sheard
d9f882a9b2
Include "AI Agents" from Dark Visitors 2025-04-14 15:46:01 +01:00
dark-visitors
305188b2e7 Update from Dark Visitors
2025-04-11 00:55:52 +00:00
ai.robots.txt
4a764bba18 Merge pull request #102 from ai-robots-txt/imgproxy-bot
chore(robots.json): adds imgproxy crawler
2025-04-10 19:22:34 +00:00
a891ad7213
Merge pull request #102 from ai-robots-txt/imgproxy-bot
chore(robots.json): adds imgproxy crawler
2025-04-10 12:22:23 -07:00
b65f45e408
chore(robots.json): adds imgproxy crawler 2025-04-10 10:12:51 -07:00
Glyn Normington
49e58b1573
Merge pull request #100 from fbartho/fb/fix-perplexity-users
Fix html-mangled hyphen in 'Perplexity-Users' bot name
2025-04-05 17:32:19 +01:00
Frederic Barthelemy
c6f308cbd0
PR Feedback: log special-case, comment consistency 2025-04-05 09:01:52 -07:00
Frederic Barthelemy
5f5a89c38c
Fix html-mangled hyphen in Perplexity-Users
Fixes: #99
2025-04-04 17:34:14 -07:00
Frederic Barthelemy
6b0349f37d
fix python complaining about f-string syntax
```
python code/tests.py
Traceback (most recent call last):
  File "/Users/fbarthelemy/Code/ai.robots.txt/code/tests.py", line 7, in <module>
    from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx
  File "/Users/fbarthelemy/Code/ai.robots.txt/code/robots.py", line 144
    return f"({"|".join(map(re.escape, lst))})"
                ^
SyntaxError: f-string: expecting '}'
```
2025-04-04 15:20:30 -07:00
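The traceback above comes from reusing the f-string's own quote character inside the replacement field, which is a syntax error before Python 3.12. A minimal sketch of the fix, using a function shaped like the one in the traceback (the name `to_regex` is assumed):

```python
import re

def to_regex(lst):
    # Alternate quote styles: single quotes delimit the f-string, so the
    # double-quoted "|" inside the braces parses on all Python versions.
    return f'({"|".join(map(re.escape, lst))})'

print(to_regex(["GPTBot", "CCBot"]))  # → (GPTBot|CCBot)
```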
Kyle Buckingham
8dc36aa2e2
Update robots.txt 2025-04-01 15:23:28 -07:00
Kyle Buckingham
ae8f74c10c
Update robots.json 2025-04-01 15:22:04 -07:00
ai.robots.txt
5b8650b99b Update from Dark Visitors
2025-03-29 00:54:10 +00:00
dark-visitors
c249de99a3 Update from Dark Visitors 2025-03-28 00:54:28 +00:00
ec18af7624
Revert "Merge pull request #91 from deyigifts/perplexity-user"
This reverts commit 68d1d93714.
2025-03-27 12:51:22 -07:00
ai.robots.txt
6851413c52 Merge pull request #94 from ThomasLeister/feature/implement-nginx-configuration-snippet-export
Implement Nginx configuration snippet export
2025-03-27 19:49:15 +00:00
Glyn Normington
dba03d809c
Merge pull request #94 from ThomasLeister/feature/implement-nginx-configuration-snippet-export
Implement Nginx configuration snippet export
2025-03-27 19:49:05 +00:00
ai.robots.txt
68d1d93714 Merge pull request #91 from deyigifts/perplexity-user
Update perplexity bots
2025-03-27 19:29:30 +00:00
1183187be9
Merge pull request #91 from deyigifts/perplexity-user
Update perplexity bots
2025-03-27 12:29:21 -07:00
Thomas Leister
7c3b5a2cb2
Add tests for Nginx config generator 2025-03-27 18:28:21 +01:00
Thomas Leister
4f3f4cd0dd
Add assembled version of nginx-block-ai-bots.conf file 2025-03-27 12:43:36 +01:00
Thomas Leister
5a312c5f4d
Mention Nginx config feature in README 2025-03-27 12:43:29 +01:00
Thomas Leister
da85207314
Implement new function "json_to_nginx" which outputs an Nginx
configuration snippet
2025-03-27 12:27:09 +01:00
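A minimal sketch of what a `json_to_nginx` function like the one named above might produce — an Nginx snippet that rejects requests whose User-Agent matches any listed crawler. The 403 status and the single-`if` shape are assumptions, not the repository's actual output:

```python
import re

def json_to_nginx(robots_json):
    # Hypothetical sketch: build one case-insensitive User-Agent match
    # over all crawler names and deny matching requests with 403.
    pattern = "|".join(re.escape(name) for name in robots_json)
    return (
        f'if ($http_user_agent ~* "({pattern})") {{\n'
        "    return 403;\n"
        "}"
    )

print(json_to_nginx({"GPTBot": {}, "CCBot": {}}))
```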
deyigifts
6ecfcdfcbf
Update perplexity bot
Update based on perplexity bot docs
2025-03-24 14:16:57 +08:00
5e7c3c432f
Merge pull request #83 from glyn/81-doc-testing
Document testing in README
2025-02-19 09:19:44 -08:00
Glyn Normington
9f41d4c11c
Merge pull request #84 from sideeffect42/tests-workflow
Add run-tests workflow
2025-02-18 19:42:55 +00:00
Dennis Camera
8a74896333 Add workflow to run tests on pull request or push to main 2025-02-18 20:30:27 +01:00
Glyn Normington
1d55a205e4 Document testing in README
Fixes: https://github.com/ai-robots-txt/ai.robots.txt/issues/81
2025-02-18 16:49:08 +00:00
Glyn Normington
8494a7fcaa
Merge pull request #80 from sideeffect42/htaccess-allow-robots_txt
.htaccess: Allow robots access to `/robots.txt`
2025-02-18 16:42:36 +00:00
Dennis Camera
c7c1e7b96f robots.py: Make executable 2025-02-18 12:55:17 +01:00
Dennis Camera
17b826a6d3 Update tests and convert to stock unittest
For these simple tests Python's built-in unittest framework is more than enough.
No additional dependencies are required.

Added some more test cases with "special" characters to test the escaping code
better.
2025-02-18 12:55:15 +01:00
Dennis Camera
0bd3fa63b8 table-of-bot-metrics.md: Escape robot names for Markdown table
Some characters which could occur in a crawler's name have a special meaning in
Markdown. They are escaped to prevent them from having unintended side effects.

The escaping is only applied to the first (Name) column of the table. The rest
of the columns is expected to already be Markdown encoded in robots.json.
2025-02-18 12:53:27 +01:00
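The escaping described above can be sketched as a small helper applied only to the Name column. The function name and the exact character set are assumptions for illustration:

```python
import re

def escape_md(name):
    # Backslash-escape characters with special meaning in Markdown so
    # crawler names render literally in the table (character set assumed).
    return re.sub(r"([\\`*_{}\[\]()#+.!|-])", r"\\\1", name)

print(escape_md("Meta-ExternalAgent"))  # → Meta\-ExternalAgent
```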
Dennis Camera
a884a2afb9 .htaccess: Make regex in RewriteCond safe
Improve the regular expression by removing unneeded anchors and
escaping special characters (not just space) to prevent false positives
or a misbehaving rewrite rule.
2025-02-18 12:53:22 +01:00
Dennis Camera
c0d418cd87 .htaccess: Allow robots access to /robots.txt 2025-02-18 12:49:29 +01:00
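Taken together, the .htaccess commits above describe rules that block listed crawlers while still letting them fetch /robots.txt. A hypothetical shape of such a file (directive names are standard mod_rewrite; the exact pattern and flags the generator emits are assumed):

```apache
# Hypothetical sketch, not the generated file: deny matched User-Agents
# for every path except robots.txt
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (GPTBot|CCBot) [NC]
RewriteRule !^robots\.txt$ - [F,L]
```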
dark-visitors
abfd6dfcd1 Update from Dark Visitors 2025-02-17 00:53:32 +00:00
ai.robots.txt
693289bb29 chore: add Brightbot 1.0 2025-02-16 21:37:52 +00:00
a9ec4ffa6f
chore: add Brightbot 1.0 2025-02-16 13:36:39 -08:00
Glyn Normington
03aa829913
Merge pull request #79 from always-be-testing/main
List of AI bots Cloudflare considers "Verified"
2025-02-16 04:33:40 +00:00
always-be-testing
5b13c2e504
add more concise message about verified bots
Co-authored-by: Glyn Normington <work@underlap.org>
2025-02-15 11:22:10 -05:00
always-be-testing
af87b85d7f include return after heading 2025-02-14 12:39:08 -05:00
always-be-testing
f99339922f grammar update and include syntax for verified bot condition 2025-02-14 12:36:33 -05:00
always-be-testing
e396a2ec78 forgot to include heading 2025-02-14 12:31:20 -05:00
always-be-testing
261a2b83b9 update README to include list of AI bots Cloudflare considers verified 2025-02-14 12:26:19 -05:00
dark-visitors
bebffccc0c Update from Dark Visitors 2025-02-02 00:52:50 +00:00
ai.robots.txt
89d4c6e5ca Merge pull request #73 from nisbet-hubbard/patch-8
Actually block Semrush’s AI tools
2025-02-01 10:51:01 +00:00
Glyn Normington
f9e2c5810b
Merge pull request #73 from nisbet-hubbard/patch-8
Actually block Semrush’s AI tools
2025-02-01 10:50:50 +00:00
nisbet-hubbard
05b79b8a58
Update robots.json 2025-01-27 19:41:03 +08:00
dark-visitors
9c060dee1c Update from Dark Visitors 2025-01-21 00:49:22 +00:00
ai.robots.txt
6c552a3daa Merge pull request #71 from jsheard/patch-1
Add Crawlspace
2025-01-20 17:45:42 +00:00
Glyn Normington
f621fb4852
Merge pull request #71 from jsheard/patch-1
Add Crawlspace
2025-01-20 17:45:29 +00:00
Joshua Sheard
7427d96bac
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 10:59:02 +00:00
Glyn Normington
81cc81b35e
Merge pull request #68 from MassiminoilTrace/main
Implementing automatic htaccess generation
2025-01-20 07:33:54 +00:00
Massimo Gismondi
4f03818280 Removed if condition and added a little comments 2025-01-20 06:51:06 +01:00
Massimo Gismondi
a9956f7825 Removed additional sections 2025-01-20 06:50:48 +01:00
Massimo Gismondi
33c38ee70b
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:28:32 +01:00
Massimo Gismondi
52241bdca6
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:27:56 +01:00
Massimo Gismondi
013b7abfa1
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:27:02 +01:00
Massimo Gismondi
70fd6c0fb1
Add mention of htaccess in readme
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:25:07 +01:00
Joshua Sheard
5aa08bc002
Add Crawlspace 2025-01-19 22:03:50 +00:00
Massimo Gismondi
d65128d10a
Removed paragraph in favour of future FAQ.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:41:09 +01:00
Massimo Gismondi
1cc4b59dfc
Shortened htaccess instructions
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:40:03 +01:00
Massimo Gismondi
8aee2f24bb
Fixed space in comment
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:39:07 +01:00
Massimo Gismondi
b455af66e7 Adding clarification about performance and code comment 2025-01-17 21:42:08 +01:00
Massimo Gismondi
189e75bbfd Adding usage instructions 2025-01-17 21:25:23 +01:00
Massimo Gismondi
933aa6159d Implementing htaccess generation 2025-01-07 11:02:29 +01:00
Glyn Normington
b7f908e305
Merge pull request #66 from fabianegli/patch-1
Allow Action to succeed even if no changes were made
2025-01-07 03:54:40 +00:00
ai.robots.txt
ec454b71d3 Merge pull request #67 from Nightfirecat/semrushbot
Block SemrushBot
2025-01-06 20:51:56 +00:00
565dca3dc0
Merge pull request #67 from Nightfirecat/semrushbot
Block SemrushBot
2025-01-06 12:51:43 -08:00
Jordan Atwood
143f8f2285
Block SemrushBot 2025-01-06 12:34:38 -08:00
8e98cc6049
Merge pull request #61 from glyn/improve-naming
Rename Python code
2025-01-06 08:10:47 -08:00
Fabian Egli
30ee957011
bail when NO changes are staged 2025-01-06 12:05:42 +01:00
Fabian Egli
83cd546470
allow Action to succeed even if no changes were made
Previously, the Action would fail when the converter made no changes to any files.
2025-01-06 11:39:41 +01:00
ai.robots.txt
ca8620e28b Merge pull request #63 from glyn/push-paths
Convert robots.json more frequently
2025-01-05 05:05:20 +00:00
Glyn Normington
b9df958b39
Merge pull request #63 from glyn/push-paths
Convert robots.json more frequently
2025-01-05 05:05:01 +00:00
Glyn Normington
c01a684036 Convert robots.json more frequently
Specifically, when github workflows or code
is changed as either of these can affect the
conversion results.

Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/60
2025-01-05 05:03:50 +00:00
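The commit above widens the workflow's push-path triggers so conversion also runs when workflows or code change. A hypothetical trigger block in GitHub Actions syntax (the path globs are assumptions, not the repository's actual list):

```yaml
# Hypothetical sketch of the widened trigger
on:
  push:
    paths:
      - "robots.json"
      - "code/**"
      - ".github/workflows/**"
```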
Glyn Normington
d2be15447c
Merge pull request #62 from ai-robots-txt/missing-dependency
Ensure dependency installed
2025-01-05 01:46:27 +00:00
Glyn Normington
9e372d0696 Ensure dependency installed
Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/60#issuecomment-2571437913
Ref: https://stackoverflow.com/questions/11783875/importerror-no-module-named-bs4-beautifulsoup
2025-01-05 01:45:33 +00:00
Glyn Normington
996b9c678c Improve job name
The purpose of the job is to convert the JSON file
to the other files.
2025-01-04 05:28:41 +00:00
Glyn Normington
e4c12ee2f8 Rename in test code 2025-01-04 05:03:48 +00:00
Glyn Normington
3a43714908 Rename Python code
The name dark_visitors.py gives the impression that the code is entirely
related to the dark visitors website, whereas the update command relates
to dark visitors and the convert command is unrelated to dark visitors.
2025-01-04 04:55:34 +00:00
dark-visitors
2036a68c1f Update from Dark Visitors 2024-12-04 00:55:50 +00:00
Glyn Normington
24666e8b15
Merge pull request #58 from fabianegli/fabianegli-restore-attribution
Restore attribution
2024-11-29 09:05:16 +00:00
fabianegli
eb8e1a49b5 Revert "specify file encodings in tests"
This reverts commit bd38c30194.
2024-11-29 09:02:47 +01:00
fabianegli
b64284d684 restore correct attribution logic to before PR #55 2024-11-26 09:41:46 +01:00
fabianegli
bd38c30194 specify file encodings in tests 2024-11-26 09:12:11 +01:00
dark-visitors
609ddca392 Updated from new robots.json 2024-11-24 00:57:06 +00:00
dark-visitors
37065f9118 Update from Dark Visitors 2024-11-24 00:57:05 +00:00
dark-visitors
58985737e7 Updated from new robots.json 2024-11-19 16:46:21 +00:00
584e66cb99
Merge pull request #56 from glyn/40-exclude-facebookexternalhit
Allow facebookexternalhit
2024-11-19 08:46:05 -08:00
Glyn Normington
80002f5e17 Allow facebookexternalhit
At the time of writing, this crawler does not
appear to be for the purpose of AI.

See: https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/
(accessed on 19 November 2024).

Fixes https://github.com/ai-robots-txt/ai.robots.txt/issues/40
2024-11-19 03:33:45 +00:00
Glyn Normington
71db599b41
Merge pull request #55 from norwd/feature/add-robots.txt-file-to-release
Create workflow to upload `robots.txt` file as release artefact
2024-11-13 01:39:11 +00:00
Y. Meyer-Norwood
e8f0784a00
Explicitly use release tag for checkout 2024-11-13 10:26:37 +13:00
Y. Meyer-Norwood
94ceb3cffd
Add authentication for gh command 2024-11-11 13:04:55 +13:00
Y. Meyer-Norwood
adfd4af872
Create upload-robots-txt-file-to-release.yml 2024-11-11 12:58:40 +13:00
Glyn Normington
d50615d394 Improve formatting
This clarifies the scope of the tip is Apache httpd.
2024-11-10 01:06:13 +00:00
Glyn Normington
2c88909be3 Fix formatting 2024-11-10 01:02:18 +00:00
Glyn Normington
6f58ddc623
Merge pull request #54 from glyn/rationale
Clarify our rationale
2024-11-10 00:58:29 +00:00
Glyn Normington
9295b6a963 Clarify our rationale
I deleted the point about excessive load on
crawled sites as any other crawler could potentially
be guilty of this and I wouldn't want our scope to
creep to all crawlers.

Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/53#issuecomment-2466042550
2024-11-09 04:45:47 +00:00
dark-visitors
9e06cf3bc9 Updated from new robots.json 2024-10-29 00:52:12 +00:00
dark-visitors
bc0a0ad0e9 Update from Dark Visitors 2024-10-29 00:52:12 +00:00
dark-visitors
fe5f407673 Update from Dark Visitors 2024-10-27 00:54:47 +00:00
Adam Newbold
a66b16827d
Merge pull request #51 from fabianegli/php-to-python-plus-tests
PHP to Python plus tests and stuff
2024-10-22 21:32:58 -04:00
fabianegli
3ab22bc498 make conversions and updates separately triggerable 2024-10-19 19:56:41 +02:00
fabianegli
6ab8fb2d37 no more failure when run without network 2024-10-19 19:11:01 +02:00
fabianegli
7e2b3ab037 rename action 2024-10-19 19:09:34 +02:00
fabianegli
0c05461f84 simplify repo and added some tests 2024-10-19 13:06:34 +02:00
fabianegli
6bb598820e ignore venv 2024-10-19 11:56:00 +02:00
Glyn Normington
d62cab66c5
Merge pull request #50 from glyn/fix-typo
Fix typo and trigger rerun of main job
2024-10-19 04:43:09 +01:00
ai.robots.txt
6a359e7fd7 Fix typo and trigger rerun of main job 2024-10-19 03:43:00 +00:00
Glyn Normington
38a388097c Fix typo and trigger rerun of main job 2024-10-19 04:42:27 +01:00
Glyn Normington
83c8603071
Merge pull request #49 from glyn/php-diagnostics
PHP diagnostics
2024-10-19 04:34:53 +01:00
ai.robots.txt
a80bd18fb8 Dump out file contents in PHP script 2024-10-19 03:34:29 +00:00
Glyn Normington
bdf30be7dc Dump out file contents in PHP script 2024-10-19 04:33:46 +01:00
Glyn Normington
4d47b17c45
Merge pull request #47 from fabianegli/fabianegli-patch-1
log the diff in the update actions
2024-10-19 02:58:05 +01:00
dark-visitors
faf81efb12 Daily update from Dark Visitors 2024-10-19 01:17:15 +00:00
Fabian Egli
25adc6b802
log git repository status 2024-10-19 00:28:41 +02:00
Fabian Egli
b584f613cd
add some signposts to the log 2024-10-19 00:13:09 +02:00
Fabian Egli
b3068a8d90
add some signposts 2024-10-19 00:12:25 +02:00
Fabian Egli
a46d06d436
log changes made by the action in main.yml 2024-10-19 00:04:15 +02:00
Fabian Egli
cfaade6e2f
log the diff in the update action daily_update.yml 2024-10-19 00:01:15 +02:00
04f630f7f8
Merge pull request #45 from glyn/faq-update
Update the FAQ
2024-10-18 06:35:47 -07:00
Glyn Normington
898c8ab82d
Merge pull request #46 from isagalaev/case-insensitive-sorting
Sort the content of robots.json by keys, case-insensitively
2024-10-18 07:57:56 +01:00
Ivan Sagalaev
7bb5efd462
Sort the content case-insensitively before dumping to JSON 2024-10-17 21:08:43 -04:00
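The case-insensitive sort described above can be sketched as follows; the helper name `dump_sorted` and the indent width are assumptions:

```python
import json

def dump_sorted(robots):
    # Order entries by lowercased key before writing robots.json, so e.g.
    # "anthropic-ai" sorts ahead of "Bytespider" despite its lowercase "a".
    ordered = dict(sorted(robots.items(), key=lambda kv: kv[0].lower()))
    return json.dumps(ordered, indent=4)

print(list(json.loads(dump_sorted({"Bytespider": {}, "anthropic-ai": {}}))))
# → ['anthropic-ai', 'Bytespider']
```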
Glyn Normington
e6bb7cae9e Augment the "why" FAQ
Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2419078796
2024-10-17 12:27:05 +01:00
Glyn Normington
b229f5b936 Re-order the FAQ
The "why" question should come first.
2024-10-17 12:25:54 +01:00
dark-visitors
b1491d2694 Daily update from Dark Visitors 2024-10-09 01:17:37 +00:00
ai.robots.txt
9be286626d Merge pull request #43 from lxjv/main
Update robots.json with Claude respect link
2024-10-08 02:30:17 +00:00
Glyn Normington
01993b98c3
Merge pull request #43 from lxjv/main
Update robots.json with Claude respect link
2024-10-08 03:30:07 +01:00
Laker Turner
dc15afe847
Update robots.json with Claude respect link 2024-10-07 17:38:01 +01:00
ai.robots.txt
6da804e826 chore: add ISSCyberRiskCrawler 2024-09-30 23:50:18 +00:00
9c2394f23b
chore: add ISSCyberRiskCrawler 2024-09-30 16:25:20 -07:00
ai.robots.txt
6d9ce1d62a chore: add sidetrade bot 2024-09-28 20:58:18 +00:00
6a988be27f
chore: add sidetrade bot 2024-09-28 13:58:00 -07:00
ai.robots.txt
632e9d6510 Daily update from Dark Visitors 2024-09-28 01:17:19 +00:00
dark-visitors
7851cea4fd Daily update from Dark Visitors 2024-09-27 01:18:04 +00:00
Glyn Normington
75343c790e
Merge pull request #38 from urvish-p80/main
Add an additional resource - README.md
2024-09-27 01:26:04 +01:00
ai.robots.txt
44d975c799 Merge pull request #42 from commoncrawl/main
feat: make CCBot entry more accurate
2024-09-27 00:21:49 +00:00
Glyn Normington
2f67e77ddb
Merge pull request #42 from commoncrawl/main
feat: make CCBot entry more accurate
2024-09-27 01:21:37 +01:00
Greg Lindahl
a6de89e6bd feat: make CCBot entry more accurate 2024-09-26 21:41:28 +00:00
60bdfa7eb3
Merge pull request #41 from cityrolr/patch-1
Update README.md
2024-09-24 12:53:52 -07:00
Julian Mair
af05890b07
Update README.md
For people who don't use or don't want to use RSS for this, I've added a little explanation of how to subscribe to releases via GitHub.
2024-09-23 23:27:27 +02:00
Urvish Patel
0106d4b15a
Add additional resource - README.md
A detailed blogpost to - See the live dashboard showing the websites that are blocking AI Bots such as GPTBot, CCBot, Google-extended and ByteSpider from crawling and scraping the content on their website. Learn which AI crawlers / scrapers do what and how to block them using Robots.txt.
2024-09-23 08:19:27 -04:00
ai.robots.txt
6b8d7f5890 Daily update from Dark Visitors 2024-09-09 01:16:21 +00:00
dark-visitors
5963cbf9f7 Daily update from Dark Visitors 2024-09-08 01:19:31 +00:00
Glyn Normington
b15b8062ce
Merge pull request #36 from cramforce/patch-1
Add instructions for AI bot blocking on Vercel
2024-09-08 01:26:07 +01:00
Malte Ubl
809851ae88
Add instructions for AI bot blocking on Vercel 2024-09-07 15:59:25 -07:00
ai.robots.txt
1c1b423684 chore: add iaskspider/2.0 2024-09-07 02:05:43 +00:00
8373294404
chore: add iaskspider/2.0 2024-09-06 19:05:26 -07:00
b30ca5f193
Merge pull request #35 from nisbet-hubbard/patch-7
Improve main workflow
2024-09-02 18:40:57 -07:00
ai.robots.txt
fb5c995243 Daily update from Dark Visitors 2024-09-03 01:12:57 +00:00
ai.robots.txt
7151f6c569 Removing previously generated files 2024-09-03 01:12:56 +00:00
nisbet-hubbard
cc18b8617c
Update main.yml 2024-09-03 07:48:48 +08:00
ai.robots.txt
c9325c9e18 Daily update from Dark Visitors 2024-09-02 01:15:07 +00:00
ai.robots.txt
567bd00aec Removing previously generated files 2024-09-02 01:15:07 +00:00
ai.robots.txt
543e993b08 Daily update from Dark Visitors 2024-09-01 01:24:53 +00:00
ai.robots.txt
01589718df Removing previously generated files 2024-09-01 01:24:52 +00:00
ai.robots.txt
9a7f556d87 Daily update from Dark Visitors 2024-08-31 01:13:04 +00:00
ai.robots.txt
9a4ebb57ee Removing previously generated files 2024-08-31 01:13:04 +00:00
ai.robots.txt
054c97ad4f Daily update from Dark Visitors 2024-08-30 01:13:29 +00:00
ai.robots.txt
b2970316d8 Removing previously generated files 2024-08-30 01:13:29 +00:00
ai.robots.txt
008a34ceb4 chore: add ai2bot 2024-08-29 03:07:52 +00:00
ai.robots.txt
3bce634e4a Removing previously generated files 2024-08-29 03:07:51 +00:00
0f8723558f
chore: add ai2bot 2024-08-28 20:07:32 -07:00
ai.robots.txt
6dc900b582 Daily update from Dark Visitors 2024-08-29 01:13:19 +00:00
ai.robots.txt
71eefcdb05 Removing previously generated files 2024-08-29 01:13:19 +00:00
ai.robots.txt
1d417ffab9 Daily update from Dark Visitors 2024-08-28 01:12:35 +00:00
ai.robots.txt
00ef18f93c Removing previously generated files 2024-08-28 01:12:35 +00:00
ai.robots.txt
84a2376f65 Daily update from Dark Visitors 2024-08-27 01:12:20 +00:00
ai.robots.txt
699862f4bd Removing previously generated files 2024-08-27 01:12:19 +00:00
ai.robots.txt
ccec3eef15 Daily update from Dark Visitors 2024-08-26 01:11:41 +00:00
ai.robots.txt
6cb9bc8ebf Removing previously generated files 2024-08-26 01:11:40 +00:00
ai.robots.txt
42a7ca7eda Daily update from Dark Visitors 2024-08-25 01:16:28 +00:00
ai.robots.txt
907866301f Removing previously generated files 2024-08-25 01:16:27 +00:00
ai.robots.txt
b202b9e1e3 Daily update from Dark Visitors 2024-08-24 01:09:29 +00:00
ai.robots.txt
ac1250cfa5 Removing previously generated files 2024-08-24 01:09:29 +00:00
ai.robots.txt
d95f2e8072 Daily update from Dark Visitors 2024-08-23 01:10:54 +00:00
ai.robots.txt
61d851baf5 Removing previously generated files 2024-08-23 01:10:53 +00:00
dark-visitors
7bfc1647a8 Daily update from Dark Visitors 2024-08-22 01:11:43 +00:00
ai.robots.txt
3580a7096f Daily update from Dark Visitors 2024-08-21 01:10:11 +00:00
ai.robots.txt
fad335178f Removing previously generated files 2024-08-21 01:10:10 +00:00
ai.robots.txt
358df0833e Daily update from Dark Visitors 2024-08-20 01:10:11 +00:00
ai.robots.txt
7e0dd921db Removing previously generated files 2024-08-20 01:10:11 +00:00
ai.robots.txt
591a99c320 Daily update from Dark Visitors 2024-08-19 01:11:49 +00:00
ai.robots.txt
394e447c78 Removing previously generated files 2024-08-19 01:11:49 +00:00
ab4a6547f6
Merge branch 'main' of github.com:ai-robots-txt/ai.robots.txt 2024-08-18 11:34:47 -07:00
1d3194f75d
chore: update readme 2024-08-18 11:34:43 -07:00
2363e57608
chore: minor update 2024-08-18 11:34:08 -07:00
ai.robots.txt
b8e68c12f3 Daily update from Dark Visitors 2024-08-18 01:14:50 +00:00
ai.robots.txt
60ff792ba9 Removing previously generated files 2024-08-18 01:14:49 +00:00
ai.robots.txt
3afcefdff5 Daily update from Dark Visitors 2024-08-17 01:08:17 +00:00
ai.robots.txt
558d5871b2 Removing previously generated files 2024-08-17 01:08:17 +00:00
ai.robots.txt
2a075cb2f1 Daily update from Dark Visitors 2024-08-16 01:10:14 +00:00
ai.robots.txt
3ef9cb7ce4 Removing previously generated files 2024-08-16 01:10:13 +00:00
dark-visitors
5937434aff Daily update from Dark Visitors 2024-08-15 01:07:15 +00:00
407b9e12e6
chore: sort output 2024-08-14 17:10:29 -07:00
bc66d10afd
chore: update faq 2024-08-14 09:21:26 -07:00
ai.robots.txt
df5b6ef647 Daily update from Dark Visitors 2024-08-14 01:11:03 +00:00
ai.robots.txt
2c8ed062b9 Removing previously generated files 2024-08-14 01:11:02 +00:00
ai.robots.txt
2e8e8af8e4 Daily update from Dark Visitors 2024-08-13 01:12:03 +00:00
ai.robots.txt
f1d0c5b1fe Removing previously generated files 2024-08-13 01:12:02 +00:00
ai.robots.txt
53a39b2f71 Daily update from Dark Visitors 2024-08-12 01:12:23 +00:00
ai.robots.txt
274d48b8f0 Removing previously generated files 2024-08-12 01:12:23 +00:00
ai.robots.txt
6472e07f09 Daily update from Dark Visitors 2024-08-11 01:16:04 +00:00
ai.robots.txt
cb98669cc2 Removing previously generated files 2024-08-11 01:16:03 +00:00
7662d06eb3
Merge pull request #33 from nisbet-hubbard/patch-6
Add links for reporting and FAQ to README.md
2024-08-09 19:42:36 -07:00
ai.robots.txt
53449ad1bd Daily update from Dark Visitors 2024-08-10 01:10:53 +00:00
ai.robots.txt
4242f8cc7b Removing previously generated files 2024-08-10 01:10:53 +00:00
nisbet-hubbard
46540633ba
Update README.md 2024-08-10 08:22:28 +08:00
ai.robots.txt
21e5cd96a9 Daily update from Dark Visitors 2024-08-09 01:11:12 +00:00
ai.robots.txt
ed7d7d3fdf Removing previously generated files 2024-08-09 01:11:11 +00:00
ai.robots.txt
57f006150b Daily update from Dark Visitors 2024-08-08 01:10:13 +00:00
ai.robots.txt
40f9325a4f Removing previously generated files 2024-08-08 01:10:12 +00:00
ai.robots.txt
0122dea1e9 Merge pull request #32 from ChenghaoMou/main
Tracking Dark Visitors Automatically
2024-08-07 22:40:24 +00:00
ai.robots.txt
663b85cc07 Removing previously generated files 2024-08-07 22:40:24 +00:00
Adam Newbold
5c8b4593f4
Merge pull request #32 from ChenghaoMou/main
Tracking Dark Visitors Automatically
2024-08-07 18:40:13 -04:00
Chenghao Mou
6f96795edc restore cron 2024-08-07 12:43:44 +01:00
ai.robots.txt
ab17662f96 Daily update from Dark Visitors 2024-08-07 11:41:00 +00:00
ai.robots.txt
8738c66c65 Removing previously generated files 2024-08-07 11:40:59 +00:00
Chenghao Mou
b00067bc86 restore files deleted by failed workflow and fix main commit message 2024-08-07 12:36:21 +01:00
ai.robots.txt
4a63c482c4 Removing previously generated files 2024-08-07 11:31:02 +00:00
Chenghao Mou
366e49dc6d restore files deleted by failed workflow and fix main commit message 2024-08-07 12:21:40 +01:00
ai.robots.txt
aaa55594e1 Removing previously generated files 2024-08-07 11:13:16 +00:00
Chenghao Mou
fbebbbfefb restore files deleted by failed workflow 2024-08-07 12:02:50 +01:00
dark-visitors
6a275366be Daily update from Dark Visitors 2024-08-07 10:50:45 +00:00
Chenghao Mou
09c6b78b46 fix job dependency 2024-08-07 11:45:37 +01:00
ai.robots.txt
d4f34363ec Removing previously generated files 2024-08-07 10:40:50 +00:00
ai.robots.txt
30eaff1447 call main after update 2024-08-07 10:32:13 +00:00
ai.robots.txt
bd3eee7a30 Removing previously generated files 2024-08-07 10:32:12 +00:00
Chenghao Mou
944bee0f56 call main after update 2024-08-07 11:31:58 +01:00
dark-visitors
cebf809391 Daily update from Dark Visitors 2024-08-07 00:14:26 +00:00
ai.robots.txt
3d4bf2c3db restore original robots.json 2024-08-06 18:50:54 +00:00
ai.robots.txt
d6a5e8cd81 Removing previously generated files 2024-08-06 18:50:53 +00:00
Chenghao Mou
4cf82b703f restore original robots.json 2024-08-06 19:50:38 +01:00
Chenghao Mou
0b6eba8dd5 skip push if no change 2024-08-06 19:41:38 +01:00
Chenghao Mou
379c339f97 skip push if no change 2024-08-06 19:41:38 +01:00
Chenghao Mou
01edb6c78c
Merge branch 'ai-robots-txt:main' into main 2024-08-06 19:35:03 +01:00
Chenghao Mou
2a3685385c restrict scope 2024-08-06 19:33:49 +01:00
85275e55b8
Merge pull request #31 from glyn/addfaq
Add FAQ
2024-08-06 11:15:29 -07:00
Chenghao Mou
8c6482fb45 restore the cron 2024-08-06 18:12:41 +01:00
dark-visitors
63c7e742c3 Daily update from Dark Visitors 2024-08-06 16:54:29 +00:00
Chenghao Mou
55e92f4324 update existing ones 2024-08-06 17:48:06 +01:00
Chenghao Mou
52d54cf127 restore the cron 2024-08-06 17:28:07 +01:00
dark-visitors
fdd261dad4 Daily update from Dark Visitors 2024-08-06 16:27:02 +00:00
ai.robots.txt
6d2285f5e0 Add FAQ 2024-08-06 16:21:01 +00:00
ai.robots.txt
83d9397f17 Removing previously generated files 2024-08-06 16:21:00 +00:00
Glyn Normington
b4d25bf0cb Add FAQ 2024-08-06 17:20:26 +01:00
Chenghao Mou
8ab1e30a6c test workflow 2024-08-06 17:12:26 +01:00
Chenghao Mou
192bf67631 add dark visitor workflow 2024-08-06 17:02:23 +01:00
ai.robots.txt
e12ddc0f42 Merge pull request #29 from jbowdre/dev
only build on changes to robots.json
2024-08-06 15:44:54 +00:00
ai.robots.txt
b54e274bbc Removing previously generated files 2024-08-06 15:44:53 +00:00
3e91a84d11
Merge pull request #29 from jbowdre/dev
only build on changes to robots.json
2024-08-04 16:04:59 -07:00
John Bowdre
b0a93aeb70 only build on changes to robots.json 2024-08-04 17:45:18 -05:00
ai.robots.txt
eb924b9856 Merge pull request #28 from jsheard/patch-2
Add Cloudflares first-party scraper blocking to FAQ
2024-08-04 21:54:17 +00:00
ai.robots.txt
1cfc071498 Removing previously generated files 2024-08-04 21:54:16 +00:00
24c3509a6e
Merge pull request #28 from jsheard/patch-2
Add Cloudflares first-party scraper blocking to FAQ
2024-08-04 14:54:06 -07:00
ai.robots.txt
c2f177870f Merge pull request #27 from jsheard/patch-1
Fix Imagesift user agent
2024-08-04 21:53:48 +00:00
ai.robots.txt
0072b8f5f0 Removing previously generated files 2024-08-04 21:53:47 +00:00
9c7257e7cf
Merge pull request #27 from jsheard/patch-1
Fix Imagesift user agent
2024-08-04 14:53:36 -07:00
Joshua Sheard
8dbbdbf44c
Add Cloudflares first-party scraper blocking to FAQ 2024-08-04 21:38:02 +01:00
Joshua Sheard
146fd4ffba
Fix Imagesift user agent 2024-08-04 21:33:04 +01:00
ai.robots.txt
c7b781034e chore: restore FriendlyCrawler + ImageSift 2024-08-04 19:29:01 +00:00
ai.robots.txt
9a8fa66772 Removing previously generated files 2024-08-04 19:29:00 +00:00
1ca936ce11
chore: restore FriendlyCrawler + ImageSift 2024-08-04 12:28:48 -07:00
ai.robots.txt
8de5bc8e01 Merge pull request #25 from mirium999/add_icc_crawler
Add ICC-Crawler
2024-08-04 01:21:56 +00:00
ai.robots.txt
8c632e1ba4 Removing previously generated files 2024-08-04 01:21:55 +00:00
Adam Newbold
8d4d52cdab
Merge pull request #25 from mirium999/add_icc_crawler
Add ICC-Crawler
2024-08-03 21:21:45 -04:00
Mirium999
5826c18909 Add ICC-Crawler 2024-08-04 10:11:25 +09:00
ai.robots.txt
ffbad453f3 Merge pull request #24 from nisbet-hubbard/patch-5
Add last line of defence to FAQ
2024-08-03 14:27:47 +00:00
ai.robots.txt
b1907d86be Removing previously generated files 2024-08-03 14:27:46 +00:00
55c585e9e3
Merge pull request #24 from nisbet-hubbard/patch-5
Add last line of defence to FAQ
2024-08-03 07:27:37 -07:00
nisbet-hubbard
2b56c72bac
Update FAQ.md 2024-08-03 14:27:25 +08:00
nisbet-hubbard
b24e5cb3bb
Update FAQ.md 2024-08-03 14:12:50 +08:00
nisbet-hubbard
74b1502839
Update FAQ.md 2024-08-03 14:04:58 +08:00
ai.robots.txt
d8de1ebdd5 chore: contribution note 2024-08-02 16:32:00 +00:00
ai.robots.txt
9d8d3de8ed Removing previously generated files 2024-08-02 16:31:59 +00:00
349c35eed6
chore: contribution note 2024-08-02 09:31:48 -07:00
ai.robots.txt
b144225ece chore: drop in additional data 2024-08-01 22:33:23 +00:00
ai.robots.txt
06b950bce9 Removing previously generated files 2024-08-01 22:33:23 +00:00
b20dfec1e4
chore: drop in additional data 2024-08-01 15:33:07 -07:00
ai.robots.txt
f18f0d99b9 chore: remove test data 2024-08-01 22:29:02 +00:00
ai.robots.txt
747cc834c4 Removing previously generated files 2024-08-01 22:29:01 +00:00
efabf3e721
chore: remove test data 2024-08-01 15:25:55 -07:00
Adam Newbold
1fdc79dacb Adding GitHub Action 2024-08-01 18:17:19 -04:00
17a84f2c2d
chore: update robots table 2024-08-01 15:06:49 -07:00
6c596a50ea
chore: move FAQ into repo 2024-08-01 07:53:43 -07:00
6a8e7a8eb0
Merge pull request #22 from nisbet-hubbard/patch-4
Add `PetalBot` (and `facebookexternalhit`?)
2024-08-01 07:49:30 -07:00
nisbet-hubbard
df89722038
Add PetalBot (and facebookexternalhit?) 2024-07-31 18:27:29 +08:00
fa7b64ae4b
chore: add Scrapy 2024-07-30 10:28:46 -07:00
55b4505e30
chore: add Timpibot 2024-07-29 12:38:22 -07:00
25 changed files with 1931 additions and 46 deletions

14
.github/FUNDING.yml vendored

@ -1,14 +0,0 @@
# These are supported funding model platforms
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: cory
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

36
.github/workflows/ai_robots_update.yml vendored Normal file

@ -0,0 +1,36 @@
name: Updates for AI robots files
on:
schedule:
- cron: "0 0 * * *"
jobs:
dark-visitors:
runs-on: ubuntu-latest
name: dark-visitors
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- run: |
pip install beautifulsoup4 requests
git config --global user.name "dark-visitors"
git config --global user.email "dark-visitors@users.noreply.github.com"
echo "Updating robots.json with data from darkvisitors.com ..."
python code/robots.py --update
echo "... done."
git --no-pager diff
git add -A
if ! git diff --cached --quiet; then
git commit -m "Update from Dark Visitors"
git push
else
echo "No changes to commit."
fi
shell: bash
convert:
name: convert
needs: dark-visitors
uses: ./.github/workflows/main.yml
secrets: inherit
with:
message: "Update from Dark Visitors"

48
.github/workflows/main.yml vendored Normal file

@ -0,0 +1,48 @@
on:
workflow_call:
inputs:
message:
type: string
required: true
description: The message to commit
push:
paths:
- 'robots.json'
- '.github/workflows/**'
- 'code/**'
branches:
- "main"
jobs:
ai-robots-txt:
runs-on: ubuntu-latest
name: ai-robots-txt
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- run: |
pip install beautifulsoup4
git config --global user.name "ai.robots.txt"
git config --global user.email "ai.robots.txt@users.noreply.github.com"
git log -1
git status
echo "Updating robots.txt and table-of-bot-metrics.md if necessary ..."
python code/robots.py --convert
echo "... done."
git --no-pager diff
git add -A
if [ -z "$(git diff --staged)" ]; then
# To have the action run successfully, if no changes are staged, we
# manually skip the later commits because they fail with exit code 1
# and this would then display as a failure for the Action.
echo "No staged changes to commit. Skipping commit and push."
exit 0
fi
if [ -n "${{ inputs.message }}" ]; then
git commit -m "${{ inputs.message }}"
else
git commit -m "${{ github.event.head_commit.message }}"
fi
git push
shell: bash

28
.github/workflows/run-tests.yml vendored Normal file

@ -0,0 +1,28 @@
on:
pull_request:
branches:
- main
push:
branches:
- main
jobs:
run-tests:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Install dependencies
run: |
pip install -U requests beautifulsoup4
- name: Run tests
run: |
code/tests.py
lint-json:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
- name: JQ Json Lint
run: jq . robots.json


@ -0,0 +1,29 @@
---
name: "Upload robots.txt file to release"
run-name: "Upload robots.txt file to release"
on:
release:
types:
- published
permissions:
contents: write
jobs:
upload-robots-txt-file-to-release:
name: "Upload robots.txt file to release"
runs-on: ubuntu-latest
steps:
- name: "Checkout"
uses: actions/checkout@v4
with:
ref: ${{ github.event.release.tag_name }}
- name: "Upload"
run: gh --repo "${REPO}" release upload "${TAG}" robots.txt
env:
GH_TOKEN: ${{ github.token }}
REPO: ${{ github.repository }}
TAG: ${{ github.event.release.tag_name }}

3
.gitignore vendored

@ -1 +1,4 @@
.DS_Store
.venv
venv
__pycache__

3
.htaccess Normal file

@ -0,0 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot) [NC]
RewriteRule !^/?robots\.txt$ - [F,L]

3
Caddyfile Normal file

@ -0,0 +1,3 @@
@aibots {
header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)"
}

65
FAQ.md Normal file

@ -0,0 +1,65 @@
# Frequently asked questions
## Why should we block these crawlers?
They're extractive, confer no benefit to the creators of the data they're ingesting, and have wide-ranging negative externalities: particularly copyright abuse and environmental impact.
**[How Tech Giants Cut Corners to Harvest Data for A.I.](https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m)**
> OpenAI, Google and Meta ignored corporate policies, altered their own rules and discussed skirting copyright law as they sought online information to train their newest artificial intelligence systems.
**[How AI copyright lawsuits could make the whole industry go extinct](https://www.theverge.com/24062159/ai-copyright-fair-use-lawsuits-new-york-times-openai-chatgpt-decoder-podcast)**
> The New York Times' lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.
**[Reconciling the contrasting narratives on the environmental impact of large language models](https://www.nature.com/articles/s41598-024-76682-6)**
> Studies have shown that the training of just one LLM can consume as much energy as five cars do across their lifetimes. The water footprint of AI is also substantial; for example, recent work has highlighted that water consumption associated with AI models involves data centers using millions of gallons of water per day for cooling. Additionally, the energy consumption and carbon emissions of AI are projected to grow quickly in the coming years [...].
**[Scientists Predict AI to Generate Millions of Tons of E-Waste](https://www.sciencealert.com/scientists-predict-ai-to-generate-millions-of-tons-of-e-waste)**
> we could end up with between 1.2 million and 5 million metric tons of additional electronic waste by the end of this decade [the 2020's].
## How do we know AI companies/bots respect `robots.txt`?
The short answer is that we don't. `robots.txt` is a well-established standard, but compliance is voluntary. There is no enforcement mechanism.
## Why might AI web crawlers respect `robots.txt`?
Larger and/or reputable companies developing AI models probably wouldn't want to damage their reputation by ignoring `robots.txt`.
Also, given the contentious nature of AI and the possibility of legislation limiting its development, companies developing AI models will probably want to be seen to be behaving ethically, and so should (eventually) respect `robots.txt`.
## Can we block crawlers based on user agent strings?
Yes, provided the crawlers identify themselves and your application/hosting supports doing so.
Some crawlers — [such as Perplexity](https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/) — do not identify themselves via their user agent strings and, as such, are difficult to block.
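When a crawler does send an honest user agent string, blocking reduces to a case-insensitive match of that string against a list of crawler names, which is roughly what the generated server configs in this repository do. A minimal Python sketch of the idea — the `BLOCKED` names here are a small illustrative sample, not the full list from `robots.json`:

```python
# Hypothetical sketch: case-insensitive substring matching of a request's
# User-Agent header against a sample of crawler names. A real deployment
# would load the full set of names from robots.json.
BLOCKED = ["GPTBot", "CCBot", "Bytespider", "ClaudeBot"]

def is_blocked(user_agent: str) -> bool:
    """Return True if any blocked crawler name appears in the UA string."""
    ua = user_agent.lower()
    return any(name.lower() in ua for name in BLOCKED)

print(is_blocked("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))  # True
print(is_blocked("Mozilla/5.0 (Windows NT 10.0) Firefox/126.0"))  # False
```

This is the same substring-style check the HAProxy `hdr_sub(user-agent) -i` rule performs, and it fails precisely when a crawler lies about its identity.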
## What can we do if a bot doesn't respect `robots.txt`?
That depends on your stack.
- Nginx
- [Blocking Bots with Nginx](https://rknight.me/blog/blocking-bots-with-nginx/) by Robb Knight
- [Blocking AI web crawlers](https://underlap.org/blocking-ai-web-crawlers) by Glyn Normington
- Apache httpd
- [Blockin' bots.](https://ethanmarcotte.com/wrote/blockin-bots/) by Ethan Marcotte
- [Blocking Bots With 11ty And Apache](https://flamedfury.com/posts/blocking-bots-with-11ty-and-apache/) by fLaMEd fury
> [!TIP]
> The snippets in these articles all use `mod_rewrite`, which [should be considered a last resort](https://httpd.apache.org/docs/trunk/rewrite/avoid.html). A good alternative that's less resource-intensive is `mod_setenvif`; see [httpd docs](https://httpd.apache.org/docs/trunk/rewrite/access.html#blocking-of-robots) for an example. You should also consider [setting this up in `httpd.conf` instead of `.htaccess`](https://httpd.apache.org/docs/trunk/howto/htaccess.html#when) if it's available to you.
- Netlify
- [Blockin' bots on Netlify](https://www.jeremiak.com/blog/block-bots-netlify-edge-functions/) by Jeremia Kimelman
- Cloudflare
- [Block AI bots, scrapers and crawlers with a single click](https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click) by Cloudflare
- [I'm blocking AI crawlers](https://roelant.net/en/2024/im-blocking-ai-crawlers-part-2/) by Roelant
- Vercel
- [Block AI Bots Firewall Rule](https://vercel.com/templates/firewall/block-ai-bots-firewall-rule) by Vercel
## How can I contribute?
Open a pull request. It will be reviewed and acted upon appropriately. **We really appreciate contributions** — this is a community effort.
## I'd like to donate money
That's kind of you, but we don't need your money. If you insist, we'd love you to make a donation to the [American Civil Liberties Union](https://www.aclu.org/), the [Disasters Emergency Committee](https://www.dec.org.uk/), or a similar organisation.
## Can my company sponsor ai.robots.txt?
No, thank you. We do not accept sponsorship of any kind. We prefer to maintain our independence. Our costs are negligible as we are entirely volunteer-based and community-driven.


@ -2,12 +2,57 @@
<img src="/assets/images/noai-logo.png" width="100" />
This list contains AI-related crawlers of all types, regardless of purpose. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).
A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com) and we appreciate the ongoing effort they put in to track these crawlers.
If you'd like to add information about a crawler to the list, please make a pull request with the bot name added to `robots.txt`, `ai.txt`, and any relevant details in `table-of-bot-metrics.md` to help people understand what's crawling.
## Usage
This repository provides the following files:
- `robots.txt`
- `.htaccess`
- `nginx-block-ai-bots.conf`
- `Caddyfile`
- `haproxy-block-ai-bots.txt`
`robots.txt` implements the Robots Exclusion Protocol ([RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html)).
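For context, a crawler that honours RFC 9309 evaluates those rules before fetching each URL. A quick sketch using Python's standard-library parser — the two-agent rules string is a hypothetical excerpt of the generated file, not its full contents:

```python
# Sketch: how a *compliant* crawler consults robots.txt rules (RFC 9309
# semantics as implemented by Python's stdlib parser). The rules string
# is a hypothetical two-agent excerpt of the generated robots.txt.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: GPTBot
User-agent: CCBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

Note the asymmetry this illustrates: agents named in the file are disallowed everywhere, while unlisted agents remain free to crawl — which is why compliance is voluntary and the other files in this repository exist for enforcement at the server level.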
`.htaccess` may be used to configure web servers such as [Apache httpd](https://httpd.apache.org/) to return an error page when one of the listed AI crawlers sends a request to the web server.
Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/current/howto/htaccess.html), more performant methods than an `.htaccess` file exist.
`nginx-block-ai-bots.conf` implements an Nginx configuration snippet that can be included in any virtual host `server {}` block via the `include` directive.
`Caddyfile` includes a `header_regexp` matcher group that you can copy or import into your Caddyfile; the rejection can then be handled as follows: `abort @aibots`.
`haproxy-block-ai-bots.txt` may be used to configure HAProxy to block AI bots. To implement it:
1. Add the file to the config directory of HAProxy.
2. Add the following lines in the `frontend` section:
```
acl ai_robot hdr_sub(user-agent) -i -f /etc/haproxy/haproxy-block-ai-bots.txt
http-request deny if ai_robot
```
(Note that the path to `haproxy-block-ai-bots.txt` may differ in your environment.)
[Bing uses the data it crawls for AI and training, you may opt out by adding a `meta` tag to the `head` of your site.](./docs/additional-steps/bing.md)
### Related
- [Robots.txt Traefik plugin](https://plugins.traefik.io/plugins/681b2f3fba3486128fc34fae/robots-txt-plugin):
middleware plugin for [Traefik](https://traefik.io/traefik/) to automatically add rules of [robots.txt](./robots.txt)
file on-the-fly.
## Contributing
A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, `.htaccess` and `nginx-block-ai-bots.conf`.
You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3 and issuing:
```console
code/tests.py
```
## Subscribe to updates
You can subscribe to list updates via RSS/Atom with the releases feed:
@ -18,6 +63,12 @@ https://github.com/ai-robots-txt/ai.robots.txt/releases.atom
You can subscribe with [Feedly](https://feedly.com/i/subscription/feed/https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), [Inoreader](https://www.inoreader.com/?add_feed=https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), [The Old Reader](https://theoldreader.com/feeds/subscribe?url=https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), [Feedbin](https://feedbin.me/?subscribe=https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), or any other reader app.
Alternatively, you can subscribe to new releases with your GitHub account by clicking the ⬇️ on the "Watch" button at the top of this page, then clicking "Custom" and selecting "Releases".
## Report abusive crawlers
If you use [Cloudflare's hard block](https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click) alongside this list, you can report abusive crawlers that don't respect `robots.txt` [here](https://docs.google.com/forms/d/e/1FAIpQLScbUZ2vlNSdcsb8LyTeSF7uLzQI96s0BKGoJ6wQ6ocUFNOKEg/viewform).
But even if you don't use Cloudflare's hard block, their list of [verified bots](https://radar.cloudflare.com/traffic/verified-bots) may come in handy.
## Additional resources
- [Blocking Bots with Nginx](https://rknight.me/blog/blocking-bots-with-nginx/) by Robb Knight
- [Blocking Bots With 11ty And Apache](https://flamedfury.com/posts/blocking-bots-with-11ty-and-apache/) by fLaMEd fury
- [Blockin' bots on Netlify](https://www.jeremiak.com/blog/block-bots-netlify-edge-functions/) by Jeremia Kimelman
- [Blocking AI web crawlers](https://underlap.org/blocking-ai-web-crawlers) by Glyn Normington
- [Block AI Bots from Crawling Websites Using Robots.txt](https://originality.ai/ai-bot-blocking) by Jonathan Gillham, Originality.AI
---
Thank you to [Glyn](https://github.com/glyn) for pushing [me](https://coryd.dev) to set this up after [I posted about blocking these crawlers](https://coryd.dev/posts/2024/go-ahead-and-block-ai-web-crawlers/).

code/robots.py
#!/usr/bin/env python3
import json
import re

import requests
from bs4 import BeautifulSoup
from pathlib import Path


def load_robots_json():
    """Load the robots.json contents into a dictionary."""
    return json.loads(Path("./robots.json").read_text(encoding="utf-8"))

def get_agent_soup():
    """Retrieve current known agents from darkvisitors.com"""
    session = requests.Session()
    try:
        response = session.get("https://darkvisitors.com/agents")
    except requests.exceptions.ConnectionError:
        print(
            "ERROR: Could not gather the current agents from https://darkvisitors.com/agents"
        )
        return
    return BeautifulSoup(response.text, "html.parser")

def updated_robots_json(soup):
    """Update AI scraper information with data from darkvisitors."""
    existing_content = load_robots_json()
    to_include = [
        "AI Agents",
        "AI Assistants",
        "AI Data Scrapers",
        "AI Search Crawlers",
        # "Archivers",
        # "Developer Helpers",
        # "Fetchers",
        # "Intelligence Gatherers",
        # "Scrapers",
        # "Search Engine Crawlers",
        # "SEO Crawlers",
        # "Uncategorized",
        "Undocumented AI Agents",
    ]

    for section in soup.find_all("div", {"class": "agent-links-section"}):
        category = section.find("h2").get_text()
        if category not in to_include:
            continue
        for agent in section.find_all("a", href=True):
            name = agent.find("div", {"class": "agent-name"}).get_text().strip()
            name = clean_robot_name(name)
            desc = agent.find("p").get_text().strip()

            default_values = {
                "Unclear at this time.",
                "No information provided.",
                "No information.",
                "No explicit frequency provided.",
            }
            default_value = "Unclear at this time."

            # Parse the operator information from the description if possible
            operator = default_value
            if "operated by " in desc:
                try:
                    operator = desc.split("operated by ", 1)[1].split(".", 1)[0].strip()
                except Exception as e:
                    print(f"Error: {e}")

            def consolidate(field: str, value: str) -> str:
                # New entry
                if name not in existing_content:
                    return value
                # New field
                if field not in existing_content[name]:
                    return value
                # Unclear value
                if (
                    existing_content[name][field] in default_values
                    and value not in default_values
                ):
                    return value
                # Existing value
                return existing_content[name][field]

            existing_content[name] = {
                "operator": consolidate("operator", operator),
                "respect": consolidate("respect", default_value),
                "function": consolidate("function", f"{category}"),
                "frequency": consolidate("frequency", default_value),
                "description": consolidate(
                    "description",
                    f"{desc} More info can be found at https://darkvisitors.com/agents{agent['href']}",
                ),
            }

    print(f"Total: {len(existing_content)}")
    sorted_keys = sorted(existing_content, key=lambda k: k.lower())
    sorted_robots = {k: existing_content[k] for k in sorted_keys}
    return sorted_robots

def clean_robot_name(name):
    """Clean the robot name by removing characters that were mangled by HTML rendering software."""
    # This was specifically spotted in "Perplexity-User":
    # a non-breaking hyphen introduced by the HTML rendering software.
    # Reading the source page for Perplexity (https://docs.perplexity.ai/guides/bots),
    # the bot is listed several times as "Perplexity-User" with a normal hyphen,
    # and only the row heading has the special hyphen.
    #
    # Technically, there's no reason there couldn't someday be a bot that
    # actually uses a non-breaking hyphen, but that seems unlikely,
    # so this solution should be fine for now.
    result = re.sub(r"\u2011", "-", name)
    if result != name:
        print(f"\tCleaned '{name}' to '{result}' - unicode/html mangled chars normalized.")
    return result

def ingest_darkvisitors():
    old_robots_json = load_robots_json()
    soup = get_agent_soup()
    if soup:
        robots_json = updated_robots_json(soup)
        print(
            "robots.json is unchanged."
            if robots_json == old_robots_json
            else "robots.json got updates."
        )
        Path("./robots.json").write_text(
            json.dumps(robots_json, indent=4), encoding="utf-8"
        )

def json_to_txt(robots_json):
    """Compose the robots.txt from the robots.json file."""
    robots_txt = "\n".join(f"User-agent: {k}" for k in robots_json.keys())
    robots_txt += "\nDisallow: /\n"
    return robots_txt

def escape_md(s):
    """Escape markdown special characters so bot names render literally in the table."""
    return re.sub(r"([]*\\|`(){}<>#+-.!_[])", r"\\\1", s)

def json_to_table(robots_json):
    """Compose a markdown table with the information in robots.json"""
    table = "| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |\n"
    table += "|------|----------|-----------------------|----------|------------------|-------------|\n"

    for name, robot in robots_json.items():
        table += f'| {escape_md(name)} | {robot["operator"]} | {robot["respect"]} | {robot["function"]} | {robot["frequency"]} | {robot["description"]} |\n'

    return table

def list_to_pcre(lst):
    # Python re is not 100% identical to PCRE, which is used by Apache, but it
    # should probably be close enough in the real world for re.escape to work.
    formatted = "|".join(map(re.escape, lst))
    return f"({formatted})"

def json_to_htaccess(robot_json):
    # Creates a .htaccess filter file. It uses a regular expression to filter out
    # user agents that contain any of the blocked values.
    htaccess = "RewriteEngine On\n"
    htaccess += f"RewriteCond %{{HTTP_USER_AGENT}} {list_to_pcre(robot_json.keys())} [NC]\n"
    htaccess += "RewriteRule !^/?robots\\.txt$ - [F,L]\n"
    return htaccess

def json_to_nginx(robot_json):
    # Creates an Nginx config file. This config snippet can be included in
    # nginx server{} blocks to block AI bots.
    config = f"if ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n    return 403;\n}}"
    return config

def json_to_caddy(robot_json):
    # Creates a Caddyfile matcher that can be used to block AI bots.
    caddyfile = "@aibots {\n "
    caddyfile += f'   header_regexp User-Agent "{list_to_pcre(robot_json.keys())}"'
    caddyfile += "\n}"
    return caddyfile

def json_to_haproxy(robots_json):
    # Creates a source file for HAProxy. Follow instructions in the README to implement it.
    txt = "\n".join(f"{k}" for k in robots_json.keys())
    return txt

def update_file_if_changed(file_name, converter):
    """Update files if newer content is available and log the (in)actions."""
    new_content = converter(load_robots_json())
    filepath = Path(file_name)
    # "touch" will create the file if it doesn't exist yet
    filepath.touch()
    old_content = filepath.read_text(encoding="utf-8")
    if old_content == new_content:
        print(f"{file_name} is already up to date.")
    else:
        Path(file_name).write_text(new_content, encoding="utf-8")
        print(f"{file_name} has been updated.")

def conversions():
    """Triggers the conversions from the json file."""
    update_file_if_changed(file_name="./robots.txt", converter=json_to_txt)
    update_file_if_changed(
        file_name="./table-of-bot-metrics.md",
        converter=json_to_table,
    )
    update_file_if_changed(
        file_name="./.htaccess",
        converter=json_to_htaccess,
    )
    update_file_if_changed(
        file_name="./nginx-block-ai-bots.conf",
        converter=json_to_nginx,
    )
    update_file_if_changed(
        file_name="./Caddyfile",
        converter=json_to_caddy,
    )
    update_file_if_changed(
        file_name="./haproxy-block-ai-bots.txt",
        converter=json_to_haproxy,
    )

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        prog="ai-robots",
        description="Collects and updates information about web scrapers of AI companies.",
        epilog="One of the flags must be set.\n",
    )
    parser.add_argument(
        "--update",
        action="store_true",
        help="Update the robots.json file with data from darkvisitors.com/agents",
    )
    parser.add_argument(
        "--convert",
        action="store_true",
        help="Create the robots.txt and markdown table from robots.json",
    )
    args = parser.parse_args()

    if not (args.update or args.convert):
        print("ERROR: please provide one of the possible flags.")
        parser.print_help()

    if args.update:
        ingest_darkvisitors()
    if args.convert:
        conversions()
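As a quick sanity check of the converters above, here is a self-contained sketch (the converter bodies are restated so it runs without loading `robots.json`; the sample includes two metacharacter fixtures from the test suite):

```python
import re

# Restated from code/robots.py so this snippet is self-contained.
def json_to_txt(robots_json):
    robots_txt = "\n".join(f"User-agent: {k}" for k in robots_json.keys())
    robots_txt += "\nDisallow: /\n"
    return robots_txt

def list_to_pcre(lst):
    formatted = "|".join(map(re.escape, lst))
    return f"({formatted})"

sample = {"GPTBot": {}, "star***crawler": {}, "curl|sudo bash": {}}

print(json_to_txt(sample))
# User-agent: GPTBot
# User-agent: star***crawler
# User-agent: curl|sudo bash
# Disallow: /

pattern = list_to_pcre(sample.keys())
# Metacharacters are escaped, so each name matches literally, and the pipe
# inside "curl|sudo bash" does not split the alternation.
assert re.fullmatch(pattern, "star***crawler")
assert not re.fullmatch(pattern, "curl")
```

This is why names like `star***crawler` and `curl|sudo bash` appear in the test fixtures below: they exercise the escaping in the generated `.htaccess`, Nginx, and Caddy rules.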

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash) [NC]
RewriteRule !^/?robots\.txt$ - [F,L]

@aibots {
header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)"
}

AI2Bot
Ai2Bot-Dolma
Amazonbot
anthropic-ai
Applebot
Applebot-Extended
Bytespider
CCBot
ChatGPT-User
Claude-Web
ClaudeBot
cohere-ai
Diffbot
FacebookBot
facebookexternalhit
FriendlyCrawler
Google-Extended
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iaskspider/2.0
ICC-Crawler
ImagesiftBot
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
Meta-ExternalAgent
Meta-ExternalFetcher
OAI-SearchBot
omgili
omgilibot
Perplexity-User
PerplexityBot
PetalBot
Scrapy
Sidetrade indexer bot
Timpibot
VelenPublicWebCrawler
Webzio-Extended
YouBot
crawler.with.dots
star***crawler
Is this a crawler?
a[mazing]{42}(robot)
2^32$
curl|sudo bash

if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)") {
return 403;
}

code/test_files/robots.json
{
"AI2Bot": {
"description": "Explores 'certain domains' to find web content.",
"frequency": "No information provided.",
"function": "Content is used to train open language models.",
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes"
},
"Ai2Bot-Dolma": {
"description": "Explores 'certain domains' to find web content.",
"frequency": "No information provided.",
"function": "Content is used to train open language models.",
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes"
},
"Amazonbot": {
"operator": "Amazon",
"respect": "Yes",
"function": "Service improvement and enabling answers for Alexa users.",
"frequency": "No information provided.",
"description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
},
"anthropic-ai": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"Applebot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot"
},
"Applebot-Extended": {
"operator": "[Apple](https://support.apple.com/en-us/119829#datausage)",
"respect": "Yes",
"function": "Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others.",
"frequency": "Unclear at this time.",
"description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
},
"Bytespider": {
"operator": "ByteDance",
"respect": "No",
"function": "LLM training.",
"frequency": "Unclear at this time.",
"description": "Downloads data to train LLMs, including ChatGPT competitors."
},
"CCBot": {
"operator": "[Common Crawl Foundation](https://commoncrawl.org)",
"respect": "[Yes](https://commoncrawl.org/ccbot)",
"function": "Provides open crawl dataset, used for many purposes, including Machine Learning/AI.",
"frequency": "Monthly at present.",
"description": "Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers)."
},
"ChatGPT-User": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Takes action based on user prompts.",
"frequency": "Only when prompted by a user.",
"description": "Used by plugins in ChatGPT to answer queries based on user input."
},
"Claude-Web": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"ClaudeBot": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"cohere-ai": {
"operator": "[Cohere](https://cohere.com)",
"respect": "Unclear at this time.",
"function": "Retrieves data to provide responses to user-initiated prompts.",
"frequency": "Takes action based on user prompts.",
"description": "Retrieves data based on user prompts."
},
"Diffbot": {
"operator": "[Diffbot](https://www.diffbot.com/)",
"respect": "At the discretion of Diffbot users.",
"function": "Aggregates structured web data for monitoring and AI model training.",
"frequency": "Unclear at this time.",
"description": "Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training."
},
"FacebookBot": {
"operator": "Meta/Facebook",
"respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)",
"function": "Training language models",
"frequency": "Up to 1 page per second",
"description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
},
"facebookexternalhit": {
"description": "Unclear at this time.",
"frequency": "Unclear at this time.",
"function": "No information.",
"operator": "Meta/Facebook",
"respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)"
},
"FriendlyCrawler": {
"description": "Unclear who the operator is; but data is used for training/machine learning.",
"frequency": "Unclear at this time.",
"function": "We are using the data from the crawler to build datasets for machine learning experiments.",
"operator": "Unknown",
"respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
},
"Google-Extended": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
"function": "LLM training.",
"frequency": "No information.",
"description": "Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search."
},
"GoogleOther": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Image": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Video": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GPTBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Scrapes data to train OpenAI's products.",
"frequency": "No information.",
"description": "Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies."
},
"iaskspider/2.0": {
"description": "Used to provide answers to user queries.",
"frequency": "Unclear at this time.",
"function": "Crawls sites to provide answers to user queries.",
"operator": "iAsk",
"respect": "No"
},
"ICC-Crawler": {
"description": "Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business.",
"frequency": "No information.",
"function": "Scrapes data to train and support AI technologies.",
"operator": "[NICT](https://nict.go.jp)",
"respect": "Yes"
},
"ImagesiftBot": {
"description": "Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images.",
"frequency": "No information.",
"function": "ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products",
"operator": "[ImageSift](https://imagesift.com)",
"respect": "[Yes](https://imagesift.com/about)"
},
"img2dataset": {
"description": "Downloads large sets of images into datasets for LLM training or other purposes.",
"frequency": "At the discretion of img2dataset users.",
"function": "Scrapes images for use in LLMs.",
"operator": "[img2dataset](https://github.com/rom1504/img2dataset)",
"respect": "Unclear at this time."
},
"ISSCyberRiskCrawler": {
"description": "Used to train machine learning based models to quantify cyber risk.",
"frequency": "No information.",
"function": "Scrapes data to train machine learning models.",
"operator": "[ISS-Corporate](https://iss-cyber.com)",
"respect": "No"
},
"Kangaroo Bot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
},
"Meta-ExternalAgent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes.",
"function": "Used to train models and improve products.",
"frequency": "No information.",
"description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
},
"Meta-ExternalFetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"OAI-SearchBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "[Yes](https://platform.openai.com/docs/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in SearchGPT."
},
"omgili": {
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/)",
"function": "Data is sold.",
"frequency": "No information.",
"description": "Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training."
},
"omgilibot": {
"description": "Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io.",
"frequency": "No information.",
"function": "Data is sold.",
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"Perplexity-User": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://docs.perplexity.ai/guides/bots)",
"function": "Used to answer queries at the request of users.",
"frequency": "Only when prompted by a user.",
"description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
},
"PerplexityBot": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/)",
"function": "Used to answer queries at the request of users.",
"frequency": "Takes action based on user prompts.",
"description": "Operated by Perplexity to obtain results in response to user queries."
},
"PetalBot": {
"description": "Operated by Huawei to provide search and AI assistant services.",
"frequency": "No explicit frequency provided.",
"function": "Used to provide recommendations in Huawei assistant and AI search services.",
"operator": "[Huawei](https://huawei.com/)",
"respect": "Yes"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
"frequency": "No information.",
"function": "Scrapes data for a variety of uses including training AI.",
"operator": "[Zyte](https://www.zyte.com)",
"respect": "Unclear at this time."
},
"Sidetrade indexer bot": {
"description": "AI product training.",
"frequency": "No information.",
"function": "Extracts data for a variety of uses including training AI.",
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"Timpibot": {
"operator": "[Timpi](https://timpi.io)",
"respect": "Unclear at this time.",
"function": "Scrapes data for use in training LLMs.",
"frequency": "No information.",
"description": "Makes data available for training AI models."
},
"VelenPublicWebCrawler": {
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\"",
"frequency": "No information.",
"function": "Scrapes data for business data sets and machine learning models.",
"operator": "[Velen Crawler](https://velen.io)",
"respect": "[Yes](https://velen.io)"
},
"Webzio-Extended": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"YouBot": {
"operator": "[You](https://about.you.com/youchat/)",
"respect": "[Yes](https://about.you.com/youbot/)",
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
},
"crawler.with.dots": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression dots need to be escaped."
},
"star***crawler": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression stars need to be escaped."
},
"Is this a crawler?": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression spaces and question marks need to be escaped."
},
"a[mazing]{42}(robot)": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression parentheses, braces, etc. need to be escaped."
},
"2^32$": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression RE anchor characters need to be escaped."
},
"curl|sudo bash": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression pipes need to be escaped."
}
}

User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: Amazonbot
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: cohere-ai
User-agent: Diffbot
User-agent: FacebookBot
User-agent: facebookexternalhit
User-agent: FriendlyCrawler
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: Meta-ExternalAgent
User-agent: Meta-ExternalFetcher
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
User-agent: Sidetrade indexer bot
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: YouBot
User-agent: crawler.with.dots
User-agent: star***crawler
User-agent: Is this a crawler?
User-agent: a[mazing]{42}(robot)
User-agent: 2^32$
User-agent: curl|sudo bash
Disallow: /

| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|------|----------|-----------------------|----------|------------------|-------------|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
| facebookexternalhit | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | No information. | Unclear at this time. | Unclear at this time. |
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models; paywalled data, PII, and data that violates the company's policies are removed. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| Meta\-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io. |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visits web pages to help provide an accurate answer and includes links to the pages in Perplexity responses. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/) | Used to answer queries at the request of users. | Takes action based on user prompts. | Operated by Perplexity to obtain results in response to user queries. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
| crawler\.with\.dots | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, dots need to be escaped. |
| star\*\*\*crawler | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, stars need to be escaped. |
| Is this a crawler? | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, spaces and question marks need to be escaped. |
| a\[mazing\]\{42\}\(robot\) | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, parentheses, braces, etc. need to be escaped. |
| 2^32$ | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, anchor characters need to be escaped. |
| curl\|sudo bash | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, pipes need to be escaped. |
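The test-suite entries above exist because these names must be regex-escaped before they can appear in the generated `.htaccess` pattern. As an illustration (not the repository's actual escaping code), Python's `re.escape` covers the same metacharacters:

```python
import re

# The test-suite names above, escaped for safe use inside a regular expression.
# re.escape backslash-escapes dots, stars, spaces, question marks, brackets,
# braces, parentheses, anchors (^ $) and pipes, among others.
for name in ["crawler.with.dots", "star***crawler", "Is this a crawler?",
             "a[mazing]{42}(robot)", "2^32$", "curl|sudo bash"]:
    print(re.escape(name))
```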

code/tests.py Executable file

@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""To run these tests just execute this script."""
import json
import unittest

from robots import (
    json_to_caddy,
    json_to_haproxy,
    json_to_htaccess,
    json_to_nginx,
    json_to_table,
    json_to_txt,
)


class RobotsUnittestExtensions:
    def loadJson(self, pathname):
        with open(pathname, "rt") as f:
            return json.load(f)

    def assertEqualsFile(self, pathname, s):
        with open(pathname, "rt") as f:
            f_contents = f.read()
        return self.assertMultiLineEqual(f_contents, s)


class TestRobotsTXTGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_robots_txt_generation(self):
        robots_txt = json_to_txt(self.robots_dict)
        self.assertEqualsFile("test_files/robots.txt", robots_txt)


class TestTableMetricsGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 32768

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_table_generation(self):
        robots_table = json_to_table(self.robots_dict)
        self.assertEqualsFile("test_files/table-of-bot-metrics.md", robots_table)


class TestHtaccessGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_htaccess_generation(self):
        robots_htaccess = json_to_htaccess(self.robots_dict)
        self.assertEqualsFile("test_files/.htaccess", robots_htaccess)


class TestNginxConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_nginx_generation(self):
        robots_nginx = json_to_nginx(self.robots_dict)
        self.assertEqualsFile("test_files/nginx-block-ai-bots.conf", robots_nginx)


class TestHaproxyConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_haproxy_generation(self):
        robots_haproxy = json_to_haproxy(self.robots_dict)
        self.assertEqualsFile("test_files/haproxy-block-ai-bots.txt", robots_haproxy)


class TestRobotsNameCleaning(unittest.TestCase):
    def test_clean_name(self):
        from robots import clean_robot_name

        self.assertEqual(clean_robot_name("PerplexityUser"), "Perplexity-User")


class TestCaddyfileGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_caddyfile_generation(self):
        robots_caddyfile = json_to_caddy(self.robots_dict)
        self.assertEqualsFile("test_files/Caddyfile", robots_caddyfile)


if __name__ == "__main__":
    import os

    os.chdir(os.path.dirname(__file__))
    unittest.main(verbosity=2)

@@ -0,0 +1,40 @@
# Bing (bingbot)
It's not well publicised, but Bing uses the data it crawls for AI and training.
However, blocking a search engine of this size via `robots.txt` is a drastic step: Bing is second only to Google, and blocking it could significantly reduce your website's visibility in search results.
Additionally, Bing powers a number of search engines such as Yahoo and AOL, and its search results are also used by DuckDuckGo, amongst others.
Fortunately, Bing supports a relatively simple opt-out method that requires only one additional step.
## How to opt-out of AI training
You must add a meta tag to the `<head>` of your webpage or set the [X-Robots-Tag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Robots-Tag) HTTP header in your response. Either one needs to be applied to every page or response on your website.
If using the metatag, the line you need to add is:
```plaintext
<meta name="robots" content="noarchive">
```
Or include the HTTP response header:
```plaintext
X-Robots-Tag: noarchive
```
By adding this line or header, you are signifying to Bing: "Do not use the content for training Microsoft's generative AI foundation models."
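Rather than editing every page, the header can usually be set once at the web-server level. A minimal sketch for nginx (assuming an nginx deployment; other servers have equivalent directives):

```plaintext
add_header X-Robots-Tag "noarchive" always;
```

Placed in the relevant `server` block, this adds the header to every response, including error pages (`always`).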
## Will my site be negatively affected?
Simple answer: no.
The original use of "noarchive" has been retired by all search engines. Google retired its use in 2024.
The use of this metatag will not impact your site in search engines or in any other meaningful way if you add it to your page(s).
It is now used solely by a handful of crawlers, such as Bingbot and Amazonbot, to signal that your data should not be used for AI/training.
## Resources
Bing Blog AI opt-out announcement: https://blogs.bing.com/webmaster/september-2023/Announcing-new-options-for-webmasters-to-control-usage-of-their-content-in-Bing-Chat
Bing metatag information, including AI opt-out: https://www.bing.com/webmasters/help/which-robots-metatags-does-bing-support-5198d240

haproxy-block-ai-bots.txt Normal file

@@ -0,0 +1,80 @@
AI2Bot
Ai2Bot-Dolma
aiHitBot
Amazonbot
Andibot
anthropic-ai
Applebot
Applebot-Extended
bedrockbot
Brightbot 1.0
Bytespider
CCBot
ChatGPT-User
Claude-SearchBot
Claude-User
Claude-Web
ClaudeBot
cohere-ai
cohere-training-data-crawler
Cotoyogi
Crawlspace
Diffbot
DuckAssistBot
EchoboxBot
FacebookBot
facebookexternalhit
Factset_spyderbot
FirecrawlAgent
FriendlyCrawler
Google-CloudVertexBot
Google-Extended
GoogleOther
GoogleOther-Image
GoogleOther-Video
GPTBot
iaskspider/2.0
ICC-Crawler
ImagesiftBot
img2dataset
ISSCyberRiskCrawler
Kangaroo Bot
meta-externalagent
Meta-ExternalAgent
meta-externalfetcher
Meta-ExternalFetcher
MistralAI-User/1.0
MyCentralAIScraperBot
NovaAct
OAI-SearchBot
omgili
omgilibot
Operator
PanguBot
Panscient
panscient.com
Perplexity-User
PerplexityBot
PetalBot
PhindBot
Poseidon Research Crawler
QualifiedBot
QuillBot
quillbot.com
SBIntuitionsBot
Scrapy
SemrushBot
SemrushBot-BA
SemrushBot-CT
SemrushBot-OCOB
SemrushBot-SI
SemrushBot-SWA
Sidetrade indexer bot
TikTokSpider
Timpibot
VelenPublicWebCrawler
Webzio-Extended
wpbot
YandexAdditional
YandexAdditionalBot
YouBot

nginx-block-ai-bots.conf Normal file

@@ -0,0 +1,3 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|Andibot|anthropic\-ai|Applebot|Applebot\-Extended|bedrockbot|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-SearchBot|Claude\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|MistralAI\-User/1\.0|MyCentralAIScraperBot|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient\.com|Perplexity\-User|PerplexityBot|PetalBot|PhindBot|Poseidon\ Research\ Crawler|QualifiedBot|QuillBot|quillbot\.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot\-BA|SemrushBot\-CT|SemrushBot\-OCOB|SemrushBot\-SI|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot)") {
return 403;
}
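The rule above performs a case-insensitive regex match (`~*`) against the User-Agent header and returns 403 on a hit. A minimal Python sketch of the same logic, with the pattern abbreviated to a few agents for illustration (the real pattern lists every agent in the blocklist):

```python
import re

# Abbreviated stand-in for the full nginx pattern above.
BLOCKLIST = re.compile(
    r"(AI2Bot|Bytespider|CCBot|ClaudeBot|GPTBot|PerplexityBot)",
    re.IGNORECASE,
)

def status_for(user_agent: str) -> int:
    """Return the HTTP status the nginx rule would produce for this User-Agent."""
    return 403 if BLOCKLIST.search(user_agent) else 200

print(status_for("Mozilla/5.0 (compatible; GPTBot/1.1)"))        # 403
print(status_for("Mozilla/5.0 (X11; Linux) Firefox/126.0"))      # 200
```

Note that, like the nginx rule, this is a substring search: the agent name may appear anywhere in the User-Agent string.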

robots.json Normal file

@@ -0,0 +1,562 @@
{
"AI2Bot": {
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes",
"function": "Content is used to train open language models.",
"frequency": "No information provided.",
"description": "Explores 'certain domains' to find web content."
},
"Ai2Bot-Dolma": {
"description": "Explores 'certain domains' to find web content.",
"frequency": "No information provided.",
"function": "Content is used to train open language models.",
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes"
},
"aiHitBot": {
"operator": "[aiHit](https://www.aihitdata.com/about)",
"respect": "Yes",
"function": "A massive, artificial intelligence/machine learning, automated system.",
"frequency": "No information provided.",
"description": "Scrapes data for AI systems."
},
"Amazonbot": {
"operator": "Amazon",
"respect": "Yes",
"function": "Service improvement and enabling answers for Alexa users.",
"frequency": "No information provided.",
"description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
},
"Andibot": {
"operator": "[Andi](https://andisearch.com/)",
"respect": "Unclear at this time",
"function": "Search engine using generative AI, AI Search Assistant",
"frequency": "No information provided.",
"description": "Scrapes website and provides AI summary."
},
"anthropic-ai": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"Applebot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot"
},
"Applebot-Extended": {
"operator": "[Apple](https://support.apple.com/en-us/119829#datausage)",
"respect": "Yes",
"function": "Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others.",
"frequency": "Unclear at this time.",
"description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
},
"bedrockbot": {
"operator": "[Amazon](https://amazon.com)",
"respect": "[Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector)",
"function": "Data scraping for custom AI applications.",
"frequency": "Unclear at this time.",
"description": "Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application."
},
"Brightbot 1.0": {
"operator": "Browsing.ai",
"respect": "Unclear at this time.",
"function": "LLM/AI training.",
"frequency": "Unclear at this time.",
"description": "Scrapes data to train LLMs and AI products focused on website customer support."
},
"Bytespider": {
"operator": "ByteDance",
"respect": "No",
"function": "LLM training.",
"frequency": "Unclear at this time.",
"description": "Downloads data to train LLMs, including ChatGPT competitors."
},
"CCBot": {
"operator": "[Common Crawl Foundation](https://commoncrawl.org)",
"respect": "[Yes](https://commoncrawl.org/ccbot)",
"function": "Provides open crawl dataset, used for many purposes, including Machine Learning/AI.",
"frequency": "Monthly at present.",
"description": "Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers)."
},
"ChatGPT-User": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Takes action based on user prompts.",
"frequency": "Only when prompted by a user.",
"description": "Used by plugins in ChatGPT to answer queries based on user input."
},
"Claude-SearchBot": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses.",
"frequency": "No information provided.",
"description": "Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses."
},
"Claude-User": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent.",
"frequency": "No information provided.",
"description": "Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent."
},
"Claude-Web": {
"operator": "Anthropic",
"respect": "Unclear at this time.",
"function": "Undocumented AI Agents",
"frequency": "Unclear at this time.",
"description": "Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web"
},
"ClaudeBot": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"cohere-ai": {
"operator": "[Cohere](https://cohere.com)",
"respect": "Unclear at this time.",
"function": "Retrieves data to provide responses to user-initiated prompts.",
"frequency": "Takes action based on user prompts.",
"description": "Retrieves data based on user prompts."
},
"cohere-training-data-crawler": {
"operator": "Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler"
},
"Cotoyogi": {
"operator": "[ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/)",
"respect": "Yes",
"function": "AI LLM Scraper.",
"frequency": "No information provided.",
"description": "Scrapes data for AI training in Japanese language."
},
"Crawlspace": {
"operator": "[Crawlspace](https://crawlspace.dev)",
"respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
"function": "Scrapes data",
"frequency": "Unclear at this time.",
"description": "Provides crawling services for any purpose, probably including AI model training."
},
"Diffbot": {
"operator": "[Diffbot](https://www.diffbot.com/)",
"respect": "At the discretion of Diffbot users.",
"function": "Aggregates structured web data for monitoring and AI model training.",
"frequency": "Unclear at this time.",
"description": "Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training."
},
"DuckAssistBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot"
},
"EchoboxBot": {
"operator": "[Echobox](https://echobox.com)",
"respect": "Unclear at this time.",
"function": "Data collection to support AI-powered products.",
"frequency": "Unclear at this time.",
"description": "Supports company's AI-powered social and email management products."
},
"FacebookBot": {
"operator": "Meta/Facebook",
"respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)",
"function": "Training language models",
"frequency": "Up to 1 page per second",
"description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
},
"facebookexternalhit": {
"operator": "Meta/Facebook",
"respect": "[No](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313)",
"function": "Ostensibly only for sharing, but likely used as an AI crawler as well",
"frequency": "Unclear at this time.",
"description": "Note that excluding FacebookExternalHit will block incorporating OpenGraph data when sharing in social media, including rich links in Apple's Messages app. [According to Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/), its purpose is \"to crawl the content of an app or website that was shared on one of Meta\u2019s family of apps\u2026\". However, see discussions [here](https://github.com/ai-robots-txt/ai.robots.txt/pull/21) and [here](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) for evidence to the contrary."
},
"Factset_spyderbot": {
"operator": "[Factset](https://www.factset.com/ai)",
"respect": "Unclear at this time.",
"function": "AI model training.",
"frequency": "No information provided.",
"description": "Scrapes data for AI training."
},
"FirecrawlAgent": {
"operator": "[Firecrawl](https://www.firecrawl.dev/)",
"respect": "Yes",
"function": "AI scraper and LLM training",
"frequency": "No information provided.",
"description": "Scrapes data for AI systems and LLM training."
},
"FriendlyCrawler": {
"description": "Unclear who the operator is; but data is used for training/machine learning.",
"frequency": "Unclear at this time.",
"function": "We are using the data from the crawler to build datasets for machine learning experiments.",
"operator": "Unknown",
"respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
},
"Google-CloudVertexBot": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
"function": "Build and manage AI models for businesses employing Vertex AI",
"frequency": "No information.",
"description": "Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents."
},
"Google-Extended": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
"function": "LLM training.",
"frequency": "No information.",
"description": "Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search."
},
"GoogleOther": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Image": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Video": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GPTBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Scrapes data to train OpenAI's products.",
"frequency": "No information.",
"description": "Data is used to train current and future models; paywalled data, PII, and data that violates the company's policies are removed."
},
"iaskspider/2.0": {
"description": "Used to provide answers to user queries.",
"frequency": "Unclear at this time.",
"function": "Crawls sites to provide answers to user queries.",
"operator": "iAsk",
"respect": "No"
},
"ICC-Crawler": {
"description": "Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business.",
"frequency": "No information.",
"function": "Scrapes data to train and support AI technologies.",
"operator": "[NICT](https://nict.go.jp)",
"respect": "Yes"
},
"ImagesiftBot": {
"description": "Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images.",
"frequency": "No information.",
"function": "ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products",
"operator": "[ImageSift](https://imagesift.com)",
"respect": "[Yes](https://imagesift.com/about)"
},
"img2dataset": {
"description": "Downloads large sets of images into datasets for LLM training or other purposes.",
"frequency": "At the discretion of img2dataset users.",
"function": "Scrapes images for use in LLMs.",
"operator": "[img2dataset](https://github.com/rom1504/img2dataset)",
"respect": "Unclear at this time."
},
"ISSCyberRiskCrawler": {
"description": "Used to train machine learning based models to quantify cyber risk.",
"frequency": "No information.",
"function": "Scrapes data to train machine learning models.",
"operator": "[ISS-Corporate](https://iss-cyber.com)",
"respect": "No"
},
"Kangaroo Bot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
},
"meta-externalagent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes",
"function": "Used to train models and improve products.",
"frequency": "No information.",
"description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
},
"Meta-ExternalAgent": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent"
},
"meta-externalfetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when it needs to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"Meta-ExternalFetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when it needs to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"MistralAI-User/1.0": {
"operator": "Mistral AI",
"function": "Takes action based on user prompts.",
"frequency": "Only when prompted by a user.",
"description": "MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response.",
"respect": "Yes"
},
"MyCentralAIScraperBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI data scraper",
"frequency": "Unclear at this time.",
"description": "Operator and data use is unclear at this time."
},
"NovaAct": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact"
},
"OAI-SearchBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "[Yes](https://platform.openai.com/docs/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in SearchGPT."
},
"omgili": {
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/)",
"function": "Data is sold.",
"frequency": "No information.",
"description": "Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training."
},
"omgilibot": {
"description": "Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io.",
"frequency": "No information.",
"function": "Data is sold.",
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"Operator": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Agents",
"frequency": "Unclear at this time.",
"description": "Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator"
},
"PanguBot": {
"operator": "the Chinese company Huawei",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot"
},
"Panscient": {
"operator": "[Panscient](https://panscient.com)",
"respect": "[Yes](https://panscient.com/faq.htm)",
"function": "Data collection and analysis using machine learning and AI.",
"frequency": "The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address.",
"description": "Compiles data on businesses and business professionals that is structured using AI and machine learning."
},
"panscient.com": {
"operator": "[Panscient](https://panscient.com)",
"respect": "[Yes](https://panscient.com/faq.htm)",
"function": "Data collection and analysis using machine learning and AI.",
"frequency": "The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address.",
"description": "Compiles data on businesses and business professionals that is structured using AI and machine learning."
},
"Perplexity-User": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://docs.perplexity.ai/guides/bots)",
"function": "Used to answer queries at the request of users.",
"frequency": "Only when prompted by a user.",
"description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
},
"PerplexityBot": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[Yes](https://docs.perplexity.ai/guides/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in Perplexity."
},
"PetalBot": {
"description": "Operated by Huawei to provide search and AI assistant services.",
"frequency": "No explicit frequency provided.",
"function": "Used to provide recommendations in Huawei assistant and AI search services.",
"operator": "[Huawei](https://huawei.com/)",
"respect": "Yes"
},
"PhindBot": {
"description": "Company offers an AI agent that uses AI and generates extra web queries on the fly",
"frequency": "No explicit frequency provided.",
"function": "AI-enhanced search engine.",
"operator": "[phind](https://www.phind.com/)",
"respect": "Unclear at this time."
},
"Poseidon Research Crawler": {
"operator": "[Poseidon Research](https://www.poseidonresearch.com)",
"description": "Lab focused on scaling the interpretability research necessary to make better AI systems possible.",
"frequency": "No explicit frequency provided.",
"function": "AI research crawler",
"respect": "Unclear at this time."
},
"QualifiedBot": {
"description": "Operated by Qualified as part of their suite of AI product offerings.",
"frequency": "No explicit frequency provided.",
"function": "Company offers AI agents and other related products; usage can be assumed to support said products.",
"operator": "[Qualified](https://www.qualified.com)",
"respect": "Unclear at this time."
},
"QuillBot": {
"description": "Operated by QuillBot as part of their suite of AI product offerings.",
"frequency": "No explicit frequency provided.",
"function": "Company offers AI detection, writing tools and other services.",
"operator": "[Quillbot](https://quillbot.com)",
"respect": "Unclear at this time."
},
"quillbot.com": {
"description": "Operated by QuillBot as part of their suite of AI product offerings.",
"frequency": "No explicit frequency provided.",
"function": "Company offers AI detection, writing tools and other services.",
"operator": "[Quillbot](https://quillbot.com)",
"respect": "Unclear at this time."
},
"SBIntuitionsBot": {
"description": "AI development and information analysis",
"respect": "[Yes](https://www.sbintuitions.co.jp/en/bot/)",
"frequency": "No information.",
"function": "Uses data gathered in AI development and information analysis.",
"operator": "[SB Intuitions](https://www.sbintuitions.co.jp/en/)"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
"frequency": "No information.",
"function": "Scrapes data for a variety of uses including training AI.",
"operator": "[Zyte](https://www.zyte.com)",
"respect": "Unclear at this time."
},
"SemrushBot": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-BA": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-CT": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-OCOB": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-SI": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-SWA": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Checks URLs on your site for SWA tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"Sidetrade indexer bot": {
"description": "AI product training.",
"frequency": "No information.",
"function": "Extracts data for a variety of uses including training AI.",
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"TikTokSpider": {
"operator": "ByteDance",
"respect": "Unclear at this time.",
"function": "LLM training.",
"frequency": "Unclear at this time.",
"description": "Downloads data to train LLMs, as per Bytespider."
},
"Timpibot": {
"operator": "[Timpi](https://timpi.io)",
"respect": "Unclear at this time.",
"function": "Scrapes data for use in training LLMs.",
"frequency": "No information.",
"description": "Makes data available for training AI models."
},
"VelenPublicWebCrawler": {
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\"",
"frequency": "No information.",
"function": "Scrapes data for business data sets and machine learning models.",
"operator": "[Velen Crawler](https://velen.io)",
"respect": "[Yes](https://velen.io)"
},
"Webzio-Extended": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"wpbot": {
"operator": "[QuantumCloud](https://www.quantumcloud.com)",
"respect": "Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9)",
"function": "Live chat support and lead generation.",
"frequency": "Unclear at this time.",
"description": "wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of custom models, data collection and customer support."
},
"YandexAdditional": {
"operator": "[Yandex](https://yandex.ru)",
"respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
"function": "Scrapes/analyzes data for the YandexGPT LLM.",
"frequency": "No information.",
"description": "Retrieves data used for YandexGPT quick answers features."
},
"YandexAdditionalBot": {
"operator": "[Yandex](https://yandex.ru)",
"respect": "[Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en)",
"function": "Scrapes/analyzes data for the YandexGPT LLM.",
"frequency": "No information.",
"description": "Retrieves data used for YandexGPT quick answers features."
},
"YouBot": {
"operator": "[You](https://about.you.com/youchat/)",
"respect": "[Yes](https://about.you.com/youbot/)",
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
}
}
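The entries above share a uniform schema (operator, respect, function, frequency, description) keyed by user-agent name. A minimal sketch of how such a file can be turned into a blocking robots.txt — here using an inline two-entry sample standing in for the full robots.json, which is an assumption about how the repository's generator works rather than its actual implementation:

```python
import json

# Assumption: robots.json maps each user-agent name to its metadata
# fields, as in the entries above. A two-entry sample stands in for
# the full file.
robots_json = """{
  "GPTBot": {"operator": "OpenAI", "respect": "Yes"},
  "img2dataset": {"operator": "img2dataset", "respect": "Unclear at this time."}
}"""

robots = json.loads(robots_json)

# One User-agent line per crawler, then a single Disallow rule covering
# the whole site, matching the shape of the generated robots.txt.
lines = [f"User-agent: {name}" for name in robots] + ["Disallow: /"]
robots_txt = "\n".join(lines)
print(robots_txt)
```

Since `json.loads` preserves key order, the emitted `User-agent` lines come out in the same order as the JSON entries.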

User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: Amazonbot
User-agent: Andibot
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: bedrockbot
User-agent: Brightbot 1.0
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: Claude-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawlspace
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: EchoboxBot
User-agent: FacebookBot
User-agent: facebookexternalhit
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
User-agent: FriendlyCrawler
User-agent: Google-CloudVertexBot
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
User-agent: Meta-ExternalFetcher
User-agent: MistralAI-User/1.0
User-agent: MyCentralAIScraperBot
User-agent: NovaAct
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: Operator
User-agent: PanguBot
User-agent: Panscient
User-agent: panscient.com
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
User-agent: quillbot.com
User-agent: SBIntuitionsBot
User-agent: Scrapy
User-agent: SemrushBot
User-agent: SemrushBot-BA
User-agent: SemrushBot-CT
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SI
User-agent: SemrushBot-SWA
User-agent: Sidetrade indexer bot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: wpbot
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
Disallow: /
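A robots.txt only deters crawlers that honor it, and several entries above are marked "No" or "Unclear at this time" for respect. A minimal server-side sketch for rejecting such agents outright — the `BLOCKED_AGENTS` subset is illustrative, and the case-insensitive substring matching mirrors what blocklists derived from lists like this one typically do, not anything this repository itself ships:

```python
# Hypothetical subset of the user agents listed above; a real deployment
# would load the full list.
BLOCKED_AGENTS = ["GPTBot", "CCBot", "Bytespider", "img2dataset"]

def is_blocked(user_agent_header: str) -> bool:
    """Case-insensitive substring match of the User-Agent header
    against the blocklist."""
    ua = user_agent_header.lower()
    return any(bot.lower() in ua for bot in BLOCKED_AGENTS)

print(is_blocked("Mozilla/5.0 (compatible; GPTBot/1.1; +https://openai.com/gptbot)"))  # True
print(is_blocked("Mozilla/5.0 (Windows NT 10.0) Firefox/126.0"))  # False
```

Substring matching is deliberately loose: it catches version suffixes like `GPTBot/1.1`, at the cost of occasional false positives on unrelated agents containing a listed name.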

| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description | | Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|----------------|---------|-----------------------|----------|------------------|-------------| |------|----------|-----------------------|----------|------------------|-------------|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| aiHitBot | [aiHit](https://www.aihitdata.com/about) | Yes | A massive, artificial intelligence/machine learning, automated system. | No information provided. | Scrapes data for AI systems. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. | | Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
|anthropic-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. | | Andibot | [Andi](https://andisearch.com/) | Unclear at this time | Search engine using generative AI, AI Search Assistant | No information provided. | Scrapes website and provides AI summary. |
|Applebot-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. | | anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| bedrockbot | [Amazon](https://amazon.com) | [Yes](https://docs.aws.amazon.com/bedrock/latest/userguide/webcrawl-data-source-connector.html#configuration-webcrawl-connector) | Data scraping for custom AI applications. | Unclear at this time. | Connects to and crawls URLs that have been selected for use in a user's AWS bedrock application. |
| Brightbot 1\.0 | Browsing.ai | Unclear at this time. | LLM/AI training. | Unclear at this time. | Scrapes data to train LLMs and AI products focused on website customer support. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMS, including ChatGPT competitors. | | Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMS, including ChatGPT competitors. |
|CCBot | [Common Crawl](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides crawl data for an open source repository that has been used to train LLMs. | Unclear at this time. | Sources data that is made openly available and is used to train AI models. | | CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
|ChatGPT-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. | | ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
|ClaudeBot | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. | | Claude\-SearchBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. | No information provided. | Claude-SearchBot navigates the web to improve search result quality for users. It analyzes online content specifically to enhance the relevance and accuracy of search responses. |
|Claude-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. | | Claude\-User | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. | No information provided. | Claude-User supports Claude AI users. When individuals ask questions to Claude, it may access websites using a Claude-User agent. |
|cohere-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. | | Claude\-Web | Anthropic | Unclear at this time. | Undocumented AI Agents | Unclear at this time. | Claude-Web is an AI-related agent operated by Anthropic. It's currently unclear exactly what it's used for, since there's no official documentation. If you can provide more detail, please contact us. More info can be found at https://darkvisitors.com/agents/agents/claude-web |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-training\-data\-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
| Cotoyogi | [ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/) | Yes | AI LLM Scraper. | No information provided. | Scrapes data for AI training in Japanese language. |
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. | | Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
| EchoboxBot | [Echobox](https://echobox.com) | Unclear at this time. | Data collection to support AI-powered products. | Unclear at this time. | Supports company's AI-powered social and email management products. |
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. | | FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
|Google-Extended| Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. | | facebookexternalhit | Meta/Facebook | [No](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) | Ostensibly only for sharing, but likely used as an AI crawler as well | Unclear at this time. | Note that excluding FacebookExternalHit will block incorporating OpenGraph data when sharing in social media, including rich links in Apple's Messages app. [According to Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/), its purpose is "to crawl the content of an app or website that was shared on one of Metas family of apps…". However, see discussions [here](https://github.com/ai-robots-txt/ai.robots.txt/pull/21) and [here](https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2524591313) for evidence to the contrary. |
|GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." | | Factset\_spyderbot | [Factset](https://www.factset.com/ai) | Unclear at this time. | AI model training. | No information provided. | Scrapes data for AI training. |
|GoogleOther-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." | | FirecrawlAgent | [Firecrawl](https://www.firecrawl.dev/) | Yes | AI scraper and LLM training | No information provided. | Scrapes data for AI systems and LLM training. |
|GoogleOther-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." | | FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
|GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. | | Google\-CloudVertexBot | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Build and manage AI models for businesses employing Vertex AI | No information. | Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| meta\-externalagent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalAgent | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent |
| meta\-externalfetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual link. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| MistralAI\-User/1\.0 | Mistral AI | Yes | Takes action based on user prompts. | Only when prompted by a user. | MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. |
| MyCentralAIScraperBot | Unclear at this time. | Unclear at this time. | AI data scraper | Unclear at this time. | Operator and data use is unclear at this time. |
| NovaAct | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io. |
| Operator | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator |
| PanguBot | the Chinese company Huawei | Unclear at this time. | AI Data Scrapers | Unclear at this time. | PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot |
| Panscient | [Panscient](https://panscient.com) | [Yes](https://panscient.com/faq.htm) | Data collection and analysis using machine learning and AI. | The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address. | Compiles data on businesses and business professionals that is structured using AI and machine learning. |
| panscient\.com | [Panscient](https://panscient.com) | [Yes](https://panscient.com/faq.htm) | Data collection and analysis using machine learning and AI. | The Panscient web crawler will request a page at most once every second from the same domain name or the same IP address. | Compiles data on businesses and business professionals that is structured using AI and machine learning. |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visit web pages to help provide an accurate answer and include links to the page in Perplexity response. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| PhindBot | [phind](https://www.phind.com/) | Unclear at this time. | AI-enhanced search engine. | No explicit frequency provided. | Company offers an AI agent that generates extra web queries on the fly. |
| Poseidon Research Crawler | [Poseidon Research](https://www.poseidonresearch.com) | Unclear at this time. | AI research crawler | No explicit frequency provided. | Operated by a lab focused on scaling the interpretability research necessary to make better AI systems possible. |
| QualifiedBot | [Qualified](https://www.qualified.com) | Unclear at this time. | Company offers AI agents and other related products; usage can be assumed to support said products. | No explicit frequency provided. | Operated by Qualified as part of their suite of AI product offerings. |
| QuillBot | [Quillbot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
| quillbot\.com | [Quillbot](https://quillbot.com) | Unclear at this time. | Company offers AI detection, writing tools and other services. | No explicit frequency provided. | Operated by QuillBot as part of their suite of AI product offerings. |
| SBIntuitionsBot | [SB Intuitions](https://www.sbintuitions.co.jp/en/) | [Yes](https://www.sbintuitions.co.jp/en/bot/) | Uses data gathered in AI development and information analysis. | No information. | AI development and information analysis |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| SemrushBot | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-BA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-CT | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-OCOB | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-SI | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for SWA tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| TikTokSpider | ByteDance | Unclear at this time. | LLM training. | Unclear at this time. | Downloads data to train LLMs, as per Bytespider. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| wpbot | [QuantumCloud](https://www.quantumcloud.com) | Unclear at this time; opt out provided via [Google Form](https://forms.gle/ajBaxygz9jSR8p8G9) | Live chat support and lead generation. | Unclear at this time. | wpbot is used to support the functionality of the AI Chatbot for WordPress plugin. It supports the use of custom models, data collection, and customer support. |
| YandexAdditional | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YandexAdditionalBot | [Yandex](https://yandex.ru) | [Yes](https://yandex.ru/support/webmaster/en/search-appearance/fast.html?lang=en) | Scrapes/analyzes data for the YandexGPT LLM. | No information. | Retrieves data used for YandexGPT quick answers features. |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
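
The user agent tokens in the first column are what site operators match in `robots.txt`. A minimal sketch blocking a few of the agents listed above (the agent names are taken from the table; blocking everything with `Disallow: /` is one choice among many):

```
User-agent: GPTBot
User-agent: meta-externalagent
User-agent: PerplexityBot
Disallow: /
```

Note that compliance is voluntary: the "Respects robots.txt" column above records what is known about each operator, and agents marked "No" or "Unclear at this time" may ignore these rules entirely.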