X-Git-Url: https://git.immae.eu/?a=blobdiff_plain;f=doc%2FServer-security.md;h=50549a214617177633fac8cf81856c093d511811;hb=4fd053d6b29a1b6724eda17a3daddb29b1bf1ca3;hp=0d16e2840ccd3fc2636439d561b1fa3581ece1eb;hpb=055deb9cbc4cb32c4e9a5edcd77be23844a3f917;p=github%2Fshaarli%2FShaarli.git

diff --git a/doc/Server-security.md b/doc/Server-security.md
index 0d16e284..50549a21 100644
--- a/doc/Server-security.md
+++ b/doc/Server-security.md
@@ -58,3 +58,17 @@ before = common.conf
 failregex = \s-\s<HOST>\s-\sLogin failed for user.*$
 ignoreregex =
 ```
+
+## Robots - Restricting search engines and web crawler traffic
+
+Creating a `robots.txt` file with the following contents at the root of your Shaarli installation will prevent _honest_ web crawlers from indexing each and every link and Daily page of a Shaarli instance, thus reducing unsolicited network traffic.
+
+```
+User-agent: *
+Disallow: /
+```
+
+See:
+- http://www.robotstxt.org/
+- http://www.robotstxt.org/robotstxt.html
+- http://www.robotstxt.org/meta.html
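As a minimal sketch, the `robots.txt` rules added by this diff can be sanity-checked offline with Python's standard `urllib.robotparser` module; the sample URL paths below are hypothetical, not taken from Shaarli's routing:

```python
from urllib import robotparser

# Parse the exact robots.txt contents from the diff above (no network access)
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Any user agent that honors robots.txt is denied every path,
# including individual link and Daily pages (paths are hypothetical examples)
print(rp.can_fetch("*", "/shaarli/?do=daily"))         # False
print(rp.can_fetch("ExampleBot", "/shaarli/?page=2"))  # False
```

Note that `robots.txt` is purely advisory: misbehaving crawlers can ignore it, so it reduces traffic from well-behaved bots but is not an access-control mechanism.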