diff --git a/doc/Server-security.md b/doc/Server-security.md
index 0d16e284..50549a21 100644
--- a/doc/Server-security.md
+++ b/doc/Server-security.md
@@ -58,3 +58,17 @@ before = common.conf
 failregex = \s-\s<HOST>\s-\sLogin failed for user.*$
 ignoreregex =
 ```
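As a quick sanity check, the `failregex` above can be exercised outside fail2ban with Python's `re` module. In a real fail2ban filter the client address is captured by the `<HOST>` token; here it is approximated with a plain IPv4 pattern, and the sample log line is a hypothetical Shaarli `data/log.txt` entry (the exact log format may differ on your instance):

```python
import re

# failregex from the filter above; fail2ban normally substitutes <HOST>
# with its own host-matching pattern, approximated here by an IPv4 regex
failregex = r"\s-\s(?:\d{1,3}\.){3}\d{1,3}\s-\sLogin failed for user.*$"

# hypothetical Shaarli log line, assumed format: "date - IP - message"
sample = "2017/01/05 12:00:00 - 203.0.113.7 - Login failed for user admin"

print(bool(re.search(failregex, sample)))  # prints True
```

A line that does not contain the "Login failed for user" message (e.g. a successful login) is not matched, so only failed attempts count toward a ban.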