diff --git a/doc/Server-security.md b/doc/Server-security.md
index 0d16e284..50549a21 100644
--- a/doc/Server-security.md
+++ b/doc/Server-security.md
@@ -58,3 +58,17 @@ before = common.conf
 failregex = \s-\s\s-\sLogin failed for user.*$
 ignoreregex =
 ```
+
+## Robots - Restricting search engines and web crawler traffic
+
+Creating a `robots.txt` with the following contents at the root of your Shaarli installation will prevent _honest_ web crawlers from indexing every link and Daily page of a Shaarli instance, reducing unsolicited network traffic.
+
+```
+User-agent: *
+Disallow: /
+```
+
+See:
+- http://www.robotstxt.org/
+- http://www.robotstxt.org/robotstxt.html
+- http://www.robotstxt.org/meta.html
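The meta.html reference linked in the patch describes a per-page alternative to a site-wide `robots.txt`: a robots `<meta>` tag in each page's `<head>`. A minimal sketch (the `noindex, nofollow` values are standard per robotstxt.org; whether and where Shaarli's templates expose such a tag is an assumption, not something this patch adds):

```html
<!-- Ask honest crawlers not to index this page, nor to follow its links -->
<meta name="robots" content="noindex, nofollow">
```

Unlike `robots.txt`, this directive works per page and does not require write access to the web root, at the cost of the crawler still having to fetch the page before honoring it.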