PHP settings are defined in:
- a main configuration file, usually found under `/etc/php5/php.ini`; some distributions provide different configuration environments, e.g.
    - `/etc/php5/php.ini` - used when running console scripts
    - `/etc/php5/apache2/php.ini` - used when a client requests PHP resources from Apache
    - `/etc/php5/php-fpm.conf` - used when PHP requests are proxied to PHP-FPM
- additional configuration files/entries, depending on the installed/enabled extensions:
    - `/etc/php/conf.d/xdebug.ini`
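Such per-extension entries are usually short: they load the extension and set its options. As an illustration (the values below are examples, not taken from this document), an `xdebug.ini` might contain:

```ini
; /etc/php/conf.d/xdebug.ini -- illustrative values only
zend_extension=xdebug.so    ; Xdebug is loaded as a Zend extension
xdebug.remote_enable=1      ; example Xdebug 2.x setting
```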
#### Console environment
To see which configuration files the PHP command-line interpreter loads, run `php --ini`:

```bash
$ php --ini
Configuration File (php.ini) Path: /etc/php
Loaded Configuration File: /etc/php/php.ini
Scan for additional .ini files in: /etc/php/conf.d
Additional .ini files parsed: /etc/php/conf.d/xdebug.ini
```
#### Server environment
- create a `phpinfo.php` script located in a path supported by the web server, e.g.
    - Apache (with user dirs enabled): `/home/myself/public_html/phpinfo.php`
    - `/var/www/test/phpinfo.php`
- make sure the script is readable by the web server user/group (usually `www`, `www-data` or `httpd`)
- access the script from a web browser
- look at the _Loaded Configuration File_ and _Scan this dir for additional .ini files_ entries
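The `phpinfo.php` script itself only needs to call the built-in `phpinfo()` function:

```php
<?php
// Displays the full PHP configuration of the environment serving this script.
phpinfo();
```

Once you are done, remove the script: it exposes detailed information about the server.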
`fail2ban` is an intrusion prevention framework that reads server logs (Apache, SSH, etc.) and uses `iptables` rules to block brute-force attempts:
- [Official website](http://www.fail2ban.org/wiki/index.php/Main_Page)
- [Source code](https://github.com/fail2ban/fail2ban)
### Read Shaarli logs to ban IPs
Example configuration:
- allow 3 login attempts per IP address
- after 3 failures, permanently ban the corresponding IP address
`/etc/fail2ban/jail.local`

```ini
[shaarli-auth]
enabled  = true
port     = https,http
filter   = shaarli-auth
logpath  = /var/www/path/to/shaarli/data/log.txt
maxretry = 3
bantime  = -1
```
`/etc/fail2ban/filter.d/shaarli-auth.conf`

```ini
[Definition]
failregex = \s-\s<HOST>\s-\sLogin failed for user.*$
ignoreregex =
```
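When fail2ban evaluates `failregex`, it substitutes `<HOST>` with a group matching the client address. A rough equivalent of the match can be sketched in Python; the sample log line below is a made-up example following Shaarli's `date - IP - message` log format, and `(?P<host>\S+)` is a simplified stand-in for fail2ban's real `<HOST>` pattern:

```python
import re

# Simplified stand-in for fail2ban's <HOST> substitution.
failregex = r"\s-\s(?P<host>\S+)\s-\sLogin failed for user.*$"

# Hypothetical Shaarli log line: date - client IP - message
line = "2017/05/20_11:22:33 - 10.0.0.1 - Login failed for user admin."

match = re.search(failregex, line)
print(match.group("host"))  # → 10.0.0.1
```

fail2ban also ships a `fail2ban-regex` tool to test a filter against a real log file, e.g. `fail2ban-regex /var/www/path/to/shaarli/data/log.txt /etc/fail2ban/filter.d/shaarli-auth.conf`.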
## Robots - Restricting search engines and web crawler traffic
Creating a `robots.txt` with the following contents at the root of your Shaarli installation will prevent _honest_ web crawlers from indexing each and every link and Daily page from a Shaarli instance, thus getting rid of a certain amount of unsolicited network traffic.
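A minimal `robots.txt` that asks all compliant crawlers to stay away from the whole site looks like this (adapt it if you do want some pages indexed):

```
User-agent: *
Disallow: /
```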
- http://www.robotstxt.org/
- http://www.robotstxt.org/robotstxt.html
- http://www.robotstxt.org/meta.html