Right now the role assumes you have a base backup available with
Barman. However, if you don't have an initial Barman backup, you might
want to clone the primary server directly to set up your standby
server.
This PR adds a new `primary.pg_basebackup` option to the cluster
configuration which, if enabled (set to `true`), creates a
`/root/standby-clone-{{ postgres_version }}-{{ postgres_cluster_name }}.sh`
script on the standby server that helps initialise a standby
server.
⚠️ Breaking change: the current role behaviour, which creates a cloning
script fetching the initial backup from Barman, is no longer
enabled by default. You will need to add the new
`primary.restore_barman_directory` option to your role
configuration to keep it. ⚠️
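A minimal sketch of what enabling this might look like: only `primary.pg_basebackup` (and the `postgres_version`/`postgres_cluster_name` variables used in the script path) come from this change; the surrounding keys and values are assumptions for illustration.

```yaml
# Illustrative configuration — surrounding keys are assumptions,
# only `primary.pg_basebackup` is introduced by this change.
postgres_version: 12
postgres_cluster_name: main
primary:
  host: pg-primary.example.com      # illustrative host name
  pg_basebackup: true               # creates /root/standby-clone-12-main.sh on the standby
```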
recovery: optional restore_command & allow custom command if needed
Right now the role assumes you always want to use the barman-wal-restore
script as the restore command to recover WAL files at startup time of a
standby server.
This PR adds a new `primary.restore_command` option which lets you
override the command to use.
⚠️ Breaking change: the PR renames the existing
`primary.restore_directory` option to
`primary.restore_barman_directory` ⚠️ in order to give more context to
this option, which automatically uses the `barman-wal-restore`
script as the restore command.
Finally, if neither of the two options above is specified in
the `primary:` object, then the `restore_command` is left commented out
in the PG configuration (which is totally fine, as the standby will then
recover WALs from the primary server directly; see the
[documentation](https://www.postgresql.org/docs/12/warm-standby.html#STANDBY-SERVER-OPERATION))
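The three resulting configurations can be sketched as follows. The key names `restore_barman_directory` and `restore_command` come from this change; the host and example values are illustrative assumptions.

```yaml
primary:
  host: pg-primary.example.com          # illustrative
  # Option 1: use barman-wal-restore against a barman backup directory
  restore_barman_directory: main
  # Option 2: override the restore command entirely (example value)
  # restore_command: 'cp /mnt/server/archive/%f %p'
  # Option 3: set neither — restore_command stays commented out and the
  # standby recovers WALs from the primary server directly
```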
core: Add compatibility with PG 12 major version
I have been having trouble getting the tests to pass with PG versions
9.5 and 9.6.
Not sure why, but when we try to start the secondaries (after the whole
cluster setup) they refuse to start, with the following logs:
```
Starting PostgreSQL 9.6 database server: main
The PostgreSQL server failed to start. Please check the log output:
2020-05-11 13:18:11.883 UTC [6403] LOG: database system was shut down at 2020-05-11 13:16:41 UTC
ssh: connect to host postgres_barman port 22: Connection refused
ERROR: The required file is not available: 00000002.history
2020-05-11 13:18:12.236 UTC [6403] LOG: entering standby mode
2020-05-11 13:18:12.270 UTC [6409] [unknown]@[unknown] LOG: incomplete startup packet
ssh: connect to host postgres_barman port 22: Connection refused
ERROR: The required file is not available: 000000010000000000000001
2020-05-11 13:18:12.561 UTC [6403] WARNING: WAL was generated with wal_level=minimal, data may be missing
2020-05-11 13:18:12.561 UTC [6403] HINT: This happens if you temporarily set wal_level=minimal without taking a new base backup.
2020-05-11 13:18:12.561 UTC [6403] FATAL: hot standby is not possible because wal_level was not set to "replica" or higher on the master server
2020-05-11 13:18:12.561 UTC [6403] HINT: Either set wal_level to "replica" on the master, or turn off hot_standby here.
2020-05-11 13:18:12.563 UTC [6402] LOG: startup process (PID 6403) exited with exit code 1
2020-05-11 13:18:12.563 UTC [6402] LOG: aborting startup due to startup process failure
2020-05-11 13:18:12.576 UTC [6402] LOG: database system is shut down ... failed!
failed!
```
The fatal error being:
```
2020-05-11 13:18:12.561 UTC [6403] FATAL: hot standby is not possible because wal_level was not set to "replica" or higher on the master server
```
even though the cluster has been started with the `logical` wal level from the
start.
It works with later versions (PG 10+), so I can live without those
versions being tested for now.
P.S.: for the sake of comparison, here are the startup logs of the
secondaries with PG10 (the database starts accepting connections even
with the errors):
```
2020-05-11 15:45:52.640 UTC [8392] LOG: listening on IPv4 address "172.17.0.4", port 5432
2020-05-11 15:45:52.657 UTC [8392] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-05-11 15:45:52.806 UTC [8393] LOG: database system was shut down at 2020-05-11 15:44:10 UTC
ssh: connect to host postgres_barman port 22: Connection refused
ERROR: The required file is not available: 00000002.history
2020-05-11 15:45:53.226 UTC [8393] LOG: entering standby mode
ssh: connect to host postgres_barman port 22: Connection refused
ERROR: The required file is not available: 000000010000000000000001
2020-05-11 15:45:53.577 UTC [8393] LOG: consistent recovery state reached at 0/1632D20
2020-05-11 15:45:53.577 UTC [8393] LOG: invalid record length at 0/1632D20: wanted 24, got 0
2020-05-11 15:45:53.578 UTC [8392] LOG: database system is ready to accept read only connections
2020-05-11 15:45:53.598 UTC [8403] FATAL: could not connect to the primary server: FATAL: password authentication failed for user "replicator"
ssh: connect to host postgres_barman port 22: Connection refused
ERROR: The required file is not available: 00000002.history
2020-05-11 15:45:54.122 UTC [8409] [unknown]@[unknown] LOG: incomplete startup packet
```
This commit allows customisation of the `wal_level` PG config on all
supported PG versions.
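Judging from the templated `wal_level = {{ postgres_wal_level }}` line visible in the PG12 diff below, the customisation would look like this (the value shown mirrors the previously hard-coded `logical`, so it keeps the old behaviour):

```yaml
# Role variable name taken from the configuration template below;
# 'logical' matches the value that was hard-coded before this change.
postgres_wal_level: logical   # minimal, replica, or logical
```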
This PR adds compatibility for the PostgreSQL 12 major version.
The PG12 configuration is kept as close as possible to the PG11 one. All
defaults have been kept to avoid breaking backward compatibility within
this Ansible role.
Here is a complete side-by-side diff between the current PG11 conf and
the new PG12 conf:
```diff
# 0 selects the syste # 0 selects the syste
> #tcp_user_timeout = 0 # TCP_USER_TIMEOUT, i
> # 0 selects the syste
> #ssl_min_protocol_version = 'TLSv1'
> #ssl_max_protocol_version = ''
> #shared_memory_type = mmap # the default is the
> # supported by the op
> # mmap
> # sysv
> # windows
> # (change requires re
dynamic_shared_memory_type = posix # the default is the dynamic_shared_memory_type = posix # the default is the
# supported by the op # supported by the op
# posix # posix
# sysv # sysv
# windows # windows
# mmap # mmap
# use none to disable <
# (change requires re # (change requires re
wal_level = logical # minimal, replica, o | wal_level = {{ postgres_wal_level }} # min
# (change requires re # (change requires re
wal_log_hints = on # also do full page w wal_log_hints = on # also do full page w
# (change requires re # (change requires re
> #wal_init_zero = on # zero-fill new WAL f
> #wal_recycle = on # recycle WAL files
# (change requires re # (change requires re
> # - Archive Recovery -
>
> # These are only used in recovery mode.
>
> {% if postgres_primary %}
> {# In PG < 12 versions all the recovery settings were in a se
> restore_command = '/usr/bin/barman-wal-restore --user barman
> # placeholders: %p = path of
> # %f = file nam
> # e.g. 'cp /mnt/server/archiv
> # (change requires restart)
> {% else %}
> #restore_command = '' # command to use to restore a
> # placeholders: %p = path of
> # %f = file nam
> # e.g. 'cp /mnt/server/archiv
> # (change requires restart)
> {% endif %}
> #archive_cleanup_command = '' # command to execute at every
> #recovery_end_command = '' # command to execute at compl
>
> # - Recovery Target -
>
> # Set these only when performing a targeted recovery.
>
> #recovery_target = '' # 'immediate' to end recovery
> # consistent state is reached
> # (change requires restart)
> #recovery_target_name = '' # the named restore point to
> # (change requires restart)
> #recovery_target_time = '' # the time stamp up to which
> # (change requires restart)
> #recovery_target_xid = '' # the transaction ID up to wh
> # (change requires restart)
> #recovery_target_lsn = '' # the WAL LSN up to which rec
> # (change requires restart)
> #recovery_target_inclusive = on # Specifies whether to stop:
> # just after the specified re
> # just before the recovery ta
> # (change requires restart)
> {% if postgres_primary %}
> {# In PG < 12 versions all the recovery settings were in a se
> recovery_target_timeline='latest' # 'current', 'latest'
> # (change requires restart)
> {% else %}
> #recovery_target_timeline = 'latest' # 'current', 'latest'
> # (change requires restart)
> {% endif %}
> #recovery_target_action = 'pause' # 'pause', 'promote',
> # (change requires restart)
max_wal_senders = 10 # max number of walsender pro | #max_wal_senders = 10 # max number of walsender pro
# (change requires restart) # (change requires restart)
> {% if postgres_primary %}
> {# In PG < 12 versions this setting was defined in separate r
> primary_conninfo = 'host={{ postgres_primary.host }} port={{
> # (change requires re
> {% else %}
> #primary_conninfo = '' # connection string t
> # (change requires re
> {% endif %}
> #primary_slot_name = '' # replication slot on
> # (change requires re
> {% if postgres_primary %}
> {# In PG < 12 versions this setting was defined in separate r
> promote_trigger_file = '/var/lib/postgresql/{{ postgres_versi
> {% else %}
> #promote_trigger_file = '' # file name whose pre
> {% endif %}
# retrieve WAL after # retrieve WAL after
> #recovery_min_apply_delay = 0 # minimum delay for a
> #plan_cache_mode = auto # auto, force_generic
> # force_custom_plan
>
# debug5 <
# debug4 <
# debug3 <
# debug2 <
# debug1 <
# log <
# notice <
# warning <
# error <
<
log_min_duration_statement = 10000 # -1 is disabled, 0 l log_min_duration_statement = 10000 # -1 is disabled, 0 l
# and their durations # and their durations
# statements running # statements running
# of milliseconds # of milliseconds
> #log_transaction_sample_rate = 0.0 # Fraction of transac
> # are logged regardle
> # statements from all
> #client_min_messages = notice # values in order of
> # debug5
> # debug4
> # debug3
> # debug2
> # debug1
> # log
> # notice
> # warning
> # error
# only default tables # only default tables
> #default_table_access_method = 'heap'
> # selects precise out
> #data_sync_retry = off # retry or panic on f
> # data?
> # (change requires re
> # assignments, so they can usefully be given more than once.
```
Allow setting options for barman connectivity
- Allow passing arbitrary options
- Build the URL in a dedicated step
- Allow specifying a path prefix for barman files
- Add documentation in [README.md](README.md)
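None of the new variable names are given in this log, so the following is purely illustrative of the kind of configuration the bullet points describe — every name and value here is a hypothetical placeholder, not taken from the role:

```yaml
# Hypothetical names — illustrative only, not from the role's README
barman_options:
  - "-U barman"                        # arbitrary extra options passed through
barman_path_prefix: /usr/local/bin     # prefix for the barman executables
```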
Allow using any ansible become method
The variable can be set to `sudo` if Ansible uses sudo for privilege escalation.
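In core Ansible terms, the mechanism looks like the standard privilege-escalation keywords below; `become` and `become_method` are core Ansible, while the host group and role names are assumptions (the role variable carrying the method is not named in this log):

```yaml
- hosts: postgres_servers      # assumption
  become: true
  become_method: sudo          # or any other become method Ansible supports
  roles:
    - postgresql               # assumption
```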
feat(extensions): adds creation of extension on databases if needed
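A sketch of what extension creation typically looks like in an Ansible task, using the `community.postgresql.postgresql_ext` module; the variable layout (`database`/`extension` keys and the loop values) is an assumption, not this role's actual implementation:

```yaml
# Illustrative task — the loop data shape is an assumption.
- name: Create extensions on databases if needed
  community.postgresql.postgresql_ext:
    db: "{{ item.database }}"
    name: "{{ item.extension }}"
  loop:
    - { database: app_db, extension: pg_stat_statements }   # example values
  become: true
  become_user: postgres
```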
improvement: remove 'with_items' loop and use modern ansible loops
Ansible > 2.5 is needed (the `loop` keyword was introduced in Ansible 2.5).
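The conversion the commit describes, sketched on a hypothetical task (`postgres_databases` is an assumed variable name):

```yaml
# Before — pre-2.5 style:
- name: Create databases
  postgresql_db:
    name: "{{ item }}"
  with_items: "{{ postgres_databases }}"

# After — the `loop` keyword, available since Ansible 2.5:
- name: Create databases
  postgresql_db:
    name: "{{ item }}"
  loop: "{{ postgres_databases }}"
```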
standby: rsync from barman is optional if SSH access is already available
Fix Travis CI
fix barman and postgres galaxy names
Add PostgreSQL 11