From ec4bc9f4fe927d2758a01b231f4a03155d3044f6 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Wed, 18 Sep 2024 12:24:46 +0200 Subject: [PATCH 01/41] DOCS-125 Updated supernav title with technology name (#665) --- _resource/overrides/partials/header.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_resource/overrides/partials/header.html b/_resource/overrides/partials/header.html index e177d4ddd..2d0d6e740 100644 --- a/_resource/overrides/partials/header.html +++ b/_resource/overrides/partials/header.html @@ -43,7 +43,7 @@ - Percona Documentation + Percona Software for PostgreSQL Documentation From 5d04db2ac40c0d90b66dd48b1b629e748d7de09e Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 3 Oct 2024 15:09:47 +0200 Subject: [PATCH 02/41] Updated GitHub action to show the version instead of latest alias (#671) Removed the default reference --- .github/workflows/main.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 40eabfede..d01c4e430 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -43,8 +43,7 @@ jobs: - name: Deploy docs run: | mike deploy 16 -b publish -p - mike set-default 16 -b publish -p - mike retitle 16 "16 (LATEST)" -b publish -p + mike retitle 16 "16.4" -b publish -p # - name: Install Node.js 14.x # uses: percona-platform/setup-node@v2 From 4c26ebf91f4739e246abdde75bfd626ffdd4a18e Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 29 Oct 2024 15:28:52 +0100 Subject: [PATCH 03/41] PG-1112 Fixed URL to tde docs (#676) modified: docs/percona-ext.md --- docs/percona-ext.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/percona-ext.md b/docs/percona-ext.md index a68a188f5..0fc67434b 100644 --- a/docs/percona-ext.md +++ b/docs/percona-ext.md @@ -16,7 +16,7 @@ A query performance monitoring tool for PostgreSQL that brings more insight and An open-source extension designed 
to enhance PostgreSQL’s security by encrypting data files on disk. The encryption is transparent for users allowing them to access and manipulate the data and not to worry about the encryption process. -[pg_tde documentation :octicons-link-external-16:](https://percona-lab.github.io/pg_tde/main/){.md-button} +[pg_tde documentation :octicons-link-external-16:](https://percona.github.io/pg_tde/main/index.html){.md-button} \ No newline at end of file From 14e8004309ffd797b297fc44882a931e1c58a469 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 3 Dec 2024 16:11:29 +0100 Subject: [PATCH 04/41] PG-1214 Documented install and enable pgvector steps (#683) --- docs/apt.md | 7 ++++++- docs/enable-extensions.md | 14 +++++++++++--- docs/yum.md | 6 ++++++ 3 files changed, 23 insertions(+), 4 deletions(-) diff --git a/docs/apt.md b/docs/apt.md index de823101d..1d50264c4 100644 --- a/docs/apt.md +++ b/docs/apt.md @@ -135,11 +135,16 @@ Run all the commands in the following sections as root or using the `sudo` comma Install `pg_gather` - ```{.bash data-prompt="$"} $ sudo apt install percona-pg-gather ``` + Install `pgvector` + + ```{.bash data-prompt="$"} + $ sudo apt install percona-postgresql-{{pgversion}}-pgvector + ``` + Some extensions require additional setup in order to use them with Percona Distribution for PostgreSQL. For more information, refer to [Enabling extensions](enable-extensions.md). ### Start the service diff --git a/docs/enable-extensions.md b/docs/enable-extensions.md index 76c386a41..7e3787837 100644 --- a/docs/enable-extensions.md +++ b/docs/enable-extensions.md @@ -4,7 +4,7 @@ Some components require additional configuration before using them with Percona ## Patroni -Patroni is the third-party high availability solution for PostgreSQL. The [High Availability in PostgreSQL with Patroni](solutions/high-availability.md) chapter provides details about the solution overview and architecture deployment. 
+Patroni is the high availability solution for PostgreSQL. The [High Availability in PostgreSQL with Patroni](solutions/high-availability.md) chapter provides details about the solution overview and architecture deployment.

While setting up a high availability PostgreSQL cluster with Patroni, you will need the following components:

@@ -14,7 +14,7 @@ While setting up a high availability PostgreSQL cluster with Patroni, you will n

- [HAProxy :octicons-link-external-16:](http://www.haproxy.org/).

-If you install the software fom packages, all required dependencies and service unit files are included. If you [install the software from the tarballs](tarball.md), you must first enable `etcd`. See the steps in the [etcd](#etcd) section if this document.
+If you install the software from packages, all required dependencies and service unit files are included. If you [install the software from the tarballs](tarball.md), you must first enable `etcd`. See the steps in the [etcd](#etcd) section in this document.

See the configuration guidelines for [Debian and Ubuntu](solutions/ha-setup-apt.md) and [RHEL and CentOS](solutions/ha-setup-yum.md).

@@ -125,7 +125,7 @@ $ pgpool -f /pgpool.conf

## pg_stat_monitor

-Please refer to [`pg_stat_monitor`](pg-stat-monitor.md#setup) for setup steps.
+Please refer to [`pg_stat_monitor`](https://docs.percona.com/pg-stat-monitor/setup.html) for setup steps.

## wal2json

@@ -137,6 +137,14 @@ wal_level = logical

Start / restart the server to apply the changes.
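As a quick sanity check after the restart (not part of the patch itself; the database name `postgres` and slot name `test_slot` are placeholder assumptions), you can confirm that logical decoding with `wal2json` works by creating a temporary replication slot and streaming decoded changes with `pg_recvlogical`:

```{.bash data-prompt="$"}
# Create a throwaway logical replication slot that uses the wal2json output plugin
$ pg_recvlogical -d postgres --slot test_slot --create-slot -P wal2json
# Stream decoded changes to stdout; run some DML in another session to see the JSON output
$ pg_recvlogical -d postgres --slot test_slot --start -o pretty-print=1 -f -
# Drop the slot when done so it does not retain WAL
$ pg_recvlogical -d postgres --slot test_slot --drop-slot
```

These commands require a running PostgreSQL server with `wal_level = logical` already applied.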
+## pgvector + +To get started, enable the extension for the database where you want to use it: + +```sql +CREATE EXTENSION vector; +``` + ## Next steps [Connect to PostgreSQL :material-arrow-right:](connect.md){.md-button} diff --git a/docs/yum.md b/docs/yum.md index 884961a70..f2b760f99 100644 --- a/docs/yum.md +++ b/docs/yum.md @@ -347,6 +347,12 @@ $ sudo yum -y install curl $ sudo yum install percona-pgpool-II-pg{{pgversion}} ``` + Install pgvector package suite: + + ```{.bash data-prompt="$"} + $ sudo yum install percona-pgvector_{{pgversion}} percona-pgvector_{{pgversion}}-debuginfo percona-pgvector_{{pgversion}}-debugsource percona-pgvector_{{pgversion}}-llvmjit + ``` + Some extensions require additional setup in order to use them with Percona Distribution for PostgreSQL. For more information, refer to [Enabling extensions](enable-extensions.md). ### Start the service From 7e61e74d17d0397d937543ba7a4a59914f4d0a14 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 3 Dec 2024 16:17:38 +0100 Subject: [PATCH 05/41] PG-1220 Removed the step to disable llvm toolset (16) (#682) PG-1220 Removed the step to disable llvm toolset modified: docs/yum.md --- docs/yum.md | 30 +++++------------------------- 1 file changed, 5 insertions(+), 25 deletions(-) diff --git a/docs/yum.md b/docs/yum.md index f2b760f99..d483a4222 100644 --- a/docs/yum.md +++ b/docs/yum.md @@ -19,10 +19,10 @@ Depending on what operating system you are using, you may need to enable or disa === "RHEL8/Oracle Linux 8/Rocky Linux 8" - Disable the ``postgresql`` and ``llvm-toolset``modules: + Disable the ``postgresql`` module: ```{.bash data-prompt="$"} - $ sudo dnf module disable postgresql llvm-toolset + $ sudo dnf module disable postgresql ``` ### For `percona-postgresql{{pgversion}}-devel` package @@ -39,7 +39,6 @@ You may need to install the `percona-postgresql{{pgversion}}-devel` package when ```{.bash data-prompt="$"} $ sudo dnf install dnf-plugins-core - $ sudo dnf module enable 
llvm-toolset $ sudo dnf config-manager --set-enabled powertools ``` @@ -53,7 +52,6 @@ You may need to install the `percona-postgresql{{pgversion}}-devel` package when ```{.bash data-prompt="$"} $ sudo dnf install dnf-plugins-core - $ sudo dnf module enable llvm-toolset $ sudo dnf config-manager --set-enabled crb $ sudo dnf install perl-IPC-Run -y ``` @@ -111,13 +109,7 @@ For Red Hat Enterprise Linux 8 and derivatives, replace the operating system ver $ sudo yum install epel-release ``` - 2. Enable the `llvm-toolset dnf` module - - ```{.bash data-prompt="$"} - $ sudo dnf module enable llvm-toolset - ``` - - 3. Enable the codeready builder repository to resolve dependencies conflict. + 2. Enable the codeready builder repository to resolve dependencies conflict. ```{.bash data-prompt="$"} $ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-9-x86_64-rpms @@ -134,13 +126,7 @@ For Red Hat Enterprise Linux 8 and derivatives, replace the operating system ver $ sudo yum install epel-release ``` - 2. Enable the `llvm-toolset dnf` module - - ```{.bash data-prompt="$"} - $ sudo dnf module enable llvm-toolset - ``` - - 3. Enable the codeready builder repository to resolve dependencies conflict. + 2. Enable the codeready builder repository to resolve dependencies conflict. ```{.bash data-prompt="$"} $ sudo dnf install dnf-plugins-core @@ -158,13 +144,7 @@ For Red Hat Enterprise Linux 8 and derivatives, replace the operating system ver $ sudo yum install epel-release ``` - 2. Enable the `llvm-toolset dnf` module - - ```{.bash data-prompt="$"} - $ sudo dnf module enable llvm-toolset - ``` - - 3. Enable the codeready builder repository to resolve dependencies conflict. + 2. Enable the codeready builder repository to resolve dependencies conflict. 
```{.bash data-prompt="$"} $ sudo dnf config-manager --set-enabled ol9_codeready_builder From e149060c15bafe5d3114ad9190563a154b10533c Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 3 Dec 2024 16:28:33 +0100 Subject: [PATCH 06/41] PG-1174 Release notes 16.6 (#681) PG-1174 Release notes 16.6 modified: .github/workflows/main.yml modified: docs/index.md new file: docs/release-notes-v16.6.md modified: docs/release-notes.md modified: docs/third-party.md modified: mkdocs-base.yml modified: variables.yml --- .github/workflows/main.yml | 2 +- docs/index.md | 2 +- docs/release-notes-v16.6.md | 46 +++++++++++++++++++++++++++++++++++++ docs/release-notes.md | 2 ++ docs/third-party.md | 1 + mkdocs-base.yml | 3 ++- variables.yml | 5 ++-- 7 files changed, 56 insertions(+), 5 deletions(-) create mode 100644 docs/release-notes-v16.6.md diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index d01c4e430..fb124a1bc 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -43,7 +43,7 @@ jobs: - name: Deploy docs run: | mike deploy 16 -b publish -p - mike retitle 16 "16.4" -b publish -p + mike retitle 16 "16.6" -b publish -p # - name: Install Node.js 14.x # uses: percona-platform/setup-node@v2 diff --git a/docs/index.md b/docs/index.md index 1a9085ceb..ffaa24341 100644 --- a/docs/index.md +++ b/docs/index.md @@ -47,7 +47,7 @@ Our comprehensive resources will help you overcome challenges, from everyday iss Learn about the releases and changes in the Distribution. 
-[Release notes :material-arrow-right:](release-notes.md){.md-button}
+[Release notes :material-arrow-right:]({{release}}.md){.md-button}
diff --git a/docs/release-notes-v16.6.md b/docs/release-notes-v16.6.md
new file mode 100644
index 000000000..26cb3e934
--- /dev/null
+++ b/docs/release-notes-v16.6.md
@@ -0,0 +1,46 @@
+# Percona Distribution for PostgreSQL 16.6 ({{date.16_6}})
+
+[Installation](installing.md){.md-button}
+
+--8<-- "release-notes-intro.md"
+
+This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.6](https://www.postgresql.org/docs/16/release-16-6.html).
+
+## Release Highlights
+
+* This release includes fixes for [CVE-2024-10978](https://www.postgresql.org/support/security/CVE-2024-10978/) and for certain PostgreSQL extensions that break because they depend on the modified Application Binary Interface (ABI). These regressions were introduced in PostgreSQL 17.1, 16.5, 15.9, 14.14, 13.17, and 12.21. For this reason, the release of Percona Distribution for PostgreSQL 16.5 has been skipped.
+
+* Percona Distribution for PostgreSQL includes [`pgvector` :octicons-link-external-16:](https://github.com/pgvector/pgvector) - an open-source extension that enables you to use PostgreSQL as a vector database. It brings the vector data type and vector operations (mainly similarity search) to PostgreSQL. You can install `pgvector` from repositories or tarballs, and it is also available as a Docker image.
+
+* Percona Distribution for PostgreSQL now statically links the `llvmjit.so` library for Red Hat Enterprise Linux 8 and 9 and compatible derivatives. This resolves the conflict between the LLVM version required by Percona Distribution for PostgreSQL and the one supplied with the operating system. This also enables you to use the LLVM modules supplied with the operating system for other software you require.
+
+## Supplied third-party extensions
+
+Review each extension’s release notes for what’s new, improvements, and bug fixes.
The following is the list of extensions available in Percona Distribution for PostgreSQL.
+
+| Extension | Version | Description |
| ------------------- | -------------- | ---------------------------- |
| [etcd](https://etcd.io/)| 3.5.16 | A distributed, reliable key-value store for setting up highly available Patroni clusters |
| [HAProxy](http://www.haproxy.org/) | 2.8.11 | a high-availability and load-balancing solution |
| [Patroni](https://patroni.readthedocs.io/en/latest/) | 4.0.3 | an HA (High Availability) solution for PostgreSQL |
| [PgAudit](https://www.pgaudit.org/) | 16 | provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL |
| [pgAudit set_user](https://github.com/pgaudit/set_user)| 4.1.0 | provides an additional layer of logging and control when unprivileged users must escalate themselves to superusers or object owner roles in order to perform needed maintenance tasks.|
| [pgBackRest](https://pgbackrest.org/) | 2.54.0 | a backup and restore solution for PostgreSQL |
|[pgBadger](https://github.com/darold/pgbadger) | 12.4 | a fast PostgreSQL Log Analyzer.|
|[PgBouncer](https://www.pgbouncer.org/) |1.23.1 | a lightweight connection pooler for PostgreSQL|
| [pg_gather](https://github.com/jobinau/pg_gather)| v28 | an SQL script for running the diagnostics of the health of a PostgreSQL cluster |
| [pgpool2](https://git.postgresql.org/gitweb/?p=pgpool2.git;a=summary) | 4.5.4 | a middleware between PostgreSQL server and client for high availability, connection pooling and load balancing.|
| [pg_repack](https://github.com/reorg/pg_repack) | 1.5.1 | rebuilds PostgreSQL database objects |
| [pg_stat_monitor](https://github.com/percona/pg_stat_monitor)|{{pgsmversion}} | collects and aggregates statistics for PostgreSQL and provides histogram information.|
| [pgvector](https://github.com/pgvector/pgvector)| 
v0.8.0 | A vector similarity search for PostgreSQL| +| [PostGIS](https://github.com/postgis/postgis) | 3.3.7 | a spatial extension for PostgreSQL.| +| [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common)| 266 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time.| +| [wal2json](https://github.com/eulerto/wal2json) |2.6 | a PostgreSQL logical decoding JSON output plugin| + +For Red Hat Enterprise Linux 8 and 9 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters. + +Percona Distribution for PostgreSQL is also shipped with the [libpq](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of +library functions that allow client programs to pass queries to the PostgreSQL +backend server and to receive the results of these queries." 
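The release highlights above note that `pgvector` brings the vector data type and similarity search to PostgreSQL. A minimal smoke test after installation might look like the following (a sketch, assuming a local server, access via the `postgres` OS user, and a scratch table named `items` — all of which are placeholder assumptions):

```{.bash data-prompt="$"}
# Enable the extension and run a nearest-neighbor query with the <-> (L2 distance) operator
$ sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS vector;"
$ sudo -u postgres psql -c "CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));"
$ sudo -u postgres psql -c "INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');"
$ sudo -u postgres psql -c "SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1;"
```

The last query returns the row whose embedding is closest to the given vector, which confirms the extension is installed and functional.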
diff --git a/docs/release-notes.md b/docs/release-notes.md index 7fae13f6f..da50d44b3 100644 --- a/docs/release-notes.md +++ b/docs/release-notes.md @@ -1,5 +1,7 @@ # Percona Distribution for PostgreSQL release notes +* [Percona Distribution for PostgreSQL 16.6](release-notes-v16.6.md) ({{date.16_6}}) + * [Percona Distribution for PostgreSQL 16.4](release-notes-v16.4.md) ({{date.16_4}}) * [Percona Distribution for PostgreSQL 16.3](release-notes-v16.3.md) (2024-06-06) diff --git a/docs/third-party.md b/docs/third-party.md index b9e3253af..0cadeee57 100644 --- a/docs/third-party.md +++ b/docs/third-party.md @@ -18,4 +18,5 @@ Percona Distribution for PostgreSQL is supplied with the set of third-party open | [pg_repack](https://github.com/reorg/pg_repack) | Required | Rebuilds PostgreSQL database objects | | [pg_stat_monitor](https://github.com/percona/pg_stat_monitor) | Required | Collects and aggregates statistics for PostgreSQL and provides histogram information | | [PostGIS](http://postgis.net/) | Required | Allows storing and manipulating spacial data in PostgreSQL | +|[pgvector :octicons-link-external-16:](https://github.com/pgvector/pgvector)| Required | An extension that enables you to use PostgreSQL as a vector database| |[wal2json](https://github.com/eulerto/wal2json)|Required| A PostgreSQL logical decoding JSON output plugin.| \ No newline at end of file diff --git a/mkdocs-base.yml b/mkdocs-base.yml index 74c73544e..94e2d287f 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -131,7 +131,7 @@ plugins: output_path: '_pdf/PerconaDistributionPostgreSQL-16.pdf' cover_title: 'Distribution for PostgreSQL Documentation' - cover_subtitle: 16.4 (September 10, 2024) + cover_subtitle: 16.6 (December 3, 2024) author: 'Percona Technical Documentation Team' cover_logo: docs/_images/Percona_Logo_Color.png debug_html: false @@ -197,6 +197,7 @@ nav: - Uninstall: uninstalling.md - Release Notes: - "Release notes index": "release-notes.md" + - release-notes-v16.6.md - 
release-notes-v16.4.md - release-notes-v16.3.md - release-notes-v16.2.md diff --git a/variables.yml b/variables.yml index c726ba15a..5df7aa08a 100644 --- a/variables.yml +++ b/variables.yml @@ -1,13 +1,14 @@ # PG Variables set for HTML output # See also mkdocs.yml plugins.with-pdf.cover_subtitle and output_path -release: 'release-notes-v16.4' -dockertag: '16.4' +release: 'release-notes-v16.6' +dockertag: '16.6' pgversion: '16' pgsmversion: '2.1.0' date: + 16_6: 2024-12-03 16_4: 2024-09-10 From 2b1aca97940a46c602d20434a1a74b08a85d9ef5 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Mon, 9 Dec 2024 15:33:29 +0100 Subject: [PATCH 07/41] Update tarball.md (#698) * Update tarball.md --- docs/tarball.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tarball.md b/docs/tarball.md index fdc05b36e..c9539390a 100644 --- a/docs/tarball.md +++ b/docs/tarball.md @@ -19,7 +19,7 @@ The tarballs include the following components: | Component | Description | |-----------|-------------| -| percona-postgresql{{pgversion}}| The latest version of PostgreSQL server and the following extensions:
- `pgaudit`
- `pgAudit_set_user`
- `pg_repack`
- `pg_stat_monitor`
- `pg_gather`
- `wal2json`
- the set of [contrib extensions](contrib.md)| +| percona-postgresql{{pgversion}}| The latest version of PostgreSQL server and the following extensions:
- `pgaudit`
- `pgAudit_set_user`
- `pg_repack`
- `pg_stat_monitor`
- `pg_gather`
- `wal2json`
- `pgvector`
- the set of [contrib extensions](contrib.md)| | percona-haproxy | A high-availability solution and load-balancing solution | | percona-patroni | A high-availability solution for PostgreSQL | | percona-pgbackrest| A backup and restore tool | From 30f0e37dca96c8499af2a187aa35a4515bfcccd2 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Mon, 16 Dec 2024 15:41:56 +0100 Subject: [PATCH 08/41] PG-1223 Updated etcd setup steps (#702) * PG-1223 Updated etcd setup steps new file: snippets/check-etcd.md new file: snippets/percona-release-apt.md new file: snippets/percona-release-yum.md --- docs/css/design.css | 3 +- docs/how-to.md | 75 ------- docs/solutions/ha-setup-apt.md | 304 +++++++++++++++------------- docs/solutions/ha-setup-yum.md | 301 ++++++++++++++------------- docs/solutions/high-availability.md | 17 +- mkdocs-base.yml | 1 - snippets/check-etcd.md | 47 +++++ snippets/percona-release-apt.md | 24 +++ snippets/percona-release-yum.md | 5 + 9 files changed, 415 insertions(+), 362 deletions(-) delete mode 100644 docs/how-to.md create mode 100644 snippets/check-etcd.md create mode 100644 snippets/percona-release-apt.md create mode 100644 snippets/percona-release-yum.md diff --git a/docs/css/design.css b/docs/css/design.css index 14f9728b6..e452993e0 100644 --- a/docs/css/design.css +++ b/docs/css/design.css @@ -269,6 +269,7 @@ vertical-align: baseline; padding: 0 0.2em 0.1em; border-radius: 0.15em; + white-space: pre-wrap; /* Ensure long lines wrap */ } .md-typeset .highlight code span, .md-typeset code, @@ -729,4 +730,4 @@ i[warning] [class*="moji"] { padding: 1em; } } -/**/ \ No newline at end of file +/**/ diff --git a/docs/how-to.md b/docs/how-to.md deleted file mode 100644 index 86acdd79e..000000000 --- a/docs/how-to.md +++ /dev/null @@ -1,75 +0,0 @@ -# How to - -## How to configure etcd nodes simultaneously - -!!! note - - We assume you have a deeper knowledge of how etcd works. Otherwise, refer to the configuration where you add etcd nodes one by one. 
- -Instead of adding `etcd` nodes one by one, you can configure and start all nodes in parallel. - -1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. - - === "node1" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.1:2380 - listen-peer-urls: http://10.104.0.1:2380 - advertise-client-urls: http://10.104.0.1:2379 - listen-client-urls: http://10.104.0.1:2379 - ``` - - === "node2" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node2' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380, node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.2:2380 - listen-peer-urls: http://10.104.0.2:2380 - advertise-client-urls: http://10.104.0.2:2379 - listen-client-urls: http://10.104.0.2:2379 - ``` - - === "node3" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380, node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.3:2380 - listen-peer-urls: http://10.104.0.3:2380 - advertise-client-urls: http://10.104.0.3:2379 - listen-client-urls: http://10.104.0.3:2379 - ``` - -2. 
Enable and start the `etcd` service on all nodes: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - ``` - - During the node start, etcd searches for other cluster nodes defined in the configuration. If the other nodes are not yet running, the start may fail by a quorum timeout. This is expected behavior. Try starting all nodes again at the same time for the etcd cluster to be created. - -3. Check the etcd cluster members. Connect to one of the nodes and run the following command: - - ```{.bash data-prompt="$"} - $ sudo etcdctl member list - ``` - - The output resembles the following: - - ``` - 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false - 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false - c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true - ``` diff --git a/docs/solutions/ha-setup-apt.md b/docs/solutions/ha-setup-apt.md index c3baa590a..24c34773f 100644 --- a/docs/solutions/ha-setup-apt.md +++ b/docs/solutions/ha-setup-apt.md @@ -26,17 +26,21 @@ This guide provides instructions on how to set up a highly available PostgreSQL ## Initial setup -It’s not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other’s names and allow their seamless communication. +Configure every node. -1. Run the following command on each node. Change the node name to `node1`, `node2` and `node3` respectively: +### Set up hostnames in the `/etc/hosts` file - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node-1 - ``` +It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. 
Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. + +=== "node1" -2. Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: + 1. Set up the hostname for the node - === "node1" + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node1 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="3 4" # Cluster IP and names @@ -45,7 +49,15 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - === "node2" +=== "node2" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node2 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="2 4" # Cluster IP and names @@ -54,7 +66,15 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - === "node3" +=== "node3" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node3 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="2 3" # Cluster IP and names @@ -63,11 +83,17 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - === "HAproxy-demo" +=== "HAproxy-demo" + + 1. 
Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname HAProxy-demo + ``` - The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: + 2. Modify the `/etc/hosts` file. The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: - ```text hl_lines="4 5 6" + ```text hl_lines="3 4 5" # Cluster IP and names 10.104.0.6 HAProxy-demo 10.104.0.1 node1 @@ -75,22 +101,29 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - ### Install the software Run the following commands on `node1`, `node2` and `node3`: 1. Install Percona Distribution for PostgreSQL - * [Install `percona-release` :octicons-link-external-16:](https://www.percona.com/doc/percona-repo-config/installing.html). + * Disable the upstream `postgresql-{{pgversion}}` package. - * Enable the repository: + * Install the `percona-release` repository management tool - ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg16 - ``` + --8<-- "percona-release-apt.md" + + * Enable the repository + + ```{.bash data-prompt="$"} + $ sudo percona-release setup ppg{{pgversion}} + ``` - * [Install Percona Distribution for PostgreSQL packages](../apt.md). + * Install Percona Distribution for PostgreSQL package + + ```{.bash data-prompt="$"} + $ sudo apt install percona-postgresql-{{pgversion}} + ``` 2. Install some Python and auxiliary packages to help with Patroni and etcd @@ -123,114 +156,60 @@ Run the following commands on `node1`, `node2` and `node3`: ## Configure etcd distributed store -The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems. The most popular implementation of the distributed configuration store is etcd. 
etcd is deployed as a cluster for fault-tolerance and requires an odd number of members (n/2+1) to agree on updates to the cluster state. An etcd cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances. - -This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/) - -If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd). - -The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command. +In our implementation we use etcd distributed configuration store. [Refresh your knowledge about etcd](high-availability.md#etcd). !!! note - - Users with deeper understanding of how etcd works can configure and start all etcd nodes at a time and bootstrap the cluster using one of the following methods: - - * Static in the case when the IP addresses of the cluster nodes are known - * Discovery service - for cases when the IP addresses of the cluster are not known ahead of time. - - See the [How to configure etcd nodes simultaneously](../how-to.md#how-to-configure-etcd-nodes-simultaneously) section for details. - -### Configure `node1` - -1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node name and IP address with the actual name and IP address of your node. 
- - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.1:2380 - listen-peer-urls: http://10.104.0.1:2380 - advertise-client-urls: http://10.104.0.1:2379 - listen-client-urls: http://10.104.0.1:2379 - ``` - -2. Start the `etcd` service to apply the changes on `node1`. - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl start etcd - $ sudo systemctl status etcd - ``` - -3. Check the etcd cluster members on `node1`: - - ```{.bash data-prompt="$"} - $ sudo etcdctl member list - ``` - Sample output: + If you [installed the software from tarballs](../tarball.md), you must first [enable etcd](../enable-extensions.md#etcd) before configuring it. - ```{.text .no-copy} - 21d50d7f768f153a: name=default peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true - ``` +To get started with `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. There are the following bootstrapping mechanisms: -4. Add the `node2` to the cluster. Run the following command on `node1`: - - ```{.bash data-prompt="$"} - $ sudo etcdctl member add node2 http://10.104.0.2:2380 - ``` - - ??? example "Sample output" +* Static in the case when the IP addresses of the cluster nodes are known +* Discovery service - for cases when the IP addresses of the cluster are not known ahead of time. - ```{.text .no-copy} - Added member named node2 with ID 10042578c504d052 to cluster - - etcd_NAME="node2" - etcd_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380" - etcd_INITIAL_CLUSTER_STATE="existing" - ``` +Since we know the IP addresses of the nodes, we will use the static method. 
For using the discovery service, please refer to the [etcd documentation :octicons-external-link-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}.
 
-### Configure `node2`
+We will configure and start all etcd nodes in parallel. This can be done either by modifying each node's configuration or using the command line options. Use whichever method you prefer.
 
-1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
+### Method 1. Modify the configuration file
 
-    ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node2'
-    initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: existing
-    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380
-    data-dir: /var/lib/etcd
-    initial-advertise-peer-urls: http://10.104.0.2:2380
-    listen-peer-urls: http://10.104.0.2:2380
-    advertise-client-urls: http://10.104.0.2:2379
-    listen-client-urls: http://10.104.0.2:2379
-    ```
+1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
 
-3. 
Start the `etcd` service to apply the changes on `node2`:
 
+    === "node1"
 
-    ```{.bash data-prompt="$"}
-    $ sudo systemctl enable --now etcd
-    $ sudo systemctl start etcd
-    $ sudo systemctl status etcd
-    ```
-
-### Configure `node3`
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node1'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.1:2380
+        listen-peer-urls: http://10.104.0.1:2380
+        advertise-client-urls: http://10.104.0.1:2379
+        listen-client-urls: http://10.104.0.1:2379
+        ```
 
-1. Add `node3` to the cluster. **Run the following command on `node1`**
+    === "node2"
 
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member add node3 http://10.104.0.3:2380
-    ```
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node2'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.2:2380
+        listen-peer-urls: http://10.104.0.2:2380
+        advertise-client-urls: http://10.104.0.2:2379
+        listen-client-urls: http://10.104.0.2:2379
+        ```
 
-2. On `node3`, create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. 
    === "node3"
 
     ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node1'
+    name: 'node3'
     initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: existing
-    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+    initial-cluster-state: new
+    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
     data-dir: /var/lib/etcd
     initial-advertise-peer-urls: http://10.104.0.3:2380
     listen-peer-urls: http://10.104.0.3:2380
@@ -238,7 +217,7 @@ The `etcd` cluster is first started in one node and then the subsequent nodes ar
     listen-client-urls: http://10.104.0.3:2379
     ```
 
-3. Start the `etcd` service to apply the changes.
+2. Enable and start the `etcd` service on all nodes:
 
     ```{.bash data-prompt="$"}
     $ sudo systemctl enable --now etcd
@@ -246,19 +225,65 @@ The `etcd` cluster is first started in one node and then the subsequent nodes ar
     $ sudo systemctl status etcd
     ```
 
-4. Check the etcd cluster members.
+    During the node start, etcd searches for other cluster nodes defined in the configuration. If the other nodes are not yet running, the start may fail with a quorum timeout. This is expected behavior. Try starting all nodes again at the same time for the etcd cluster to be created.
 
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member list
+--8<-- "check-etcd.md"
+
+### Method 2. Start etcd nodes with command line options
+
+1. On each etcd node, set the environment variables for the cluster members, the cluster token and state:
+
+    ```
+    TOKEN=PostgreSQL_HA_Cluster_1
+    CLUSTER_STATE=new
+    NAME_1=node1
+    NAME_2=node2
+    NAME_3=node3
+    HOST_1=10.104.0.1
+    HOST_2=10.104.0.2
+    HOST_3=10.104.0.3
+    CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
     ```
 
-    ??? example "Sample output"
+2. 
Start each etcd node in parallel using the following command: + + === "node1" + + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_1} + THIS_IP=${HOST_1} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` + + === "node2" + + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_2} + THIS_IP=${HOST_2} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` + + === "node3" + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_3} + THIS_IP=${HOST_3} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} ``` - 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false - 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false - c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true - ``` + +--8<-- "check-etcd.md" ## Configure Patroni @@ -294,7 +319,7 @@ Run the following commands on all nodes. You can do this in parallel: SCOPE="cluster_1" ``` -2. Create the `/etc/patroni/patroni.yml` configuration file. 
Add the following configuration for `node1`: +2. Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`: ```bash echo " @@ -395,11 +420,11 @@ Run the following commands on all nodes. You can do this in parallel: Following these, there is a `bootstrap` section that contains the PostgreSQL configurations and the steps to run once the database is initialized. The `pg_hba.conf` entries specify all the other nodes that can connect to this node and their authentication mechanism. -3. Check that the systemd unit file `patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. +3. Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. If it's **not created**, create it manually and specify the following contents within: - ```ini title="/etc/systemd/system/patroni.service" + ```ini title="/etc/systemd/system/percona-patroni.service" [Unit] Description=Runners to orchestrate a high-availability PostgreSQL After=syslog.target network.target @@ -435,7 +460,9 @@ Run the following commands on all nodes. You can do this in parallel: $ sudo systemctl daemon-reload ``` -5. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. Start with the `node1` first, wait for the service to come to live, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node: +5. Repeat steps 1-4 on the remaining nodes. In the end you must have the configuration file and the systemd unit file created on every node. +6. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. 
Start with `node1` first, wait for the service to come to life, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node:
+
 
     ```{.bash data-prompt="$"}
     $ sudo systemctl enable --now patroni
@@ -444,7 +471,7 @@ Run the following commands on all nodes. You can do this in parallel:
     $ sudo systemctl status patroni
     ```
 
 When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file.
 
-6. Check the service to see if there are errors:
+7. Check the service to see if there are errors:
 
     ```{.bash data-prompt="$"}
     $ sudo journalctl -fu patroni
@@ -454,31 +481,22 @@ When Patroni starts, it initializes PostgreSQL (because the service is not curre
 
 Changing the patroni.yml file and restarting the service will not have any effect here because the bootstrap section specifies the configuration to apply when PostgreSQL is first started in the node. It will not repeat the process even if the Patroni configuration file is modified and the service is restarted.
 
-7. Check the cluster:
+8. Check the cluster. 
Run the following command on any node: ```{.bash data-prompt="$"} $ patronictl -c /etc/patroni/patroni.yml list $SCOPE ``` - The output on `node1` resembles the following: + The output resembles the following: ```{.text .no-copy} - + Cluster: cluster_1 --+---------+---------+----+-----------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+-------------+---------+---------+----+-----------+ - | node-1 | 10.0.100.1 | Leader | running | 1 | | - +--------+-------------+---------+---------+----+-----------+ - ``` - - On the remaining nodes: - - ```{.text .no-copy} - + Cluster: cluster_1 --+---------+---------+----+-----------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+-------------+---------+---------+----+-----------+ - | node-1 | 10.0.100.1 | Leader | running | 1 | | - | node-2 | 10.0.100.2 | Replica | running | 1 | 0 | - +--------+-------------+---------+---------+----+-----------+ + + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ + | Member | Host | Role | State | TL | Lag in MB | + +--------+------------+---------+-----------+----+-----------+ + | node1 | 10.0.100.1 | Leader | running | 1 | | + | node2 | 10.0.100.2 | Replica | streaming | 1 | 0 | + | node3 | 10.0.100.3 | Replica | streaming | 1 | 0 | + +--------+------------+---------+-----------+----+-----------+ ``` If Patroni has started properly, you should be able to locally connect to a PostgreSQL node using the following command: @@ -490,7 +508,7 @@ $ sudo psql -U postgres The command output is the following: ``` -psql (16.0) +psql ({{pgversion}}) Type "help" for help. 
postgres=# diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md index d88fe1595..816723331 100644 --- a/docs/solutions/ha-setup-yum.md +++ b/docs/solutions/ha-setup-yum.md @@ -29,15 +29,15 @@ This guide provides instructions on how to set up a highly available PostgreSQL It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. -1. Run the following command on each node. Change the node name to `node1`, `node2` and `node3` respectively: +=== "node1" - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node-1 - ``` + 1. Set up the hostname for the node -2. Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node1 + ``` - === "node1" + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="3 4" # Cluster IP and names @@ -46,7 +46,15 @@ It's not necessary to have name resolution, but it makes the whole setup more re 10.104.0.3 node3 ``` - === "node2" +=== "node2" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node2 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. 
Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="2 4" # Cluster IP and names @@ -55,7 +63,15 @@ It's not necessary to have name resolution, but it makes the whole setup more re 10.104.0.3 node3 ``` - === "node3" +=== "node3" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node3 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="2 3" # Cluster IP and names @@ -64,11 +80,17 @@ It's not necessary to have name resolution, but it makes the whole setup more re 10.104.0.3 node3 ``` - === "HAproxy-demo" +=== "HAproxy-demo" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname HAProxy-demo + ``` - The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: + 2. Modify the `/etc/hosts` file. The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: - ```text hl_lines="4 5 6" + ```text hl_lines="3 4 5" # Cluster IP and names 10.104.0.6 HAProxy-demo 10.104.0.1 node1 @@ -78,16 +100,26 @@ It's not necessary to have name resolution, but it makes the whole setup more re ### Install the software -1. Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from Percona repository: +Run the following commands on `node1`, `node2` and `node3`: + +1. 
Install Percona Distribution for PostgreSQL:
+
+    * Check the [platform specific notes](../yum.md#for-percona-distribution-for-postgresql-packages)
+    * Install the `percona-release` repository management tool
+
+    --8<-- "percona-release-yum.md"
 
-    * [Install `percona-release` :octicons-link-external-16:](https://www.percona.com/doc/percona-repo-config/installing.html).
     * Enable the repository:
 
     ```{.bash data-prompt="$"}
     $ sudo percona-release setup ppg16
     ```
 
-    * [Install Percona Distribution for PostgreSQL packages](../yum.md).
+    * Install the Percona Distribution for PostgreSQL package:
+
+    ```{.bash data-prompt="$"}
+    $ sudo yum install percona-postgresql{{pgversion}}-server
+    ```
 
 !!! important
 
@@ -116,112 +148,60 @@ It's not necessary to have name resolution, but it makes the whole setup more re
 
 ## Configure etcd distributed store
 
-The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems. The most popular implementation of the distributed configuration store is etcd. etcd is deployed as a cluster for fault-tolerance and requires an odd number of members (n/2+1) to agree on updates to the cluster state. An etcd cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances.
-
-This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).
-
-If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd).
-
-The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command. 
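The etcd majority rule referenced throughout this guide — a cluster of n members needs (n/2+1) votes to agree on an update — can be made concrete with a short shell sketch. This is purely illustrative and not part of the installation steps:

```bash
# Majority (quorum) size and failure tolerance for small etcd clusters.
# A cluster of n members needs floor(n/2)+1 votes to agree on an update,
# so it keeps working only while at most n - (floor(n/2)+1) members are down.
for n in 1 2 3 4 5; do
  majority=$(( n / 2 + 1 ))
  tolerated=$(( n - majority ))
  echo "members=$n majority=$majority tolerated_failures=$tolerated"
done
```

Note that a 4-member cluster tolerates the same single-node failure as a 3-member one, which is why odd-sized clusters of 3 or 5 members are the recommended deployment.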
+In our implementation we use etcd distributed configuration store. [Refresh your knowledge about etcd](high-availability.md#etcd). !!! note - - Users with deeper understanding of how etcd works can configure and start all etcd nodes at a time and bootstrap the cluster using one of the following methods: - - * Static in the case when the IP addresses of the cluster nodes are known - * Discovery service - for cases when the IP addresses of the cluster are not known ahead of time. - - See the [How to configure etcd nodes simultaneously](../how-to.md#how-to-configure-etcd-nodes-simultaneously) section for details. - -### Configure `node1` - -1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node name and IP address with the actual name and IP address of your node. - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.1:2380 - listen-peer-urls: http://10.104.0.1:2380 - advertise-client-urls: http://10.104.0.1:2379 - listen-client-urls: http://10.104.0.1:2379 - ``` - -2. Enable and start the `etcd` service to apply the changes on `node1`. - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl status etcd - ``` - -3. Check the etcd cluster members on `node1`: - - ```{.bash data-prompt="$"} - $ sudo etcdctl member list - ``` - Sample output: - - ```{.text .no-copy} - 21d50d7f768f153a: name=default peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true - ``` + If you [installed the software from tarballs](../tarball.md), you must first [enable etcd](../enable-extensions.md#etcd) before configuring it. -4. Add the `node2` to the cluster. 
Run the following command on `node1`:
 
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member add node2 http://10.104.0.2:2380
-    ```
 
-    ??? example "Sample output"
+To get started with `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. There are the following bootstrapping mechanisms:
 
-        ```{.text .no-copy}
-        Added member named node2 with ID 10042578c504d052 to cluster
-        
-        etcd_NAME="node2"
-        etcd_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380"
-        etcd_INITIAL_CLUSTER_STATE="existing"
-        ```
+* Static in the case when the IP addresses of the cluster nodes are known
+* Discovery service - for cases when the IP addresses of the cluster are not known ahead of time.
 
-### Configure `node2`
+Since we know the IP addresses of the nodes, we will use the static method. For using the discovery service, please refer to the [etcd documentation :octicons-external-link-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}.
 
-1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
+We will configure and start all etcd nodes in parallel. This can be done either by modifying each node's configuration or using the command line options. Use whichever method you prefer.
 
-    ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node2'
-    initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: existing
-    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380
-    data-dir: /var/lib/etcd
-    initial-advertise-peer-urls: http://10.104.0.2:2380
-    listen-peer-urls: http://10.104.0.2:2380
-    advertise-client-urls: http://10.104.0.2:2379
-    listen-client-urls: http://10.104.0.2:2379
-    ```
+### Method 1. Modify the configuration file
 
-3. 
Enable and start the `etcd` service to apply the changes on `node2`:
 
+1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
 
-    ```{.bash data-prompt="$"}
-    $ sudo systemctl enable --now etcd
-    $ sudo systemctl status etcd
-    ```
 
-### Configure `node3`
+    === "node1"
 
-1. Add `node3` to the cluster. **Run the following command on `node1`**
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node1'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.1:2380
+        listen-peer-urls: http://10.104.0.1:2380
+        advertise-client-urls: http://10.104.0.1:2379
+        listen-client-urls: http://10.104.0.1:2379
+        ```
 
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member add node3 http://10.104.0.3:2380
-    ```
+    === "node2"
 
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node2'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.2:2380
+        listen-peer-urls: http://10.104.0.2:2380
+        advertise-client-urls: http://10.104.0.2:2379
+        listen-client-urls: http://10.104.0.2:2379
+        ```
 
-2. On `node3`, create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. 
    === "node3"
 
     ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node1'
+    name: 'node3'
     initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: existing
-    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+    initial-cluster-state: new
+    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
     data-dir: /var/lib/etcd
     initial-advertise-peer-urls: http://10.104.0.3:2380
     listen-peer-urls: http://10.104.0.3:2380
@@ -229,26 +209,73 @@
     listen-client-urls: http://10.104.0.3:2379
     ```
 
-3. Enable and start the `etcd` service to apply the changes.
+2. Enable and start the `etcd` service on all nodes:
 
     ```{.bash data-prompt="$"}
     $ sudo systemctl enable --now etcd
+    $ sudo systemctl start etcd
     $ sudo systemctl status etcd
     ```
 
-4. Check the etcd cluster members.
+    During the node start, etcd searches for other cluster nodes defined in the configuration. If the other nodes are not yet running, the start may fail with a quorum timeout. This is expected behavior. Try starting all nodes again at the same time for the etcd cluster to be created.
 
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member list
+--8<-- "check-etcd.md"
+
+### Method 2. Start etcd nodes with command line options
+
+1. On each etcd node, set the environment variables for the cluster members, the cluster token and state:
+
+    ```
+    TOKEN=PostgreSQL_HA_Cluster_1
+    CLUSTER_STATE=new
+    NAME_1=node1
+    NAME_2=node2
+    NAME_3=node3
+    HOST_1=10.104.0.1
+    HOST_2=10.104.0.2
+    HOST_3=10.104.0.3
+    CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
    ```
 
-    ??? example "Sample output"
+2. 
Start each etcd node in parallel using the following command: + + === "node1" + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_1} + THIS_IP=${HOST_1} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` + + === "node2" + + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_2} + THIS_IP=${HOST_2} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} ``` - 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false - 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false - c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true - ``` + + === "node3" + + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_3} + THIS_IP=${HOST_3} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` + +--8<-- "check-etcd.md" ## Configure Patroni @@ -301,8 +328,8 @@ Run the following commands on all nodes. You can do this in parallel: $ sudo chmod 700 /data/pgsql ``` -3. Create the `/etc/patroni/patroni.yml` configuration file. 
Add the following configuration:
-    
+3. Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`:
+    
     ```bash
     echo "
     namespace: ${NAMESPACE}
@@ -393,11 +420,11 @@ Run the following commands on all nodes. You can do this in parallel:
     " | sudo tee -a /etc/patroni/patroni.yml
     ```
 
-4. Check that the systemd unit file `patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step.
+4. Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step.
 
-    If it's **not** created, create it manually and specify the following contents within:
-    
-    ```ini title="/etc/systemd/system/patroni.service"
+    If it's **not created**, create it manually and specify the following contents within:
+    
+    ```ini title="/etc/systemd/system/percona-patroni.service"
     [Unit]
     Description=Runners to orchestrate a high-availability PostgreSQL
     After=syslog.target network.target
@@ -433,7 +460,8 @@ Run the following commands on all nodes. You can do this in parallel:
     $ sudo systemctl daemon-reload
     ```
 
-6. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. Start with the `node1` first, wait for the service to come to live, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node:
+6. Repeat steps 1-5 on the remaining nodes. In the end you must have the configuration file and the systemd unit file created on every node.
+7. Now it's time to start Patroni. Run the following commands on all nodes, but not in parallel. Start with `node1` first, wait for the service to come to life, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node:
 
     ```{.bash data-prompt="$"}
     $ sudo systemctl enable --now patroni
@@ -442,7 +470,7 @@ Run the following commands on all nodes. 
You can do this in parallel: When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file. -7. Check the service to see if there are errors: +8. Check the service to see if there are errors: ```{.bash data-prompt="$"} $ sudo journalctl -fu patroni @@ -463,32 +491,23 @@ Run the following commands on all nodes. You can do this in parallel: postgres=# ``` -8. When all nodes are up and running, you can check the cluster status using the following command: +9. When all nodes are up and running, you can check the cluster status using the following command: ```{.bash data-prompt="$"} $ sudo patronictl -c /etc/patroni/patroni.yml list ``` - The output on `node1` resembles the following: - - ```{.text .no-copy} - + Cluster: cluster_1 --+---------+---------+----+-----------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+-------------+---------+---------+----+-----------+ - | node-1 | 10.0.100.1 | Leader | running | 1 | | - +--------+-------------+---------+---------+----+-----------+ - ``` - - On the remaining nodes: - - ```{.text .no-copy} - + Cluster: cluster_1 --+---------+---------+----+-----------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+-------------+---------+---------+----+-----------+ - | node-1 | 10.0.100.1 | Leader | running | 1 | | - | node-2 | 10.0.100.2 | Replica | running | 1 | 0 | - +--------+-------------+---------+---------+----+-----------+ - ``` + The output resembles the following: + + ```{.text .no-copy} + + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ + | Member | Host | Role | State | TL | Lag in MB | + +--------+------------+---------+-----------+----+-----------+ + | node1 | 10.0.100.1 | Leader | running | 1 | | + | node2 | 10.0.100.2 | Replica | streaming | 1 | 0 | + | node3 | 10.0.100.3 | Replica | streaming | 1 | 0 | + 
+--------+------------+---------+-----------+----+-----------+
    ```

## Configure HAProxy

diff --git a/docs/solutions/high-availability.md b/docs/solutions/high-availability.md
index f79e3a1b5..e6118b3fc 100644
--- a/docs/solutions/high-availability.md
+++ b/docs/solutions/high-availability.md
@@ -38,7 +38,7 @@ There are several methods to achieve high availability in PostgreSQL. This solut
 
 ## Patroni
 
-[Patroni :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/) is a template for you to create your own customized, high-availability solution using Python and - for maximum accessibility - a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes.
+[Patroni :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/) is an open-source tool that helps to deploy, manage, and monitor highly available PostgreSQL clusters using physical streaming replication. Patroni relies on a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes to store the cluster configuration.
 
 ### Key benefits of Patroni:
 
@@ -50,6 +50,21 @@ There are several methods to achieve high availability in PostgreSQL. This solut
 * Distributed consensus for every action and configuration.
 * Integration with Linux watchdog for avoiding split-brain syndrome.
 
+## etcd
+
+As stated before, Patroni uses a distributed configuration store to store the cluster configuration, health and status. The most popular implementation of the distributed configuration store is etcd due to its simplicity, consistency and reliability. etcd not only stores the cluster data, it also provides the distributed consensus that Patroni relies on to elect a new primary node (a leader in etcd terminology).
+
+etcd is deployed as a cluster for fault-tolerance. An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state.
+
+The recommended approach is to deploy an odd-sized cluster (e.g. 3, 5 or 7 nodes). 
The odd number of nodes ensures that there is always a majority of nodes available to make decisions and keep the cluster running smoothly. This majority is crucial for maintaining consistency and availability, even if one node fails. For a cluster with n members, the majority is (n/2)+1.
+
+To better illustrate this concept, let's take an example of clusters with 3 nodes and 4 nodes.
+
+In a 3-node cluster, if one node fails, the remaining 2 nodes still form a majority (2 out of 3), and the cluster can continue to operate.
+
+In a 4-node cluster, the majority is 3 nodes. If one node fails, the remaining 3 nodes still form a majority and the cluster keeps operating; but if two nodes fail, only 2 out of 4 nodes remain, which is not enough to form a majority, and the cluster stops functioning. A fourth node therefore does not improve fault tolerance over a 3-node cluster.
+
+In this solution we use a 3-node etcd cluster that resides on the same hosts as PostgreSQL and Patroni.
 
 !!! admonition "See also"
 
diff --git a/mkdocs-base.yml b/mkdocs-base.yml
index 94e2d287f..77e8de359 100644
--- a/mkdocs-base.yml
+++ b/mkdocs-base.yml
@@ -193,7 +193,6 @@ nav:
   - minor-upgrade.md
   - migration.md
   - Troubleshooting guide: troubleshooting.md
-  - How to: how-to.md
   - Uninstall: uninstalling.md
   - Release Notes:
     - "Release notes index": "release-notes.md"
diff --git a/snippets/check-etcd.md b/snippets/check-etcd.md
new file mode 100644
index 000000000..1bd516fd2
--- /dev/null
+++ b/snippets/check-etcd.md
@@ -0,0 +1,47 @@
+3. Check the etcd cluster members. Use `etcdctl` for this purpose. Ensure that `etcdctl` interacts with etcd using API version 3 and knows which nodes, or endpoints, to communicate with. For this, we will define the required information as environment variables. Run the following commands on one of the nodes:
+
+    ```
+    export ETCDCTL_API=3
+    HOST_1=10.104.0.1
+    HOST_2=10.104.0.2
+    HOST_3=10.104.0.3
+    ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
+    ```
+
+4. Now, list the cluster members and output the result as a table as follows:
+
+    ```{.bash data-prompt="$"}
+    $ sudo etcdctl --endpoints=$ENDPOINTS -w table member list
+    ```
+
+    ??? 
example "Sample output" + + ``` + +------------------+---------+-------+------------------------+----------------------------+------------+ + | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | + +------------------+---------+-------+------------------------+----------------------------+------------+ + | 4788684035f976d3 | started | node2 | http://10.104.0.2:2380 | http://192.168.56.102:2379 | false | + | 67684e355c833ffa | started | node3 | http://10.104.0.3:2380 | http://192.168.56.103:2379 | false | + | 9d2e318af9306c67 | started | node1 | http://10.104.0.1:2380 | http://192.168.56.101:2379 | false | + +------------------+---------+-------+------------------------+----------------------------+------------+ + ``` + +5. To check what node is currently the leader, use the following command + + ```{.bash data-prompt="$"} + $ sudo etcdctl --endpoints=$ENDPOINTS -w table endpoint status + ``` + + ??? example "Sample output" + + ```{.text .no-copy} + +-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ + | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | + +-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ + | 10.104.0.1:2379 | 9d2e318af9306c67 | 3.5.16 | 20 kB | true | false | 2 | 10 | 10 | | + | 10.104.0.2:2379 | 4788684035f976d3 | 3.5.16 | 20 kB | false | false | 2 | 10 | 10 | | + | 10.104.0.3:2379 | 67684e355c833ffa | 3.5.16 | 20 kB | false | false | 2 | 10 | 10 | | + +-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ + ``` + + \ No newline at end of file diff --git a/snippets/percona-release-apt.md b/snippets/percona-release-apt.md new file mode 100644 index 000000000..c3a80d194 --- /dev/null +++ b/snippets/percona-release-apt.md 
@@ -0,0 +1,24 @@ +1. Install the `curl` download utility if it's not installed already: + + ```{.bash data-prompt="$"} + $ sudo apt update + $ sudo apt install curl + ``` + +2. Download the `percona-release` repository package: + + ```{.bash data-prompt="$"} + $ curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb + ``` + +3. Install the downloaded repository package and its dependencies using `apt`: + + ```{.bash data-prompt="$"} + $ sudo apt install gnupg2 lsb-release ./percona-release_latest.generic_all.deb + ``` + +4. Refresh the local cache to update the package information: + + ```{.bash data-prompt="$"} + $ sudo apt update + ``` \ No newline at end of file diff --git a/snippets/percona-release-yum.md b/snippets/percona-release-yum.md new file mode 100644 index 000000000..05d669385 --- /dev/null +++ b/snippets/percona-release-yum.md @@ -0,0 +1,5 @@ +Run the following command as the `root` user or with `sudo` privileges: + +```{.bash data-prompt="$"} +$ sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm +``` \ No newline at end of file From 9afae17a6280b4eb30777b6e84d407617c6c2cfd Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 17 Dec 2024 12:03:38 +0100 Subject: [PATCH 09/41] Updated version-select.js (#708) --- docs/js/version-select.js | 170 +++++++++++++------------------------- 1 file changed, 57 insertions(+), 113 deletions(-) diff --git a/docs/js/version-select.js b/docs/js/version-select.js index dd66d6b4a..b24febf38 100644 --- a/docs/js/version-select.js +++ b/docs/js/version-select.js @@ -1,120 +1,64 @@ -setTimeout(() => { - const asideMenu = document.getElementsByClassName('sphinxsidebarwrapper')[0]; - hideSubMenus(); - asideMenu.style.display = 'block'; -}, 500); - -function hideSubMenus() { - const asideMenu = document.getElementsByClassName('sphinxsidebarwrapper')[0]; - const activeCheckboxClass = 'custom-button--active'; - const activeBackgroundClass = 
'custom-button--main-active'; - const links = Array.from(asideMenu.getElementsByTagName('a')); - const accordionLinks = links.filter(links => links.nextElementSibling && links.nextElementSibling.localName === 'ul'); - const simpleLinks = links.filter(links => !links.nextElementSibling && links.parentElement.localName === 'li'); - - simpleLinks.forEach(simpleLink => { - simpleLink.parentElement.style.listStyleType = 'disc'; - simpleLink.parentElement.style.marginLeft = '20px'; +/* + * Custom version of same taken from mike code for injecting version switcher into percona.com + */ + +window.addEventListener('DOMContentLoaded', function () { + // This is a bit hacky. Figure out the base URL from a known CSS file the + // template refers to... + var ex = new RegExp('/?css/version-select.css$'); + var sheet = document.querySelector('link[href$="version-select.css"]'); + + if (!sheet) { + return; + } + + var ABS_BASE_URL = sheet.href.replace(ex, ''); + var CURRENT_VERSION = ABS_BASE_URL.split('/').pop(); + + function makeSelect(options, selected) { + var select = document.createElement('select'); + select.classList.add('btn'); + select.classList.add('btn-primary'); + + options.forEach(function (i) { + var option = new Option(i.text, i.value, undefined, i.value === selected); + select.add(option); }); - accordionLinks.forEach((link, index) => { - insertButton(link, index); + return select; + } + + var xhr = new XMLHttpRequest(); + xhr.open('GET', ABS_BASE_URL + '/../versions.json'); + xhr.onload = function () { + var versions = JSON.parse(this.responseText); + + var realVersion = versions.find(function (i) { + return ( + i.version === CURRENT_VERSION || i.aliases.includes(CURRENT_VERSION) + ); + }).version; + + var select = makeSelect( + versions.map(function (i) { + return { text: i.title, value: i.version }; + }), + realVersion + ); + select.addEventListener('change', function (event) { + window.location.href = ABS_BASE_URL + '/../' + this.value; }); - const buttons = 
Array.from(document.getElementsByClassName('custom-button')); - - buttons.forEach(button => button.addEventListener('click', event => { - event.preventDefault(); - const current = event.currentTarget; - const parent = current.parentElement; - const isMain = Array.from(parent.classList).includes('toctree-l1'); - const isMainActive = Array.from(parent.classList).includes(activeBackgroundClass); - const targetClassList = Array.from(current.classList); - - toggleElement(targetClassList.includes(activeCheckboxClass), current, activeCheckboxClass); - if (isMain) { - toggleElement(isMainActive, parent, activeBackgroundClass); - } - })); - -// WIP var toctree_heading = document.getElementById("toctree-heading"); -// NOT NEEDED? asideMenu.parentNode.insertBefore(styleDomEl, asideMenu); -} - -function toggleElement(condition, item, className) { - const isButton = item.localName === 'button'; - - if (!condition) { - const previousActive = Array.from(item.parentElement.parentElement.getElementsByClassName('list-item--active')); - if (isButton) { - localStorage.setItem(item.id, 'true'); + var container = document.createElement('div'); + container.id = 'custom_select'; + container.classList.add('side-column-block'); - if (previousActive.length) { - previousActive.forEach(previous => { + // Add menu + container.appendChild(select); - const previousActiveButtons = Array.from(previous.getElementsByClassName('custom-button--active')); - removeClass(previous, ['list-item--active', 'custom-button--main-active']); + var sidebar = document.querySelector('#version-select-wrapper'); // Inject menu into element with this ID + sidebar.appendChild(container); + }; - if (previousActiveButtons.length) { - previousActiveButtons.forEach(previousButton => { - - removeClass(previousButton, 'custom-button--active'); - localStorage.removeItem(previousButton.id); - }); - } - }) - } - } - addClass(item, className); - addClass(item.parentElement, 'list-item--active'); - } else { - removeClass(item, 
className); - removeClass(item.parentElement, 'list-item--active'); - - if (isButton) { - localStorage.removeItem(item.id); - } - } -} -function addClass(item, classes) { - item.classList.add(...Array.isArray(classes) ? classes : [classes]); -} -function removeClass(item, classes) { - item.classList.remove(...Array.isArray(classes) ? classes : [classes]); -} -function insertButton(element, id) { - const button = document.createElement('button'); - const isMain = Array.from(element.parentElement.classList).includes('toctree-l1'); - button.id = id; - addClass(button, 'custom-button'); - if (localStorage.getItem(id)) { - addClass(button, 'custom-button--active'); - addClass(element.parentElement, 'list-item--active'); - if (isMain) { - addClass(element.parentElement, 'custom-button--main-active'); - } - } - element.insertAdjacentElement('beforebegin', button); -} -function makeSelect() { - const custom_select = document.getElementById('custom_select'); - const select_active_option = custom_select.getElementsByClassName('select-active-text')[0]; - const custom_select_list = document.getElementById('custom_select_list'); - - select_active_option.innerHTML = window.location.href.includes('') ? 
- custom_select_list.getElementsByClassName('custom-select__option')[1].innerHTML : - custom_select_list.getElementsByClassName('custom-select__option')[0].innerHTML; - - document.addEventListener('click', event => { - if (event.target.parentElement.id === 'custom_select' || event.target.id === 'custom_select') { - custom_select_list.classList.toggle('select-hidden') - } - if (Array.from(event.target.classList).includes('custom-select__option')) { - select_active_option.innerHTML = event.target.innerHTML; - } - if (event.target.id !== 'custom_select' && event.target.parentElement.id !== 'custom_select') { - custom_select_list.classList.add('select-hidden') - } - - }); -} \ No newline at end of file + xhr.send(); +}); \ No newline at end of file From 485d03e210829949b04b07089dcf4ebb4de2a870 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Mon, 13 Jan 2025 14:27:49 +0200 Subject: [PATCH 10/41] PG-1283 Fixed typos in yum install instructions for HA (#715) modified: docs/solutions/ha-setup-yum.md --- docs/solutions/ha-setup-yum.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md index 816723331..a2ef5d66b 100644 --- a/docs/solutions/ha-setup-yum.md +++ b/docs/solutions/ha-setup-yum.md @@ -118,7 +118,7 @@ Run the following commands on `node1`, `node2` and `node3`: * Install Percona Distribution for PostgreSQL package ```{.bash data-prompt="$"} - $ sudo apt install percona-postgresql{{pgversion}}-server + $ sudo yum install percona-postgresql{{pgversion}}-server ``` !!! important @@ -142,8 +142,8 @@ Run the following commands on `node1`, `node2` and `node3`: 4. 
Stop and disable all installed services: ```{.bash data-prompt="$"} - $ sudo systemctl stop {etcd,patroni,postgresql} - $ systemctl disable {etcd,patroni,postgresql} + $ sudo systemctl stop {etcd,patroni,postgresql-{{pgversion}}} + $ sudo systemctl disable {etcd,patroni,postgresql-{{pgversion}}} ``` ## Configure etcd distributed store From 57c850fde5396d2b1dc93d393dd5e639def9173c Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Mon, 20 Jan 2025 13:34:18 +0200 Subject: [PATCH 11/41] PG-1299 Removed deprecated extension from Contrib extensions table (#720) --- docs/contrib.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/contrib.md b/docs/contrib.md index af43b5620..de668bb39 100644 --- a/docs/contrib.md +++ b/docs/contrib.md @@ -51,4 +51,4 @@ Find the list of controb modules and extensions included in Percona Distribution |[tsm_system_time](https://www.postgresql.org/docs/16/tsm-system-time.html) | | Provides the table sampling method SYSTEM_TIME, which can be used in the TABLESAMPLE clause of a SELECT command.| |[unaccent](https://www.postgresql.org/docs/16/unaccent.html) | |A text search dictionary that removes accents (diacritic signs) from lexemes. It's a filtering dictionary, which means its output is always passed to the next dictionary (if any). This allows accent-insensitive processing for full text search. | |[uuid-ossp](https://www.postgresql.org/docs/16/uuid-ossp.html) |Required | Provides functions to generate universally unique identifiers (UUIDs) using one of several standard algorithms | -|[xml2](https://www.postgresql.org/docs/16/xml2.html) |Required | Provides XPath querying and XSLT functionality. 
It allows for complex querying and transformation of XML data stored in PostgreSQL.| + From 41e6a24d56ab72cf93db73395c9d06371998a371 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 30 Jan 2025 15:01:59 +0200 Subject: [PATCH 12/41] PG-1350 Updated the Get Help widget (#725) Created a dedicated Help page Moved banner from page bottom to right-hand nav bar modified: _resource/overrides/main.html new file: _resource/overrides/partials/banner.html modified: docs/css/extra.css new file: docs/get-help.md modified: mkdocs-base.yml deleted: snippets/services-banner.md --- _resource/overrides/main.html | 16 +++------------- _resource/overrides/partials/banner.html | 9 +++++++++ docs/css/extra.css | 7 ++++++- docs/get-help.md | 24 ++++++++++++++++++++++++ mkdocs-base.yml | 6 +++--- snippets/services-banner.md | 13 ------------- 6 files changed, 45 insertions(+), 30 deletions(-) create mode 100644 _resource/overrides/partials/banner.html create mode 100644 docs/get-help.md delete mode 100644 snippets/services-banner.md diff --git a/_resource/overrides/main.html b/_resource/overrides/main.html index 83d4a6c16..ca7b047ba 100644 --- a/_resource/overrides/main.html +++ b/_resource/overrides/main.html @@ -6,19 +6,6 @@ {# Import the theme's layout. #} {% extends "base.html" %} -{%- macro relbar2 () %} -
-
-
-

Contact Us

-

For free technical help, visit the Percona Community Forum.
-

To report bugs or submit feature requests, open a JIRA ticket.
-

For paid support and managed or consulting services , contact Percona Sales.

- -
-
-
-{%- endmacro %} {% block scripts %} @@ -72,6 +59,9 @@

Contact Us

{% include "partials/toc.html" %}
+
+ {% include "partials/banner.html" %} +
{% endif %} diff --git a/_resource/overrides/partials/banner.html b/_resource/overrides/partials/banner.html new file mode 100644 index 000000000..830718b90 --- /dev/null +++ b/_resource/overrides/partials/banner.html @@ -0,0 +1,9 @@ +
+

+

For help, click the link below to get free database assistance or contact our experts for personalized support.

+ +
+ + Get help from Percona +
+
\ No newline at end of file diff --git a/docs/css/extra.css b/docs/css/extra.css index 30f5a6278..1fd45fbe9 100644 --- a/docs/css/extra.css +++ b/docs/css/extra.css @@ -4,4 +4,9 @@ top: 0.6rem; left: 0.6rem; } - } \ No newline at end of file + } + + .md-sidebar__inner { + font-size: 0.65rem; /* Font size */ + line-height: 1.6; +} \ No newline at end of file diff --git a/docs/get-help.md b/docs/get-help.md new file mode 100644 index 000000000..4b253da2f --- /dev/null +++ b/docs/get-help.md @@ -0,0 +1,24 @@ +# Get help from Percona + +Our documentation guides are packed with information, but they can’t cover everything you need to know about Percona Distribution for PostgreSQL. They also won’t cover every scenario you might come across. Don’t be afraid to try things out and ask questions when you get stuck. + +## Percona's Community Forum + +Be a part of a space where you can tap into a wealth of knowledge from other database enthusiasts and experts who work with Percona’s software every day. While our service is entirely free, keep in mind that response times can vary depending on the complexity of the question. You are engaging with people who genuinely love solving database challenges. + +We recommend visiting our [Community Forum](https://forums.percona.com/t/welcome-to-perconas-community-forum/7){:target="_blank"}. It’s an excellent place for discussions, technical insights, and support around Percona database software. If you’re new and feeling a bit unsure, our [FAQ](https://forums.percona.com/faq){:target="_blank"} and [Guide for New Users](https://forums.percona.com/t/faq-guide-for-new-users/8562){:target="_blank"} ease you in. + +If you have thoughts, feedback, or ideas, the community team would like to hear from you at [Any ideas on how to make the forum better?](https://forums.percona.com/t/any-ideas-on-how-to-make-the-forum-better/11522){:target="blank"}. We’re always excited to connect and improve everyone's experience. 
+ +## Percona experts + +[Percona experts](https://www.percona.com/services/consulting){:target="_blank"} bring years of experience in tackling tough database performance issues and design challenges. We understand your challenges when managing complex database environments. That's why we offer various services to help you simplify your operations and achieve your goals. + +| Service | Description | +|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| 24/7 Expert Support | Our dedicated team of database experts is available 24/7 to assist you with any database issues. We provide flexible support plans tailored to your specific needs. | +| Hands-On Database Management | Our managed services team can take over the day-to-day management of your database infrastructure, freeing up your time to focus on other priorities. | +| Expert Consulting | Our experienced consultants provide guidance on database topics like architecture design, migration planning, performance optimization, and security best practices. | +| Comprehensive Training | Our training programs help your team develop skills to manage databases effectively, offering virtual and in-person courses. | + +We're here to help you every step of the way. Whether you need a quick fix or a long-term partnership, we're ready to provide your expertise and support. 
\ No newline at end of file diff --git a/mkdocs-base.yml b/mkdocs-base.yml index 77e8de359..d994a96cc 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -98,8 +98,8 @@ markdown_extensions: pymdownx.inlinehilite: {} pymdownx.snippets: base_path: ["snippets"] - auto_append: - - services-banner.md + # auto_append: + # - services-banner.md pymdownx.emoji: emoji_index: !!python/name:material.extensions.emoji.twemoji emoji_generator: !!python/name:material.extensions.emoji.to_svg @@ -152,7 +152,7 @@ extra: nav: - 'Home': 'index.md' - + - get-help.md - Get started: - Quickstart guide: installing.md - 1. Install: diff --git a/snippets/services-banner.md b/snippets/services-banner.md deleted file mode 100644 index 2bb683aac..000000000 --- a/snippets/services-banner.md +++ /dev/null @@ -1,13 +0,0 @@ - -
- -## Get expert help { .title } - -If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. - -
- -[:material-forum-outline: Community Forum](https://forums.percona.com/c/postgresql/25?utm_campaign=Doc%20pages) [:percona-logo: Get a Percona Expert](https://www.percona.com/about/contact) - -
- From d4fdcc0a0c03aba3420b1a0595126301de69363b Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 4 Feb 2025 15:25:14 +0200 Subject: [PATCH 13/41] PG-1359 Fixed install dependencies syntax in commands (#737) modified: docs/yum.md --- docs/yum.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/docs/yum.md b/docs/yum.md index d483a4222..ee75fe24c 100644 --- a/docs/yum.md +++ b/docs/yum.md @@ -32,7 +32,8 @@ You may need to install the `percona-postgresql{{pgversion}}-devel` package when === "RHEL8" ```{.bash data-prompt="$"} - $ sudo yum --enablerepo=codeready-builder-for-rhel-8-rhui-rpms install perl-IPC-Run -y + $ sudo yum --enablerepo=codeready-builder-for-rhel-8-rhui-rpms + $ sudo dnf install perl-IPC-Run -y ``` === "Rocky Linux 8" @@ -45,7 +46,8 @@ You may need to install the `percona-postgresql{{pgversion}}-devel` package when === "Oracle Linux 8" ```{.bash data-prompt="$"} - $ sudo dnf config-manager --set-enabled ol8_codeready_builder install perl-IPC-Run -y + $ sudo dnf config-manager --set-enabled ol8_codeready_builder + $ sudo dnf install perl-IPC-Run -y ``` === "Rocky Linux 9" @@ -59,7 +61,8 @@ You may need to install the `percona-postgresql{{pgversion}}-devel` package when === "Oracle Linux 9" ```{.bash data-prompt="$"} - $ sudo dnf config-manager --set-enabled ol9_codeready_builder install perl-IPC-Run -y + $ sudo dnf config-manager --set-enabled ol9_codeready_builder + $ sudo dnf install perl-IPC-Run -y ``` ### For `percona-patroni` package From 1e27362338f310542b217827a5a8f1a858daa2ea Mon Sep 17 00:00:00 2001 From: Alina Derkach Date: Thu, 6 Feb 2025 19:12:55 +0100 Subject: [PATCH 14/41] DOCS-135 [DOCS] Fix the colour of the search results in dark mode (#742) Update design.css --- docs/css/design.css | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/css/design.css b/docs/css/design.css index e452993e0..f4861d6db 100644 --- a/docs/css/design.css +++ b/docs/css/design.css @@ -86,6 +86,7 @@ 
/* Defaults */ --md-default-bg-color: var(--white); + --md-default-fg-color: var(--stone900); --md-default-fg-color--light: rgba(44,50,62,0.72); --md-default-fg-color--lighter: rgba(44,50,62,0.40); --md-default-fg-color--lightest: rgba(44,50,62,0.25); @@ -119,6 +120,7 @@ /* Defaults */ --md-default-bg-color: var(--stone900); + --md-default-fg-color: var(--white); --md-default-fg-color--light: rgba(251,251,251,0.72); --md-default-fg-color--lighter: rgba(251,251,251,0.4); --md-default-fg-color--lightest: rgba(209,213,222,0.25); @@ -162,7 +164,7 @@ .md-typeset h1 { margin: 0 0 0.75em; } -.md-header { +.md-header :not(.md-search__suggest) { font-family: var(--fHeading); font-weight: bold; } From bb94ac54794c88fd323c64cf1f9e3510f681e18c Mon Sep 17 00:00:00 2001 From: Alina Derkach Date: Mon, 24 Feb 2025 08:29:16 +0100 Subject: [PATCH 15/41] DOCS-159 Implement the Lead generation forms (16) (#753) Update get-help.md --- docs/get-help.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/docs/get-help.md b/docs/get-help.md index 4b253da2f..1eab330c1 100644 --- a/docs/get-help.md +++ b/docs/get-help.md @@ -12,7 +12,10 @@ If you have thoughts, feedback, or ideas, the community team would like to hear ## Percona experts -[Percona experts](https://www.percona.com/services/consulting){:target="_blank"} bring years of experience in tackling tough database performance issues and design challenges. We understand your challenges when managing complex database environments. That's why we offer various services to help you simplify your operations and achieve your goals. +Percona experts bring years of experience in tackling tough database performance issues and design challenges. + +
+We understand your challenges when managing complex database environments. That's why we offer various services to help you simplify your operations and achieve your goals.
 
 | Service | Description |
 |----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | 24/7 Expert Support | Our dedicated team of database experts is available 24/7 to assist you with any database issues. We provide flexible support plans tailored to your specific needs. |
 | Hands-On Database Management | Our managed services team can take over the day-to-day management of your database infrastructure, freeing up your time to focus on other priorities. |
 | Expert Consulting | Our experienced consultants provide guidance on database topics like architecture design, migration planning, performance optimization, and security best practices. |
 | Comprehensive Training | Our training programs help your team develop skills to manage databases effectively, offering virtual and in-person courses. |
 
-We're here to help you every step of the way. Whether you need a quick fix or a long-term partnership, we're ready to provide your expertise and support. \ No newline at end of file
+We're here to help you every step of the way. Whether you need a quick fix or a long-term partnership, we're ready to provide the expertise and support you need.
From 6c91f94b353ce00766dedad84afbb4c3bb64252a Mon Sep 17 00:00:00 2001
From: Anastasia Alexandrova 
Date: Thu, 27 Feb 2025 14:41:09 +0100
Subject: [PATCH 16/41] PG-1300 Added PostGIS from tarballs (#748)

modified: docs/solutions/postgis-deploy.md
modified: docs/tarball.md

Signed-off-by: Anastasia Alexadrova 
---
 docs/solutions/postgis-deploy.md | 4 ++++
 docs/tarball.md | 6 +++---
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/docs/solutions/postgis-deploy.md b/docs/solutions/postgis-deploy.md
index 77fb41892..56a3c1278 100644
--- a/docs/solutions/postgis-deploy.md
+++ b/docs/solutions/postgis-deploy.md
@@ -66,6 +66,10 @@ The following document provides guidelines how to install PostGIS and how to run
     FROM pg_available_extensions WHERE name LIKE 'postgis%' or name LIKE 'address%';
     ```
 
+=== ":octicons-download-16: From tarballs"
+
+    PostGIS is included in the binary tarball and is a part of the `percona-postgresql{{pgversion}}` binary. Use the [install from tarballs](../tarball.md) tutorial to install it.
+
 
 ## Enable PostGIS extension
 
diff --git a/docs/tarball.md b/docs/tarball.md
index c9539390a..51e589d35 100644
--- a/docs/tarball.md
+++ b/docs/tarball.md
@@ -19,7 +19,7 @@ The tarballs include the following components:
 
 | Component | Description |
 |-----------|-------------|
-| percona-postgresql{{pgversion}}| The latest version of PostgreSQL server and the following extensions:
- `pgaudit`
- `pgAudit_set_user`
- `pg_repack`
- `pg_stat_monitor`
- `pg_gather`
- `wal2json`
- `pgvector`
- the set of [contrib extensions](contrib.md)| +| percona-postgresql{{pgversion}}| The latest version of PostgreSQL server and the following extensions:
- `pgaudit`
- `pgAudit_set_user`
- `pg_repack`
- `pg_stat_monitor`
- `pg_gather`
- `wal2json`
- `postGIS`
- `pgvector`
- the set of [contrib extensions](contrib.md)| | percona-haproxy | A high-availability solution and load-balancing solution | | percona-patroni | A high-availability solution for PostgreSQL | | percona-pgbackrest| A backup and restore tool | @@ -142,7 +142,7 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use 9. Connect to `psql` ```{.bash data-prompt="$"} - $ /opt/pgdistro/percona-postgresql{{pgversion}}/bin/psql + $ /opt/pgdistro/percona-postgresql{{pgversion}}/bin/psql -d postgres ``` ??? example "Sample output" @@ -154,7 +154,7 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use postgres=# ``` -### Start the components +## Start the components After you unpacked the tarball and added the location of the components' binaries to the `$PATH` variable, the components are available for use. You can invoke a component by running its command-line tool. From 05a92f19a4a1ef739b03f1d5bc4f326cf706e5b8 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 27 Feb 2025 14:42:04 +0100 Subject: [PATCH 17/41] PKG-388 Updated the tags in Run in Docker steps (#734) --- docs/docker.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/docker.md b/docs/docker.md index f6af0a067..565bbac5d 100644 --- a/docs/docker.md +++ b/docs/docker.md @@ -33,14 +33,14 @@ For more information about using Docker, see the [Docker Docs :octicons-link-ext 1. Start a Percona Distribution for PostgreSQL container as follows: ```{.bash data-prompt="$"} - $ docker run --name container-name -e POSTGRES_PASSWORD=secret -d percona/percona-distribution-postgresql:-multi + $ docker run --name container-name -e POSTGRES_PASSWORD=secret -d percona/percona-distribution-postgresql:{{dockertag}} ``` Where: * `container-name` is the name you assign to your container * `POSTGRES_PASSWORD` is the superuser password - * `tag-multi` is the tag specifying the version you need. For example, `{{dockertag}}-multi`. 
The `multi` part of the tag serves to identify the architecture (x86_64 or ARM64) and pull the respective image. See the [full list of tags :octicons-link-external-16:](https://hub.docker.com/r/percona/percona-distribution-postgresql/tags/). + * `{{dockertag}}` is the tag specifying the version you need. Docker identifies the architecture (x86_64 or ARM64) and pulls the respective image. See the [full list of tags :octicons-link-external-16:](https://hub.docker.com/r/percona/percona-distribution-postgresql/tags/). !!! tip @@ -56,7 +56,7 @@ For more information about using Docker, see the [Docker Docs :octicons-link-ext 2. Start the container: ```{.bash data-prompt="$"} - $ docker run --name container-name --env-file ./.my-pg.env -d percona/percona-distribution-postgresql:-multi + $ docker run --name container-name --env-file ./.my-pg.env -d percona/percona-distribution-postgresql:{{dockertag}} ``` 2. Connect to the container's interactive terminal: @@ -87,14 +87,14 @@ where: The following command starts another container instance and runs the `psql` command line client against your original container, allowing you to execute SQL statements against your database: ```{.bash data-prompt="$"} -$ docker run -it --network container:db-container-name --name container-name percona/percona-distribution-postgresql:-multi psql -h address -U postgres +$ docker run -it --network container:db-container-name --name container-name percona/percona-distribution-postgresql:{{dockertag}} psql -h address -U postgres ``` Where: * `db-container-name` is the name of your database container * `container-name` is the name of your container that you will use to connect to the database container using the `psql` command line client -`tag-multi` is the tag specifying the version you need. For example, `{{dockertag}}-multi`. The `multi` part of the tag serves to identify the architecture (x86_64 or ARM64) and pull the respective image. 
+* `{{dockertag}}` is the tag specifying the version you need. Docker identifies the architecture (x86_64 or ARM64) and pulls the respective image. * `address` is the network address where your database container is running. Use 127.0.0.1, if the database container is running on the local machine/host. ## Enable `pg_stat_monitor` From f0a9ebc872c2c40f17026452543501780f1c176b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E3=82=B0=E3=83=AA=E3=82=A2=E3=83=B3=E3=83=89=E3=83=AD?= Date: Thu, 27 Feb 2025 10:43:37 -0300 Subject: [PATCH 18/41] Update pgbackrest.md (#688) * Update pgbackrest.md Changed configuration file used (postgresql.yml is wrong) and we also need at least cluster name in the reload command. Optionally we can also use a node name, this is why I added that too. --------- Co-authored-by: Anastasia Alexandrova --- docs/solutions/pgbackrest.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/solutions/pgbackrest.md b/docs/solutions/pgbackrest.md index 5b147f3f0..4481874cb 100644 --- a/docs/solutions/pgbackrest.md +++ b/docs/solutions/pgbackrest.md @@ -476,10 +476,10 @@ Run the following commands on `node1`, `node2`, and `node3`. (...) ``` - Reload the changed configurations: + Reload the changed configurations. Specify either the cluster name or a node name for the following command: ```{.bash data-prompt="$"} - $ patronictl -c /etc/patroni/postgresql.yml reload + $ patronictl -c /etc/patroni/patroni.yml reload cluster_name node_name ``` :material-information: Note: When configuring a PostgreSQL server that is not managed by Patroni to archive/restore WALs from the `pgBackRest` server, edit the server's main configuration file directly and adjust the `archive_command` and `restore_command` variables as shown above. 
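For a server that is not managed by Patroni, those two settings could be appended to the configuration file along the following lines. This is a sketch only: the stanza name `mycluster` and the file path are placeholders for your environment; `archive-push` and `archive-get` are the pgBackRest commands that ship and fetch WAL segments.

```shell
# Sketch only: append pgBackRest WAL archiving settings to a PostgreSQL
# configuration file. "mycluster" and the file path are placeholders;
# adjust them to your environment.
PGCONF=./postgresql.conf.example
cat >> "$PGCONF" <<'EOF'
archive_mode = on
archive_command = 'pgbackrest --stanza=mycluster archive-push %p'
restore_command = 'pgbackrest --stanza=mycluster archive-get %f "%p"'
EOF
grep -c 'pgbackrest --stanza' "$PGCONF"   # prints 2
```

After changing these settings, restart PostgreSQL (changing `archive_mode` requires a restart, not just a reload).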
From b569bbd3bdbbe806d6f2533ff7a1499d2701f83f Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 27 Feb 2025 14:47:49 +0100 Subject: [PATCH 19/41] PG-1346 Release notes 16.8 (#724) PG-1346 Release notes 16.8 new file: docs/release-notes-v16.8.md modified: docs/release-notes.md modified: mkdocs-base.yml modified: variables.yml Signed-off-by: Anastasia Alexadrova --- .github/workflows/main.yml | 8 +++--- docs/apt.md | 2 +- docs/release-notes-v16.8.md | 57 +++++++++++++++++++++++++++++++++++++ docs/release-notes.md | 2 ++ docs/repo-overview.md | 4 +++ docs/yum.md | 2 +- mkdocs-base.yml | 4 ++- requirements.txt | 3 +- variables.yml | 8 +++--- 9 files changed, 78 insertions(+), 12 deletions(-) create mode 100644 docs/release-notes-v16.8.md diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index fb124a1bc..3827dbeb4 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -11,13 +11,13 @@ jobs: steps: #Pull the latest changes - - name: Chekout code - uses: actions/checkout@v2 + - name: Checkout code + uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 with: fetch-depth: 0 #Prepare the env - name: Set up Python - uses: actions/setup-python@v2 + uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b # v5.3.0 with: python-version: '3.x' @@ -43,7 +43,7 @@ jobs: - name: Deploy docs run: | mike deploy 16 -b publish -p - mike retitle 16 "16.6" -b publish -p + mike retitle 16 "16.8" -b publish -p # - name: Install Node.js 14.x # uses: percona-platform/setup-node@v2 diff --git a/docs/apt.md b/docs/apt.md index 1d50264c4..2685382fd 100644 --- a/docs/apt.md +++ b/docs/apt.md @@ -43,7 +43,7 @@ Run all the commands in the following sections as root or using the `sudo` comma ### Install packages -=== "Install using meta-package" +=== "Install using meta-package (deprecated)" The [meta package](repo-overview.md#percona-ppg-server){:target=”_blank”} enables you to install several components of the 
distribution in one go. diff --git a/docs/release-notes-v16.8.md b/docs/release-notes-v16.8.md new file mode 100644 index 000000000..9644c782a --- /dev/null +++ b/docs/release-notes-v16.8.md @@ -0,0 +1,57 @@ +# Percona Distribution for PostgreSQL 16.8 ({{date.16_8}}) + +[Installation](installing.md){.md-button} + +--8<-- "release-notes-intro.md" + +This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.7](https://www.postgresql.org/docs/16/release-16-7.html) and PostgreSQL 16.8. + +## Release Highlights + +This release fixes [CVE-2025-1094](https://www.postgresql.org/support/security/CVE-2025-1094/), which closed a vulnerability in the `libpq` PostgreSQL client library but introduced a regression in the handling of non-null-terminated strings. Whether the error surfaced depended on how a PostgreSQL client implemented this behavior. This regression affects versions 17.3, 16.7, 15.11, 14.16, and 13.19. For this reason, version 16.7 was skipped. + +### Improved security and user experience for Docker images + +* The Percona Distribution for PostgreSQL Docker image is now based on Universal Base Image (UBI) version 9, which includes the latest security fixes. This makes the image compliant with Red Hat certification and ensures that containers run seamlessly on the Red Hat OpenShift Container Platform. + +* You no longer have to specify the `{{dockertag}}-multi` tag when you run Percona Distribution for PostgreSQL in Docker. Instead, use the `percona/percona-distribution-postgresql:{{dockertag}}` tag. Docker automatically identifies the architecture of your operating system and pulls the corresponding image. Refer to [Run in Docker](docker.md) for how to get started. + +### PostGIS is included in tarballs + +We have extended Percona Distribution for PostgreSQL tarballs with PostGIS - an open-source extension to handle spatial data.
This way you can install and run PostgreSQL as a geospatial database on hosts without direct access to the Internet. Learn more about [installing from tarballs](tarball.md) and [Spatial data manipulation](postgis.md). + +### Deprecation of meta packages + +[Meta-packages for Percona Distribution for PostgreSQL](repo-overview.md#repository-contents) are deprecated and will be removed in future releases. + +## Supplied third-party extensions + +Review each extension’s release notes for what’s new, improvements, or bug fixes. The following is the list of extensions available in Percona Distribution for PostgreSQL. + +| Extension | Version | Description | +| ------------------- | -------------- | ---------------------------- | +| [etcd](https://etcd.io/)| 3.5.18 | A distributed, reliable key-value store for setting up highly available Patroni clusters | +| [HAProxy](http://www.haproxy.org/) | 2.8.13 | a high-availability and load-balancing solution | +| [Patroni](https://patroni.readthedocs.io/en/latest/) | 4.0.4 | an HA (High Availability) solution for PostgreSQL | +| [PgAudit](https://www.pgaudit.org/) | 16 | provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL | +| [pgAudit set_user](https://github.com/pgaudit/set_user)| 4.1.0 | provides an additional layer of logging and control when unprivileged users must escalate themselves to superusers or object owner roles in order to perform needed maintenance tasks.| +| [pgBackRest](https://pgbackrest.org/) | 2.54.2 | a backup and restore solution for PostgreSQL | +| [pgBadger](https://github.com/darold/pgbadger) | 13.0 | a fast PostgreSQL Log Analyzer.
| +| [PgBouncer](https://www.pgbouncer.org/) | 1.24.0 | a lightweight connection pooler for PostgreSQL | +| [pg_gather](https://github.com/jobinau/pg_gather) | v29 | an SQL script for running the diagnostics of the health of PostgreSQL cluster | +| [pgpool2](https://git.postgresql.org/gitweb/?p=pgpool2.git;a=summary) | 4.5.5 | a middleware between PostgreSQL server and client for high availability, connection pooling, and load balancing. | +| [pg_repack](https://github.com/reorg/pg_repack) | 1.5.2 | rebuilds PostgreSQL database objects | +| [pg_stat_monitor](https://github.com/percona/pg_stat_monitor) | 2.1.0 | collects and aggregates statistics for PostgreSQL and provides histogram information. | +| [pgvector](https://github.com/pgvector/pgvector) | v0.8.0 | A vector similarity search for PostgreSQL | +| [PostGIS](https://github.com/postgis/postgis) | 3.3.8 | a spatial extension for PostgreSQL. | +| [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common) | 270 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time. | +| [wal2json](https://github.com/eulerto/wal2json) | 2.6 | a PostgreSQL logical decoding JSON output plugin | + + +For Red Hat Enterprise Linux 8 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters. + +Percona Distribution for PostgreSQL is also shipped with the [libpq](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of +library functions that allow client programs to pass queries to the PostgreSQL +backend server and to receive the results of these queries." 
diff --git a/docs/release-notes.md b/docs/release-notes.md index da50d44b3..b09237010 100644 --- a/docs/release-notes.md +++ b/docs/release-notes.md @@ -1,5 +1,7 @@ # Percona Distribution for PostgreSQL release notes +* [Percona Distribution for PostgreSQL 16.8](release-notes-v16.8.md) ({{date.16_8}}) + * [Percona Distribution for PostgreSQL 16.6](release-notes-v16.6.md) ({{date.16_6}}) * [Percona Distribution for PostgreSQL 16.4](release-notes-v16.4.md) ({{date.16_4}}) diff --git a/docs/repo-overview.md b/docs/repo-overview.md index cd7abae19..c9127039e 100644 --- a/docs/repo-overview.md +++ b/docs/repo-overview.md @@ -12,6 +12,10 @@ Percona Distribution for PostgreSQL provides individual packages for its compone Using a meta-package, you can install all components it contains in one go. +!!! note + + Meta packages are deprecated and will be removed in future releases. + ### `percona-ppg-server` === "Package name on Debian/Ubuntu" diff --git a/docs/yum.md b/docs/yum.md index ee75fe24c..ec62dc962 100644 --- a/docs/yum.md +++ b/docs/yum.md @@ -231,7 +231,7 @@ $ sudo yum -y install curl ### Install packages -=== "Install using meta-package" +=== "Install using meta-package (deprecated)" The [meta package](repo-overview.md#percona-ppg-server){:target=”_blank”} enables you to install several components of the distribution in one go. 
diff --git a/mkdocs-base.yml b/mkdocs-base.yml index d994a96cc..a08921644 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -112,6 +112,7 @@ plugins: section-index: {} search: separator: '[\s\-,:!=\[\]()"`/]+|\.(?!\d)|&[lg]t;|(?!\b)(?=[A-Z][a-z])' + open-in-new-tab: {} git-revision-date-localized: enable_creation_date: true enabled: !ENV [ENABLED_GIT_REVISION_DATE, True] @@ -131,7 +132,7 @@ plugins: output_path: '_pdf/PerconaDistributionPostgreSQL-16.pdf' cover_title: 'Distribution for PostgreSQL Documentation' - cover_subtitle: 16.6 (December 3, 2024) + cover_subtitle: 16.8 (February 27, 2025) author: 'Percona Technical Documentation Team' cover_logo: docs/_images/Percona_Logo_Color.png debug_html: false @@ -196,6 +197,7 @@ nav: - Uninstall: uninstalling.md - Release Notes: - "Release notes index": "release-notes.md" + - release-notes-v16.8.md - release-notes-v16.6.md - release-notes-v16.4.md - release-notes-v16.3.md diff --git a/requirements.txt b/requirements.txt index f1d3d82d1..031d9e13a 100644 --- a/requirements.txt +++ b/requirements.txt @@ -14,4 +14,5 @@ mkdocs-section-index mkdocs-htmlproofer-plugin mkdocs-meta-descriptions-plugin mike -Pillow > 10.1.0 \ No newline at end of file +Pillow > 10.1.0 +mkdocs-open-in-new-tab \ No newline at end of file diff --git a/variables.yml b/variables.yml index 5df7aa08a..ac6c82c91 100644 --- a/variables.yml +++ b/variables.yml @@ -1,14 +1,14 @@ # PG Variables set for HTML output # See also mkdocs.yml plugins.with-pdf.cover_subtitle and output_path -release: 'release-notes-v16.6' -dockertag: '16.6' +release: 'release-notes-v16.8' +dockertag: '16.8' pgversion: '16' -pgsmversion: '2.1.0' - +pgsmversion: '2.1.1' date: + 16_8: 2025-02-27 16_6: 2024-12-03 16_4: 2024-09-10 From 2cca0830bdf5fc625349c012ac7b7f675ec61401 Mon Sep 17 00:00:00 2001 From: Anastasia Alexadrova Date: Thu, 27 Feb 2025 14:53:32 +0100 Subject: [PATCH 20/41] Fix pgsm version for 16.8 Signed-off-by: Anastasia Alexadrova --- docs/release-notes-v16.8.md | 
4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/release-notes-v16.8.md b/docs/release-notes-v16.8.md index 9644c782a..b2ab8ef61 100644 --- a/docs/release-notes-v16.8.md +++ b/docs/release-notes-v16.8.md @@ -43,10 +43,10 @@ The following is the list of extensions available in Percona Distribution for Po | [pg_gather](https://github.com/jobinau/pg_gather) | v29 | an SQL script for running the diagnostics of the health of PostgreSQL cluster | | [pgpool2](https://git.postgresql.org/gitweb/?p=pgpool2.git;a=summary) | 4.5.5 | a middleware between PostgreSQL server and client for high availability, connection pooling, and load balancing. | | [pg_repack](https://github.com/reorg/pg_repack) | 1.5.2 | rebuilds PostgreSQL database objects | -| [pg_stat_monitor](https://github.com/percona/pg_stat_monitor) | 2.1.0 | collects and aggregates statistics for PostgreSQL and provides histogram information. | +| [pg_stat_monitor](https://github.com/percona/pg_stat_monitor) | {{pgsmversion}} | collects and aggregates statistics for PostgreSQL and provides histogram information. | | [pgvector](https://github.com/pgvector/pgvector) | v0.8.0 | A vector similarity search for PostgreSQL | | [PostGIS](https://github.com/postgis/postgis) | 3.3.8 | a spatial extension for PostgreSQL. | -| [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common) | 270 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time. | +| [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common) | 267 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time. 
| | [wal2json](https://github.com/eulerto/wal2json) | 2.6 | a PostgreSQL logical decoding JSON output plugin | From b969550de23b980efec7f2c4991d22271719ee8a Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Wed, 5 Mar 2025 13:12:36 +0100 Subject: [PATCH 21/41] Removed PGSM and pg_tde pages, added links to respective docs (#766) Fixed links to pgsm install for apt and yum modified: docs/apt.md modified: docs/extensions.md deleted: docs/pg-stat-monitor.md deleted: docs/pg-tde.md modified: docs/telemetry.md modified: docs/yum.md --- docs/apt.md | 2 +- docs/extensions.md | 8 +- docs/pg-stat-monitor.md | 277 ---------------------------------------- docs/pg-tde.md | 197 ---------------------------- docs/telemetry.md | 106 +++++++-------- docs/yum.md | 2 +- 6 files changed, 58 insertions(+), 534 deletions(-) delete mode 100644 docs/pg-stat-monitor.md delete mode 100644 docs/pg-tde.md diff --git a/docs/apt.md b/docs/apt.md index 2685382fd..dabe4ab31 100644 --- a/docs/apt.md +++ b/docs/apt.md @@ -88,7 +88,7 @@ Run all the commands in the following sections as root or using the `sudo` comma $ sudo apt install percona-patroni ``` - [Install `pg_stat_monitor`](pg-stat-monitor.md) + [Install `pg_stat_monitor` :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/install.html) Install `pgBouncer`: diff --git a/docs/extensions.md b/docs/extensions.md index 4c7dd7d87..9ce4a189d 100644 --- a/docs/extensions.md +++ b/docs/extensions.md @@ -4,12 +4,10 @@ Percona Distribution for PostgreSQL is not only the PostgreSQL server. It also i Percona Distribution for PostgreSQL includes the extensions that have been tested to work together. These extensions encompass the following: -* [PostgreSQL contrib modules and utilities](contrib.md) -* Extensions authored by Percona: - - * [`pg_stat_monitor`](pg-stat-monitor.md) - * [`pg_tde`](pg-tde.md) +Percona Distribution for PostgreSQL includes the extensions that have been tested to work together.
These extensions encompass the following: +* [PostgreSQL contrib modules and utilities](contrib.md) +* [Extensions authored by Percona](percona-ext.md) * [Third-party components](third-party.md) Percona also supports [extra modules](https://repo.percona.com/ppg-16-extras/), not included in Percona Distribution for PostgreSQL but tested to work with it. diff --git a/docs/pg-stat-monitor.md b/docs/pg-stat-monitor.md deleted file mode 100644 index 7793c5fed..000000000 --- a/docs/pg-stat-monitor.md +++ /dev/null @@ -1,277 +0,0 @@ -# pg_stat_monitor - -!!! note - - This document describes the functionality of pg_stat_monitor {{pgsmversion}}. - -## Overview - -`pg_stat_monitor` is a Query Performance Monitoring -tool for PostgreSQL. It collects various statistics data such as query statistics, query plan, SQL comments and other performance insights. The collected data is aggregated and presented in a single view. This allows you to view queries from performance, application and analysis perspectives. - -`pg_stat_monitor` groups statistics data and writes it in a storage unit called *bucket*. The data is added and stored in a bucket for the defined period – the bucket lifetime. This allows you to identify performance issues and patterns based on time. - -You can specify the following: - - -* The number of buckets. Together they form a bucket chain. -* Bucket size. This is the amount of shared memory allocated for buckets. Memory is divided equally among buckets. -* Bucket lifetime. - -When a bucket lifetime expires, `pg_stat_monitor` resets all statistics and writes the data in the next bucket in the chain. When the last bucket’s lifetime expires, `pg_stat_monitor` returns to the first bucket. - -!!! important - - The contents of the bucket will be overwritten. In order not to lose the data, make sure to read the bucket before `pg_stat_monitor` starts writing new data to it. 
- - -### Views - -#### pg_stat_monitor view - -The `pg_stat_monitor` view contains all the statistics collected and aggregated by the extension. This view contains one row for each distinct combination of metrics and whether it is a top-level statement or not (up to the maximum number of distinct statements that the module can track). For details about available metrics, refer to the [`pg_stat_monitor` view reference :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/reference.html). - -The following are the primary keys for pg_stat_monitor: - -* `bucket` -* `userid` -* `datname` -* `queryid` -* `client_ip` -* `planid` -* `application_name` - -A new row is created for each key in the `pg_stat_monitor` view. - -For security reasons, only superusers and members of the `pg_read_all_stats` role are allowed to see the SQL text, `client_ip` and `queryid` of queries executed by other users. Other users can see the statistics, however, if the view has been installed in their database. - -#### pg_stat_monitor_settings view (dropped) - -Starting with version 2.0.0, the `pg_stat_monitor_settings` view is deprecated and removed. All `pg_stat_monitor` configuration parameters are now available though the `pg_settings` view using the following query: - -```sql -SELECT name, setting, unit, context, vartype, source, min_val, max_val, enumvals, boot_val, reset_val, pending_restart FROM pg_settings WHERE name LIKE '%pg_stat_monitor%'; -``` - -For backward compatibility, you can create the `pg_stat_monitor_settings` view using the following SQL statement: - -```sql -CREATE VIEW pg_stat_monitor_settings - -AS - -SELECT * - -FROM pg_settings - -WHERE name like 'pg_stat_monitor.%'; -``` - -In `pg_stat_monitor` version 1.1.1 and earlier, the `pg_stat_monitor_settings` view shows one row per `pg_stat_monitor` configuration parameter. 
It displays configuration parameter name, value, default value, description, minimum and maximum values, and whether a restart is required for a change in value to be effective. - -To learn more, see the [Changing the configuration](#changing-the-configuration) section. - -## Installation - -This section describes how to install `pg_stat_monitor` from Percona repositories. To learn about other installation methods, see the [Installation :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/install.html) section in the `pg_stat_monitor` documentation. - -**Preconditions**: - -To install `pg_stat_monitor` from Percona repositories, you need to subscribe to them. To do this, you must have the [`percona-release` repository management tool :octicons-link-external-16:](https://www.percona.com/doc/percona-repo-config/installing.html) up and running. - -To install `pg_stat_monitor`, run the following commands: - -=== ":material-debian: On Debian and Ubuntu" - - 1. Enable the repository - - ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg16 - ``` - - 2. Install the package: - - ```{.bash data-prompt="$"} - $ sudo apt-get install percona-pg-stat-monitor16 - ``` - -=== ":material-redhat: On Red Hat Enterprise Linux and derivatives" - - 1. Enable the repository - - ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg16 - ``` - - 2. Install the package: - - ```{.bash data-prompt="$"} - $ sudo yum install percona-pg-stat-monitor16 - ``` - -## Setup - -`pg_stat_monitor` requires additional setup in order to use it with PostgreSQL. The setup steps are the following: - - -1. Add `pg_stat_monitor` in the `shared_preload_libraries` configuration parameter. - - The recommended way to modify PostgreSQL configuration file is using the [ALTER SYSTEM :octicons-link-external-16:](https://www.postgresql.org/docs/15/sql-altersystem.html) command. 
[Connect to psql](connect.md) and use the following command: - - ```sql - ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_monitor'; - ``` - - The parameter value is written to the `postgresql.auto.conf` file which is read in addition with `postgresql.conf` file. - - !!! note - - To use `pg_stat_monitor` together with `pg_stat_statements`, specify both modules separated by commas for the `ALTER SYSTEM SET` command. - - The order of modules is important: `pg_stat_monitor` must be specified **after** `pg_stat_statements`: - - ```sql - ALTER SYSTEM SET shared_preload_libraries = ‘pg_stat_statements, pg_stat_monitor’ - ``` - -2. Start or restart the `postgresql` instance to enable `pg_stat_monitor`. Use the following command for restart: - - - === ":material-debian: On Debian and Ubuntu" - - ```{.bash data-prompt="$"} - $ sudo systemctl restart postgresql.service - ``` - - - === ":material-redhat: On Red Hat Enterprise Linux and derivatives" - - ```{.bash data-prompt="$"} - $ sudo systemctl restart postgresql-16 - ``` - - -3. Create the extension. Connect to `psql` and use the following command: - - ```sql - CREATE EXTENSION pg_stat_monitor; - ``` - - By default, the extension is created against the `postgres` database. You need to create the extension on every database where you want to collect statistics. - -!!! 
tip - - To check the version of the extension, run the following command in the `psql` session: - - ```sql - SELECT pg_stat_monitor_version(); - ``` - -## Usage - -For example, to view the IP address of the client application that made the query, run the following command: - -```sql -SELECT DISTINCT userid::regrole, pg_stat_monitor.datname, substr(query,0, 50) AS query, calls, bucket, bucket_start_time, queryid, client_ip -FROM pg_stat_monitor, pg_database -WHERE pg_database.oid = oid; -``` - -Output: - -``` - userid | datname | query | calls | bucket | bucket_start_time | queryid | client_ip -----------+----------+---------------------------------------------------+-------+--------+---------------------+------------------+----------- - postgres | postgres | SELECT name,description FROM pg_stat_monitor_sett | 1 | 9 | 2022-10-24 07:29:00 | AD536A8DEA7F0C73 | 127.0.0.1 - postgres | postgres | SELECT c.oid, +| 1 | 9 | 2022-10-24 07:29:00 | 34B888E5C844519C | 127.0.0.1 - | | n.nspname, +| | | | | - | | c.relname +| | | | | - | | FROM pg_ca | | | | | - postgres | postgres | SELECT DISTINCT userid::regrole, pg_stat_monitor. | 1 | 1 | 2022-10-24 07:31:00 | 6230793895381F1D | 127.0.0.1 - postgres | postgres | SELECT pg_stat_monitor_version() | 1 | 9 | 2022-10-24 07:29:00 | B617F5F12931F388 | 127.0.0.1 - postgres | postgres | CREATE EXTENSION pg_stat_monitor | 1 | 8 | 2022-10-24 07:28:00 | 14B98AF0776BAF7B | 127.0.0.1 - postgres | postgres | SELECT a.attname, +| 1 | 9 | 2022-10-24 07:29:00 | 96F8E4B589EF148F | 127.0.0.1 - | | pg_catalog.format_type(a.attt | | | | | - postgres | postgres | SELECT c.relchecks, c.relkind, c.relhasindex, c.r | 1 | 9 | 2022-10-24 07:29:00 | CCC51D018AC96A25 | 127.0.0.1 - -``` - - -Find more usage examples in the [`pg_stat_monitor` user guide :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/user_guide.html). - -## Changing the configuration - -Run the following query to list available configuration parameters. 
- -```sql -SELECT name, short_desc FROM pg_settings WHERE name LIKE '%pg_stat_monitor%'; -``` - -**Output** - -``` - name | short_desc --------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------- - pg_stat_monitor.pgsm_bucket_time | Sets the time in seconds per bucket. - pg_stat_monitor.pgsm_enable_overflow | Enable/Disable pg_stat_monitor to grow beyond shared memory into swap space. - pg_stat_monitor.pgsm_enable_pgsm_query_id | Enable/disable PGSM specific query id calculation which is very useful in comparing same query across databases and clusters.. - pg_stat_monitor.pgsm_enable_query_plan | Enable/Disable query plan monitoring. - pg_stat_monitor.pgsm_extract_comments | Enable/Disable extracting comments from queries. - pg_stat_monitor.pgsm_histogram_buckets | Sets the maximum number of histogram buckets. - pg_stat_monitor.pgsm_histogram_max | Sets the time in millisecond. - pg_stat_monitor.pgsm_histogram_min | Sets the time in millisecond. - pg_stat_monitor.pgsm_max | Sets the maximum size of shared memory in (MB) used for statement's metadata tracked by pg_stat_monitor. - pg_stat_monitor.pgsm_max_buckets | Sets the maximum number of buckets. - pg_stat_monitor.pgsm_normalized_query | Selects whether save query in normalized format. - pg_stat_monitor.pgsm_overflow_target | Sets the overflow target for pg_stat_monitor. (Deprecated, use pgsm_enable_overflow) - pg_stat_monitor.pgsm_query_max_len | Sets the maximum length of query. - pg_stat_monitor.pgsm_query_shared_buffer | Sets the maximum size of shared memory in (MB) used for query tracked by pg_stat_monitor. - pg_stat_monitor.pgsm_track | Selects which statements are tracked by pg_stat_monitor. - pg_stat_monitor.pgsm_track_planning | Selects whether planning statistics are tracked. - pg_stat_monitor.pgsm_track_utility | Selects whether utility commands are tracked. 
-``` - -You can change a parameter by setting a new value in the configuration file. Some parameters require server restart to apply a new value. For others, configuration reload is enough. Refer to the [configuration parameters :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/configuration.html) of the `pg_stat_monitor` documentation for the parameters’ description, how you can change their values and if the server restart is required to apply them. - -As an example, let’s set the bucket lifetime from default 60 seconds to 40 seconds. Use the **ALTER SYSTEM** command: - -```sql -ALTER SYSTEM set pg_stat_monitor.pgsm_bucket_time = 40; -``` - -Restart the server to apply the change: - - -=== ":material-debian: On Debian and Ubuntu" - - ```{.bash data-prompt="$"} - $ sudo systemctl restart postgresql.service - ``` - -=== "On Red Hat Enterprise Linux and derivatives" - - ```{.bash data-prompt="$"} - $ sudo systemctl restart postgresql-15 - ``` - -Verify the updated parameter: - -```sql -SELECT name, setting -FROM pg_settings -WHERE name = 'pg_stat_monitor.pgsm_bucket_time'; - - name | setting - ----------------------------------+--------- - pg_stat_monitor.pgsm_bucket_time | 40 -``` - -!!! admonition "See also" - - [`pg_stat_monitor` Documentation :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/index.html) - - - Percona Blog: - - * [pg_stat_monitor: A New Way Of Looking At PostgreSQL Metrics :octicons-link-external-16:](https://www.percona.com/blog/2021/01/19/pg_stat_monitor-a-new-way-of-looking-at-postgresql-metrics/) - * [Improve PostgreSQL Query Performance Insights with pg_stat_monitor :octicons-link-external-16:](https://www.percona.com/blog/improve-postgresql-query-performance-insights-with-pg_stat_monitor/) diff --git a/docs/pg-tde.md b/docs/pg-tde.md deleted file mode 100644 index dd78a43b3..000000000 --- a/docs/pg-tde.md +++ /dev/null @@ -1,197 +0,0 @@ -# pg_tde - -!!! 
note - - This is the Beta version of the extension and is not recommended for production use yet. Please use it in testing environments only. - -## Overview - -`pg_tde` stands for Transparent Data Encryption for PostgreSQL. This is an open-source extension designed to enhance PostgreSQL’s security by encrypting data files on disk. The encryption is transparent for users allowing them to access and manipulate the data and not to worry about the encryption process. - -Unlike traditional encryption methods that require significant changes to database schemas and applications, `pg_tde` seamlessly integrates with PostgreSQL, encrypting data at the table level without disrupting existing workflows. It uses the Advanced Encryption Standard (AES) encryption algorithm. - -### Key features: - -* Encryption of heap tables, including TOAST. -* Storage of encryption keys in either a Hashicorp Vault server or a local keyring file (primarily for development purposes). -* Configurable encryption settings per database: you can choose which tables to encrypt, achieving granular control over data protection. -* Replication support. -* Enhanced security through the ability to rotate principal keys used for data encryption, reducing the risk of long-term exposure to potential attacks and aiding compliance with security standards like GDPR, HIPAA, and PCI DSS. - -## Installation - -This section provides instructions how to install `pg_tde` from Percona repositories using the package manager of your operating system. For other installation methods, refer to the [`pg_tde` documentation :octicons-link-external-16:](https://percona-lab.github.io/pg_tde/main/install.html#procedure). 
- -=== ":material-debian: :material-debian: On Debian and Ubuntu" - - `pg_tde` packages are available for the following Linux distributions: - - * Ubuntu 20.04 (Focal Fossa) - * Ubuntu 22.04 (Jammy Jellyfish) - * Debian 10 (Buster) - * Debian 11 (Bullseye) - * Debian 12 (Bookworm) - - To install `pg_tde`, run the following commands as the root user or with the `sudo` privileges: - - 1. [Install `percona-release` :octicons-link-external-16:](https://docs.percona.com/percona-software-repositories/installing.html) repository management tool. - - 2. Enable the repository: - - ```{.bash data-prompt="$"} - $ sudo percona-release enable-only ppg-16.3 testing - ``` - - 3. Install the package: - - ```{.bash data-prompt="$"} - $ sudo apt-get install percona-postgresql-16-pg-tde - ``` - -=== ":material-redhat: On Red Hat Enterprise Linux and compatible derivatives" - - `pg_tde` packages are available for the following Linux distributions: - - * Red Hat Enterprise Linux and CentOS 7 - * Red Hat Enterprise Linux 8 and compatible derivatives - * Red Hat Enterprise Linux 9 and compatible derivatives - - To install `pg_tde`, run the following commands as the root user or with the `sudo` privileges: - - 1. Enable / disable modules: - - === "CentOS 7" - - Install the `epel-release` package: - - ```{.bash data-prompt="$"} - $ sudo yum -y install epel-release - $ sudo yum repolist - ``` - - === "RHEL8/Oracle Linux 8/Rocky Linux 8" - - Disable the ``postgresql`` and ``llvm-toolset``modules: - - ```{.bash data-prompt="$"} - $ sudo dnf module disable postgresql llvm-toolset - ``` - - 2. [Install `percona-release` :octicons-link-external-16:](https://docs.percona.com/percona-software-repositories/ installing.html) repository management tool. - - 3. Enable the repository: - - ```{.bash data-prompt="$"} - $ sudo percona-release enable-only ppg-16.3 testing - ``` - - 4. 
Install the package: - - ```{.bash data-prompt="$"} - $ sudo yum install percona-pg_tde_16 - ``` - -## Setup - -`pg_tde` requires additional setup steps in order to use it with PostgreSQL. -This section provides setup using the HashiCorp Vault server for storing encryption key as the recommended approach. Please see [`pg_tde` documentation :octicons-link-external-16:] for alternative configuration using a keyfile. - -The setup of the Vault server is out of scope of this document. We're assuming you have the Vault server up and running and have the following information required for the setup: - -* The secret access token to the Vault server -* The URL to access the Vault server -* (Optional) The CA file used for SSL verification - - -### Install the extension in PostgreSQL - -1. Add `pg_tde` to `shared_preload_libraries`. - - The recommended way to modify PostgreSQL configuration file is using the [ALTER SYSTEM :octicons-external-link-16:](https://www.postgresql.org/docs/15/sql-altersystem.html) command. [Connect to psql](connect.md) and use the following command: - - ```sql - ALTER SYSTEM SET shared_preload_libraries = 'pg_tde'; - ``` - -2. Start or restart the `postgresql` instance to enable `pg_tde`. Use the following command for restart: - - - === ":material-debian: On Debian and Ubuntu" - - ```{.bash data-prompt="$"} - $ sudo systemctl restart postgresql.service - ``` - - - === ":material-redhat: On Red Hat Enterprise Linux and derivatives" - - ```{.bash data-prompt="$"} - $ sudo systemctl restart postgresql-{{pgversion}} - ``` - -3. Install the extension in your PostgreSQL using the CREATE EXTENSION command. [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. You must have the privileges of a superuser or a database owner to use this command. 
Connect to `psql` as a superuser for a database and run the following command: - - ```sql - CREATE EXTENSION pg_tde; - ``` - - By default, the `pg_tde` extension is created for the currently used database. To enable data encryption in other databases, you must explicitly run the `CREATE EXTENSION` command against them. - - !!! tip - - You can have the `pg_tde` extension automatically enabled for every newly created database. Modify the template `template1` database as follows: - - ``` - psql -d template1 -c 'CREATE EXTENSION pg_tde;' - ``` - - After you enabled `pg_tde`, the [access method :octicons-external-link-16:](https://www.postgresql.org/docs/current/tableam.html) `pg_tde` is created for that database. - -### Key configuration - -1. Set up a key provider for for the database where you have enabled the extension - - ```sql - SELECT pg_tde_add_key_provider_vault_v2('provider-name',:'secret_token','url','mount','ca_path'); - ``` - - where: - - * `url` is the URL of the Vault server - * `mount` is the mount point where the keyring should store the keys - * `secret_token` is an access token with read and write access to the above mount point - * [optional] `ca_path` is the path of the CA file used for SSL verification - -2. Add a principal key - - ```sql - SELECT pg_tde_set_principal_key('name-of-the-principal-key', 'provider-name'); - ``` - -## Usage - -To check if the data is encrypted, do the following: - -1. Create a table for the database where you have enabled `pg_tde` using the `pg_tde` access method: - - ```sql - CREATE TABLE my_encrypted_table ( - id SERIAL PRIMARY KEY, - sensitive_data TEXT - ) USING pg_tde; - ``` - -2. Insert some data ito it: - - ```sql - INSERT INTO my_encrypted_table (sensitive_data) - VALUES ('Sensitive data 1'), ('Sensitive data 2'), ('Sensitive data 3'); - ``` - -3. 
Check if the data is encrypted: - - ```sql - SELECT pg_tde_is_encrypted('my_encrypted_table'); - ``` - - The function returns `t` if the table is encrypted and `f` if it is not. \ No newline at end of file diff --git a/docs/telemetry.md b/docs/telemetry.md index 864c76e54..40dc55c9a 100644 --- a/docs/telemetry.md +++ b/docs/telemetry.md @@ -67,7 +67,7 @@ The telemetry also uses the Percona Platform with the following components: `percona_pg_telemetry` is an extension to collect telemetry data in PostgreSQL. It is added to Percona Distribution for PostgreSQL and is automatically loaded when you install a PostgreSQL server. -`percona_pg_telemetry` collects metrics from the database instance daily to the Metrics File. It creates a new Metrics File for each collection. You can find the Metrics File in its [location](#location) to inspect what data is collected. +`percona_pg_telemetry` collects metrics from the database instance daily to the Metrics File. It creates a new Metrics File for each collection. You can find the Metrics File in its [location](#locations) to inspect what data is collected. Before generating a new file, the `percona_pg_telemetry` deletes the Metrics Files that are older than seven days. This process ensures that only the most recent week's data is maintained. @@ -102,74 +102,74 @@ The Metrics File uses the JavaScript Object Notation (JSON) format.
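The nested key/value pair layout used in the Metrics File can be tedious to read by eye. A short script along these lines collapses the pair lists into plain dictionaries (a sketch — the sample document is abridged and the helper name is ours, not part of the telemetry tooling):

```python
import json

# Abridged sample mirroring the Metrics File structure: nested objects are
# encoded as lists of {"key": ..., "value": ...} pairs.
sample = """
{
  "db_instance_id": "7310358902660071382",
  "pillar_version": "16.3",
  "settings": [
    {"key": "setting", "value": [
      {"key": "name", "value": "allow_in_place_tablespaces"},
      {"key": "setting", "value": "off"}
    ]}
  ]
}
"""

def pairs_to_dict(pairs):
    # Collapse a [{"key": k, "value": v}, ...] list into a plain dict.
    return {p["key"]: p["value"] for p in pairs}

doc = json.loads(sample)
for entry in doc["settings"]:
    setting = pairs_to_dict(entry["value"])
    print(setting["name"], "=", setting["setting"])
```

The same approach works for the `databases` array, whose entries follow the identical pair-list convention.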
Percona rese The following is an example of the collected data generated by the `percona_pg_telemetry` extension: - ```json +```json + { + "db_instance_id": "7310358902660071382", + "pillar_version": "16.3", + "uptime": "36", + "databases_count": "2", + "settings": [ { - "db_instance_id": "7310358902660071382", - "pillar_version": "16.3", - "uptime": "36", - "databases_count": "2", - "settings": [ + "key": "setting", + "value": [ + { + "key": "name", + "value": "allow_in_place_tablespaces" + }, + { + "key": "unit", + "value": "NULL" + }, { "key": "setting", - "value": [ - { - "key": "name", - "value": "allow_in_place_tablespaces" - }, - { - "key": "unit", - "value": "NULL" - }, - { - "key": "setting", - "value": "off" - }, - { - "key": "reset_val", - "value": "off" - }, - { - "key": "boot_val", - "value": "off" - } - ] + "value": "off" + }, + { + "key": "reset_val", + "value": "off" + }, + { + "key": "boot_val", + "value": "off" + } + ] + }, + ... + ], + "databases": [ + { + "key": "database", + "value": [ + { + "key": "database_oid", + "value": "5" }, - ... 
- ], - "databases": [ { - "key": "database", + "key": "database_size", + "value": "7820895" + }, + { + "key": "active_extensions", "value": [ { - "key": "database_oid", - "value": "5" + "key": "extension_name", + "value": "plpgsql" }, { - "key": "database_size", - "value": "7820895" + "key": "extension_name", + "value": "pg_tde" }, { - "key": "active_extensions", - "value": [ - { - "key": "extension_name", - "value": "plpgsql" - }, - { - "key": "extension_name", - "value": "pg_tde" - }, - { - "key": "extension_name", - "value": "percona_pg_telemetry" - } - ] + "key": "extension_name", + "value": "percona_pg_telemetry" } ] } ] } - ``` + ] + } +``` ### Telemetry Agent diff --git a/docs/yum.md b/docs/yum.md index ec62dc962..2772af85b 100644 --- a/docs/yum.md +++ b/docs/yum.md @@ -276,7 +276,7 @@ $ sudo yum -y install curl $ sudo yum install percona-patroni ``` - [Install `pg_stat_monitor`](pg-stat-monitor.md): + [Install `pg_stat_monitor` :octicons-external-link-16:](https://docs.percona.com/pg-stat-monitor/install.html): Install `pgBouncer`: From b04930437ab8ccbbddad08ac471cbc1bcf56a876 Mon Sep 17 00:00:00 2001 From: Philip Olson Date: Wed, 5 Mar 2025 06:32:08 -0800 Subject: [PATCH 22/41] Fix typo (#762) --- docs/get-help.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/get-help.md b/docs/get-help.md index 1eab330c1..f5b0420be 100644 --- a/docs/get-help.md +++ b/docs/get-help.md @@ -24,4 +24,4 @@ We understand your challenges when managing complex database environments. That' | Expert Consulting | Our experienced consultants provide guidance on database topics like architecture design, migration planning, performance optimization, and security best practices. | | Comprehensive Training | Our training programs help your team develop skills to manage databases effectively, offering virtual and in-person courses. | -We're here to help you every step of the way. 
Whether you need a quick fix or a long-term partnership, we're ready to provide your expertise and support. +We're here to help you every step of the way. Whether you need a quick fix or a long-term partnership, we're ready to provide our expertise and support. From 6f3ce3fa82c689f36f495187bb76953bcca48ae9 Mon Sep 17 00:00:00 2001 From: Philip Olson Date: Tue, 11 Mar 2025 13:29:45 -0700 Subject: [PATCH 23/41] Fix links and version references (branch 16) (#773) * Fix broken link references (404s) found during `mkdocs serve` (branch 16) * Change version 15 references in links to `{{pgversion}}` while leaving hardcoded version 16 references unchanged. --- docs/contrib.md | 2 +- docs/enable-extensions.md | 2 +- docs/ldap.md | 2 +- docs/release-notes-v16.8.md | 2 +- docs/solutions/backup-recovery.md | 6 +++--- docs/solutions/postgis-deploy.md | 2 +- 6 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/contrib.md b/docs/contrib.md index de668bb39..0b2e12ff1 100644 --- a/docs/contrib.md +++ b/docs/contrib.md @@ -33,7 +33,7 @@ Find the list of contrib modules and extensions included in Percona Distribution |[pg_freespacemap](https://www.postgresql.org/docs/{{pgversion}}/pgfreespacemap.html) |Required |Provides a means of examining the free space map (FSM), which PostgreSQL uses to track the locations of available space in tables and indexes. This can be useful for understanding space utilization and planning for maintenance operations. | |[pg_prewarm](https://www.postgresql.org/docs/{{pgversion}}/pgprewarm.html) | | Provides a convenient way to load relation data into either the operating system buffer cache or the PostgreSQL buffer cache. This can be useful for reducing the time needed for a newly started database to reach its full performance potential by preloading frequently accessed data.| |[pgrowlocks](https://www.postgresql.org/docs/{{pgversion}}/pgrowlocks.html) | Required |Provides a function to show row locking information for a specified table.
| -|[pg_stat_statements](https://www.postgresql.org/docs/{{pgversion}}/pgstatstatements.html) | Required |A module for tracking planning and execution statistics of all SQL statements executed by a server. Consider using an advanced version of `pg_stat_statements` - [`pg_stat_monitor`](pg-stat-monitor.md) | +|[pg_stat_statements](https://www.postgresql.org/docs/{{pgversion}}/pgstatstatements.html) | Required |A module for tracking planning and execution statistics of all SQL statements executed by a server. Consider using an advanced version of `pg_stat_statements` - [pg_stat_monitor :octicons-link-external-16:](https://github.com/percona/pg_stat_monitor) | |[pgstattuple](https://www.postgresql.org/docs/{{pgversion}}/pgstattuple.html) | Required |Provides various functions to obtain tuple-level statistics. It offers detailed information about tables and indexes, such as the amount of free space and the number of live and dead tuples. | |[pg_surgery](https://www.postgresql.org/docs/{{pgversion}}/pgsurgery.html) | Required | Provides various functions to perform surgery on a damaged relation. These functions are unsafe by design and using them may corrupt (or further corrupt) your database. Use them with caution and only as a last resort| |[pg_trgm](https://www.postgresql.org/docs/{{pgversion}}/pgtrgm.html) | |Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. A trigram is a contiguous sequence of three characters. The extension can be used for text search and pattern matching operations. | diff --git a/docs/enable-extensions.md b/docs/enable-extensions.md index 7e3787837..432f253b4 100644 --- a/docs/enable-extensions.md +++ b/docs/enable-extensions.md @@ -60,7 +60,7 @@ For details about each option, see [pdBadger documentation :octicons-link-extern ## pgaudit -Add the `pgaudit` to `shared_preload_libraries` in `postgresql.conf`.
The recommended way is to use the [ALTER SYSTEM](https://www.postgresql.org/docs/16/sql-altersystem.html) command. [Connect to psql](#connect-to-the-postgresql-server) and use the following command: +Add the `pgaudit` to `shared_preload_libraries` in `postgresql.conf`. The recommended way is to use the [ALTER SYSTEM](https://www.postgresql.org/docs/16/sql-altersystem.html) command. [Connect to psql](connect.md) and use the following command: ```sql ALTER SYSTEM SET shared_preload_libraries = 'pgaudit'; ``` diff --git a/docs/ldap.md b/docs/ldap.md index 45e24eba1..03ae6c1ba 100644 --- a/docs/ldap.md +++ b/docs/ldap.md @@ -2,6 +2,6 @@ When a client application or a user that runs the client application connects to the database, it must identify itself. The process of validating the client's identity and determining whether this client is permitted to access the database it has requested is called **authentication**. -Percona Distribution for PortgreSQL supports several [authentication methods :octicons-link-external-16:](https://www.postgresql.org/docs/15/auth-methods.html), including the [LDAP authentication :octicons-link-external-16:](https://www.postgresql.org/docs/14/auth-ldap.html). The use of LDAP is to provide a central place for authentication - meaning the LDAP server stores usernames and passwords and their resource permissions. +Percona Distribution for PostgreSQL supports several [authentication methods :octicons-link-external-16:](https://www.postgresql.org/docs/{{pgversion}}/auth-methods.html), including the [LDAP authentication :octicons-link-external-16:](https://www.postgresql.org/docs/{{pgversion}}/auth-ldap.html). The use of LDAP is to provide a central place for authentication - meaning the LDAP server stores usernames and passwords and their resource permissions. The LDAP authentication in Percona Distribution for PostgreSQL is implemented the same way as in upstream PostgreSQL.
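Since the mechanism follows upstream PostgreSQL, LDAP authentication is switched on per connection type in `pg_hba.conf`. A typical search+bind entry looks like this (a sketch — the server name and base DN are hypothetical placeholders, not values from this document):

```
host    all    all    0.0.0.0/0    ldap ldapserver=ldap.example.com ldapbasedn="dc=example,dc=com" ldapsearchattribute=uid
```

With such an entry, the server first searches the directory under the given base DN for an entry whose `uid` matches the connecting user name, then binds as that entry to verify the password.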
\ No newline at end of file diff --git a/docs/release-notes-v16.8.md b/docs/release-notes-v16.8.md index b2ab8ef61..15541f804 100644 --- a/docs/release-notes-v16.8.md +++ b/docs/release-notes-v16.8.md @@ -18,7 +18,7 @@ This release fixes [CVE-2025-1094](https://www.postgresql.org/support/security/C ### PostGIS is included into tarballs -We have extended Percona Distribution for PostgreSQL tarballs with PostGIS - an open-source extension to handle spacial data. This way you can install and run PostgreSQL as a geospatial database on hosts without a direct access to the Internet. Learn more about [installing from tarballs](tarball.md) and [Spacial data manipulation](postgis.md) +We have extended Percona Distribution for PostgreSQL tarballs with PostGIS - an open-source extension to handle spatial data. This way you can install and run PostgreSQL as a geospatial database on hosts without direct access to the Internet. Learn more about [installing from tarballs](tarball.md) and [Spatial data manipulation](solutions/postgis.md) ### Deprecation of meta packages diff --git a/docs/solutions/backup-recovery.md b/docs/solutions/backup-recovery.md index 57a1da194..edfcd3165 100644 --- a/docs/solutions/backup-recovery.md +++ b/docs/solutions/backup-recovery.md @@ -21,9 +21,9 @@ A Disaster Recovery (DR) solution ensures that a system can be quickly restored
PostgreSQL offers multiple options for setting up database disaster recovery. - - **[pg_dump :octicons-link-external-16:](https://www.postgresql.org/docs/15/app-pgdump.html) or the [pg_dumpall :octicons-link-external-16:](https://www.postgresql.org/docs/15/app-pg-dumpall.html) utilities** + - **[pg_dump :octicons-link-external-16:](https://www.postgresql.org/docs/{{pgversion}}/app-pgdump.html) or the [pg_dumpall :octicons-link-external-16:](https://www.postgresql.org/docs/{{pgversion}}/app-pg-dumpall.html) utilities** - This is the basic backup approach. These tools can generate the backup of one or more PostgreSQL databases (either just the structure, or both the structure and data), then restore them through the [pg_restore :octicons-link-external-16:](https://www.postgresql.org/docs/15/app-pgrestore.html) command. + This is the basic backup approach. These tools can generate the backup of one or more PostgreSQL databases (either just the structure, or both the structure and data), then restore them through the [pg_restore :octicons-link-external-16:](https://www.postgresql.org/docs/{{pgversion}}/app-pgrestore.html) command. | Advantages | Disadvantages | | ------------ | --------------- | @@ -37,7 +37,7 @@ A Disaster Recovery (DR) solution ensures that a system can be quickly restored | ------------ | --------------- | | Consistent snapshot of the data directory or the whole data disk volume | 1. Requires stopping PostgreSQL in order to copy the files. This is not practical for most production setups.
2. No backup of individual databases or tables.| - - **PostgreSQL [pg_basebackup :octicons-link-external-16:](https://www.postgresql.org/docs/15/app-pgbasebackup.html)** + - **PostgreSQL [pg_basebackup :octicons-link-external-16:](https://www.postgresql.org/docs/{{pgversion}}/app-pgbasebackup.html)** This backup tool is provided by PostgreSQL. It is used to back up data when the database instance is running. `pg_basebackup` makes a binary copy of the database cluster files, while making sure the system is put in and out of backup mode automatically. diff --git a/docs/solutions/postgis-deploy.md b/docs/solutions/postgis-deploy.md index 56a3c1278..ceb0e8731 100644 --- a/docs/solutions/postgis-deploy.md +++ b/docs/solutions/postgis-deploy.md @@ -68,7 +68,7 @@ The following document provides guidelines how to install PostGIS and how to run === ":octicons-download-16: From tarballs" - PostGIS is included into binary tarball and is a part of the `percona-postgresql{{pgversion}}` binary. Use the [install from tarballs](../tarball/.md) tutorial to install it. + PostGIS is included in the binary tarball and is a part of the `percona-postgresql{{pgversion}}` binary. Use the [install from tarballs](../tarball.md) tutorial to install it.
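Once the tarball binaries are in place, a quick smoke test from `psql` confirms that PostGIS is usable — a sketch using the standard upstream extension name and version function:

```sql
-- Enable PostGIS in the current database (requires superuser or owner privileges)
CREATE EXTENSION postgis;

-- Report the installed PostGIS version
SELECT PostGIS_Version();
```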
## Enable PostGIS extension From dc3a9a2763c33c29cfd5b1cc3edbb5c3e30556e4 Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Tue, 1 Apr 2025 09:53:44 +0200 Subject: [PATCH 24/41] Validated print-site plugin for PDF (#768) * Validated print-site plugin for PDF --- CONTRIBUTING.md | 25 ++-- _resourcepdf/overrides/404.html | 9 ++ _resourcepdf/overrides/main.html | 69 +++++++++ _resourcepdf/overrides/partials/banner.html | 9 ++ .../overrides/partials/copyright.html | 14 ++ _resourcepdf/overrides/partials/header.html | 135 ++++++++++++++++++ docs/templates/pdf_cover_page.tpl | 12 ++ mkdocs-base.yml | 31 ++-- mkdocs-pdf.yml | 17 --- mkdocs.yml | 65 ++++++++- requirements.txt | 3 +- 11 files changed, 345 insertions(+), 44 deletions(-) create mode 100644 _resourcepdf/overrides/404.html create mode 100644 _resourcepdf/overrides/main.html create mode 100644 _resourcepdf/overrides/partials/banner.html create mode 100644 _resourcepdf/overrides/partials/copyright.html create mode 100644 _resourcepdf/overrides/partials/header.html create mode 100644 docs/templates/pdf_cover_page.tpl delete mode 100644 mkdocs-pdf.yml diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index df7c18b79..37ed77117 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -28,12 +28,13 @@ There are several active versions of the documentation. Each version derives fro Each version has a branch in the repository named accordingly: -- 11 -- 12 -- 13 +- 11 (EOL) +- 12 (EOL) +- 13 - 14 - 15 - 16 +- 17 The source .md files are in the ``docs`` directory. @@ -140,14 +141,17 @@ mkdocs serve ``` 6. To build the PDF documentation, do the following: - - Install [mkdocs-with-pdf plugin](https://pypi.org/project/mkdocs-with-pdf/) + - Install [mkdocs-print-site-plugin](https://timvink.github.io/mkdocs-print-site-plugin/index.html) - Run the following command ```sh ENABLE_PDF_EXPORT=1 mkdocs build -f mkdocs-pdf.yml ``` -The PDF document is in the ``site/pdf`` folder. 
+This creates a single HTML page for the whole doc project. You can find the page at `site/print_page.html`. + +7. Open the `site/print_page.html` in your browser and save as PDF. Depending on the browser, you may need to select the Export to PDF, Print - Save as PDF or just Save and select PDF as the output format. + ## Repository structure @@ -161,13 +165,16 @@ The repository includes the following directories and files: - `_images` - Images, logos and favicons - `css` - Styles - `js` - Javascript files -- `_resource`: - - `templates`: + - `templates`: - ``styles.scss`` - Styling for PDF documents - - `theme`: + - `pdf_cover_page.tpl` - The PDF cover page template +- `_resource`: + - `overrides` - The directory with customized templates for HTML output - `main.html` - The layout template for hosting the documentation on Percona website - - overrides - The folder with the template customization for Netlify builds +- `_resourcepdf`: + - `overrides` - The directory with customized layout templates for PDF - `.github`: - `workflows`: - `main.yml` - The workflow configuration for building documentation with a GitHub action. (The documentation is built with `mike` tool to a dedicated `netlify` branch) - `site` - This is where the output HTML files are put after the build +- `snippets` - The folder with pieces of documentation used in multiple places diff --git a/_resourcepdf/overrides/404.html b/_resourcepdf/overrides/404.html new file mode 100644 index 000000000..3d3717301 --- /dev/null +++ b/_resourcepdf/overrides/404.html @@ -0,0 +1,9 @@ +{#- + This file was automatically generated - do not edit +-#} +{% extends "main.html" %} +{% block content %} +

404 - Not found

+

+We can't find the page you are looking for. Try using the Search or return to homepage .

+{% endblock %} diff --git a/_resourcepdf/overrides/main.html b/_resourcepdf/overrides/main.html new file mode 100644 index 000000000..545cd7c41 --- /dev/null +++ b/_resourcepdf/overrides/main.html @@ -0,0 +1,69 @@ +{# +MkDocs template for builds with Material theme to customize docs layout +by adding marketing-requested elements +#} + +{# Import the theme's layout. #} +{% extends "base.html" %} + + + {% block site_nav %} + {% if nav %} + {% if page.meta and page.meta.hide %} + {% set hidden = "hidden" if "navigation" in page.meta.hide %} + {% endif %} +
+
+
+ {% include "partials/nav.html" %} +
+ +
+
+
+ {% endif %} + {% if "toc.integrate" not in features %} + {% if page.meta and page.meta.hide %} + {% set hidden = "hidden" if "toc" in page.meta.hide %} + {% endif %} +
+
+
+ {% include "partials/toc.html" %} +
+
+ {% include "partials/banner.html" %} +
+
+
+ {% endif %} + {% endblock %} + + {% block content%} + + {{ super() }} + + + + {% endblock %} \ No newline at end of file diff --git a/_resourcepdf/overrides/partials/banner.html b/_resourcepdf/overrides/partials/banner.html new file mode 100644 index 000000000..830718b90 --- /dev/null +++ b/_resourcepdf/overrides/partials/banner.html @@ -0,0 +1,9 @@ +
+

+

For help, click the link below to get free database assistance or contact our experts for personalized support.

+ +
+ + Get help from Percona +
+
\ No newline at end of file diff --git a/_resourcepdf/overrides/partials/copyright.html b/_resourcepdf/overrides/partials/copyright.html new file mode 100644 index 000000000..dd0f101fa --- /dev/null +++ b/_resourcepdf/overrides/partials/copyright.html @@ -0,0 +1,14 @@ +{#- + This file was automatically generated - do not edit +-#} +
+
+ Percona LLC and/or its affiliates, © {{ build_date_utc.strftime('%Y') }} — Cookie Preferences +
+ {% if not config.extra.generator == false %} + Made with + + Material for MkDocs + + {% endif %} +
\ No newline at end of file diff --git a/_resourcepdf/overrides/partials/header.html b/_resourcepdf/overrides/partials/header.html new file mode 100644 index 000000000..2d0d6e740 --- /dev/null +++ b/_resourcepdf/overrides/partials/header.html @@ -0,0 +1,135 @@ + + + +{% set class = "md-header" %} +{% if "navigation.tabs.sticky" in features %} + {% set class = class ~ " md-header--shadow md-header--lifted" %} +{% elif "navigation.tabs" not in features %} + {% set class = class ~ " md-header--shadow" %} +{% endif %} + + +
+ + +
+
+ + + + + + + + + + Percona Software for PostgreSQL Documentation + +
+
+ +
+ + + + {% include "partials/logo.html" %} + + + + + + +
+
+ + + {{ config.site_name }} + + +
+ + {% if page.meta and page.meta.title %} + {{ page.meta.title }} + {% else %} + {{ page.title }} + {% endif %} + +
+
+
+ + + {% if config.theme.palette %} + {% if not config.theme.palette is mapping %} + {% include "partials/palette.html" %} + {% endif %} + {% endif %} + + + {% if not config.theme.palette is mapping %} + {% include "partials/javascripts/palette.html" %} + {% endif %} + + + {% if config.extra.alternate %} + {% include "partials/alternate.html" %} + {% endif %} + + + {% if "material/search" in config.plugins %} + + + + {% include "partials/search.html" %} + {% endif %} + + + {% if config.repo_url %} +
+ {% include "partials/source.html" %} +
+ {% endif %} +
+ + + {% if "navigation.tabs.sticky" in features %} + {% if "navigation.tabs" in features %} + {% include "partials/tabs.html" %} + {% endif %} + {% endif %} +
\ No newline at end of file diff --git a/docs/templates/pdf_cover_page.tpl b/docs/templates/pdf_cover_page.tpl new file mode 100644 index 000000000..b5ab6ed46 --- /dev/null +++ b/docs/templates/pdf_cover_page.tpl @@ -0,0 +1,12 @@ + +{{ config.extra.added_key }} +

+ +

+

Distribution for PostgreSQL

+{% if config.site_description %} +

{{ config.site_description }}

+{% endif %} +

16.8 (February 27, 2025)

+ + diff --git a/mkdocs-base.yml b/mkdocs-base.yml index a08921644..16157af2a 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -125,24 +125,29 @@ plugins: macros: include_yaml: - 'variables.yml' # Use in markdown as '{{ VAR }}' -# exclude: # Don't process these files -# glob: -# - file.md - with-pdf: # https://github.com/orzih/mkdocs-with-pdf - output_path: '_pdf/PerconaDistributionPostgreSQL-16.pdf' - cover_title: 'Distribution for PostgreSQL Documentation' - - cover_subtitle: 16.8 (February 27, 2025) - author: 'Percona Technical Documentation Team' - cover_logo: docs/_images/Percona_Logo_Color.png - debug_html: false - custom_template_path: _resource/templates - enabled_if_env: ENABLE_PDF_EXPORT mike: version_selector: true css_dir: css javascript_dir: js canonical_version: null + print-site: + add_to_navigation: false + print_page_title: 'Percona Distribution for PostgreSQL documentation' + add_print_site_banner: false + # Table of contents + add_table_of_contents: true + toc_title: 'Table of Contents' + toc_depth: 2 + # Content-related + add_full_urls: false + enumerate_headings: false + enumerate_headings_depth: 1 + enumerate_figures: true + add_cover_page: true + cover_page_template: "docs/templates/pdf_cover_page.tpl" + path_to_pdf: "" + include_css: true + enabled: true extra: version: diff --git a/mkdocs-pdf.yml b/mkdocs-pdf.yml deleted file mode 100644 index c6ffae69c..000000000 --- a/mkdocs-pdf.yml +++ /dev/null @@ -1,17 +0,0 @@ -# MkDocs configuration for PDF builds -# Usage: ENABLE_PDF_EXPORT=1 mkdocs build -f mkdocs-pdf.yml - -INHERIT: mkdocs-base.yml - -copyright: Percona LLC, © 2024 - -extra_css: - - https://unicons.iconscout.com/release/v3.0.3/css/line.css - - https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.4.0/css/font-awesome.min.css - - css/percona.css - - css/extra.css - - css/osano.css - -markdown_extensions: - pymdownx.tabbed: {} - admonition: {} diff --git a/mkdocs.yml b/mkdocs.yml index a91b409f6..1c08d4616 100644 --- 
a/mkdocs.yml +++ b/mkdocs.yml @@ -6,7 +6,7 @@ site_url: "https://docs.percona.com/postgresql/" theme: name: material - custom_dir: _resource/overrides/ + custom_dir: _resourcepdf/overrides/ extra: analytics: @@ -28,6 +28,63 @@ extra: feedback form. -#markdown_extensions: -# - pymdownx.tabbed: -# alternate_style: true +nav: + - 'Home': 'index.md' + - get-help.md + - Get started: + - Quickstart guide: installing.md + - 1. Install: + - Via apt: apt.md + - Via yum: yum.md + - From tarballs: tarball.md + - Run in Docker: docker.md + - enable-extensions.md + - repo-overview.md + - 2. Connect to PostgreSQL: connect.md + - 3. Manipulate data in PostgreSQL: crud.md + - 4. What's next: whats-next.md + - Extensions: + - 'Extensions': extensions.md + - contrib.md + - Percona-authored extensions: percona-ext.md + - third-party.md + - Solutions: + - Overview: solutions.md + - High availability: + - 'High availability': 'solutions/high-availability.md' + - 'Deploying on Debian or Ubuntu': 'solutions/ha-setup-apt.md' + - 'Deploying on RHEL or derivatives': 'solutions/ha-setup-yum.md' + - solutions/pgbackrest.md + - solutions/ha-test.md + - Backup and disaster recovery: + - 'Overview': 'solutions/backup-recovery.md' + - solutions/dr-pgbackrest-setup.md + - Spatial data handling: + - Overview: solutions/postgis.md + - Deployment: solutions/postgis-deploy.md + - Query spatial data: solutions/postgis-testing.md + - Upgrade spatial database: solutions/postgis-upgrade.md + - LDAP authentication: + - ldap.md + - Upgrade: + - "Major upgrade": major-upgrade.md + - minor-upgrade.md + - migration.md + - Troubleshooting guide: troubleshooting.md + - Uninstall: uninstalling.md + - Release Notes: + - "Release notes index": "release-notes.md" + - release-notes-v16.8.md + - release-notes-v16.6.md + - release-notes-v16.4.md + - release-notes-v16.3.md + - release-notes-v16.2.md + - release-notes-v16.1.upd.md + - release-notes-v16.1.md + - release-notes-v16.0.upd.md + - release-notes-v16.0.md + - 
Reference: + - Telemetry: telemetry.md + - Licensing: licensing.md + - Trademark policy: trademark-policy.md + diff --git a/requirements.txt b/requirements.txt index 031d9e13a..ce82d86ff 100644 --- a/requirements.txt +++ b/requirements.txt @@ -15,4 +15,5 @@ mkdocs-htmlproofer-plugin mkdocs-meta-descriptions-plugin mike Pillow > 10.1.0 -mkdocs-open-in-new-tab \ No newline at end of file +mkdocs-open-in-new-tab +mkdocs-print-site-plugin \ No newline at end of file From 01302c3db5669038c880c640161e66e8ac55094e Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 10 Apr 2025 13:34:56 +0300 Subject: [PATCH 25/41] Update requirements.txt (#780) * Update requirements.txt with commented procedures for our internal doc team. Co-authored-by: Anastasia Alexandrova --- requirements.txt | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/requirements.txt b/requirements.txt index ce82d86ff..d21b46b7b 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,4 +1,13 @@ - +# This file is used to install the required packages for the doc project. +# Ensure you are in the same location/root that the requirements.txt file is in. +# It is recommended to use Windows Powershell in Administrator mode or Linux Terminal to run the commands. +# You can install the required packages using the following command: +# pip install -r requirements.txt +# This will install all the packages listed in this file. 
+# To update the packages, run the following command: +# pip install --upgrade -r requirements.txt +# To check for outdated packages, run the following command: +# pip list --outdated Markdown mkdocs mkdocs-versioning @@ -16,4 +25,4 @@ mkdocs-meta-descriptions-plugin mike Pillow > 10.1.0 mkdocs-open-in-new-tab -mkdocs-print-site-plugin \ No newline at end of file +mkdocs-print-site-plugin From f261ef60278187eab284d6fff81eb8c0c50fb011 Mon Sep 17 00:00:00 2001 From: pikachuSparkle Date: Thu, 24 Apr 2025 16:59:18 +0800 Subject: [PATCH 26/41] Update crud.md (#757) * Update crud.md fix insert sql syntax error * Update tarball.md -- fix shell command tar syntax error fix shell command tar syntax error -f must close to file name --- docs/crud.md | 4 ++-- docs/tarball.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/crud.md b/docs/crud.md index 8517d1a10..ee52aa440 100644 --- a/docs/crud.md +++ b/docs/crud.md @@ -33,7 +33,7 @@ Populate the table with the sample data as follows: INSERT INTO customers (first_name, last_name, email) VALUES ('John', 'Doe', 'john.doe@example.com'), -- Insert a new row - ('Jane', 'Doe', 'jane.doe@example.com'); + ('Jane', 'Doe', 'jane.doe@example.com'), -- Insert another new row ('Alice', 'Smith', 'alice.smith@example.com'); ``` @@ -109,4 +109,4 @@ Congratulations! You have used basic create, read, update and delete (CRUD) oper ## Next steps -[What's next?](whats-next.md){.md-button} \ No newline at end of file +[What's next?](whats-next.md){.md-button} diff --git a/docs/tarball.md b/docs/tarball.md index 51e589d35..ab4d6da07 100644 --- a/docs/tarball.md +++ b/docs/tarball.md @@ -83,7 +83,7 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use 4. Extract the tarball to the directory for binaries that you created on step 1. 
```{.bash data-prompt="$"} - $ sudo tar -xfv percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz -C /opt/pgdistro/ + $ sudo tar -xvf percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz -C /opt/pgdistro/ ``` 5. If you extracted the tarball in a directory other than `/opt`, copy `percona-python3`, `percona-tcl` and `percona-perl` to the `/opt` directory. This is required for the correct operation of libraries that require those modules. From 3c6b7a50e619d3e7f0fe98d2087f2346034fbf27 Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 24 Apr 2025 17:35:45 +0300 Subject: [PATCH 27/41] Backport: Updated links + install and tarball mds (based on 0d46fe5, with updated links) (#783) Squashed and merged for 16.x --- docs/installing.md | 21 +++++++++------------ docs/tarball.md | 46 ++++++++++++++++++++++++++-------------------- 2 files changed, 35 insertions(+), 32 deletions(-) diff --git a/docs/installing.md b/docs/installing.md index 738877148..27c95e567 100644 --- a/docs/installing.md +++ b/docs/installing.md @@ -4,14 +4,14 @@ Percona Distribution for PostgreSQL is the PostgreSQL server with the collection This document aims to guide database application developers and DevOps engineers in getting started with Percona Distribution for PostgreSQL.
Upon completion of this guide, you’ll have Percona Distribution for PostgreSQL installed and operational, and you’ll be able to: -* Connect to PostgreSQL using the `psql` interactive terminal +* Connect to PostgreSQL using the `psql` interactive terminal * Interact with PostgreSQL with basic psql commands -* Manipulate data in PostgreSQL +* Manipulate data in PostgreSQL * Understand the next steps you can take as a database application developer or administrator to expand your knowledge of Percona Distribution for PostgreSQL ## Install Percona Distribution for PostgreSQL -You can select from multiple easy-to-follow installation options, but **we recommend using a Package Manager** for a convenient and quick way to try the software first. +You can select from multiple easy-to-follow installation options; however, **we strongly recommend using a Package Manager** for a convenient and quick way to try the software first. === ":octicons-terminal-16: Package manager" [Install via apt :material-arrow-right:](apt.md){.md-button} [Install via yum :material-arrow-right:](yum.md){.md-button} - === ":simple-docker: Docker" Get our image from Docker Hub and spin up a cluster on a Docker container for quick evaluation. [Run in Docker :material-arrow-right:](docker.md){.md-button} @@ -41,15 +40,13 @@ You can select from multiple easy-to-follow installation options, but **we recom [Get started with Percona Operator :octicons-link-external-16:](https://docs.percona.com/percona-operator-for-postgresql/2.0/quickstart.html){.md-button} -=== ":octicons-download-16: Manual download" +=== ":octicons-download-16: Tar download (not recommended)" - If you need to install Percona Distribution for PostgreSQL offline or as a non-superuser, check out the link below for a step-by-step guide and get access to the downloads directory.
+ If installing the package (the **recommended** method for a safe, secure, and reliable setup) is not an option, refer to the link below for step-by-step instructions on installing from tarballs using the provided download links. - Note that for this scenario you must make sure that all dependencies are satisfied. + In this scenario, you must ensure that all dependencies are met. Failure to do so may result in errors or crashes. + + !!! note + This method is **not recommended** for mission-critical environments. [Install from tarballs :material-arrow-right:](tarball.md){.md-button} - - - - - diff --git a/docs/tarball.md b/docs/tarball.md index ab4d6da07..44aefc32e 100644 --- a/docs/tarball.md +++ b/docs/tarball.md @@ -1,13 +1,21 @@ -# Install Percona Distribiution for PostgreSQL from binary tarballs +# Install Percona Distribution for PostgreSQL from binary tarballs -You can find the binary tarballs on the [Percona website](https://www.percona.com/downloads). Select the desired version from a version dropdown and _All_ from the Select Platform dropdown. +You can download the tarballs using the links below. -There are the following tarballs available both for x86_64 and ARM64 architectures: +!!! note -* percona-postgresql-{{dockertag}}-ssl1.1-linux-.tar.gz - for operating systems that run OpenSSL version 1.x -* percona-postgresql-{{dockertag}}-ssl3-linux-.tar.gz - for operating systems that run OpenSSL version 3.x + Unlike package managers, a tarball installation does **not** provide mechanisms to ensure that all dependencies are resolved to the correct library versions. There is no built-in method to verify that required libraries are present or to prevent them from being removed. As a result, unresolved or broken dependencies may lead to errors, crashes, or even data corruption. + + For this reason, tarball installations are **not recommended** for environments where safety, security, reliability, or mission-critical stability are required. 
+ +The following tarballs are available for the x86_64 and ARM64 architectures: -To check what OpenSSL version you have, run the following command: +* [percona-postgresql-{{dockertag}}-ssl1.1-linux-aarch64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl1.1-linux-aarch64.tar.gz) - for operating systems on ARM64 architecture that run OpenSSL version 1.x +* [percona-postgresql-{{dockertag}}-ssl1.1-linux-x86_64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16.8/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl1.1-linux-x86_64.tar.gz) - for operating systems on x86_64 architecture that run OpenSSL version 1.x +* [percona-postgresql-{{dockertag}}-ssl3-linux-aarch64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16.8/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-aarch64.tar.gz) - for operating systems on ARM64 architecture that run OpenSSL version 3.x +* [percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16.8/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz) - for operating systems on x86_64 architecture that run OpenSSL version 3.x + +To check what OpenSSL version you have, run the following command: ```{.bash data-prompt="$"} $ openssl version @@ -35,7 +43,7 @@ The tarballs include the following components: === "Debian and Ubuntu" - 1. Uninstall the upstream PostgreSQL package. + 1. Uninstall the upstream PostgreSQL package. 2. Create the user to own the PostgreSQL process. For example, `mypguser`. Run the following command: ```{.bash data-prompt="$"} @@ -50,7 +58,7 @@ The tarballs include the following components: === "RHEL and derivatives" - Create the user to own the PostgreSQL process. 
For example, `mypguser`, Run the following command: + Create the user to own the PostgreSQL process. For example, `mypguser`, Run the following command: ```{.bash data-prompt="$"} $ sudo useradd mypguser -m @@ -74,7 +82,7 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use $ sudo chown mypguser:mypguser /opt/pgdistro/ ``` -3. Fetch the binary tarball. +3. Fetch the binary tarball. ```{.bash data-prompt="$"} $ wget https://downloads.percona.com/downloads/postgresql-distribution-16/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz @@ -86,12 +94,12 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use $ sudo tar -xvf percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz -C /opt/pgdistro/ ``` -5. If you extracted the tarball in a directory other than `/opt`, copy `percona-python3`, `percona-tcl` and `percona-perl` to the `/opt` directory. This is required for the correct run of libraries that require those modules. - +5. If you extracted the tarball in a directory other than `/opt`, copy `percona-python3`, `percona-tcl` and `percona-perl` to the `/opt` directory. This is required for the correct run of libraries that require those modules. + ```{.bash data-prompt="$"} $ sudo cp /percona-perl /percona-python3 /percona-tcl /opt/ ``` - + 6. Add the location of the binaries to the PATH variable: ```{.bash data-prompt="$"} @@ -112,12 +120,11 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use ``` 9. Initiate the PostgreSQL data directory: - + ```{.bash data-prompt="$"} $ /opt/pgdistro/percona-postgresql{{pgversion}}/bin/initdb -D /usr/local/pgsql/data ``` - ??? example "Sample output" ```{.text .no-copy} @@ -133,28 +140,28 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use ``` ??? example "Sample output" - + ```{.text .no-copy} waiting for server to start.... done server started ``` 9. 
Connect to `psql` - + ```{.bash data-prompt="$"} $ /opt/pgdistro/percona-postgresql{{pgversion}}/bin/psql -d postgres ``` ??? example "Sample output" - + ```{.text .no-copy} psql ({{dockertag}}) Type "help" for help. postgres=# ``` - -## Start the components + +### Start the components After you unpacked the tarball and added the location of the components' binaries to the `$PATH` variable, the components are available for use. You can invoke a component by running its command-line tool. @@ -165,4 +172,3 @@ $ haproxy version ``` Some components require additional setup. Check the [Enabling extensions](enable-extensions.md) page for details. - From 20d139aba5df0100dd18dd6ebfab72992122fabb Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 24 Apr 2025 17:59:09 +0300 Subject: [PATCH 28/41] Fixed links to tarball.md (#784) fixed download links --- docs/tarball.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/tarball.md b/docs/tarball.md index 44aefc32e..a00d3bfbc 100644 --- a/docs/tarball.md +++ b/docs/tarball.md @@ -11,9 +11,9 @@ You can download the tarballs using the links below. 
The following tarballs are available for the x86_64 and ARM64 architectures: * [percona-postgresql-{{dockertag}}-ssl1.1-linux-aarch64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl1.1-linux-aarch64.tar.gz) - for operating systems on ARM64 architecture that run OpenSSL version 1.x -* [percona-postgresql-{{dockertag}}-ssl1.1-linux-x86_64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16.8/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl1.1-linux-x86_64.tar.gz) - for operating systems on x86_64 architecture that run OpenSSL version 1.x -* [percona-postgresql-{{dockertag}}-ssl3-linux-aarch64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16.8/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-aarch64.tar.gz) - for operating systems on ARM64 architecture that run OpenSSL version 3.x -* [percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16.8/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz) - for operating systems on x86_64 architecture that run OpenSSL version 3.x +* [percona-postgresql-{{dockertag}}-ssl1.1-linux-x86_64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl1.1-linux-x86_64.tar.gz) - for operating systems on x86_64 architecture that run OpenSSL version 1.x +* [percona-postgresql-{{dockertag}}-ssl3-linux-aarch64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-aarch64.tar.gz) - for operating systems on ARM64 architecture that run OpenSSL version 3.x +* 
[percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz](https://downloads.percona.com/downloads/postgresql-distribution-16/{{dockertag}}/binary/tarball/percona-postgresql-{{dockertag}}-ssl3-linux-x86_64.tar.gz) - for operating systems on x86_64 architecture that run OpenSSL version 3.x To check what OpenSSL version you have, run the following command: From eaca92d055e072f5fa20cb3514c7812fa7c120a7 Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Fri, 25 Apr 2025 16:19:25 +0300 Subject: [PATCH 29/41] Backport: Doc update for HA v16 from 3df907c (#788) --- docs/solutions/ha-setup-apt.md | 34 ++++++++++++++-------------------- docs/solutions/ha-setup-yum.md | 34 ++++++++++++++-------------------- 2 files changed, 28 insertions(+), 40 deletions(-) diff --git a/docs/solutions/ha-setup-apt.md b/docs/solutions/ha-setup-apt.md index 24c34773f..bab3e6674 100644 --- a/docs/solutions/ha-setup-apt.md +++ b/docs/solutions/ha-setup-apt.md @@ -357,31 +357,19 @@ Run the following commands on all nodes. 
You can do this in parallel: archive_mode: "on" archive_timeout: 600s archive_command: "cp -f %p /home/postgres/archived/%f" + pg_hba: + - local all all peer + - host replication replicator 127.0.0.1/32 trust + - host replication replicator 192.0.0.0/8 scram-sha-256 + - host all all 0.0.0.0/0 scram-sha-256 + recovery_conf: + restore_command: cp /home/postgres/archived/%f %p # some desired options for 'initdb' initdb: # Note: It needs to be a list (some options need values, others are switches) - encoding: UTF8 - data-checksums - - pg_hba: # Add following lines to pg_hba.conf after running 'initdb' - - host replication replicator 127.0.0.1/32 trust - - host replication replicator 0.0.0.0/0 md5 - - host all all 0.0.0.0/0 md5 - - host all all ::0/0 md5 - - # Some additional users which needs to be created after initializing new cluster - users: - admin: - password: qaz123 - options: - - createrole - - createdb - percona: - password: qaz123 - options: - - createrole - - createdb - + postgresql: cluster_name: cluster_1 listen: 0.0.0.0:5432 @@ -403,6 +391,12 @@ Run the following commands on all nodes. You can do this in parallel: basebackup: checkpoint: 'fast' + watchdog: + mode: required # Allowed values: off, automatic, required + device: /dev/watchdog + safety_margin: 5 + + tags: nofailover: false noloadbalance: false diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md index a2ef5d66b..b42b32d38 100644 --- a/docs/solutions/ha-setup-yum.md +++ b/docs/solutions/ha-setup-yum.md @@ -366,31 +366,19 @@ Run the following commands on all nodes. 
You can do this in parallel: archive_mode: "on" archive_timeout: 600s archive_command: "cp -f %p /home/postgres/archived/%f" + pg_hba: + - local all all peer + - host replication replicator 127.0.0.1/32 trust + - host replication replicator 192.0.0.0/8 scram-sha-256 + - host all all 0.0.0.0/0 scram-sha-256 + recovery_conf: + restore_command: cp /home/postgres/archived/%f %p # some desired options for 'initdb' initdb: # Note: It needs to be a list (some options need values, others are switches) - encoding: UTF8 - data-checksums - - pg_hba: # Add following lines to pg_hba.conf after running 'initdb' - - host replication replicator 127.0.0.1/32 trust - - host replication replicator 0.0.0.0/0 md5 - - host all all 0.0.0.0/0 md5 - - host all all ::0/0 md5 - - # Some additional users which needs to be created after initializing new cluster - users: - admin: - password: qaz123 - options: - - createrole - - createdb - percona: - password: qaz123 - options: - - createrole - - createdb - + postgresql: cluster_name: cluster_1 listen: 0.0.0.0:5432 @@ -412,6 +400,12 @@ Run the following commands on all nodes. You can do this in parallel: basebackup: checkpoint: 'fast' + watchdog: + mode: required # Allowed values: off, automatic, required + device: /dev/watchdog + safety_margin: 5 + + tags: nofailover: false noloadbalance: false From 2b90a9dcf4ea98bd1a71b3ba8fcb80e9799698a6 Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 29 May 2025 17:33:12 +0300 Subject: [PATCH 30/41] PG-1567-Release-notes-16.9 (#793) * Create release-notes-v16.9.md initial commit * updated ver nr and dates updated version numbers and added release variable * Update variables.yml updated with date * Updated rel notes link For 16.8 and 16.9 * updated major-upgrade Updated with a bit more feedback on the !!! important note, reworded it a bit. 
* upgraded major steps based on pg-1599, updated steps to upgrade major version On Debian and Ubuntu using `apt` topic * Added documentation note To the release notes regarding the new updated major upgrade chapter steps * small fixes small linting updates and added latest release note to index * updated tarball with a precondition Added additional precondition + set the correct date of release for 16.9 * small fix fixed space and reworded based on feedback from Naeem * date updates date updates and release notes informing user they are based and built upon 16.8 release --- .github/workflows/main.yml | 2 +- docs/major-upgrade.md | 230 +++++++++++++++--------------- docs/release-notes-v16.9.md | 45 ++++++ docs/release-notes.md | 4 +- docs/tarball.md | 19 ++- docs/templates/pdf_cover_page.tpl | 2 +- mkdocs-base.yml | 1 + mkdocs.yml | 1 + variables.yml | 5 +- 9 files changed, 187 insertions(+), 122 deletions(-) create mode 100644 docs/release-notes-v16.9.md diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 3827dbeb4..0fdb93e36 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -43,7 +43,7 @@ jobs: - name: Deploy docs run: | mike deploy 16 -b publish -p - mike retitle 16 "16.8" -b publish -p + mike retitle 16 "16.9" -b publish -p # - name: Install Node.js 14.x # uses: percona-platform/setup-node@v2 diff --git a/docs/major-upgrade.md b/docs/major-upgrade.md index 46fca6a82..ebe842801 100644 --- a/docs/major-upgrade.md +++ b/docs/major-upgrade.md @@ -2,16 +2,15 @@ This document describes the in-place upgrade of Percona Distribution for PostgreSQL using the `pg_upgrade` tool. -!!! 
important +To ensure a smooth upgrade path, follow these steps: - When running a major upgrade on **RHEL 8 and compatible derivatives**, consider the following: - - Percona Distribution for PostgreSQL 16.3, 15.7, 14.12, 13.15 and 12.18 include `llvm` packages 16.0.6, while its previous versions 16.2, 15.6, 14.11, 13.14, and 12.17 include `llvm` 12.0.1. Since `llvm` libraries differ and are not compatible, the direct major version upgrade from 15.6 to 16.3 may cause issues. +* Upgrade to the latest minor version within your current major version (e.g., from 15.11 to 15.13). +* Then, perform the major upgrade to your desired version (e.g., from 15.13 to 16.9). - To ensure a smooth upgrade path, follow these steps: +!!! Note + When running a major upgrade for **RHEL 8 and compatible derivatives**, consider the following: - * Upgrade to the latest minor version within your current major version (e.g., from 15.6 to 15.7). - * Then, perform the major upgrade to your desired version (e.g., from 15.7 to 16.3). + Percona Distribution for PostgreSQL 16.3, 15.7, 14.12, 13.15 and 12.18 include `llvm` packages 16.0.6, while its previous versions 16.2, 15.6, 14.11, 13.14, and 12.17 include `llvm` 12.0.1. Since `llvm` libraries differ and are not compatible, the direct major version upgrade from 15.6 to 16.3 may cause issues. The in-place upgrade means installing a new version without removing the old version and keeping the data files on the server. @@ -58,17 +57,18 @@ Run **all** commands as root or via **sudo**: 1. Install Percona Distribution for PostgreSQL 16 packages. + !!! note + When installing version 16, if prompted via a pop-up to upgrade to the latest available version, select **No**. * [Install percona-release :octicons-link-external-16:](https://docs.percona.com/percona-software-repositories/installing.html). 
If you have installed it before, [update it to the latest version](https://docs.percona.com/percona-software-repositories/updating.html) - - * Enable Percona repository: + + * Enable Percona repository ```{.bash data-prompt="$"} $ sudo percona-release setup ppg-16 ``` - - * Install Percona Distribution for PostgreSQL 16 package: + * Install Percona Distribution for PostgreSQL 16 package ```{.bash data-prompt="$"} $ sudo apt install percona-postgresql-16 @@ -82,94 +82,113 @@ Run **all** commands as root or via **sudo**: This stops both Percona Distribution for PostgreSQL 15 and 16. - 3. Run the database upgrade. + * Log in as the `postgres` user - * Log in as the `postgres` user. - - ```{.bash data-prompt="$"} - $ sudo su postgres - ``` - - - * Change the current directory to the `tmp` directory where logs and some scripts will be recorded: - - ```{.bash data-prompt="$"} - $ cd tmp/ - ``` - - - * Check the ability to upgrade Percona Distribution for PostgreSQL from 15 to 16: - - ```{.bash data-prompt="$"} - $ /usr/lib/postgresql/16/bin/pg_upgrade \ - --old-datadir=/var/lib/postgresql/15/main \ - --new-datadir=/var/lib/postgresql/16/main \ - --old-bindir=/usr/lib/postgresql/15/bin \ - --new-bindir=/usr/lib/postgresql/16/bin \ - --old-options '-c config_file=/etc/postgresql/15/main/postgresql.conf' \ - --new-options '-c config_file=/etc/postgresql/16/main/postgresql.conf' \ - --check - ``` - - The `--check` flag here instructs `pg_upgrade` to only check the upgrade without changing any data. 
+ ```{.bash data-prompt="$"} + $ sudo su postgres + ``` - **Sample output** + * Check if you can upgrade Percona Distribution for PostgreSQL from 15 to 16 - ``` - Performing Consistency Checks - ----------------------------- - Checking cluster versions ok - Checking database user is the install user ok - Checking database connection settings ok - Checking for prepared transactions ok - Checking for reg* data types in user tables ok - Checking for contrib/isn with bigint-passing mismatch ok - Checking for tables WITH OIDS ok - Checking for invalid "sql_identifier" user columns ok - Checking for presence of required libraries ok - Checking database user is the install user ok - Checking for prepared transactions ok - - *Clusters are compatible* - ``` + ```bash + $ pg_upgradecluster 15 main --check + # Sample output: pg_upgradecluster pre-upgrade checks ok + ``` + The `--check` flag here instructs `pg_upgrade` to only check the upgrade without changing any data. * Upgrade the Percona Distribution for PostgreSQL - ```{.bash data-prompt="$"} - $ /usr/lib/postgresql/16/bin/pg_upgrade \ - --old-datadir=/var/lib/postgresql/15/main \ - --new-datadir=/var/lib/postgresql/16/main \ - --old-bindir=/usr/lib/postgresql/15/bin \ - --new-bindir=/usr/lib/postgresql/16/bin \ - --old-options '-c config_file=/etc/postgresql/15/main/postgresql.conf' \ - --new-options '-c config_file=/etc/postgresql/16/main/postgresql.conf' \ - --link - ``` - - The `--link` flag creates hard links to the files on the old version cluster so you don’t need to copy data. - - If you don’t wish to use the `--link` option, make sure that you have enough disk space to store 2 copies of files for both old version and new version clusters. - - - * Go back to the regular user: - - ```{.bash data-prompt="$"} - $ exit - ``` - - - * The Percona Distribution for PostgreSQL 15 uses the `5432` port while the Percona Distribution for PostgreSQL 16 is set up to use the `5433` port by default. 
To start the Percona Distribution for PostgreSQL 15, swap ports in the configuration files of both versions. - - ```{.bash data-prompt="$"} - $ sudo vim /etc/postgresql/16/main/postgresql.conf - $ port = 5433 # Change to 5432 here - $ sudo vim /etc/postgresql/15/main/postgresql.conf - $ port = 5432 # Change to 5433 here - ``` + ```bash + $ pg_upgradecluster 15 main + ``` +
+ Sample output (click to expand) + ```bash + Upgrading cluster 15/main to 16/main ... + Stopping old cluster... + Restarting old cluster with restricted connections... + ... + Success. Please check that the upgraded cluster works. If it does, + you can remove the old cluster with: + pg_dropcluster 15 main + + Ver Cluster Port Status Owner Data directory Log file + 16 main 5432 online postgres /var/lib/postgresql/16/main /var/log/postgresql/postgresql-16-main.log + + Sample output: + Upgrading cluster 15/main to 16/main ... + Stopping old cluster... + Restarting old cluster with restricted connections... + Notice: extra pg_ctl/postgres options given, bypassing systemctl for start operation + Creating new PostgreSQL cluster 16/main ... + /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions --encoding UTF8 --lc-collate C.UTF-8 --lc-ctype C.UTF-8 --locale-provider libc + The files belonging to this database system will be owned by user "postgres". + This user must also own the server process. + + The database cluster will be initialized with locale "C.UTF-8". + The default text search configuration will be set to "english". + + Data page checksums are disabled. + + fixing permissions on existing directory /var/lib/postgresql/16/main ... ok + creating subdirectories ... ok + selecting dynamic shared memory implementation ... posix + selecting default max_connections ... 100 + selecting default shared_buffers ... 128MB + selecting default time zone ... Etc/UTC + creating configuration files ... ok + running bootstrap script ... ok + performing post-bootstrap initialization ... ok + syncing data to disk ... ok + + Copying old configuration files... + Copying old start.conf... + Copying old pg_ctl.conf... + Starting new cluster... + Notice: extra pg_ctl/postgres options given, bypassing systemctl for start operation + Running init phase upgrade hook scripts ... + + Roles, databases, schemas, ACLs... 
+ set_config + ------------ + + (1 row) + + set_config + ------------ + + (1 row) + + Fixing hardcoded library paths for stored procedures... + Upgrading database template1... + Fixing hardcoded library paths for stored procedures... + Upgrading database postgres... + Stopping target cluster... + Stopping old cluster... + Disabling automatic startup of old cluster... + Starting upgraded cluster on port 5432... + Running finish phase upgrade hook scripts ... + vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target) + vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target) + vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets) + vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets) + vacuumdb: processing database "postgres": Generating default (full) optimizer statistics + vacuumdb: processing database "template1": Generating default (full) optimizer statistics + + Success. Please check that the upgraded cluster works. If it does, + you can remove the old cluster with + pg_dropcluster 15 main + + Ver Cluster Port Status Owner Data directory Log file + 15 main 5433 down postgres /var/lib/postgresql/15/main /var/log/postgresql/postgresql-15-main.log + Ver Cluster Port Status Owner Data directory Log file + 16 main 5432 online postgres /var/lib/postgresql/16/main /var/log/postgresql/postgresql-16-main.log + ``` +
4. Start the `postgreqsl` service. @@ -177,38 +196,29 @@ Run **all** commands as root or via **sudo**: $ sudo systemctl start postgresql.service ``` - 5. Check the `postgresql` version. * Log in as a postgres user - + ```{.bash data-prompt="$"} $ sudo su postgres ``` * Check the database version - + ```{.bash data-prompt="$"} $ psql -c "SELECT version();" ``` +6. Delete the old cluster's data files. -6. After the upgrade, the Optimizer statistics are not transferred to the new cluster. Run the `vaccumdb` command to analyze the new cluster: - - ```{.bash data-prompt="$"} - $ /usr/lib/postgresql/16/bin/vacuumdb --all --analyze-in-stages - ``` + !!! note + Before deleting the old cluster, verify that the newly upgraded cluster is fully operational. Keeping the old cluster does not negatively affect the functionality or performance of your upgraded cluster. -7. Delete the old cluster's data files: - ```{.bash data-prompt="$"} - $ ./delete_old_cluster.sh - $ sudo rm -rf /etc/postgresql/15/main - $ #Logout - $ exit + $ pg_dropcluster 15 main ``` - ## On Red Hat Enterprise Linux and CentOS using `yum` Run **all** commands as root or via **sudo**: @@ -216,7 +226,6 @@ Run **all** commands as root or via **sudo**: 1. Install Percona Distribution for PostgreSQL 16 packages - * [Install percona-release :octicons-link-external-16:](https://docs.percona.com/percona-software-repositories/installing.html) * Enable Percona repository: @@ -225,7 +234,6 @@ Run **all** commands as root or via **sudo**: $ sudo percona-release setup ppg-16 ``` - * Install Percona Distribution for PostgreSQL 16: ```{.bash data-prompt="$"} @@ -253,24 +261,20 @@ Run **all** commands as root or via **sudo**: $ /usr/pgsql-16/bin/initdb -D /var/lib/pgsql/16/data ``` - 3. Stop the `postgresql` 15 service ```{.bash data-prompt="$"} $ systemctl stop postgresql-15 ``` - 4. Run the database upgrade. 
- * Log in as the `postgres` user ```{.bash data-prompt="$"} $ sudo su postgres ``` - * Check the ability to upgrade Percona Distribution for PostgreSQL from 15 to 16: ```{.bash data-prompt="$"} @@ -304,7 +308,6 @@ Run **all** commands as root or via **sudo**: *Clusters are compatible* ``` - * Upgrade the Percona Distribution for PostgreSQL ```{.bash data-prompt="$"} @@ -319,7 +322,6 @@ Run **all** commands as root or via **sudo**: The `--link` flag creates hard links to the files on the old version cluster so you don’t need to copy data. If you don’t wish to use the `--link` option, make sure that you have enough disk space to store 2 copies of files for both old version and new version clusters. - 5. Start the `postgresql` 16 service. ```{.bash data-prompt="$"} @@ -332,10 +334,8 @@ Run **all** commands as root or via **sudo**: $ systemctl status postgresql-16 ``` - 7. After the upgrade, the Optimizer statistics are not transferred to the new cluster. Run the `vaccumdb` command to analyze the new cluster: - * Log in as the postgres user ```{.bash data-prompt="$"} @@ -348,14 +348,12 @@ Run **all** commands as root or via **sudo**: $ /usr/pgsql-16/bin/vacuumdb --all --analyze-in-stages ``` - 8. Delete Percona Distribution for PostgreSQL 15 configuration files ```{.bash data-prompt="$"} $ ./delete_old_cluster.sh ``` - 9. Delete Percona Distribution old data files ```{.bash data-prompt="$"} diff --git a/docs/release-notes-v16.9.md b/docs/release-notes-v16.9.md new file mode 100644 index 000000000..c305f7459 --- /dev/null +++ b/docs/release-notes-v16.9.md @@ -0,0 +1,45 @@ +# Percona Distribution for PostgreSQL 16.9 ({{date.16_9}}) + +[Installation](installing.md){.md-button} + +--8<-- "release-notes-intro.md" + +This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.8](https://www.postgresql.org/docs/16/release-16-8.html) and [PostgreSQL 16.9](https://www.postgresql.org/docs/16/release-16-9.html). 
+
+## Release Highlights
+
+This release is based on and extends the functionality of [Percona Distribution for PostgreSQL 16.8](https://docs.percona.com/postgresql/16/release-notes-v16.8.html).
+
+### Updated Major upgrade topic in documentation
+
+The [Upgrading Percona Distribution for PostgreSQL from 15 to 16](major-upgrade.md) guide has been updated with revised steps for the [On Debian and Ubuntu using `apt`](major-upgrade.md/#on-debian-and-ubuntu-using-apt) section, improving clarity and reliability of the upgrade process.
+
+## Supplied third-party extensions
+
+Review each extension’s release notes for What’s new, improvements, or bug fixes. The following is the list of extensions available in Percona Distribution for PostgreSQL.
+
+| Extension | Version | Description |
+| ------------------- | -------------- | ---------------------------- |
+| [etcd](https://etcd.io/)| 3.5.21 | A distributed, reliable key-value store for setting up highly available Patroni clusters |
+| [HAProxy](http://www.haproxy.org/) | 2.8.15 | a high-availability and load-balancing solution |
+| [Patroni](https://patroni.readthedocs.io/en/latest/) | 4.0.5 | an HA (High Availability) solution for PostgreSQL |
+| [PgAudit](https://www.pgaudit.org/) | 16.1 | provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL |
+| [pgAudit set_user](https://github.com/pgaudit/set_user)| 4.1.0 | provides an additional layer of logging and control when unprivileged users must escalate themselves to superusers or object owner roles in order to perform needed maintenance tasks.|
+| [pgBackRest](https://pgbackrest.org/) | 2.55.0 | a backup and restore solution for PostgreSQL |
+| [pgBadger](https://github.com/darold/pgbadger) | 13.1 | a fast PostgreSQL Log Analyzer.
|
+| [PgBouncer](https://www.pgbouncer.org/) | 1.24.1 | a lightweight connection pooler for PostgreSQL |
+| [pg_gather](https://github.com/jobinau/pg_gather) | v30 | an SQL script for running the diagnostics of the health of a PostgreSQL cluster |
+| [pgpool2](https://git.postgresql.org/gitweb/?p=pgpool2.git;a=summary) | 4.6.0 | a middleware between PostgreSQL server and client for high availability, connection pooling, and load balancing. |
+| [pg_repack](https://github.com/reorg/pg_repack) | 1.5.2 | rebuilds PostgreSQL database objects |
+| [pgvector](https://github.com/pgvector/pgvector) | v0.8.0 | A vector similarity search for PostgreSQL |
+| [PostGIS](https://github.com/postgis/postgis) | 3.3.8 | a spatial extension for PostgreSQL. |
+| [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common) | 277 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time. |
+| [wal2json](https://github.com/eulerto/wal2json) | 2.6 | a PostgreSQL logical decoding JSON output plugin |
+
+For Red Hat Enterprise Linux 8 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters.
+
+Percona Distribution for PostgreSQL is also shipped with the [libpq](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of
+library functions that allow client programs to pass queries to the PostgreSQL
+backend server and to receive the results of these queries."
diff --git a/docs/release-notes.md b/docs/release-notes.md index b09237010..ab7d0b530 100644 --- a/docs/release-notes.md +++ b/docs/release-notes.md @@ -1,4 +1,6 @@ -# Percona Distribution for PostgreSQL release notes +# Percona Distribution for PostgreSQL release notes + +* [Percona Distribution for PostgreSQL 16.9](release-notes-v16.9.md) ({{date.16_9}}) * [Percona Distribution for PostgreSQL 16.8](release-notes-v16.8.md) ({{date.16_8}}) diff --git a/docs/tarball.md b/docs/tarball.md index a00d3bfbc..319e29d4d 100644 --- a/docs/tarball.md +++ b/docs/tarball.md @@ -44,7 +44,13 @@ The tarballs include the following components: === "Debian and Ubuntu" 1. Uninstall the upstream PostgreSQL package. - 2. Create the user to own the PostgreSQL process. For example, `mypguser`. Run the following command: + 2. Ensure that the `libreadline` is present on the system, as it is **required** for tarballs to work correctly: + + ```{.bash data-prompt="$"} + $ sudo apt install -y libreadline-dev + ``` + + 3. Create the user to own the PostgreSQL process. For example, `mypguser`. Run the following command: ```{.bash data-prompt="$"} $ sudo useradd -m mypguser @@ -58,6 +64,12 @@ The tarballs include the following components: === "RHEL and derivatives" + Ensure that the `libreadline` is present on the system, as it is **required** for tarballs to work correctly: + + ```{.bash data-prompt="$"} + $ sudo yum install -y readline-devel + ``` + Create the user to own the PostgreSQL process. For example, `mypguser`, Run the following command: ```{.bash data-prompt="$"} @@ -74,6 +86,11 @@ The tarballs include the following components: The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use another tarball if your operating system has OpenSSL version 1.x and / or has the ARM64 architecture. +Before step 1 please perform the following steps: + +x +y as a sudo user + 1. Create the directory where you will store the binaries. For example, `/opt/pgdistro` 2. 
Grant access to this directory for the `mypguser` user.
diff --git a/docs/templates/pdf_cover_page.tpl b/docs/templates/pdf_cover_page.tpl
index b5ab6ed46..83dadd948 100644
--- a/docs/templates/pdf_cover_page.tpl
+++ b/docs/templates/pdf_cover_page.tpl
@@ -7,6 +7,6 @@
 {% if config.site_description %}
 {{ config.site_description }}
 {% endif %}
-16.8 (February 27, 2025)
+16.9 (May 29, 2025)
diff --git a/mkdocs-base.yml b/mkdocs-base.yml index 16157af2a..352b491c9 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -202,6 +202,7 @@ nav: - Uninstall: uninstalling.md - Release Notes: - "Release notes index": "release-notes.md" + - release-notes-v16.9.md - release-notes-v16.8.md - release-notes-v16.6.md - release-notes-v16.4.md diff --git a/mkdocs.yml b/mkdocs.yml index 1c08d4616..15d93b41b 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -74,6 +74,7 @@ nav: - Uninstall: uninstalling.md - Release Notes: - "Release notes index": "release-notes.md" + - release-notes-v16.9.md - release-notes-v16.8.md - release-notes-v16.6.md - release-notes-v16.4.md diff --git a/variables.yml b/variables.yml index ac6c82c91..f616153d1 100644 --- a/variables.yml +++ b/variables.yml @@ -1,13 +1,14 @@ # PG Variables set for HTML output # See also mkdocs.yml plugins.with-pdf.cover_subtitle and output_path -release: 'release-notes-v16.8' -dockertag: '16.8' +release: 'release-notes-v16.9' +dockertag: '16.9' pgversion: '16' pgsmversion: '2.1.1' date: + 16_9: 2025-05-29 16_8: 2025-02-27 16_6: 2024-12-03 16_4: 2024-09-10 From 64cc158f7662969ebaaa4001697555b126b4ba3f Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 29 May 2025 17:55:21 +0300 Subject: [PATCH 31/41] updated tarball.md (#800) removed these two lines, shouldn't be there --- docs/tarball.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/docs/tarball.md b/docs/tarball.md index 319e29d4d..556d4d290 100644 --- a/docs/tarball.md +++ b/docs/tarball.md @@ -88,9 +88,6 @@ The steps below install the tarballs for OpenSSL 3.x on x86_64 architecture. Use Before step 1 please perform the following steps: -x -y as a sudo user - 1. Create the directory where you will store the binaries. For example, `/opt/pgdistro` 2. Grant access to this directory for the `mypguser` user. 
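The tarball preparation steps that these two patches settle on (create a directory for the binaries, grant it to the service user) can be rehearsed without root. This is a minimal sketch, assuming GNU coreutils (`mktemp`, `stat -c`); the temporary directory stands in for the doc's `/opt/pgdistro`, and the real procedure additionally installs `libreadline` and `chown`s the tree to the service user (the doc's example: `mypguser`):

```shell
# Non-root rehearsal of the tarball layout steps from docs/tarball.md.
# PGDIR stands in for /opt/pgdistro; the real steps use sudo and chown
# the tree to the PostgreSQL service user (the doc's example: mypguser).
PGDIR="$(mktemp -d)"
mkdir -p "$PGDIR/bin" "$PGDIR/lib" "$PGDIR/share"
chmod 750 "$PGDIR"
echo "layout created under $PGDIR with mode $(stat -c '%a' "$PGDIR")"
```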
From ef451401dea60bad24daab9de2eed42aaad0ec2b Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 29 May 2025 18:09:38 +0300 Subject: [PATCH 32/41] Update release-notes-v16.9.md (#801) * Update release-notes-v16.9.md * Update release-notes-v16.9.md removed unneeded paragraph duplicate --- docs/release-notes-v16.9.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/docs/release-notes-v16.9.md b/docs/release-notes-v16.9.md index c305f7459..124ac1673 100644 --- a/docs/release-notes-v16.9.md +++ b/docs/release-notes-v16.9.md @@ -4,12 +4,10 @@ --8<-- "release-notes-intro.md" -This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.8](https://www.postgresql.org/docs/16/release-16-8.html) and [PostgreSQL 16.9](https://www.postgresql.org/docs/16/release-16-9.html). +This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.9](https://www.postgresql.org/docs/16/release-16-9.html). ## Release Highlights -This release is based on and extends the functionality of [Percona Distribution for PostgreSQL 16.8](https://docs.percona.com/postgresql/16/release-notes-v16.8.html). - ### Updated Major upgrade topic in documentation The [Upgrading Percona Distribution for PostgreSQL from 15 to 16](major-upgrade.md) guide has been updated with revised steps for the [On Debian and Ubuntu using `apt`](major-upgrade.md/#on-debian-and-ubuntu-using-apt) section, improving clarity and reliability of the upgrade process. 
@@ -25,7 +23,7 @@ The following is the list of extensions available in Percona Distribution for Po | [etcd](https://etcd.io/)| 3.5.21 | A distributed, reliable key-value store for setting up high available Patroni clusters | | [HAProxy](http://www.haproxy.org/) | 2.8.15 | a high-availability and load-balancing solution | | [Patroni](https://patroni.readthedocs.io/en/latest/) | 4.0.5 | a HA (High Availability) solution for PostgreSQL | -| [PgAudit](https://www.pgaudit.org/) | 16.1 | provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL | +| [PgAudit](https://www.pgaudit.org/) | 16.0 | provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL | | [pgAudit set_user](https://github.com/pgaudit/set_user)| 4.1.0 | provides an additional layer of logging and control when unprivileged users must escalate themselves to superusers or object owner roles in order to perform needed maintenance tasks.| | [pgBackRest](https://pgbackrest.org/) | 2.55.0 | a backup and restore solution for PostgreSQL | | [pgBadger](https://github.com/darold/pgbadger) | 13.1 | a fast PostgreSQL Log Analyzer. 
| From faf78bcd1c3d14b61a0986a7a96a93f8049fcbc9 Mon Sep 17 00:00:00 2001 From: Alina Derkach Date: Fri, 13 Jun 2025 18:07:32 +0200 Subject: [PATCH 33/41] DOCS-177 Add the PostHog script to PDF main.yml --- _resourcepdf/overrides/main.html | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/_resourcepdf/overrides/main.html b/_resourcepdf/overrides/main.html index 545cd7c41..c11e96e01 100644 --- a/_resourcepdf/overrides/main.html +++ b/_resourcepdf/overrides/main.html @@ -66,4 +66,9 @@ } }) - {% endblock %} \ No newline at end of file + + + {% endblock %} From dc99b015066f2245de837387e84a1da2a30ce448 Mon Sep 17 00:00:00 2001 From: Alina Derkach Date: Tue, 17 Jun 2025 16:56:41 +0200 Subject: [PATCH 34/41] Update main.html --- _resourcepdf/overrides/main.html | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/_resourcepdf/overrides/main.html b/_resourcepdf/overrides/main.html index c11e96e01..734db856a 100644 --- a/_resourcepdf/overrides/main.html +++ b/_resourcepdf/overrides/main.html @@ -6,6 +6,28 @@ {# Import the theme's layout. 
#} {% extends "base.html" %} +{% block scripts %} + +{{ super() }} +{% endblock %} + + {% block extrahead %} + {{ super() }} + {% set title = config.site_name %} + {% if page and page.meta and page.meta.title %} + {% set title = title ~ " - " ~ page.meta.title %} + {% elif page and page.title and not page.is_homepage %} + {% set title = title ~ " - " ~ page.title %} + {% endif %} + + + + + + + + + {% endblock %} {% block site_nav %} {% if nav %} From ff69346a74dfeabbfdc08a5938da52de337d1aea Mon Sep 17 00:00:00 2001 From: Anastasia Alexandrova Date: Thu, 10 Jul 2025 15:47:10 +0200 Subject: [PATCH 35/41] PG-1127 Rewamped HA solution (16) (#814) PG-1127 Rewamp HA solution * backported changes from 17 * fixed admonition, added OS icons --- docs/_images/diagrams/HA-basic.svg | 4 + .../diagrams/ha-architecture-patroni.png | Bin 110268 -> 0 bytes docs/_images/diagrams/ha-overview-backup.svg | 3 + .../_images/diagrams/ha-overview-failover.svg | 3 + .../diagrams/ha-overview-load-balancer.svg | 3 + .../diagrams/ha-overview-replication.svg | 4 + docs/_images/diagrams/ha-recommended.svg | 3 + .../_images/diagrams/patroni-architecture.png | Bin 13002 -> 0 bytes docs/enable-extensions.md | 2 +- docs/solutions/dr-pgbackrest-setup.md | 2 +- docs/solutions/etcd-info.md | 67 ++ docs/solutions/ha-architecture.md | 60 ++ docs/solutions/ha-components.md | 53 ++ docs/solutions/ha-etcd-config.md | 170 +++++ docs/solutions/ha-haproxy.md | 269 ++++++++ docs/solutions/ha-init-setup.md | 81 +++ docs/solutions/ha-measure.md | 39 ++ docs/solutions/ha-patroni.md | 371 +++++++++++ docs/solutions/ha-setup-apt.md | 581 ----------------- docs/solutions/ha-setup-yum.md | 584 ------------------ docs/solutions/haproxy-info.md | 77 +++ docs/solutions/high-availability.md | 131 ++-- docs/solutions/patroni-info.md | 84 +++ docs/solutions/pgbackrest-info.md | 41 ++ docs/solutions/pgbackrest.md | 297 +++++---- mkdocs-base.yml | 21 +- mkdocs.yml | 21 +- 27 files changed, 1600 insertions(+), 1371 
deletions(-) create mode 100644 docs/_images/diagrams/HA-basic.svg delete mode 100644 docs/_images/diagrams/ha-architecture-patroni.png create mode 100644 docs/_images/diagrams/ha-overview-backup.svg create mode 100644 docs/_images/diagrams/ha-overview-failover.svg create mode 100644 docs/_images/diagrams/ha-overview-load-balancer.svg create mode 100644 docs/_images/diagrams/ha-overview-replication.svg create mode 100644 docs/_images/diagrams/ha-recommended.svg delete mode 100644 docs/_images/diagrams/patroni-architecture.png create mode 100644 docs/solutions/etcd-info.md create mode 100644 docs/solutions/ha-architecture.md create mode 100644 docs/solutions/ha-components.md create mode 100644 docs/solutions/ha-etcd-config.md create mode 100644 docs/solutions/ha-haproxy.md create mode 100644 docs/solutions/ha-init-setup.md create mode 100644 docs/solutions/ha-measure.md create mode 100644 docs/solutions/ha-patroni.md delete mode 100644 docs/solutions/ha-setup-apt.md delete mode 100644 docs/solutions/ha-setup-yum.md create mode 100644 docs/solutions/haproxy-info.md create mode 100644 docs/solutions/patroni-info.md create mode 100644 docs/solutions/pgbackrest-info.md diff --git a/docs/_images/diagrams/HA-basic.svg b/docs/_images/diagrams/HA-basic.svg new file mode 100644 index 000000000..d47d87be8 --- /dev/null +++ b/docs/_images/diagrams/HA-basic.svg @@ -0,0 +1,4 @@ + + + +
(SVG diagram, text labels only — the markup was stripped. Labels: Database layer; Primary and Replica 1 nodes, each running PostgreSQL, Patroni, and ETCD, linked by stream replication; an Application issuing read/write and read-only traffic; an ETCD Witness node; and pgBackRest (Backup Server).)
\ No newline at end of file
diff --git a/docs/_images/diagrams/ha-architecture-patroni.png b/docs/_images/diagrams/ha-architecture-patroni.png
deleted file mode 100644
index 0f18b0d617df13933c8932e615da97afc98e2c9e..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 110268
(base85-encoded image data omitted)
z4{up<3bIg~wF5pGr3RKuMvB&>pXQkfM2xFCi2}gU!}-?^1B=8AK&qYV;^#O4Oyu)@@7KIsrhj@OoZFt4P(Lv(b`@OtC z+(=Jb-5_d(t>>3!dTrdfzr2#SJKjhM2(<&wJ6ye?EiG-uCU6rvhZrVPM*-%$R0|`f zWP}{iniQbxq3|F~SFw@l=C@GS+f4wxm(aXMW}%q5bnyNw@H2dA#MdmT(QGi#n&R<8 zF*?OLii%aBGT-um|655=qQxCxoMR^h1o zV>Yg~lQ*K)Rv44mKox6C6$%U*+sH);GGRZl>P)>x-pHa!#(uUtnQO6`QqkA&H?X{bNxQ?|S?rc7hF zUKvQ~xgED7$lOI;qkHJ41FPxay z#YKM9=2uwYs&!twAF2xbCcC1L-`IC^)4H2=rZ);!8?5#d4?j*(kfA|_uILE>P_#G+ zz+ja0TMO-wVC*U!^GWzEZZ zE_%X_ea5=eiDt$ubxRxU-0A&kI-8>BpRgz+V_tTCTK@MU9+Q2xXsgzZuSq2?$|HQF zB>3gA1(96zBmJgayTlV!qozt-D$T(J`Y9$#(02_quhWI}PX_5fiy)X^a zP%0*=vS<+BfHv*y#+|vE<-Rv2k3E!yS*zk5v!`^mB+G9z;?Fod=g*pBC7Q+PLZSs^n zaI@IF4|$jRu4=g5^FZ%=iy-G<$<`fx&MhUjow_CcUTSJm$;6GMRDgkEh z+zh`>p*;DhOE0shrS(645cu$MR(!&e=vRT92tzGz(sF~w6agPbMGGDzL?H*T?*a- z|FQbVH26}>0%IFx>90bqIS5l}?9?h($n*|hyOVHSSgH2h zRaF!Y|2zaa3jh}ulLpM6=6#SRp9<+SzO2Yi7CSS*%F5y;W|kiX>a{4OxfJ{O z`C{k;6_2;7pyh;$JYdL_S*MLIa+^+{M4Mf3z5exGM6)W}PtCe8dl9)h_1&Z;1hT&Y zAj+#O&Uasf(xpeJ`@c!1t`3rnt1BXT(mF<$lM#%&IP2SB`I22oOV@b_V;W54r@wl~ zQwcJRFIonSo}4fiUY@JR$t>dL#_neaD{N(UFAhde_Hi~PN=Ynp(&A!U>jSZHwwa~R z?e+3Wq(pQn`q`4&F26nwI*argXRVo6sCI-s9XVns&}3*tdNcJvhcZc@U&yA9Z&3zo z>ci#YS+UOAj0OXbw~U|;f5qM8JuVQb5PpAe zn{+49ajLqh_Eb~x^5_}YY`Z)2^78QXcQl7eADsjS( zBiS{E1sl;tPnyU%osYq}03hS_PzAZ0Q>Pp4!PH85I+mPsSdbmNP`gwLR-fo@hh|i< zb@WGBb?Q0v4DG1v&l&ygk)h5QpX%qf3R<@?tu7+hm+lsQ0!)b2+@;!nz$Bv2NcL9O zo72y`BhtrduxKab)rdkq{i0C+lg8s2_5)h^3Ii-YUO?yJ-WX_F!vz*6|8I)}#Ys`` zTR&d^ruhbw;XbKW#e0F+{R-B*%pQR!8GetcCM(h#?+!$bKdYOGml-O-{#aH}r(`@( zZ35%VF;`kmd=^uh-f;G|tzZ`f@PW@o{K41P8{kYAe6g*qiIst~Ui2_IhX&d8e$jPO zSQ>SpZeJ=lr+ZP@>zjOaug!Z-OY!5#p~L2Q;g7VkHVz-L3omY}3TNuel6+QhiA}Ql z)5fC0^nIKSnwImMrScHBOl3daUJ^?Po=&%$VLCbeym(VIPsU!etm4C9A_4YxoIL&6 zN(jl5EO_~%z3T;?#F0Eo$rrbUXi|XLD*$F^+-Pmux~a=;t;eyGc&eI5h71Y>_yJxc zCm-WrlG)QWj-cBXk#g|!+Xt?+0Wr7NMmM6-l~kQ79U8eVEn~^ma^1Mw!$?~r!0pv_e}+CoTl}cUns1<} zi?%Z>WkOo+QcZdG)^6OQ6QeBCb|=}!Y)Z-Mfq(c4?_@*o)!zOfRheGe=y3JJXt6V| z(x2(}PAR3^$pO}BagxpF%OOUD7bt6O(KW$zK1CkYzV8m+$lN^c3qP^n8|ggk*S%-E 
z?SBRz{8GR70q5mqHerT{*snxDa$#`nkZ?vSO5iUdap}3p9SOrQzX+6Sd~?wrSqfQO z-U9fa-=F2)eT*@}s$WAsKRUL|*decEiP&Jpbg*6KPGugG<~<&%C~{{1`FS6AgQm@T zTI~_8_jr}3&Qxc&;q|PuT5yK72*2JV;TJdnkJMBM?QI3R&*=~)_($G9*hsWASGv?E z&b+uhBfw$iNgjF}Gsk5sxH%xGMVT_+R?BdL-jgNgoI!?F5lkY=AF;vsRw(TSOtP&U zONhu=2)v2=4@#~7gHlss2|0rHKI4%&eJer50P$>_CJe18p)^C?rDr;Kdv9U8-X03erEYhKiw$l9xf zg;r8G9$GF{IP`vw6#n+ixTU&3NjC;iP%NtWQn@$l3S40`0xB_~eE{tU|DwE=Jp%i< zrj$@hBFb{3pNS*DYzO>snnx18VDFN+lh z-Ds`iGhIA5R4JG(suQ8dG}Nq&(1S3ER%P*rC{w?(95Ed^P|*G2sFpa6V=-KXN0G(^ z2vy@6JFiCu0K+tR&u&oqD7kcIx)-F(*}eCPGLpsZ+hG3XZFCo>UpMG~BzyOg9X+}M zzvbHu=!8>|hY5DXm#kOi-0K$|yke2v*dt?`wKyb`?yJqbhUfg1_0HxpMn@dq#EY%Q z&`8jKE&6CCae7mD9(57#3W|xzQL}WOt^k}bD4tdnObRI^*bSCPJdVWIwLCM8e=fJ!Qo*p-YeEy%B3>h~;jObbFV|jd-M9 z*m!`fb@R4(u+#N<-q(!WRHeHE5rE$VIl+*SP5!Wrr*}Q9Sv%d!te;fY)g;T_{+YFi z?7C?3VWy1(C;Os?m#fF@W@V@cOvltJ$c`-LIg1zofR>AKQJ{DnyGLGB4OcfHg75Cg ze$uO~dw4hoK-hl#`Bw;pxt>3uLf%c7OxOLoEAnt+SWrtw)xN+MZYOqaUAgNjGrO6t zy0voZ(%x!zlSrkDxCdKCaE$h34z@v8`|y?a{;A#7kpPL?xut6L2?`Q_?_O!pSZ73;^pMp=-&^HREUL$Xe&jJ|)pTxogGp&Z_dQ@!ilcY3~J3Ra=f z;&cen_43`7mmcqE^7!e(d25|2gv1aFDR5JG*anI(g2MmC{o&9;Fmv2I+hh|fOGLPc zLhXoG49r>E1YJfnuL~y}2790Gm=^mm^M{g3yK=RBKkmQ_xe|3Lz{lm-PZSKz-N06v zB?MhK)fGM6)LWqDnZNY!w8f7n$WFEQaJe#Rb%Z6ml_K?k8znHPh1I2!x%Cc5rEUr|EkmqdI60yy(Q zUYoVwLo7_gPK|*`bWYk%xW&dR4m2}7_=6M{zRVw5y^*BTu}5noPvIsaykh_+^tX#! 
ze#%X3E4pR#qWA?MB-A7{na}fX%nmu+1D(aX?7?2sWy~aIet&F0ahB{#^8_5v;+YFg zP-U2gui!Va_^FSupIl~89bEnX@Wl+`Tt=bpeF4#SB+lNlVT56X2T(dRt4!6Ml1c2l zzoC#%PF=}kfJ`DA{f|B_qCo3zFY%3VSMN>0jE%WITbRQYO!`D)>XCe*4OKIGkEu4$ zK^nIH*1lloxni{UxI!1dnv7zja-W)m{v6-SVaDd*u54qa!W~2%JhQNhO0UXYg)yA^ z14qc<*ViCp0wq~U`;ohkcM?5cf-?Ulkwy|S)LHZ*4e%EKRRG-lz^l`G$WZFGg$D_K z`OlBww@(VLg`rQq?&Q!4V#(N`NUNkCt>txO>2%oJd%B`qTpVJ`F-h%cmCEt`<0S4x z9vk+yIT-l!YFwZH7Z;%EZ#U}eu%!qu*@{FswvJv?`3r z1J|q;SdkHhcZtQbV@mRV(Ih*rp?tlTPf;eSjv%oRysW7Ut17XuSSsBEU`ObAw-RVa zbVbke1}1mF&6>>LU?b_4zRLlOr3%dROh3@>OxwLgs*3l9iV%Rjjo^Rm_XQ1O)HB}| zd>9vU^VgGbEW4b4m`cQ+2kubOwa!hQ@ncvNcZ_#!<1YSol?gxp(RsTneBNF5~sMNUdeV zzUC`Ljl-4#xo7B+66lX9v)r0RDq?8Y;4NZk%-|MQR9 zwQozVv_S&oxDZmSh2_ytkB4i?Y^(E$eVVJ$E+O zm92bq#<84c=N{C9CFsM&G@O+CLt6nozA1r)+irGFC(j!ayncM0@hnwq6C})6npd4UTz^K_lO(!sK{Y3d_9WivpnjyAU zX*NJnX1j3j!8&&YrhDXJM{co#IL&vhu@^1}Q=E?xWQH5G;||@-PZ)0$RR?%{D-yPx z?*{(eMBi-(>%7pkdYojJ;BO7!pVABPS2w?s1s zO$-+RC*&AOj%mo(s_A&aWUdI=JeV%HdoToxE`!r)ZJ{=w>IY-!hKJviuaLy#d$CqB zU;Z+Ks?c09{-G~)b?ENg$?lS;=RROd=1O9bLt@Pg1Qp+Prq&A&Q!GO}S_P{TnTLlg zQj*=T&k^mp<;%%vdc)EulXOvR$DgY`Dw)xW@ZeSgY%V?6!_D@JbruJy1^$ekWCn=M zZi|{p2V)xcUhbN^{d#A+*hL7N72quKAUrEK1q3(qs#B+zbrv6-xPQi;kLIUp2>7W7 z+3_di*Dc>zugboQFOfYRsrI7Ql(6MxC@dZZsteJa+DOsP_@DZVM6G?8D+AMtWGPId z-?JTmBr6^#q($_wr)W%ll6lBndY(9N^3(goh0pGz?tFGk0$X$Z{g8_i1LD4@L_KkYDqAc>=9Ua_uyE>RXSSKGaO7kj%7)}(*W83}n z6}m%b3>B-UCCSdc^tE6C;l?!5R}HRWyVv_^n#~h-t_lOC_far(L3938+h^=~u#-zs z09Zz;=%V;P-+JkR7Ed7;L3tnTbolEFR8#^m74gC^1}kp3RlU(k${x;UWySw>I^qWM@ELInd7uq+>y-C)YWbJJM?(&*&@s? 
zfV)24Kuw7I)aI@xqjuMO`>|Ilh1f7#rQtAO7}&^MRWdaCVLp)U0pgGypc+Ou->Ux& ziR_5;r6{V)phyPjYEg5CVKAY99<2r;hNBSc0kYI_$2ePA?& z0wj*YVLY-(KyWKm^H1y{!P&cfsuV#Hc)=d2T9OJ`sVmn#&m(uVA~EiG<=D`9a6^@% zI(kucu$N6yI3y)DP-ljeC0N(5g@X)!ulY#@iF7sVbL5S{d0zb+hd(!ITmqP#hlrZY ztui9b&w&IgOmf?9WN1^XK$BINqCK=e-0 zq(BYp_rpO_Wa!K~`!K;Rf$!UU^@#~jAMItkcgYtp7&+bm29uP}0L?4-lShRDTG7v# zT3|-J`~$dK<+iNt?iB&daizBayj*yzJBt53!ER%`H3C5lDl(PYS!c07H7$nD!Hk32 zPGZvP&ErXziB-`lD6_ofGAQIk3bff9lXnQwriC)* zpKOd97{5^I?TBAoG$a@I^3b2R4P~P3L|u|vwnF=Cfx~fta~B2C=)@eFGm%(d-`d zSkRcv#doS>$?RaNY-Kn5jZI769Xmorv9j!X8IBE-BWXYkFK+&sok973{?HFY9f?1r zbPT=%ihmt74vKo{sBQ2eD{Hpn&^P<~fN(&}3)~^B(**zUiTL{x<=gO56FL`}>}kcg ze-4_FAmX4oCiyTG7zIWtLdtX0MOG=qk=N@%dw%q>+N1Op!np|n;^K>lSrwwYH}Bx*0RV? z>(DGYK9Dg`$vaf!k=-=bH^8!bA?oFT7nmGmRZ|;=CF89Wf9^K)So-m##T$xKa@#0f zdqTi5g|mT}fWZf^rvy{hL~fdLz+m%@R)!ygH5!8*{#zil5TP@#dHQ35KpqIH6jA2( zjLGO#fSb$p3yjP>H`ul<%*Q7=6#wJy)>;uUX4ZVcB=gt;YM3*E!vpmB8qFqKyFd=0 z(AJ@$QZtP#=}_s|=^@60i$y$edU7V8fKfqN%CUX!KSzNKItqVRA|nV)WFHfdtzWr$ zWsm1S|LqsWa9nq*_&XupKcuDQu-m%afh2B8a!4E$Jx0q17+@=J|9oHp9GeTK&_vM4 z-rNAh(K@}1{5vJTsQFWIV*?S#eVP!8I1AN`&~YCH-8{%|2`2Y<2?4)lMO8Z%w4|lW)z$t{82idtoeoI7c)eQ1m{DJBvvv}|h zJk0s+C6kZu8F14-=~OX5hl;%_r3$2zY`<)E#A)Y80rQJgGNy|@1mS^l1knVRn-lEp z;~Cbmp#wSBO(@_0^TTiYr_|K^7I-x;ybJ-h38qL_5A0U6-!THn?cjvQU3*ggJ z>aJ2*A$-_t;KP99g?!lBEpiRJG}4}%UD7*mEbc*UK(mCqspaSQ|c09o%+*B{#jRM}~1@v5=5F4^q~;*9A!MbIY{ zF(r*6Mc6wn!roggN{Q76=+oi`%V*d>39E}pH@le*wgxVr8E(G%T!}9Mxq;X~@F^+k zu|tACy2l5V8ELQ&=wsS3OA6IO9KI0Oas}pgQu^HOSE*%`BhF7P1^ddQkc-QVM^h5X z?z17U^mFn45sf|ZjiQ8Rtoj~EG|Qa!5M7!42MA@@{9%|R2+wx_JfDRg2xb14hW?|* z@^1x{-{E;#zXie7XbwT7wd*5sQN&?7D2YUhfRDF7I|O4d=0Tu3!#n-36S2t^Dg@{h z>VCl$UGXvcx@?lBw}6gh))sR4DF}8?p95Gm1 zS5QJLz*{LyBqVVr`zNkVo4D*GcHttkbKrdkiV;L{8BBx}LvBS%#Ev) zEV|d>OueVDFXO3(M(6W(f59RKU+{NxF_-a2Wzv*gyQ#%Z(aRPfHZJc-fr~JJ8B^yO zi$clop7G=ClVdAr5%>8@a5;c2WlTp*`HlTf4=h%q1)^7?HTTpFvJ?Cbatlb3??ABA zYVLVW2>HwxTuB-q!Yi43?e>p5U>slRQu~ye6YPao(al*Q9GFLqty>yG*@leliZ(qN 
zd%sbtIu#GzCh*mzI=D)y7q@vf)JP13mKAi!I2Ae+oKV$?mlP_B!yEOiLO%8_1WA`qQ>M4t*5Fr%@#|c?rQ$Ika0Z0D?n)!% zMEX}wUz#s_-3ZI{P<6GjQ_POB4YGXe$Kq+X}b-h!NQiNVs73JI=4 z1yhDUpRz^=?xoUC^MdoQ?XmU6{GQGddtq5_^ZfVXfs|IZlyL7yEk0|S8xV}*-}pJt zTN?DqI~V_ZSKCYa)C)HQLb%$J(8Q#aT4gIeZS=IB|0F7i_q05U-`|E7)ruCt&JrRO&~|2!<+KnYOw< zb5$f*B4-Do{#&H|*!N<}m2c%jrE@ONYx@*K83)<|tb-c&*fW#;uAVQ7kg4x^FdvwY zBt7iKfuiZn`qkopRM!(@G4(<0k>$N_(I&+vZC~%eiw)~I(*cM6gT2dZJioD>U^_Og z!bW=yTWTKNP@<)IUfihY@T_^0Oza&>@^y8UUK^VVkqU;TZsUw;1r=S5==g*o3n^JA z?9~-Z1my4pzH)JG+@ixz{nB;M<%eJb%@{GYa(uh#SWi9!@p~6T?G1m64@O(jCBgW0 z@04TqsWwXCrY+TE?7Lh=tprQ+%QA)V-#%Z^$0)4%u_zK)Mrb+YnI^-i43E zc^~ha&v(#t`0Bk3BVfH9E^Iv2=eRWWJ0;ek!BsUd0u&XMM@(LDDQk4p{UkneAqk5^ z$3!V&xC5=JO7|%{*Txt&k9)5yIp4RSBdTF`hB1eAIzXTP>d2VHPo~Vp&eeaxZ&X(uvADM_E?hHvdX9qv6iT&b2jcJZ zFL(U**lrM6;NT)P={z*lZstp1J~jt!A&;A5s)3LWn?Jz(AmgMj?MP+d!nErlo`1|2 z`LW<7`9s;o<3ofbUjCT z=1vsRjJ;88{BY{jjj(xM?F{X%4_zU)*q~UpLaoEfj0{5Fs_5}fCNr$zSMH7`GC$eN zXj>C5l`YdxdI{%`?s`MO^v&i?2t2_{8M+>ulMDx`bL1xEyC!{U==+wMAoZ}FZ;Rjk z$wwC!^Wj&+T^7Rd-80PJrV5m<|U zMNY*E-QQT7?EID{2k}|C-5lu>F2awbKz`(_2cHIhH;L&$w}yqM{Jjz*8Ta3d;L-rm zJ=go2Q>6^W=_}qe#)+yb?A-ik;8k;)KN0dF#NvMG^04p|CWw$P13uTd{`ejp7He^t zYK-n?5~a)+dE2?`cV~55Q4$3)We%@4a3VO^llk z{iczbb~<7JV#8(t?37!4CZ}k1%h7@2-@94>2qELYEn+5%)J3@Zu{fh!q=W<0xG+%c zN)F0cC!QfjwF0ZN5{yyD$G25G>&(I{)5rN!O2+uzW2f(slK#>_gR-(DTO|!$h!;rY zsR|+KT>NrzJJYiE|^3I}3K{$$j z=5Ha)T0x6`AH~S|k@0lUb9txFP0X;P`8|k6ztSJn3%FmCx(s$>L34zuk?%}oASgcg zHZG@9W}=4NUK@dIIT%vy#=T-uFtAX~C( zUovq8?faW1Vc36jW!G2I)no<$BIefL-e-6ObR11J<>81#;Z!!>W$-o!7;-%J05}0ukU+T)?Qf366L8s$S5DBy z^#Et3rTk%zIGzN^A(=oY=*2fOT(+G=@zleI1$Uro>E^fH&t$rw&b@tV{$Bkp@bmdB ze@Y12YYHVR4?pV01ldGr^JX-~<3i9BQyI*|OW@mokpg+UD(zRR)NjFdvb~6v2~^8Z$9wKzsJQ3M2xPXJHH(lLDAwjg=uOG zTn4nSQs)e1t`MaI_pyIY%yV(>mGIxJ_yIG&(@8EsX)o%&`Wk$m_v33r_B@RFR}ree zk|{_`RZ<`5DwvA4_=DWPeH()RDYQ_^cBY_2z>8U1ZDrWUfdp;;4HJ6NS6RaS1R1Do zLAUI2{V>!{cFYX?Ysr8A>jLOsFRq=sv*EQe9%(APMhexFW#DxF?195`i*? 
zOV6#BoG?;+8*{DfH~G!%on(D4$7cF1f@-94>n9o!_`9;3Jp?gpUAF~o+#yr?<3d_2Bxr`gF9a~-9-{gr-9({AIEP#VU*<^JVKak>A6B`d z0;Gt&2!qPh|7R~rlW6Fi^(#5AqQrrxxbDcHp{h{SN$DZ_z?vVKqU;AGgNUhWTe4!LwLi$D_SHZ-$y`E^|0q)(s%-m1@kV#`7J}v z@BguK^>F58RFq}NzW3u9$Pt-;_YE{335kCy7Ck-!cvke!Fm~8!tiZck$A_NK&4S;N zlxqS+3bD}qKzM>Kid6_fZh!f>)$q5FjB((&Bw)ON6>OiohhuRl0AY zyS+eEs3Ho9iu=t!UWm&Y1f=Z-&2A;tqO}z z;Ax}n1Mt?M+!>YE$-+T5ci*!drIgjp%DHR;B>~#K+R*~7A>W(SkDaG&Z-R$H#~to2 z7XEWD028x8f)zc&KZfD@0fu`(gT!|+6tI~;@u2#RH0NEshmioCCAG5-AnN2od{f|M zDy2YbH)b}ClFuEy;Gr#rBtFYK;@9BalvVI}CZYG6gq;W~;X!b9-f`$Q`s$Q*V-|S$BveUcS6Z{tt3DnEu3J7h7*ZYruOTV#FZX1EXHNv3 ztm8>BT=4(pH^Cs=z6 z1maLVrQox`(;BW;OcP592@PGVob$>AkA%{5CAS_CD2vdRETnnt0qj(~f7j+a@^GWK zH*Nobj|3Om`}_Q|k7F&G3BkzD2T_%idJ>=*VnwLo<8+%qIkydG9~t>0v=$+rzXb-Z z#3xv~df+j|FT2U;u%?1?myCi#T1<}-S=1r8AgA0#h%YI{GL?Wy>GKO^(T`7`K4lgc7cXhEoT?azRu+4*lIqUB z>rZ5TenzCOP1~Edui?N$gU9&CpHXB(BW2l4opZ+_-0;~Fy?B>9zz4wHUn%Y&d;lG| zVFlb)!v|&CfNU~%7(TcJYlLR?J6UKez-c|U)Xr% zb_SmFTkB6Tn53cgQK&BF1%)34sPOYx+1qBD;jtDExN$?(U5?cC4itRwmtXio8HGf# zEe@hHOSK$pI_RfPJYHO&oF*3Y@gsv5<+g(0^~L9Cr8|x8;Egpi@OCGU`;mpv`j^aA z@CYA)pVm|ZxC5@r*69K~-T0~8d@RJ))zvk2G+c5)3SqoGE+WufYpWZ?kn<+qYa7GP zWS0A+w|#0bu1@%wB-Tx<=5P<(bHt?lGGkMoby(vy(;c|>FX&}Tf;54l$*eftr_Qsk zId})V4KKXFIT^y{pT<)yo?)^vNMNV^W0xCX5a&Pb@)0R?zf zjgA#CRc417ci&)wp7F5<|6>+R$g}AnJR9}r9MoeJG;L7I1JSCBJ*W#6ndUtbqM<5# z3n9u8-M6Q|p@h{5Ka3g|N<02{&utm+*W*)#X+@8Q@CRX(jHv+oSf5g>5&Wk^U_L|k z#rxy5X`Q^M-6Zh;vExE~MwLQLVbJ0ZxVJn2r8nyr$dK2LzlHIZ8b~*g8TaxL5)w)v zC|J7neFnD@l-^P!6hren`0?CYv{F}BDkgB7><1Mpay!6fpq?IYZ(vA+1Y*Rz+BY-z z1x^CcbpaRy8F?RCq-*z`a0pH@nWCad(9rx*4RMpH1SLi#J~QwH(^|{mB9|`eSJ-$M zcMv$x;%}Fb@X|*j2sGs>6ZY_G#jQlsG0@CHv%#fQZ^pnE5kYPg(yMt@D+M~#xadYh zlY(6LqnQ65JpbiZJUKd>k1^&We0`kpUp446vL%T}yMZSpYmR5;YOy|QPZZ=FKCY5~ zY)3wz7;7Oo&cn@dhJp+>*-exFVXRwQ-9#_Q4gz5``qZ|1xF>+PLN%U?~29W&N{YY`qe0=axe#SKOgFnYWL1~q4 zxmKJy)x+C*(jh7l!{Un{jRbNteWj$D=posfVkB*Da^xU87c{(`8;CQA13(fNNPpWE zBxrtO*btkE;49M58OJ!pLi5+cH@C1cP6(lHbNm8Lr>H3&D^m=l56x1K2B84-#Zi!t-kJcQC3#fM#kj#o2Md 
z&I9rFy?(WozwNi6K~k|o6}gR$B9VRz>^gzE)C^#)J`xy z3OM3`5s)2PaNb1xB{D1oOMazt5)h*RxyeH!mzoE$*SM3&FXAhtk-Lya#Efm)CBBzp z-0O@5cNJ)q8W?~dKtjHz1qf6s=De@s6(^nzDi9dESf9@JWYp_Uk9-34;~ zZjZ5A3jWx=SL78Q*cfo_+;B`^PzMGH42BS-3K|cLpC2qw-Zue9(np`(2QAyx-oW?2 zE*msAC3IpfW*(9k-U17KjwYZL!Rq=vz8@I_$@e|<@sQhyq7w@W6B}(mEnd^MS=ELOvrv3BY1J1qF1V>E0NMQyo5`YMPG36l@VG5!#GNO3~2g>@U8n=?pRLDa}3<^BiH2tl^)=0-r;Hl)j3AutrSD z_LpD-Z*h@9vU-rMA_$uJmAsj!&WN;y32xS@?NAd0A&4dc zo0v8}9upi*8vVCG=qOF|^xx+1zZu{EvhDAkS@nnp?fq^$4Asrq zX{-#9ai_K!23a4O(@DWf_ASrJ_1W*xX6K-gj;Y9mfPzx}2y0(>BmtmyphDFa=}Jel ziS={=CrYEI0O3mK0?ohoj0d+N1HWes9a_{F;|{eO#s|9`p#F7LPNh~z3_?P=iy8gAJY!Gj0T%Ez>82=Sc+j9sN}XFt9yvW&?21DUgHq^ruLi znIe&-s{ys(7Mcij>{xih5QF*u_u4Z=w2#_m$mx+(aJG?9z`}n^0j~+zrDH(HH2%<$ zO$YQ3E}G0pZ+su-v-;5)dd2Zm^;Ng+HSIJ(kfzL9TO?2`eOka9rv{P*UYSfdvxy>KXF0^t_nH5~cMPT}!&c{awLQd>{ z!tMWqaEefs5j_2j*Wexr>D$qda3Z}cwO#d zZ?Pn4JG|yy_h)(a{7<>-ZzfBfaTB)guc4}>-if!xhcYBc0p+wHQUybi3TXOYHuLO$ zr(8t6k1XXwY2(lyKE;@^f<&T;-ir;tfU+^#oI#Ekx>=H*=l%9Y^~%?tdqk5ex=OHQ zmwPh~6LW80HLjl3z8&{iSu%4Nt;$@BZ^p^MDyokbOXl)_w>gtmSiQVRWh@*{8IO&41VXORQe z`h#WT+=7CvOj+z4cdMGP!NH75?NVm%`L$r4>qv&E!@Y5P_B4q`HceS>?|6D?R~?=D z`0ZelxzFo~nnIUv=~uVZGw*v#P2#=t-D0iL%8-Yx+buQ~Fwy{d{~uiH{OTNpz}@xNB8`f$Z#!QpmPejF5V zvR+QXEGn9EL^Yi@YRUYG;>xvf!SN(@`Lf|Hd;J>qz;PIX+>o-pcaD3zo|2Amc_i0b zq`g<>)LoBv4QE?0@(O2h-(AUb>})1ok5ILuDVoB|wSxp6{H zDjF;*0S6AvawpMn(RDoI&K@z=)jyaOGC}_T!K!kl1z{Z;G4@NFQ=kn8)0B^u1Q#?M z1?XX_G!z?S%^d#RKUfab>rYG*BSzW&$<3a1^ZJrweG0}ev{It=Hof6AWPkO2^p&Ju zcf8_Cnk|aXVI=ozTP<~F7K<7dr{HO*^=kRCSeCeHogz%vjob6O%cEvxx$8@Qou5+e z&W*EQBY|co4ep_v3$@3;WpOW8-nd=X_pN#tHOLdmH+~9`SL^>0&GDIOyV`ehxgy(k zMNd9*&XFpdOq8orG9TpYJO;|SFLa6>kFclA=S!RFyd`(-)Q&4do5yv zJ=@pQeQ1qj<}obI~o-MV)9t`rTm`zX&uztwE^dOk%vx%%2(Pr=SUmWQkaIfhS~#`Bk|T3k zRJNZanwkVQP#sSiI#jAG`0k9FM=i*vH__fma{2*N(v|Q6CaqgJ=ah5Ii=-J`V#Bge zpL{D~ta2`I;Nm6B0F6D8qv8@-pBCtA%}6e7VHGGkJ@z-$VSUNbu)yxNxJA)2>Ol-Y zr*_T8>ty{=UHm@HW;?oUFEH}$tL5@yA9$PRsZ6t(Q3T$``O-$E^r+5M!`1Gv_3MMR zxRBCy57QKiqNJksiG!Qh{86J<7sW0eoLASWL3iN0h4w8!QNPXw6_=UK626IwqU|l* 
zb{^KS1vlUC(qCUNy&26Xt3U7HIqPEJ4~-+O!_r?Ln;VL=7H1f>zod%n-%~YT%L=s4 z#BrxmQofuE|B-4Dci~0f+I1`WzViuf`+)?FHY9mE_y5VO(CduL4VITgFAq5)MIv{j8U$ z{;wxxXE`0iUp-YmHy+s&C!<^1xAtj#x#C%IPCPl;jFNhj3|}72A9{9ny06YG&Jkx- zwU6(%8y-pbNYe1vy5LhFc{(`9W3aagV(Zq|HABGpD?4Wll5u6f&SrV_To-+6_cB<#= zGU>jzS(?F_yA+7qs=6|q+)pQv>G8qY#BAAiSwU@_2ekd;+eSP9spBK!5jRM#(Fk6E zmIQ!gvSJ_|+@YPL{Vv(TGohM?2WQpScw5T6l5Nssgq1{gncUtY{GzZi5H&(DQ1*AD z^1aZyEQch4V-e$LdQtbq4^Nr{(q~lElnTx%S>M9NP>=7ie>hs*x5n5Vv%faV>~&9b zXeu|nU|30ImEQi4usr~(RWiYQE4t@CrdI-ijABl;E>FH2oo&w!J&W0Qto9+Z4p=Gp z6?Q8vYgAg$C|z9ca(v&qzcx)FCsNP#anN34nrnst2M0Zvvp=P$@?174GilQU(+*GwcV?)3A1Dvy6l_cP|4ETE|MkH46b} zUDk@(!I3kPIzr_H;^GTGqYRbERIb_=!7Uy{l_4PqtUs{Mt~Cf#&o{od55V-i)4){@J z+iG1X)(h(pe9^m7s1a|sv^f}}VME-e-ISd7(H_5iAV-hRt2g|7|J3~IVJ(3#UN zmrrrT4-FSP_a;3(c79wA7d{mK%Dhr=IibO3w(}|Hg>I7C;`AAPUxO0KoK2;RO4IRS zYODjV%0Y8|@c zP7c%)Zfk7^@C?W^7x-qR*|6mK^X-`$K74qocPf)bYnmhIK-MWW=*=E^wcW~$y046X zf1XQ%25~a!9{o3ODYCU`Heq+ku@qu2oS{7xVdbDCqN}Ez*I#Jq$ZteR?9a-Ymuuc~ zWW&y7I(#B>ka$SZnp1ahSw$yg-MDPuGdZzmY_31hU^>INK2V++AIj3audzSgm%)%E zcP{EpY`NAsk`b@@`^0O-$k@PH zc+%IY{>(3?tJRj|3+)G6qGwpRyMzZoG}-DEe|w%zie{ouu>U`K#_n|d(D;4KK`nQVLrv4*c-7tFY&M@-FLNY%yT^}NPL}<4 zgs=Mn0V2K9{C6n>Bx5aB2JLgfA=}iP9v34+u!nIdk+Qd|1jw&R2t`hctyq1j8iTv5 zRaFW}#$;DoPD?I?t?I&c2=-(X4t@6Db68nJxy%(zZy+O!#can%RC++L@a^4qrcyEH zzT(%x-EO4GsK*#+Qb8cc!w8Six`Bd2myVlSwpm-OHTmfDnEPz=2}jR$IBMs!S#kV; zrn`57?RLjSy@>`A)IF@%@x$1*V!wP#IugFD>@w<0NU9MKLPZY5dyfY{^Hz2tdfrM% zrOBQRWJe>0GTxN&lbfcK2szjcq$7A3uQimfuG3AVEN5`J;J(KU<!eOf-!am%FzWIEj>OBQX$L}6`tw*ub);HSRnxT>uOIZm>xIxk2KTAr!RYseS{jN+c)b*4PZvC&qzB($(t&JBD z1VKVTQfcW%x?7~-&@m|8(p>@~tsh8thk$f<58d62zz{>{0QW_|bB^C#>#oaUvG`~9 z`{eKWJ_`gTPtH2wv?leF8b|(j`OGlxK!ZP@m`uAuEZBnL4r>_Cn5ZQ zX@S_#E4r=O6t2Hqs++)lT<(xb{IHa);fZyX$|9din=frt`J0ChP{T)Jzuk$X&Q}eG zo^IGjfo{@6w!bcvWnzqw(M`d=1fJ(svNZ;`c-~m9q)N|(9!miy+hhf=I|bD)=hjo@ z;frgqwnulXkz15c@A3IOqk(9Ws?RoS+XPr9Fd+p7V**%+&7uIj&AEZAnLV<&*80O* z=KAAAug3M9+ox*7mHk-m&X+##h+OV59WB||z2h(QohrEqAIT~=in4`URvvUl<)MID 
z#8U;U@Uq~mw-=c}E%-)4^4S;E6d{xHLF9pjuIue5(fi$o3R*65WK&J-yYv;VdqJ@8 zx2%p>9Oiuen%6Z+Cd;sIPwexrp-1DPz-h_q_psm%m3<6UYPT>)c=J);(?bjV+(T5i z+jcGZMtAqG180U+WYfsN_{$y#82_v+;t&*3;^@_}QF2x+%$PFe3)V-bN?$!vR?fNd zW6lKSHN6eIK7V%MIt;!GbiHxiukn~i zRRbQ0oj(c}jT@KijmWyOZ+;6u=;d!6+s77>1o>QWVJmQvTM0DcLDF)Wn% zZb&Odc0|cLy5;iP>-r@9(>T}tOJ$&EwOsmyd%ZPuPggVnn<~Rza@ojzAF14QNl;=~ z8xU8jEt#&AE+_eTUv5N{J-}@6I&t{g+!(W!^zKTlT*=34b8f~G95Ht~eSh#w)J%1l zNZ+lW|ErzxNTV+No>X~>rOWkR*fZ2VuabSM#$xU*)M=F{;ONRcBJ{IkqA#E!UZgOL zME7K(H?9?O(6jNv{Kk{Yq105;qma3+uA3mCV};ZKQREOC_&N=*1&zwguic49rml-m z0g6{B17M+gGpqgx4-zxYrD!276L`oQBxFd*XUh?a-hmOZLyLewfgmR({?{u#_VD9; zkVnY*-cq3F#Wtf9Yxs)mBiXS_reV5avP$~&N~33y0#_2A$7!SmwszJeZgA4<-YVyOuXc^}?Uh#kx_ka!-^*(`Z|67!>c;Ns|Q6xPdWC+4gC; zUC#ZIS0&Cp@5%{ZD$h?e&#SL~;rmO>eXxc#we^!p<~Wb=yOzKkb)~YkN&Abm_l>vN zBg(j0H%$}0XaU}Eey#DYUiPK3L{a>7Yuj9K}JSD_@4n9mzOlVISG z5{16>B8+VA{s{b1m1n&VHE8d$i=egY(#D}a|7axnY~^aD=~YI;%edgBVEbW@tycq! zf^NQUT;?s!ZU?We{TZWPYGjbFk3GBb;}2js(OgwkK`^`fL9)>lXSA1LS44(A0$V$Z ztSri2*(UxR;T^h@eZvqwn+!iRN*0X<8n~e2$JIpzh$_1a5S_Q$xGA^ zE9c!0V54ynT&;H{o}e|m5Zda!4toO8Jy##8@LR?)KQHOoG8LJgDKR^_xYyp^_gPE9 zXiijwl~dwp_?M$pEz@zvlIo?Msrn-oz7^Z;=F!N^>4=%a1+Vy`B#GqZkg&w7ux9tJ zoWUY8@qVtZXDTxCU633J6DYq7cnrGuD-+(4Yb50PB7pHZc&(>I@WX)se$?6Cq|-L8 z3h8ud5mK`ah{r*42CHPWeo0ikNp6YcO$;q&%^p^eHB8l=%p3Vuc0{Z{E0{e)&r2h( zP)}6bbmZ}q^Bv%|=XapxEtX`xbX7(pW#Tct;A5X%!i=<~cx_0nV4M`f}6!73}CC z;#EIG&)vbY&^0$Og@0|Cece&>u~U$Sb%?ah>!-zYFx$(y?=lsW$V^^@;I&rdWB3mzZVE6S|5 z-Hwa}194l~iw>HyGr^9;;_>82dG6;jn*#VZ^z&Ry!6$)`AT^JrQ7_SSQ${>b_-t~OcvpiQ1OfX6ae)Wb-=vujr z=bTnO6A$;)B@;er2l8I>;S*uid=t3|c0T%X_!%&X`gruMD}gnklEfO)l`+x}yPT#* zz1@Vbwg&X}`uPHH|FX|an~PJqoteVvlQ72x56lZnN;H2LW7UH_Q$J#7)>n*^Qe$D4yiOKz%@&hK_%gp(d}bF&jOraAg6zG`mE zBP2z{$PMI=BW{=E!me>~z{Vj?SLQ>};WW~XI&>)XVgWfgze5QB`g;r#{u^#2UACIS z$28y43R+a`l$gsU>m!5)&?=jjcdyGt*V@)%I5PF~;t5$SvJFvbvm4G!8nZz0u0*_b zjp&I-!HzmjifQ{;!p(T(KPx0)&n76!jJ!M{U(nag16hk1ma#j@B-6_5r!(S`WDHAm z=CF)yl(?q;V zeZQme*Bi`fgU!U~Yh%@o1t0uRqb;`ULnZ0)(Nf1B2|uH)^+EO5@^eUW_~2JG1NTaM 
zhvM#C>e}CmA}NKJCaoYh*r{Uj7vj)FTb)Av+J?HbkorldbxtTVMzqJ%dy!=TuvU+C ztJBY4s=Z{T^ga7&PyjlSS0JCl-Y{crr%I2sg1Yr*p{Q19Q*xK+=XnXbZikcB9uzrN zB%~liv{Y3+B|YlZx~D4lcWVyBlMm83Kvi;`!jo@zvh&$|>0$>OdriBYE%Spe3{&+d zqp{0*6Ajt+$5C8Ms3EvDF1v_}qOG7?K1x$lRly}MFeKy&kP;GJ=~LE$+oLhiD{uKZ zA}~vC{_I0C@FV}iU^-d*Ln8bPv9O0^P%K9f;&^2wmnlSPKyY>Z%5J$OnhNEl*vK8$ zPQ5v7nS1fvfb$l3z2u9&q*s-?bGOnaG{c~#A_99R)hYx|vp#?O>&O6PsJO7cModp`B@YF%%|0a{T7lfeYTYKC*#@b>=Kr>h$afBjH8Q1n&! z?0$@IX#AMwA`{O==+m*(Q8_yb{g;m4vgZGY+S*a^N9t$FID4PmRI)zyA3ciPPmaa{ z;G(_2-XXQ@t>e80p<}k@|F;bMRyuxE43Z0qitI zt6ZqM_Y>24ZDRaYga=yo%qy5eJj21UXXWN%seWU?=z^PSu~qk`(0A3SZWTLx8BGab zH5|l}i_oVKhMZ0h5}^87B3BvOC{75Fan~dBsio?@d zA3?Gn#Thp_ff?y3q`zY@5U?5nRmPdf*UUu;f!LuO1e8H_pz2Nw>5XU-%ll_8YX9eZ z?>Gt&9?ht8zUV4#3BR#2vT(Qoc+OwHgn$~U%{}3Y&9A`P+G){x5C#h!Sor?8z+1JS zfGiTDde9Ef0ga#<0r8U~#x@NPhYm6kI3_p|H4_1aS?X8L)dxG%C(9wCC*%VKtHh%w zt%NfU5V@R2Cps4}g`OvN#Vuedj{oum^;DM!o}l!=6Qf;jpZ&5;gzBZnT5e+bV2s7~yE;1pPyn$NYA!2^5D65b#angOM13S4L_I1jvh! zr&boSmorQ9x8fD*h)G-xO;F!4@3^+wSh=p&s{^nw`s9~4w%q=2d-?rHcDm~w%-Z1nE8HCetJ&*=lkZF;cLQ1O(F zq8eJC6C}8Z^|90kvXNg!x&JX;g|Z|seH=YF9YO@ zF^2Ccd|YR!xVYy`SY*jaax4@WjZ2tGFTe5_{4@yGOLdfkZj=K-<&P9c`#T25 zJ>&KP=>{K|-}HBqp&pw0wyEk@aZ)n+2PutpyGu6Ht{$8I-qS+;a~}RCIA17O7n(CX zl+!~}hmttFLUX(4^$-5@G~_4cgNwCOe!19@+Ie3opWs}lMD+UFWphB_>)KjwgvzVp zH|fO z`@rK==Cp6fNk`pmohO+>-n;BVjEIzxO&D1{Jz#0O$_I=5`cu}fjeiXqjpdh~y|*5t z>+g98(*G{>Z$Z0fprGEY45z2RYg*x6-@esQ#yWeMZ9|ZsX=zy-@ z5Wp*@E7N}x%eYkirexj{lYW>A-(7a~lrg;^UYUwu%5Z%tSt#xmfM?b^=H1f(fBi>t z&EK$qIF5K=C!U+^qrqtYjumQ#Kpx&jO_2>Dtg(4gw6U*#?ITtJ_9HQJHWs}Xka|rH zz_i|pC5Ik>vVRNn<;Cw@!`#MHGT=psls-jAR!Z{k@x6n=d9Uu-MexL~c-yp5MejL0 znKrQc0Y|=1w!vHv_^&G-L>iR%uQ*Eph$E1MZ$EbzjP1qRXy`Z!c5Qy{6hiZ>T8_*7 zaQx@FVu+P^zno|%Hp3B1>fb}TADew0SgqIotc$3ckxZ>IwQ*Z;e) zk&a3~g*?tE|7?i(1X6+IMM5D48(wVxj?5R>{=FeE5&+KsN@qsdrji!4^zf&yunwwC z4KW@^s79RaT}ShJRJF-8p>{+Vl&|!!IgC_+3LrOY0Q~u!99A5)m3Um+d6fv5X&_qTm`$!{KMuo9w%+(nk)(jGlV)^=1Pa{RG^lw`P9bM#RAF-6L9S6aUSC#d2v~9 
zkW=FF@R+0e8BVnPzaoku?gKK;m*6&;zr}_SmKsW<&=IDrR1lN@91Nu!Nqe`~A+25Q zLP|!)Z=y2#lHZ&)%doX4_U?8a6B6%gBb)b>`0yv@hl69&*E?f)#=@TOw`&ic9HfUt zcfM24XZ`2~lk&CRMULmy2)GD#H1Kj2sQ#jriNYD1TomFC!12FfXhCR1KV$c8RhWe# zc0?CaEO7=SWLuvlHyV4KWW;L}1rO8!hay-3B^ z_}RtnJ1fG!f8cOdU>hHEKw76q(G=2x8;wtmRq*l_HC0%tZVW z)%L5CJACd?e36?v8rt)nz~l5-wHqZ+IWAMJO?xFF+i1Ou_0yZ_myr!Fy(6t@9r>^1 z-*081aBO{9jZ*rFbS>e*$r}=q{ZCXq#60dpWT6T9NCUcGq`fAVhx5Fj{y}+J_u-kl zLIPTJjqPZ$f!<$t4J_dJJ7h)cnU09(B76?#%)Tp|Apk#m4$j9UK`6GF`v6lVio3Rz zN@WBUeSsfOG2y#?==xlms#>qlACbgMZW{Jn>xONq?n$Tl(8rM+Ika#z+5E5BI90Yx zadwLZNR3_taRqD8usb`u!@Y6hhMnDbx4RsPtKtIs6h1F3)<>xCreftjSdAiZF9~!o zdbvK`n|+2zr(~3Lfqu5pmiq;#3d)N>YChanom{)h@eawAS^snIvu762F^pX^g}}hn zmz+7uWo7b1>#(N~voTB?hItNiQ|j z-ETjmz59V|&T1&tc5M2?$t6&7D2ZsSI$R(PPNCAEt-2@ROhj1vc%!S{>4V*@Smg8o z!3uDRO1)60uT3@EFsS|`bl7!c0@J(>P4-Zf$VC3p@UGEyc{ z?=qyT{n`p&m$!gCAJyc#L4iA7k-%YrZ-7bZp(zD=(D`Q{)@cMH;aDOubd(13RgCVf z%|mA|ur4%N#%%ZaF6|bg_H-)SN<5q*U`rTmR3Oe+y0wdRjX#PKAp12S_!%j2Q%G*G zV%LF?< z*=qQDT$|1iZ_Gbk8eEj_lb4qTT4_X`t1we_0e^O1gI`TKZ4z^ts8HXcMgixCVEOND zM#wBw%g7|_+gI!e0EYcXL5Tp8%kF9S#pFB7Oz-{SRK9B8m*akii1U~)KV{X_Sb2KP zmFd2}d!x+9ZG+}vI)_aPUk^GKgF*UiO@n)t)7so_T$v49#UsBDP+2j-YgacLxZO`L zIg&1*fu}nY8(ZI&6HYd)zF%3QPAjE{<&S_evMMo#uk6)sb~QodoShh?7yF%aL&+a# z>FJk)ms3!mN|hs_uy+A30p6X~DwB;dU%Ma|jEeKQS*yj|by&T0*c0(a2E0~ezC!A} zlP9yiz|SMzGVd}eVj4y$-(mmJq4*~kr02}U_Rk<=&jna~z_nni?(CxND|d zKUrXXy4j2I%?h?x4s`EeJ&OPo&+@mkMW zcoY%ImZGOiAFq|*;kutiU>Z}nSs}-cC3MAq^1oWs(8Eb7yaNJYtOzn-7&*Z!^bUgk>iX0v`6a^vWnoFkxv#@0;j2pl43H866)skf`_WHA(Umvc>j(^rEWD>Aly( zSm$q(e9=Xz67dCZsc&3;g8Ri*u=%GPn>+dB2{BViu;j$OQl|}vr-^#Qi7dx+IOg8l z9H*Vklx0tW_kaG_c>;lYSzzs(kp7Xwz`wRg_h*`H2AEGFN7_)C{vnUEv#qaR3znCk z>un9SFH_5oIdydLnX6?>pbV07yvPlI1s7hb_>bqZ?s?sFcM3VAlYk96c4Nrfvtft1 zqHp&v0tI~nX}PVfZp=t6%UVl=GfuWS_Ilh~*g*=NupSh@s?T*e>LrlLJ@Z6YAo)ylAMEZgD8 zenZZgaI-eVvtfHehu>hs!Dxxl^l| z>4bbs6@zO6&qp8f%G3Q&0vXLI|FQ9dG;PcPJY#MXsF$^ny?hzjRJ~vB){h%|pA(A? 
zIngZZwroA)Fz5O(@0=p!MkZQzs+q26a1aSxT}}mc!GE#0$6oIznj@b~u*C|Jfyu=< z9E`s@yV_0@K2$B$x609%@EG2kWA~o6IeXN$_fB$ccH$9mfgeR{HFuhj*QcJPhEBg> zo0MkfC%dluGSh>J<-QkWv8~*E!-Fz z!obT?CbeNv&*1gB_o0{mJ3{@B9?Zth?E;`j;y0`L7ueA+)J9_D>B)huDuC1l^V<9G z*4n*W$Ch6eekU}~=>)uLV|FP81-Jd&OL2`^{%PKL`>2sCV-7N0~k`Kry-$Vjh4Ko!jDbnr(|r z@%0(IaV~a8x={C>sXV*soMS{{D*|B6&J25jtd$G(sVhteSP%ee8=WNF$g^!S(|cvic>8pT9^PypZPgs)82uz>6rk;A zFkBmDRFuLW0;Tb&A3p{@BOnU^r^iWdrwUr|n$VLh%w;;(J2EywJDMXT1bnHg?(hx8&)y-F`YG6U90!*5Lk?98V!Vl_via1V8Y9r382~6Fv*` zYio(|N&#+x_A2X&t9P25>*In{`uKH~i~7oL2i#tIjvC>Ym$wgQN={ate==yua%*%Y zn!eJToh}Ip^|{WFtjr^Va;pnR3mYG1Bm^&~wYnod8FUg3DGr`Jo&FV-}I=WD1|Zsh?!XhTgWOhy5oQqdVfVSoNaW>znL3v(Du1 zI;Z*IX9!+TD<0ToBnvcO{u1Q`!Zp+|mw|wQZ2fQ+;QwnC{f|ziR@@Zo4FD60f+X{_ zo}@W?TwGKIJW?&RA@1YawheizYJROctZ!kXB^j)vv$!|ov}Ftix5)QI1t_KoIV`KQ zj@T(1w7ej!v1?z)^d)MwC@+%NLJ(T#+a9^n6EW3t!c6v<%$JS1K}}RjlWTO;$_) z-!fx~tdK0B;4~a6QdB5epCXptuYQ>M?VD$CVjK%VFBco!$b)TgY06P^Ve2@!N(x*k>z1f? ze@?sKc(O_vM#%VCftb_6QZvP8(`TR4Sj;e$HhUX}v3UGQvTGtgLEwYf$LM!^z`Uo2 zz!cc+__rXk`#nVMb>9n#tNoh`)y^r!68bc1Yt2W)9d**qnfkStDGt`kN%I})BEWpe z{Kd_fR%BnYC&1Ey+e{p10% zbN`+IesXM~1MaaWf~q zMd^@`_`q%C=bluxQ=^g0T`S3Gx@!67M1VpA*dwB=xH0X5_cu4DV$A&TN-KUdHkdM; z_Ga`i75(6&K*X6^$vVH2}`6u{)wg!h}fKRA>xUpP!Uh@h-U2gk6A1X)u z^{QAd6N%W#r1cA{aZjK!Yzefrx<#HKp&XMh{cm3Z_{;+5lHA~#&slGifjI*3BPXpS KRVrch@&5pthlUvd diff --git a/docs/_images/diagrams/ha-overview-backup.svg b/docs/_images/diagrams/ha-overview-backup.svg new file mode 100644 index 000000000..03b06cda1 --- /dev/null +++ b/docs/_images/diagrams/ha-overview-backup.svg @@ -0,0 +1,3 @@ + + +
PostgreSQL 
Primary
PostgreSQL 
Replicas
Replication
Failover
Client
Load balancing proxy
Backup tool
\ No newline at end of file diff --git a/docs/_images/diagrams/ha-overview-failover.svg b/docs/_images/diagrams/ha-overview-failover.svg new file mode 100644 index 000000000..ea77da45c --- /dev/null +++ b/docs/_images/diagrams/ha-overview-failover.svg @@ -0,0 +1,3 @@ + + +
PostgreSQL 
Primary
PostgreSQL 
Replicas
Replication
Failover
\ No newline at end of file diff --git a/docs/_images/diagrams/ha-overview-load-balancer.svg b/docs/_images/diagrams/ha-overview-load-balancer.svg new file mode 100644 index 000000000..318ede1ed --- /dev/null +++ b/docs/_images/diagrams/ha-overview-load-balancer.svg @@ -0,0 +1,3 @@ + + +
PostgreSQL 
Primary
PostgreSQL 
Replicas
Replication
Failover
Client
Load balancing proxy
\ No newline at end of file diff --git a/docs/_images/diagrams/ha-overview-replication.svg b/docs/_images/diagrams/ha-overview-replication.svg new file mode 100644 index 000000000..114320498 --- /dev/null +++ b/docs/_images/diagrams/ha-overview-replication.svg @@ -0,0 +1,4 @@ + + + +
PostgreSQL 
Primary
PostgreSQL 
Replicas
Replication
\ No newline at end of file diff --git a/docs/_images/diagrams/ha-recommended.svg b/docs/_images/diagrams/ha-recommended.svg new file mode 100644 index 000000000..4fe393fa6 --- /dev/null +++ b/docs/_images/diagrams/ha-recommended.svg @@ -0,0 +1,3 @@ + + +
Proxy Layer
HAProxy-Node2
HAProxy-Node1
Database layer
DCS Layer
ETCD-Node2
ETCD-Node3
ETCD-Node1
Replica 2
Primary
Replica 1
Stream Replication
PostgreSQL
Patroni
ETCD
PMM Client
PMM Server
pgBackRest
(Backup Server)
Stream Replication
PostgreSQL
Patroni
ETCD
PMM Client
PostgreSQL
Patroni
ETCD
PMM Client
   Read/write   
   Read  Only
Application
PMM Client
PMM Client
PMM Client
PMM Client
PMM Client
HAProxy-Node3
PMM Client
watchdog
watchdog
watchdog
\ No newline at end of file diff --git a/docs/_images/diagrams/patroni-architecture.png b/docs/_images/diagrams/patroni-architecture.png deleted file mode 100644 index 20729d3c49c315c1dfd39dddcb7c6c37d9e0ce35..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 13002 zcmd^mX*iVO`{>YMY-MRzvWt>o#xNwtV60;q+e}#oV~m}#CtE4Ivdb7tWQ{^e*^-o! zwT(jART9}M_YPi^rgE?BXq!bL!Us37x)Cr;N6S>-t54i65LTq z#ZgrbTw?tEyePI5XJZ=RL0<)-dJ2I8%g2oktV~U$m9gO3i{?oIA2^DOr!VUftb2g3 zH(1d}1EQypD1_Q66dJ4($Zq5S+JD2s5{VK>rtXMCGqa*t;av>foZZQ)M&^#L&cS*+ zVgyqH=rmuS9WW&pQpC=Uz%W0`&MMW{m+A#*DoZ2ufJXnLI)I<{A7!bj>Vzv%+(U5~ zlzFIQn4YRKS!oA}4;e7~PwD1BUkd+L5qc^%9?mX)^bkFBIN3bhS2s+@L)FxZh6wk! za1B)PGE+uT%&6X07B*Onzlwg4w6adHyCDIG4W;=h`_PRH?85X+EXg*3hANh-fpm&D zmJr~NR0%gSGuOo#kkvxN;3!|5iBAwk#UxnQ%Rt$NL{q3cXb9Cn6v@Pyf-|L%D5kz9rU+*rHzO5_8qNn`3z|wXOO;;uCuunhK1RbWNU&oHZ$|mRRN}>Z=)O# zXlh2Z@F#fK*qIW;a0DV<2gkA<7rcv!Dcr~}BpeywhO%%CA(-fd8HXu{=!aN@xr77; zo4UfosJ=9;X|N|k7j8xMx5m38@Ww9R-QJ7Dz0?6j)xh^ zB_Pz_*#brPB9aUO;A*Odgdn(w7uL#~>|x{@Y8?#vP~cW^LkbN>kH^n;C}NhI*M|mAqZ`!~D$+l_)wsAt*m1 zPhX6gnVPB}!o-IP9#%Fq2*s)4gU|-35M?!0r2wR=D@7NrY^G=Kg9^f%kgSo$G^CMU z5K=dc?2ied2dLuEbQ5c;sX;K+7#HSZ9p-Lm7zmEH4pxO8j`6edB8MVu^g}|N4c#n! 
diff --git a/docs/enable-extensions.md b/docs/enable-extensions.md
index 432f253b4..31cee70e0 100644
--- a/docs/enable-extensions.md
+++ b/docs/enable-extensions.md
@@ -16,7 +16,7 @@ While setting up a high availability PostgreSQL cluster with Patroni, you will n
 
 If you install the software from packages, all required dependencies and service unit files are included. If you [install the software from the tarballs](tarball.md), you must first enable `etcd`. See the steps in the [etcd](#etcd) section in this document.
 
-See the configuration guidelines for [Debian and Ubuntu](solutions/ha-setup-apt.md) and [RHEL and CentOS](solutions/ha-setup-yum.md).
+See the configuration guidelines for [Patroni](solutions/ha-patroni.md) and [etcd](solutions/ha-etcd-config.md).
 
 ## etcd
 
diff --git a/docs/solutions/dr-pgbackrest-setup.md b/docs/solutions/dr-pgbackrest-setup.md
index 1e89747e4..5146f9942 100644
--- a/docs/solutions/dr-pgbackrest-setup.md
+++ b/docs/solutions/dr-pgbackrest-setup.md
@@ -239,7 +239,7 @@ log-level-console=info
 log-level-file=debug
 
 [prod_backup]
-pg1-path=/var/lib/postgresql/14/main
+pg1-path=/var/lib/postgresql/{{pgversion}}/main
 
 
diff --git a/docs/solutions/etcd-info.md b/docs/solutions/etcd-info.md
new file mode 100644
index 000000000..dd1ddb993
--- /dev/null
+++ b/docs/solutions/etcd-info.md
@@ -0,0 +1,67 @@
+# etcd
+
+`etcd` is one of the key components in a high-availability architecture, so it's important to understand how it works.
+
+`etcd` is a distributed key-value consensus store that helps applications store and manage cluster configuration data and perform distributed coordination of a PostgreSQL cluster.
+
+`etcd` runs as a cluster of nodes that communicate with each other to maintain a consistent state. The primary node in the cluster is called the "leader", and the remaining nodes are the "followers".
+
+## How `etcd` works
+
+Each node in the cluster stores data in a structured format and keeps a copy of the same data to ensure redundancy and fault tolerance. When you write data to `etcd`, the change is sent to the leader node, which then replicates it to the other nodes in the cluster. This ensures that all nodes remain synchronized and maintain data consistency.
+
+When a client wants to change data, it sends the request to the leader. The leader accepts the write and proposes the change to the followers. The followers vote on the proposal. If a majority of nodes, including the leader, agree, the change is committed, ensuring consistency. The leader then confirms the change to the client.
+
+This flow follows the Raft consensus algorithm, on which `etcd` is built. Read more about it in the [`etcd` Raft consensus](#etcd-raft-consensus) section.
+
+## Leader election
+
+An `etcd` cluster can have only one leader node at a time. The leader is responsible for receiving client requests, proposing changes, and ensuring they are replicated to the followers. When an `etcd` cluster starts, or if the current leader fails, the nodes hold an election to choose a new leader. Each node waits for a random amount of time before sending a vote request to other nodes, and the first node to get a majority of votes becomes the new leader. The cluster remains available as long as a majority of nodes (quorum) are still running.
+
+### How many members to have in a cluster
+
+The recommended approach is to deploy an odd-sized cluster (for example, 3, 5, or 7 nodes).
The odd number of nodes ensures that there is always a majority of nodes available to make decisions and keep the cluster running smoothly. This majority is crucial for maintaining consistency and availability, even if one node fails. For a cluster with `n` members, the majority is `(n/2)+1`.
+
+To better illustrate this concept, compare clusters with 3 and 4 nodes. In a 3-node cluster, the majority is 2. If one node fails, the remaining 2 nodes still form a majority and the cluster continues to operate. In a 4-node cluster, the majority is 3. If one node fails, the remaining 3 nodes still form a majority, but if two nodes fail, only 2 nodes remain, which is not enough, and the cluster stops functioning. A 4-node cluster therefore tolerates only one failure, the same as a 3-node cluster, while adding one more node that can fail.
+
+## `etcd` Raft consensus
+
+The heart of `etcd`'s reliability is the Raft consensus algorithm. Raft ensures that all nodes in the cluster agree on the same data. This gives a consistent view of the data, even if some nodes are unavailable or experiencing network issues.
+
+An example of Raft's role in `etcd` is the situation when there is no majority in the cluster. If a majority of nodes can't communicate (for example, due to network partitions), no new leader can be elected, and no new changes can be committed. This prevents the system from getting into an inconsistent state: it waits for the network to heal and for a majority to be re-established. This behavior is crucial for data integrity.
+
+You can also check [this resource :octicons-link-external-16:](https://thesecretlivesofdata.com/raft/) to learn more about Raft and understand it better.
+
+## `etcd` logs and performance considerations
+
+`etcd` keeps a detailed log of every change made to the data. These logs are essential for several reasons: they ensure consistency, support fault tolerance and leader elections, provide an audit trail, and help maintain a consistent state across nodes. For example, if a node fails, it can use the logs to catch up with the other nodes and restore its data.
The logs also provide a history of all changes, which can be useful for debugging and security analysis if needed.
+
+### Slow disk performance
+
+`etcd` is very sensitive to disk I/O performance. Writing to the logs is a frequent operation and will be slow if the disk is slow. This can lead to timeouts, delayed consensus, instability, and even data loss. In extreme cases, slow disk performance can cause a leader to fail health checks, triggering unnecessary leader elections. Always use fast, reliable storage for `etcd`.
+
+### Slow or high-latency networks
+
+Communication between `etcd` nodes is critical. A slow or unreliable network delays data replication, increasing the risk of stale reads. It can trigger premature timeouts, making leader elections happen more frequently, and in some cases delay leader elections, impacting performance and stability. Also keep in mind that if nodes cannot reach each other in a timely manner, the cluster may lose quorum and become unavailable.
+
+## `etcd` locks
+
+`etcd` provides a distributed locking mechanism, which helps applications coordinate actions across multiple nodes and control access to shared resources, preventing conflicts. Locks ensure that only one process can hold a resource at a time, avoiding race conditions and inconsistencies. Patroni is an example of an application that uses `etcd` locks for primary election control in the PostgreSQL cluster.
+
+### Deployment considerations
+
+Running `etcd` on separate hosts has the following benefits:
+
+* Both PostgreSQL and `etcd` are highly dependent on I/O, so running them on separate hosts improves performance.
+
+* Higher resilience. If one or even two PostgreSQL nodes crash, the `etcd` cluster remains healthy and can trigger a new primary election.
+
+* Scalability and better performance. You can scale the `etcd` cluster separately from PostgreSQL based on the load and thus achieve better performance.
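Whichever layout you choose, cluster sizing follows the `(n/2)+1` majority rule described above. A quick illustration in POSIX shell (the `quorum` and `tolerance` helper names are ours, purely for illustration):

```shell
# Quorum size for an etcd cluster of n members: floor(n/2) + 1
quorum() {
  echo $(( $1 / 2 + 1 ))
}

# Number of member failures a cluster of n members can tolerate: n - quorum(n)
tolerance() {
  echo $(( $1 - ($1 / 2 + 1) ))
}

quorum 3      # 2
tolerance 3   # 1
quorum 4      # 3
tolerance 4   # 1 -- the fourth node adds no fault tolerance
quorum 5      # 3
tolerance 5   # 2
```

As the numbers show, going from 3 to 4 members does not increase fault tolerance, which is why odd-sized clusters are recommended.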
+
+Note that separate deployment increases the complexity of the infrastructure and requires additional maintenance effort. Also, pay close attention to the network configuration to eliminate the latency that might occur due to the communication between `etcd` and Patroni nodes over the network.
+
+If separate dedicated hosts for `etcd` are not a viable option, you can use the same host machines used for Patroni and PostgreSQL.
+
+## Next steps
+
+[Patroni](patroni-info.md){.md-button}
\ No newline at end of file
diff --git a/docs/solutions/ha-architecture.md b/docs/solutions/ha-architecture.md
new file mode 100644
index 000000000..c3a9c743c
--- /dev/null
+++ b/docs/solutions/ha-architecture.md
@@ -0,0 +1,60 @@
+# Architecture
+
+In the [overview of high availability](high-availability.md), we discussed the components required to achieve high availability.
+
+Our recommended minimal approach to a highly-available deployment is a three-node PostgreSQL cluster with cluster management and failover mechanisms, a load balancer, and a backup/restore solution.
+
+The following diagram shows this architecture, including all additional components. If you are considering a simple and cost-effective setup, refer to the [Bare-minimum architecture](#bare-minimum-architecture) section.
+
+![Architecture of the three-node, single primary PostgreSQL cluster](../_images/diagrams/ha-recommended.svg)
+
+## Components
+
+The components in this architecture are:
+
+### Database layer
+
+- PostgreSQL nodes bearing the user data.
+
+- [Patroni](patroni-info.md) - an automatic failover system. Patroni requires and uses the Distributed Configuration Store to store the cluster configuration, health and status.
+
+- watchdog - a mechanism that resets the whole system when it does not receive a keepalive heartbeat within a specified timeframe. This adds an additional layer of fail-safety in case the usual Patroni split-brain protection mechanisms fail.
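Patroni enables the watchdog through its own configuration file. A minimal sketch of the relevant section, assuming the kernel's `softdog` module provides the default `/dev/watchdog` device (the values shown are illustrative, not recommendations):

```yaml
# Watchdog section of a Patroni configuration file (patroni.yml)
watchdog:
  mode: automatic        # use the watchdog when a device is available
  device: /dev/watchdog  # the kernel watchdog device, e.g. provided by the softdog module
  safety_margin: 5       # seconds before leader lock expiration when the watchdog must fire
```

With `mode: automatic`, Patroni uses the watchdog when it is available; `mode: required` refuses leadership without one.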
+
+### DCS layer
+
+- [etcd](etcd-info.md) - a Distributed Configuration Store. It stores the state of the PostgreSQL cluster and handles the election of a new primary. An odd number of nodes (minimum three) is required to always have a majority to agree on updates to the cluster state.
+
+### Load balancing layer
+
+- [HAProxy](haproxy-info.md) - the load balancer and the single point of entry to the cluster for client applications. A minimum of two instances is required for redundancy.
+
+- keepalived - a high-availability and failover solution for HAProxy. It provides a virtual IP (VIP) address for HAProxy and prevents its single point of failure by failing over the services to the operational instance.
+
+- (Optional) pgbouncer - a connection pooler for PostgreSQL. The aim of pgbouncer is to lower the performance impact of opening new connections to PostgreSQL.
+
+### Services layer
+
+- [pgBackRest](pgbackrest-info.md) - the backup and restore solution for PostgreSQL. It should also be redundant to eliminate a single point of failure.
+
+- (Optional) Percona Monitoring and Management (PMM) - the solution to monitor the health of your cluster.
+
+## Bare-minimum architecture
+
+There may be constraints on using the [reference architecture with all additional components](#architecture), such as the number of available servers or the cost of additional hardware. You can still achieve high availability with a minimum of two database nodes and three `etcd` instances. The following diagram shows this architecture:
+
+![Bare-minimum architecture of the PostgreSQL cluster](../_images/diagrams/HA-basic.svg)
+
+Using such an architecture has the following limitations:
+
+* This setup only protects against a single-node failure, either a database or an etcd node. Losing more than one node results in a read-only database.
+* The application must be able to connect to multiple database nodes and fail over to the new primary in the case of an outage.
+* The application must act as the load balancer. It must be able to distinguish read/write from read-only requests and distribute them across the cluster.
+* The `pgBackRest` component is optional as it doesn't serve the purpose of high availability. But it is highly recommended for disaster recovery and is a must for production environments. [Contact us](https://www.percona.com/about/contact) to discuss backup configurations and retention policies.
+
+## Additional reading
+
+[How components work together](ha-components.md){.md-button}
+
+## Next steps
+
+[Deployment - initial setup :material-arrow-right:](ha-init-setup.md){.md-button}
\ No newline at end of file
diff --git a/docs/solutions/ha-components.md b/docs/solutions/ha-components.md
new file mode 100644
index 000000000..3b7f24a81
--- /dev/null
+++ b/docs/solutions/ha-components.md
@@ -0,0 +1,53 @@
+# How components work together
+
+This document explains how the components of the proposed [high-availability architecture](ha-architecture.md) work together.
+
+## Database and DCS layers
+
+Let's start with the database and DCS layers as they are interconnected and work closely together.
+
+Every database node hosts PostgreSQL and Patroni instances.
+
+Each PostgreSQL instance in the cluster maintains consistency with other members through streaming replication. Streaming replication is asynchronous by default, meaning that the primary does not wait for the secondaries to acknowledge the receipt of the data to consider the transaction complete.
+
+Each Patroni instance manages its own PostgreSQL instance. This means that Patroni starts and stops PostgreSQL and manages its configuration, acting as a sophisticated service manager for a PostgreSQL cluster.
+
+Patroni can also perform the initial cluster initialization, monitor the cluster state and take other automatic actions if needed. To do so, Patroni relies on and uses the Distributed Configuration Store (DCS), represented by `etcd` in our architecture.
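For example, you can inspect the cluster state that Patroni maintains with the `patronictl` tool. A sketch with illustrative node names and output; the configuration path is an assumption, adjust it to your setup:

```
$ patronictl -c /etc/patroni/patroni.yml list
+--------+------------+---------+---------+----+-----------+
| Member | Host       | Role    | State   | TL | Lag in MB |
+--------+------------+---------+---------+----+-----------+
| node1  | 10.104.0.1 | Leader  | running |  1 |           |
| node2  | 10.104.0.2 | Replica | running |  1 |         0 |
| node3  | 10.104.0.3 | Replica | running |  1 |         0 |
+--------+------------+---------+---------+----+-----------+
```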
+
+Though Patroni supports various Distributed Configuration Stores like ZooKeeper, etcd, Consul or Kubernetes, we recommend and support `etcd` as the most popular DCS due to its simplicity, consistency and reliability.
+
+Note that the PostgreSQL high availability (HA) cluster and the Patroni cluster are the same thing, and we will use these names interchangeably.
+
+When you start Patroni, it writes the cluster configuration information to `etcd`. During the initial cluster initialization, Patroni uses the `etcd` locking mechanism to ensure that only one instance becomes the primary. This mechanism ensures that only a single process can hold a resource at a time, avoiding race conditions and inconsistencies.
+
+You start Patroni instances one by one, so the first instance acquires the lock with a lease in `etcd` and becomes the primary PostgreSQL node. The other instances join the primary as replicas, waiting for the lock to be released.
+
+If the current primary node crashes, its lease on the lock in `etcd` expires and the lock is automatically released. `etcd` then starts a new election, and a standby node attempts to acquire the lock to become the new primary.
+
+Patroni uses `etcd` for more than locking. It also uses `etcd` to store the current state of the cluster, ensuring that all nodes are aware of the latest topology and status.
+
+Another important component is the watchdog, which runs on each database node. The purpose of the watchdog is to prevent split-brain scenarios, where multiple nodes might mistakenly think they are the primary node. The watchdog monitors the node's health by receiving periodic "keepalive" signals from Patroni. If these signals stop due to a crash, high system load or any other reason, the watchdog resets the node to ensure it does not cause inconsistencies.
+
+## Load balancing layer
+
+This layer consists of HAProxy as the connection router and load balancer.
+
+HAProxy acts as a single point of entry to your cluster for client applications. It accepts all requests from client applications and distributes the load evenly across the cluster nodes. It can route read/write requests to the primary and read-only requests to the secondary nodes. This behavior is defined within the HAProxy configuration. To determine the current primary node, HAProxy queries the Patroni REST API.
+
+HAProxy must also be redundant. Each application server or Pod can have its own HAProxy. If it cannot have its own HAProxy, you can deploy HAProxy outside the application layer. This may introduce additional network hops and a failure point.
+
+If you are deploying HAProxy outside the application layer, you need a minimum of 2 HAProxy nodes (one active and one standby) to avoid a single point of failure. These instances share a floating virtual IP address using Keepalived.
+
+Keepalived acts as the failover tool for HAProxy. It provides the virtual IP address (VIP) for HAProxy and monitors its state. When the current active HAProxy node is down, it transfers the VIP to the remaining node and fails over the services there.
+
+## Services layer
+
+Finally, the services layer is represented by `pgBackRest` and PMM.
+
+`pgBackRest` can manage a dedicated backup server or make backups to the cloud. `pgBackRest` agents are deployed on every database node. `pgBackRest` can utilize standby nodes to offload the backup load from the primary; however, WAL archiving happens only on the primary node. By communicating with its agents, `pgBackRest` determines the current cluster topology and uses the nodes to make backups most effectively, without any manual reconfiguration in the event of a switchover or failover.
+
+The monitoring solution is optional but nice to have. It enables you to monitor the health of your high-availability architecture, receive timely alerts should performance issues occur, and proactively react to them.
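As an illustration, you can verify both the backup repository and the archiving configuration from any node where `pgBackRest` is set up. These commands use the `prod_backup` stanza name from the disaster recovery setup in this guide; adjust it to your stanza:

```
$ pgbackrest check --stanza=prod_backup
$ pgbackrest info --stanza=prod_backup
```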
+
+## Next steps
+
+[Deployment - initial setup :material-arrow-right:](ha-init-setup.md){.md-button}
diff --git a/docs/solutions/ha-etcd-config.md b/docs/solutions/ha-etcd-config.md
new file mode 100644
index 000000000..9b95b3493
--- /dev/null
+++ b/docs/solutions/ha-etcd-config.md
@@ -0,0 +1,170 @@
+# Etcd setup
+
+In our solutions, we use the etcd distributed configuration store. [Refresh your knowledge about etcd](etcd-info.md).
+
+## Install etcd
+
+Install etcd on all PostgreSQL nodes: `node1`, `node2` and `node3`.
+
+=== ":material-debian: On Debian / Ubuntu"
+
+    1. Install etcd:
+
+        ```{.bash data-prompt="$"}
+        $ sudo apt install etcd etcd-server etcd-client
+        ```
+
+    2. Stop and disable etcd:
+
+        ```{.bash data-prompt="$"}
+        $ sudo systemctl stop etcd
+        $ sudo systemctl disable etcd
+        ```
+
+=== ":material-redhat: On RHEL and derivatives"
+
+    1. Install etcd:
+
+        ```{.bash data-prompt="$"}
+        $ sudo yum install etcd python3-python-etcd
+        ```
+
+    2. Stop and disable etcd:
+
+        ```{.bash data-prompt="$"}
+        $ sudo systemctl stop etcd
+        $ sudo systemctl disable etcd
+        ```
+
+!!! note
+
+    If you [installed etcd from tarballs](../tarball.md), you must first [enable it](../enable-extensions.md#etcd) before configuring it.
+
+## Configure etcd
+
+To get started with an `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. There are the following bootstrapping mechanisms:
+
+* Static - used when the IP addresses of the cluster nodes are known ahead of time.
+* Discovery service - used when the IP addresses of the cluster nodes are not known ahead of time.
+
+Since we know the IP addresses of the nodes, we will use the static method. For the discovery service, please refer to the [etcd documentation :octicons-link-external-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}.
+
+We will configure and start all etcd nodes in parallel.
This can be done either by modifying each node's configuration file or by using the command line options. Use the method that you prefer.
+
+### Method 1. Modify the configuration file
+
+1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
+
+    === "node1"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node1'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.1:2380
+        listen-peer-urls: http://10.104.0.1:2380
+        advertise-client-urls: http://10.104.0.1:2379
+        listen-client-urls: http://10.104.0.1:2379
+        ```
+
+    === "node2"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node2'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.2:2380
+        listen-peer-urls: http://10.104.0.2:2380
+        advertise-client-urls: http://10.104.0.2:2379
+        listen-client-urls: http://10.104.0.2:2379
+        ```
+
+    === "node3"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node3'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.3:2380
+        listen-peer-urls: http://10.104.0.3:2380
+        advertise-client-urls: http://10.104.0.3:2379
+        listen-client-urls: http://10.104.0.3:2379
+        ```
+
+2. Enable and start the `etcd` service on all nodes:
+
+    ```{.bash data-prompt="$"}
+    $ sudo systemctl enable --now etcd
+    $ sudo systemctl status etcd
+    ```
+
+    During the node start, etcd searches for other cluster nodes defined in the configuration. If the other nodes are not yet running, the start may fail with a quorum timeout. This is expected behavior. Try starting all nodes again at the same time for the etcd cluster to be created.
+
+--8<-- "check-etcd.md"
+
+### Method 2. Start etcd nodes with command line options
+
+1. On each etcd node, set the environment variables for the cluster members, the cluster token and state:
+
+    ```
+    TOKEN=PostgreSQL_HA_Cluster_1
+    CLUSTER_STATE=new
+    NAME_1=node1
+    NAME_2=node2
+    NAME_3=node3
+    HOST_1=10.104.0.1
+    HOST_2=10.104.0.2
+    HOST_3=10.104.0.3
+    CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
+    ```
+
+2. Start each etcd node in parallel using the following command:
+
+    === "node1"
+
+        ```{.bash data-prompt="$"}
+        THIS_NAME=${NAME_1}
+        THIS_IP=${HOST_1}
+        etcd --data-dir=data.etcd --name ${THIS_NAME} \
+        --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
+        --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
+        --initial-cluster ${CLUSTER} \
+        --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} &
+        ```
+
+    === "node2"
+
+        ```{.bash data-prompt="$"}
+        THIS_NAME=${NAME_2}
+        THIS_IP=${HOST_2}
+        etcd --data-dir=data.etcd --name ${THIS_NAME} \
+        --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
+        --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
+        --initial-cluster ${CLUSTER} \
+        --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} &
+        ```
+
+    === "node3"
+
+        ```{.bash data-prompt="$"}
+        THIS_NAME=${NAME_3}
+        THIS_IP=${HOST_3}
+        etcd --data-dir=data.etcd \
--name ${THIS_NAME} \
+        --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
+        --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
+        --initial-cluster ${CLUSTER} \
+        --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} &
+        ```
+
+--8<-- "check-etcd.md"
+
+## Next steps
+
+[Patroni setup :material-arrow-right:](ha-patroni.md){.md-button}
\ No newline at end of file
diff --git a/docs/solutions/ha-haproxy.md b/docs/solutions/ha-haproxy.md
new file mode 100644
index 000000000..e89957216
--- /dev/null
+++ b/docs/solutions/ha-haproxy.md
@@ -0,0 +1,269 @@
+# Configure HAProxy
+
+HAProxy is the connection router and acts as a single point of entry to your PostgreSQL cluster for client applications. Additionally, HAProxy provides load balancing for read-only connections.
+
+A client application connects to HAProxy and sends its read/write requests there. You can provide different ports in the HAProxy configuration file so that the client application can explicitly choose between a read-write (primary) connection and a read-only (replica) connection using the right port number. In this deployment, writes are routed to port 5000 and reads to port 5001.
+
+The client application doesn't know which node in the underlying cluster is the current primary. But it must connect to the HAProxy read-write connection to send all write requests. This ensures that HAProxy correctly routes all write load to the current primary node. Read requests are routed to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded.
+
+When you deploy HAProxy outside the application layer, you must deploy multiple instances of it and have an automatic failover mechanism to eliminate a single point of failure for HAProxy.
+
+In this document we focus on deployment on premises, and we use `keepalived`.
It monitors the HAProxy state and manages the virtual IP for HAProxy.
+
+If you use a cloud infrastructure, it may be easier to use the load balancer provided by the cloud provider to achieve high availability with HAProxy.
+
+## HAProxy setup
+
+1. Install HAProxy on the HAProxy nodes: `HAProxy1`, `HAProxy2` and `HAProxy3`:
+
+    ```{.bash data-prompt="$"}
+    $ sudo apt install percona-haproxy
+    ```
+
+2. The HAProxy configuration file path is `/etc/haproxy/haproxy.cfg`. Specify the following configuration in this file for every node.
+
+    ```
+    global
+        maxconn 100                     # Maximum number of concurrent connections
+
+    defaults
+        log global                      # Use global logging configuration
+        mode tcp                        # TCP mode for PostgreSQL connections
+        retries 2                       # Number of retries before marking a server as failed
+        timeout client 30m              # Maximum time to wait for client data
+        timeout connect 4s              # Maximum time to establish connection to server
+        timeout server 30m              # Maximum time to wait for server response
+        timeout check 5s                # Maximum time to wait for health check response
+
+    listen stats                        # Statistics monitoring
+        mode http                       # The protocol for the web-based stats UI
+        bind *:7000                     # Port to listen to on all network interfaces
+        stats enable                    # Statistics reporting interface
+        stats uri /stats                # URL path for the stats page
+        stats auth percona:myS3cr3tpass # Username:password authentication
+
+    listen primary
+        bind *:5000                     # Port for write connections
+        option httpchk /primary
+        http-check expect status 200
+        default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions # Server health check parameters
+        server node1 node1:5432 maxconn 100 check port 8008
+        server node2 node2:5432 maxconn 100 check port 8008
+        server node3 node3:5432 maxconn 100 check port 8008
+
+    listen standbys
+        balance roundrobin              # Round-robin load balancing for read connections
+        bind *:5001                     # Port for read connections
+        option httpchk /replica
+        http-check expect status 200
+        default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions # Server health check parameters
+        server node1 node1:5432 maxconn 100 check port 8008
+        server node2 node2:5432 maxconn 100 check port 8008
+        server node3 node3:5432 maxconn 100 check port 8008
+    ```
+
+    HAProxy will use the REST APIs hosted by Patroni to check the health status of each PostgreSQL node and route the requests appropriately.
+
+    To monitor HAProxy stats, create the user who has access to it. Read more about the statistics dashboard in the [HAProxy documentation :octicons-link-external-16:](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/alerts-and-monitoring/statistics/).
+
+3. Restart HAProxy:
+
+    ```{.bash data-prompt="$"}
+    $ sudo systemctl restart haproxy
+    ```
+
+4. Check the HAProxy logs to see if there are any errors:
+
+    ```{.bash data-prompt="$"}
+    $ sudo journalctl -u haproxy.service -n 100 -f
+    ```
+
+## Keepalived setup
+
+The HAProxy instances will share a virtual IP address `203.0.113.1` as the single point of entry for client applications.
+
+In this setup we define a basic health check for HAProxy. You may want to use a more sophisticated check. You can do this by writing a script and referencing it in the `keepalived` configuration. See the [Example of HAProxy health check](#example-of-haproxy-health-check) section for details.
+
+1. Install `keepalived` on all HAProxy nodes:
+
+    === ":material-debian: On Debian and Ubuntu"
+
+        ```{.bash data-prompt="$"}
+        $ sudo apt install keepalived
+        ```
+
+    === ":material-redhat: On RHEL and derivatives"
+
+        ```{.bash data-prompt="$"}
+        $ sudo yum install keepalived
+        ```
+
+2.
Create the `keepalived` configuration file at `/etc/keepalived/keepalived.conf` with the following contents for each node: + + === "Primary HAProxy (HAProxy1)" + + ```ini + vrrp_script chk_haproxy { + script "killall -0 haproxy" # Basic check if HAProxy process is running + interval 3 # Check every 3 seconds + fall 3 # The number of failures to mark the node as down + rise 2 # The number of successes to mark the node as up + weight -11 # Reduce priority by 11 on failure + } + + vrrp_instance CLUSTER_1 { # The VRRP instance name; here it matches the Patroni cluster name + state MASTER # Initial state for the primary node + interface eth1 # Network interface to bind to + virtual_router_id 99 # Unique ID for this VRRP instance + priority 110 # The priority for the primary must be the highest + advert_int 1 # Advertisement interval + authentication { + auth_type PASS + auth_pass myS3cr3tpass # Authentication password + } + virtual_ipaddress { + 203.0.113.1/24 # The virtual IP address + } + track_script { + chk_haproxy + } + } + ``` + + === "HAProxy2" + + ```ini + vrrp_script chk_haproxy { + script "killall -0 haproxy" # Basic check if HAProxy process is running + interval 2 # Check every 2 seconds + fall 2 # The number of failures to mark the node as down + rise 2 # The number of successes to mark the node as up + weight 2 # Increase priority by 2 when the check succeeds + } + + vrrp_instance CLUSTER_1 { + state BACKUP # Initial state for backup node + interface eth1 # Network interface to bind to + virtual_router_id 99 # Same ID as primary + priority 100 # Lower priority than primary + advert_int 1 # Advertisement interval + authentication { + auth_type PASS + auth_pass myS3cr3tpass # Same password as primary + } + virtual_ipaddress { + 203.0.113.1/24 + } + track_script { + chk_haproxy + } + } + ``` + + === "HAProxy3" + + ```ini + vrrp_script chk_haproxy { + script "killall -0 haproxy" # Basic check if HAProxy process is running + interval 2 # Check every 2 seconds + fall 3 # The number of failures to mark the node as 
down + rise 2 # The number of successes to mark the node as up + weight 6 # Increase priority by 6 when the check succeeds + } + + vrrp_instance CLUSTER_1 { + state BACKUP # Initial state for backup node + interface eth1 # Network interface to bind to + virtual_router_id 99 # Same ID as primary + priority 105 # Lower than the primary, higher than HAProxy2 + advert_int 1 # Advertisement interval + authentication { + auth_type PASS + auth_pass myS3cr3tpass # Same password as primary + } + virtual_ipaddress { + 203.0.113.1/24 + } + track_script { + chk_haproxy + } + } + ``` + +3. Start `keepalived`: + + ```{.bash data-prompt="$"} + $ sudo systemctl start keepalived + ``` + +4. Check the `keepalived` status: + + ```{.bash data-prompt="$"} + $ sudo systemctl status keepalived + ``` + +!!! note + + The basic health check (`killall -0 haproxy`) only verifies that the HAProxy process is running. For production environments, consider implementing more comprehensive health checks that verify the node's overall responsiveness and HAProxy's ability to handle connections. + +### Example of HAProxy health check + +Checking only that the `haproxy` process is running is sometimes not enough. The process may be running while HAProxy is in a degraded state. A good practice is to run additional checks to ensure HAProxy is healthy. + +Here's an example health check script for HAProxy. It performs the following checks: + +1. Verifies that the HAProxy process is running +2. Tests if the HAProxy admin socket is accessible +3. Confirms that HAProxy is listening on the port for write connections (`5000`, as defined in the HAProxy configuration above) + +```bash +#!/bin/bash + +# Exit codes: +# 0 - HAProxy is healthy +# 1 - HAProxy is not healthy + +# Check if HAProxy process is running +if ! pgrep -x haproxy > /dev/null; then + echo "HAProxy process is not running" + exit 1 +fi + +# Check if HAProxy socket is accessible +if ! socat - UNIX-CONNECT:/var/run/haproxy/admin.sock > /dev/null 2>&1; then + echo "HAProxy socket is not accessible" + exit 1 +fi + +# Check if HAProxy is listening on the port for write connections +if ! netstat -tuln | grep -q ":5000 "; then + echo "HAProxy is not listening on port 5000" + exit 1 +fi + +# All checks passed +exit 0 +``` + +Save this script as `/usr/local/bin/check_haproxy.sh` and make it executable: + +```{.bash data-prompt="$"} +$ sudo chmod +x /usr/local/bin/check_haproxy.sh +``` + +Then reference this script in the Keepalived configuration on each node: + +```ini +vrrp_script chk_haproxy { + script "/usr/local/bin/check_haproxy.sh" + interval 2 + fall 3 + rise 2 + weight -10 +} +``` + +Congratulations! You have successfully configured your HAProxy solution. Now you can proceed to testing it. + +## Next steps + +[Test Patroni PostgreSQL cluster :material-arrow-right:](ha-test.md){.md-button} diff --git a/docs/solutions/ha-init-setup.md b/docs/solutions/ha-init-setup.md new file mode 100644 index 000000000..6d8d5ee53 --- /dev/null +++ b/docs/solutions/ha-init-setup.md @@ -0,0 +1,81 @@ +# Initial setup for high availability + +This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni. This guide relies on the provided [architecture](ha-architecture.md) for high availability. + +## Considerations + +1. This is an example deployment where etcd runs on the same host machines as Patroni and PostgreSQL, and there are dedicated HAProxy hosts. Alternatively, etcd can run on a different set of nodes. + + If etcd is deployed on the same host machine as Patroni and PostgreSQL, a separate disk system for etcd and PostgreSQL is recommended for performance reasons. + +2. 
For this setup, we will use the nodes that have the following IP addresses: + + + | Node name | Public IP address | Internal IP address + |---------------|-------------------|-------------------- + | node1 | 157.230.42.174 | 10.104.0.7 + | node2 | 68.183.177.183 | 10.104.0.2 + | node3 | 165.22.62.167 | 10.104.0.8 + | HAProxy1 | 112.209.126.159 | 10.104.0.6 + | HAProxy2 | 134.209.111.138 | 10.104.0.5 + | HAProxy3 | 134.60.204.27 | 10.104.0.3 + | backup | 97.78.129.11 | 10.104.0.9 + + We also need a virtual IP address for HAProxy: `203.0.113.1` + + +!!! important + + We recommend that you do not expose the hosts/nodes where Patroni / etcd / PostgreSQL are running to public networks due to security risks. Use firewalls, virtual networks, subnets, or the like to protect the database hosts from any kind of attack. + +## Configure name resolution + +It’s not necessary to have name resolution, but it makes the whole setup more readable and less error-prone. Here, instead of configuring DNS, we use local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other’s names and allow their seamless communication. + +Run the following commands on each node. + +1. Set the hostname for each node. Change the node name to `node1`, `node2`, `node3`, `HAProxy1`, `HAProxy2`, `HAProxy3` and `backup`, respectively: + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node1 + ``` + +2. Modify the `/etc/hosts` file of each node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: + + ```text + # Cluster IP and names + + 10.104.0.7 node1 + 10.104.0.2 node2 + 10.104.0.8 node3 + 10.104.0.6 HAProxy1 + 10.104.0.5 HAProxy2 + 10.104.0.3 HAProxy3 + 10.104.0.9 backup + ``` + +## Configure Percona repository + +To install the software from Percona, you need to subscribe to Percona repositories. 
To do this, you need `percona-release`, the repository management tool. + +Run the following commands on each node as the root user or with `sudo` privileges. + +1. Install `percona-release` + + === ":material-debian: On Debian and Ubuntu" + + --8<-- "percona-release-apt.md" + + === ":material-redhat: On RHEL and derivatives" + + --8<-- "percona-release-yum.md" + +2. Enable the repository: + + ```{.bash data-prompt="$"} + $ sudo percona-release setup ppg{{pgversion}} + ``` + +## Next steps + +[Set up etcd :material-arrow-right:](ha-etcd-config.md){.md-button} \ No newline at end of file diff --git a/docs/solutions/ha-measure.md b/docs/solutions/ha-measure.md new file mode 100644 index 000000000..058350022 --- /dev/null +++ b/docs/solutions/ha-measure.md @@ -0,0 +1,39 @@ +# Measuring high availability + +The need for high availability is determined by the business requirements, potential risks, and operational limitations. For example, the more components you add to your infrastructure, the more complex and time-consuming it is to maintain. Moreover, it may introduce extra failure points. The recommendation is to follow the principle "The simpler the better". + +The level of high availability depends on the following: + +* how frequently you may encounter an outage or downtime, +* how much downtime you can bear without negatively impacting your users for every outage, and +* how much data loss you can tolerate during an outage. + + +When you evaluate high availability, consider these two aspects: + +* Expected level of availability. +* Actual availability level of your infrastructure. + +### Expected level of availability + +It is measured by establishing a measurement time frame and dividing the time the system was available by the total length of that time frame. This ratio will rarely be one, which would equal 100% availability. At Percona, we don't consider a solution to be highly available if it is not at least 99%, or two nines, available. 
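To sanity-check such figures, you can convert an availability percentage directly into the downtime budget it implies. The following is a minimal shell sketch (the `downtime` helper name is ours for illustration; it assumes a 365.25-day year):

```shell
#!/bin/sh
# Convert an availability percentage into the maximum allowed downtime.
# Usage: downtime <availability-percent>
downtime() {
    awk -v a="$1" 'BEGIN {
        year_min = 365.25 * 24 * 60              # minutes in a year
        down = (1 - a / 100) * year_min          # allowed downtime, minutes per year
        printf "%s%% -> %.2f minutes/year (%.2f seconds/day)\n", a, down, down * 60 / 365.25
    }'
}

downtime 99.99    # "four nines": about 52.60 minutes per year
downtime 99.999   # "five nines": about 5.26 minutes per year
```

The same formula reproduces the per-month and per-week downtime figures by substituting the respective number of minutes.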
+ +The following table shows the amount of downtime for each level of availability from two to five nines. + +| Availability % | Downtime per year | Downtime per month | Downtime per week | Downtime per day | +|--------------------------|-------------------|--------------------|-------------------|-------------------| +| 99% (“two nines”) | 3.65 days | 7.31 hours | 1.68 hours | 14.40 minutes | +| 99.5% (“two nines five”) | 1.83 days | 3.65 hours | 50.40 minutes | 7.20 minutes | +| 99.9% (“three nines”) | 8.77 hours | 43.83 minutes | 10.08 minutes | 1.44 minutes | +| 99.95% (“three nines five”) | 4.38 hours | 21.92 minutes | 5.04 minutes | 43.20 seconds | +| 99.99% (“four nines”) | 52.60 minutes | 4.38 minutes | 1.01 minutes | 8.64 seconds | +| 99.995% (“four nines five”) | 26.30 minutes | 2.19 minutes | 30.24 seconds | 4.32 seconds | +| 99.999% (“five nines”) | 5.26 minutes | 26.30 seconds | 6.05 seconds | 864.00 milliseconds | + +### Actual level of availability + +Measuring the real level of high availability (HA) in your system is key to making sure your investment in HA infrastructure pays off. Instead of relying on assumptions or expectations, you should base your availability insights on incident management data. This is the information collected during service disruptions, failures, or outages that affect the normal functioning of the setup. With this data, you can track metrics like uptime, Mean Time to Recovery (MTTR), and Mean Time Between Failures (MTBF). + +MTBF gives you a picture of how reliable your infrastructure really is. In a well-designed high-availability environment, incidents should be rare, typically occurring no more than once every 2 to 4 years. This assumes a robust infrastructure, as not all systems are equally suited to handling database load. + +Recovery speed matters too. For example, a typical Patroni-based cluster can fail over to a new primary node within 30 to 50 seconds. 
However, note that database availability metrics typically don't consider the application's ability to detect the failover and reconnect. Some applications recover seamlessly, while others may require a restart. diff --git a/docs/solutions/ha-patroni.md b/docs/solutions/ha-patroni.md new file mode 100644 index 000000000..d0516ef61 --- /dev/null +++ b/docs/solutions/ha-patroni.md @@ -0,0 +1,371 @@ +# Patroni setup + +## Install Percona Distribution for PostgreSQL and Patroni + +Run the following commands as root or with `sudo` privileges on `node1`, `node2` and `node3`. + +=== ":material-debian: On Debian / Ubuntu" + + 1. Disable the upstream `postgresql-{{pgversion}}` package. + + 2. Install Percona Distribution for PostgreSQL package + + ```{.bash data-prompt="$"} + $ sudo apt install percona-postgresql-{{pgversion}} + ``` + + 3. Install some Python and auxiliary packages to help with Patroni + + ```{.bash data-prompt="$"} + $ sudo apt install python3-pip python3-dev binutils + ``` + + 4. Install Patroni + + ```{.bash data-prompt="$"} + $ sudo apt install percona-patroni + ``` + + 5. Stop and disable all installed services: + + ```{.bash data-prompt="$"} + $ sudo systemctl stop {patroni,postgresql} + $ sudo systemctl disable {patroni,postgresql} + ``` + + 6. Even though Patroni can use an existing Postgres installation, our recommendation for a **new cluster that has no data** is to remove the data directory. This forces Patroni to initialize a new Postgres cluster instance. + + ```{.bash data-prompt="$"} + $ sudo systemctl stop postgresql + $ sudo rm -rf /var/lib/postgresql/{{pgversion}}/main + ``` + +=== ":material-redhat: On RHEL and derivatives" + + 1. Install Percona Distribution for PostgreSQL package + + ```{.bash data-prompt="$"} + $ sudo yum install percona-postgresql{{pgversion}}-server + ``` + + 2. Check the [platform specific notes for Patroni](../yum.md#for-percona-distribution-for-postgresql-packages) + + 3. 
Install some Python and auxiliary packages to help with Patroni and etcd + + ```{.bash data-prompt="$"} + $ sudo yum install python3-pip python3-devel binutils + ``` + + 4. Install Patroni + + ```{.bash data-prompt="$"} + $ sudo yum install percona-patroni + ``` + + 5. Stop and disable all installed services: + + ```{.bash data-prompt="$"} + $ sudo systemctl stop {patroni,postgresql-{{pgversion}}} + $ sudo systemctl disable {patroni,postgresql-{{pgversion}}} + ``` + + !!! important + + **Don't** initialize the cluster or start the `postgresql` service. The cluster initialization and setup are handled by Patroni during the bootstrapping stage. + +## Configure Patroni + +Run the following commands on all nodes. You can do this in parallel: + +### Create environment variables + +Environment variables simplify the config file creation: + +1. Node name: + + ```{.bash data-prompt="$"} + $ export NODE_NAME=`hostname -f` + ``` + +2. Node IP: + + ```{.bash data-prompt="$"} + $ export NODE_IP=`getent hosts $(hostname -f) | awk '{ print $1 }' | grep -v grep | grep -v '127.0.1.1'` + ``` + + * Check that the correct IP address is defined: + + ```{.bash data-prompt="$"} + $ echo $NODE_IP + ``` + + ??? example "Sample output `node1`" + + ```{text .no-copy} + 10.104.0.7 + ``` + + If you have multiple IP addresses defined on your server and the environment variable contains the wrong one, you can manually redefine it. For example, run the following command for `node1`: + + ```{.bash data-prompt="$"} + $ NODE_IP=10.104.0.7 + ``` + +3. Create variables to store the paths. 
Check the path to the `data` and `bin` folders on your operating system and adjust the variables accordingly: + + === ":material-debian: Debian and Ubuntu" + + ```bash + DATA_DIR="/var/lib/postgresql/{{pgversion}}/main" + PG_BIN_DIR="/usr/lib/postgresql/{{pgversion}}/bin" + ``` + + === ":material-redhat: RHEL and derivatives" + + ```bash + DATA_DIR="/var/lib/pgsql/data/" + PG_BIN_DIR="/usr/pgsql-{{pgversion}}/bin" + ``` + +4. Patroni information: + + ```bash + NAMESPACE="percona_lab" + SCOPE="cluster_1" + ``` + +### Create the directories required by Patroni + +Create the directory to store the configuration file and make it owned by the `postgres` user. + +```{.bash data-prompt="$"} +$ sudo mkdir -p /etc/patroni/ +$ sudo chown -R postgres:postgres /etc/patroni/ +``` + +### Patroni configuration file + +Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for every node: + +```bash +echo " +namespace: ${NAMESPACE} +scope: ${SCOPE} +name: ${NODE_NAME} + +restapi: + listen: 0.0.0.0:8008 + connect_address: ${NODE_IP}:8008 + +etcd3: + host: ${NODE_IP}:2379 + +bootstrap: + # this section will be written into Etcd:///config after initializing new cluster + dcs: + ttl: 30 + loop_wait: 10 + retry_timeout: 10 + maximum_lag_on_failover: 1048576 + + postgresql: + use_pg_rewind: true + use_slots: true + parameters: + wal_level: replica + hot_standby: "on" + wal_keep_segments: 10 + max_wal_senders: 5 + max_replication_slots: 10 + wal_log_hints: "on" + logging_collector: 'on' + max_wal_size: '10GB' + archive_mode: "on" + archive_timeout: 600s + archive_command: "cp -f %p /home/postgres/archived/%f" + + pg_hba: # Add following lines to pg_hba.conf after running 'initdb' + - host replication replicator 127.0.0.1/32 trust + - host replication replicator 0.0.0.0/0 md5 + - host all all 0.0.0.0/0 md5 + - host all all ::0/0 md5 + recovery_conf: + restore_command: cp /home/postgres/archived/%f %p + + # some 
desired options for 'initdb' + initdb: # Note: It needs to be a list (some options need values, others are switches) + - encoding: UTF8 + - data-checksums + + +postgresql: + cluster_name: cluster_1 + listen: 0.0.0.0:5432 + connect_address: ${NODE_IP}:5432 + data_dir: ${DATA_DIR} + bin_dir: ${PG_BIN_DIR} + pgpass: /tmp/pgpass0 + authentication: + replication: + username: replicator + password: replPasswd + superuser: + username: postgres + password: qaz123 + parameters: + unix_socket_directories: "/var/run/postgresql/" + create_replica_methods: + - basebackup + basebackup: + checkpoint: 'fast' + + watchdog: + mode: required # Allowed values: off, automatic, required + device: /dev/watchdog + safety_margin: 5 + +tags: + nofailover: false + noloadbalance: false + clonefrom: false + nosync: false +" | sudo tee /etc/patroni/patroni.yml +``` + +??? admonition "Patroni configuration file" + + Let’s take a moment to understand the contents of the `patroni.yml` file. + + The first section provides the details of the node and its connection ports. After that, we have the `etcd` service and its port details. + + Following these, there is a `bootstrap` section that contains the PostgreSQL configurations and the steps to run once the database is initialized. The `pg_hba.conf` entries specify all the other nodes that can connect to this node and their authentication mechanism. + +### Systemd configuration + +1. Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step.
+ + If it's **not created**, create it manually and specify the following contents within: + + ```ini title="/etc/systemd/system/percona-patroni.service" + [Unit] + Description=Runners to orchestrate a high-availability PostgreSQL + After=syslog.target network.target + + [Service] + Type=simple + + User=postgres + Group=postgres + + # Start the patroni process + ExecStart=/bin/patroni /etc/patroni/patroni.yml + + # Send HUP to reload from patroni.yml + ExecReload=/bin/kill -s HUP $MAINPID + + # only kill the patroni process, not its children, so it will gracefully stop postgres + KillMode=process + + # Give a reasonable amount of time for the server to start up/shut down + TimeoutSec=30 + + # Do not restart the service if it crashes, we want to manually inspect database on failure + Restart=no + + [Install] + WantedBy=multi-user.target + ``` + +2. Make `systemd` aware of the new service: + + ```{.bash data-prompt="$"} + $ sudo systemctl daemon-reload + ``` + +3. Make sure you have the configuration file and the `systemd` unit file created on every node. + +### Start Patroni + +Now it's time to start Patroni. Run the following commands on all nodes, but **not in parallel**. + +1. Start Patroni on `node1` first, wait for the service to come to life, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node: + + ```{.bash data-prompt="$"} + $ sudo systemctl enable --now percona-patroni + ``` + + When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file. + +2. Check the service to see if there are errors: + + ```{.bash data-prompt="$"} + $ sudo journalctl -fu percona-patroni + ``` + + See [Troubleshooting Patroni startup](#troubleshooting-patroni-startup) for guidelines in case of errors. 
+ + If Patroni has started properly, you should be able to locally connect to a PostgreSQL node using the following command: + + ```{.bash data-prompt="$"} + $ sudo psql -U postgres + + psql ({{dockertag}}) + Type "help" for help. + + postgres=# + ``` + +3. When all nodes are up and running, you can check the cluster status using the following command: + + ```{.bash data-prompt="$"} + $ sudo patronictl -c /etc/patroni/patroni.yml list + ``` + + The output resembles the following: + + ??? example "Sample output node1" + + ```{.text .no-copy} + + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ + | Member | Host | Role | State | TL | Lag in MB | + +--------+------------+---------+-----------+----+-----------+ + | node1 | 10.0.100.1 | Leader | running | 1 | | + +--------+------------+---------+-----------+----+-----------+ + ``` + + ??? example "Sample output node3" + + ```{.text .no-copy} + + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ + | Member | Host | Role | State | TL | Lag in MB | + +--------+------------+---------+-----------+----+-----------+ + | node1 | 10.0.100.1 | Leader | running | 1 | | + | node2 | 10.0.100.2 | Replica | streaming | 1 | 0 | + | node3 | 10.0.100.3 | Replica | streaming | 1 | 0 | + +--------+------------+---------+-----------+----+-----------+ + ``` + +### Troubleshooting Patroni startup + +A common error is Patroni complaining about the lack of proper entries in the `pg_hba.conf` file. If you see such errors, you must manually add or fix the entries in that file and then restart the service. + +An example of such an error is `No pg_hba.conf entry for replication connection from host to , user replicator, no encryption`. This means that Patroni cannot connect to the node you're adding to the cluster. To resolve this issue, add the IP addresses of the nodes to the `pg_hba:` section of the Patroni configuration file. 
+ +``` +pg_hba: # Add following lines to pg_hba.conf after running 'initdb' +- host replication replicator 127.0.0.1/32 trust +- host replication replicator 0.0.0.0/0 md5 +- host replication replicator 10.0.100.2/32 trust +- host replication replicator 10.0.100.3/32 trust +- host all all 0.0.0.0/0 md5 +- host all all ::0/0 md5 +recovery_conf: + restore_command: cp /home/postgres/archived/%f %p +``` + +For production use, we recommend adding nodes individually, as this is the more secure way. However, if your network is secure and you trust it, you can add the whole network these nodes belong to as trusted, bypassing password authentication. Then all nodes from this network can connect to the Patroni cluster. + +Changing the `patroni.yml` file and restarting the service will not have any effect here, because the bootstrap section specifies the configuration to apply when PostgreSQL is first started on the node. Patroni will not repeat the bootstrap process even if the configuration file is modified and the service is restarted. + +## Next steps + +[pgBackRest setup :material-arrow-right:](pgbackrest.md){.md-button} diff --git a/docs/solutions/ha-setup-apt.md b/docs/solutions/ha-setup-apt.md deleted file mode 100644 index bab3e6674..000000000 --- a/docs/solutions/ha-setup-apt.md +++ /dev/null @@ -1,581 +0,0 @@ -# Deploying PostgreSQL for high availability with Patroni on Debian or Ubuntu - -This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni on Debian or Ubuntu. - - -## Preconditions - -1. This is an example deployment where etcd runs on the same host machines as the Patroni and PostgreSQL and there is a single dedicated HAProxy host. Alternatively etcd can run on different set of nodes. - - If etcd is deployed on the same host machine as Patroni and PostgreSQL, separate disk system for etcd and PostgreSQL is recommended due to performance reasons. - -2. 
For this setup, we will use the nodes running on Ubuntu 22.04 as the base operating system: - - -| Node name | Public IP address | Internal IP address -|---------------|-------------------|-------------------- -| node1 | 157.230.42.174 | 10.104.0.7 -| node2 | 68.183.177.183 | 10.104.0.2 -| node3 | 165.22.62.167 | 10.104.0.8 -| HAProxy-demo | 134.209.111.138 | 10.104.0.6 - - -!!! note - - We recommend not to expose the hosts/nodes where Patroni / etcd / PostgreSQL are running to public networks due to security risks. Use Firewalls, Virtual networks, subnets or the like to protect the database hosts from any kind of attack. - -## Initial setup - -Configure every node. - -### Set up hostnames in the `/etc/hosts` file - -It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. - -=== "node1" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node1 - ``` - - 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: - - ```text hl_lines="3 4" - # Cluster IP and names - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -=== "node2" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node2 - ``` - - 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: - - ```text hl_lines="2 4" - # Cluster IP and names - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -=== "node3" - - 1. 
Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node3 - ``` - - 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: - - ```text hl_lines="2 3" - # Cluster IP and names - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -=== "HAproxy-demo" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname HAProxy-demo - ``` - - 2. Modify the `/etc/hosts` file. The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: - - ```text hl_lines="3 4 5" - # Cluster IP and names - 10.104.0.6 HAProxy-demo - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -### Install the software - -Run the following commands on `node1`, `node2` and `node3`: - -1. Install Percona Distribution for PostgreSQL - - * Disable the upstream `postgresql-{{pgversion}}` package. - - * Install the `percona-release` repository management tool - - --8<-- "percona-release-apt.md" - - * Enable the repository - - ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg{{pgversion}} - ``` - - * Install Percona Distribution for PostgreSQL package - - ```{.bash data-prompt="$"} - $ sudo apt install percona-postgresql-{{pgversion}} - ``` - -2. Install some Python and auxiliary packages to help with Patroni and etcd - - ```{.bash data-prompt="$"} - $ sudo apt install python3-pip python3-dev binutils - ``` - -3. Install etcd, Patroni, pgBackRest packages: - - - ```{.bash data-prompt="$"} - $ sudo apt install percona-patroni \ - etcd etcd-server etcd-client \ - percona-pgbackrest - ``` - -4. Stop and disable all installed services: - - ```{.bash data-prompt="$"} - $ sudo systemctl stop {etcd,patroni,postgresql} - $ systemctl disable {etcd,patroni,postgresql} - ``` - -5. 
Even though Patroni can use an existing Postgres installation, remove the data directory to force it to initialize a new Postgres cluster instance. - - ```{.bash data-prompt="$"} - $ sudo systemctl stop postgresql - $ sudo rm -rf /var/lib/postgresql/16/main - ``` - -## Configure etcd distributed store - -In our implementation we use etcd distributed configuration store. [Refresh your knowledge about etcd](high-availability.md#etcd). - -!!! note - - If you [installed the software from tarballs](../tarball.md), you must first [enable etcd](../enable-extensions.md#etcd) before configuring it. - -To get started with `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. There are the following bootstrapping mechanisms: - -* Static in the case when the IP addresses of the cluster nodes are known -* Discovery service - for cases when the IP addresses of the cluster are not known ahead of time. - -Since we know the IP addresses of the nodes, we will use the static method. For using the discovery service, please refer to the [etcd documentation :octicons-external-link-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}. - -We will configure and start all etcd nodes in parallel. This can be done either by modifying each node's configuration or using the command line options. Use the method that you prefer more. - -### Method 1. Modify the configuration file - -1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. 
- - === "node1" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.1:2380 - listen-peer-urls: http://10.104.0.1:2380 - advertise-client-urls: http://10.104.0.1:2379 - listen-client-urls: http://10.104.0.1:2379 - ``` - - === "node2" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node2' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380, node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.2:2380 - listen-peer-urls: http://10.104.0.2:2380 - advertise-client-urls: http://10.104.0.2:2379 - listen-client-urls: http://10.104.0.2:2379 - ``` - - === "node3" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node3' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380, node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.3:2380 - listen-peer-urls: http://10.104.0.3:2380 - advertise-client-urls: http://10.104.0.3:2379 - listen-client-urls: http://10.104.0.3:2379 - ``` - -2. Enable and start the `etcd` service on all nodes: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl start etcd - $ sudo systemctl status etcd - ``` - - During the node start, etcd searches for other cluster nodes defined in the configuration. If the other nodes are not yet running, the start may fail by a quorum timeout. This is expected behavior. Try starting all nodes again at the same time for the etcd cluster to be created. - ---8<-- "check-etcd.md" - -### Method 2. 
Start etcd nodes with command line options - -1. On each etcd node, set the environment variables for the cluster members, the cluster token and state: - - ``` - TOKEN=PostgreSQL_HA_Cluster_1 - CLUSTER_STATE=new - NAME_1=node1 - NAME_2=node2 - NAME_3=node3 - HOST_1=10.104.0.1 - HOST_2=10.104.0.2 - HOST_3=10.104.0.3 - CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380 - ``` - -2. Start each etcd node in parallel using the following command: - - === "node1" - - ```{.bash data-prompt="$"} - THIS_NAME=${NAME_1} - THIS_IP=${HOST_1} - etcd --data-dir=data.etcd --name ${THIS_NAME} \ - --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ - --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ - --initial-cluster ${CLUSTER} \ - --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} - ``` - - === "node2" - - ```{.bash data-prompt="$"} - THIS_NAME=${NAME_2} - THIS_IP=${HOST_2} - etcd --data-dir=data.etcd --name ${THIS_NAME} \ - --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ - --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ - --initial-cluster ${CLUSTER} \ - --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} - ``` - - === "node3" - - ```{.bash data-prompt="$"} - THIS_NAME=${NAME_3} - THIS_IP=${HOST_3} - etcd --data-dir=data.etcd --name ${THIS_NAME} \ - --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ - --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ - --initial-cluster ${CLUSTER} \ - --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} - ``` - ---8<-- "check-etcd.md" - -## Configure Patroni - -Run the following commands on all nodes. You can do this in parallel: - -1. 
Export and create environment variables to simplify the config file creation: - - * Node name: - - ```{.bash data-prompt="$"} - $ export NODE_NAME=`hostname -f` - ``` - - * Node IP: - - ```{.bash data-prompt="$"} - $ export NODE_IP=`hostname -i | awk '{print $1}'` - ``` - - * Create variables to store the PATH: - - ```bash - DATA_DIR="/var/lib/postgresql/16/main" - PG_BIN_DIR="/usr/lib/postgresql/16/bin" - ``` - - **NOTE**: Check the path to the data and bin folders on your operating system and change it for the variables accordingly. - - * Patroni information: - - ```bash - NAMESPACE="percona_lab" - SCOPE="cluster_1" - ``` - -2. Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`: - - ```bash - echo " - namespace: ${NAMESPACE} - scope: ${SCOPE} - name: ${NODE_NAME} - - restapi: - listen: 0.0.0.0:8008 - connect_address: ${NODE_IP}:8008 - - etcd3: - host: ${NODE_IP}:2379 - - bootstrap: - # this section will be written into Etcd:///config after initializing new cluster - dcs: - ttl: 30 - loop_wait: 10 - retry_timeout: 10 - maximum_lag_on_failover: 1048576 - - postgresql: - use_pg_rewind: true - use_slots: true - parameters: - wal_level: replica - hot_standby: "on" - wal_keep_segments: 10 - max_wal_senders: 5 - max_replication_slots: 10 - wal_log_hints: "on" - logging_collector: 'on' - max_wal_size: '10GB' - archive_mode: "on" - archive_timeout: 600s - archive_command: "cp -f %p /home/postgres/archived/%f" - pg_hba: - - local all all peer - - host replication replicator 127.0.0.1/32 trust - - host replication replicator 192.0.0.0/8 scram-sha-256 - - host all all 0.0.0.0/0 scram-sha-256 - recovery_conf: - restore_command: cp /home/postgres/archived/%f %p - - # some desired options for 'initdb' - initdb: # Note: It needs to be a list (some options need values, others are switches) - - encoding: UTF8 - - data-checksums - - postgresql: - cluster_name: cluster_1 - listen: 0.0.0.0:5432 - 
connect_address: ${NODE_IP}:5432 - data_dir: ${DATA_DIR} - bin_dir: ${PG_BIN_DIR} - pgpass: /tmp/pgpass0 - authentication: - replication: - username: replicator - password: replPasswd - superuser: - username: postgres - password: qaz123 - parameters: - unix_socket_directories: "/var/run/postgresql/" - create_replica_methods: - - basebackup - basebackup: - checkpoint: 'fast' - - watchdog: - mode: required # Allowed values: off, automatic, required - device: /dev/watchdog - safety_margin: 5 - - - tags: - nofailover: false - noloadbalance: false - clonefrom: false - nosync: false - " | sudo tee -a /etc/patroni/patroni.yml - ``` - - ??? admonition "Patroni configuration file" - - Let’s take a moment to understand the contents of the `patroni.yml` file. - - The first section provides the details of the node and its connection ports. After that, we have the `etcd` service and its port details. - - Following these, there is a `bootstrap` section that contains the PostgreSQL configurations and the steps to run once the database is initialized. The `pg_hba.conf` entries specify all the other nodes that can connect to this node and their authentication mechanism. - - -3. Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. 
- - If it's **not created**, create it manually and specify the following contents within: - - ```ini title="/etc/systemd/system/percona-patroni.service" - [Unit] - Description=Runners to orchestrate a high-availability PostgreSQL - After=syslog.target network.target - - [Service] - Type=simple - - User=postgres - Group=postgres - - # Start the patroni process - ExecStart=/bin/patroni /etc/patroni/patroni.yml - - # Send HUP to reload from patroni.yml - ExecReload=/bin/kill -s HUP $MAINPID - - # only kill the patroni process, not its children, so it will gracefully stop postgres - KillMode=process - - # Give a reasonable amount of time for the server to start up/shut down - TimeoutSec=30 - - # Do not restart the service if it crashes, we want to manually inspect database on failure - Restart=no - - [Install] - WantedBy=multi-user.target - ``` - -4. Make systemd aware of the new service: - - ```{.bash data-prompt="$"} - $ sudo systemctl daemon-reload - ``` - -5. Repeat steps 1-4 on the remaining nodes. In the end, you must have the configuration file and the systemd unit file created on every node. -6. Now it's time to start Patroni. Run the following commands on all nodes, but not in parallel. Start with `node1` first, wait for the service to come to life, and then proceed with the other nodes one by one, always waiting for them to sync with the primary node: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now patroni - $ sudo systemctl restart patroni - ``` - -   When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file. - -7. Check the service to see if there are errors: - - ```{.bash data-prompt="$"} - $ sudo journalctl -fu patroni - ``` - - A common error is Patroni complaining about the lack of proper entries in the `pg_hba.conf` file. 
If you see such errors, you must manually add or fix the entries in that file and then restart the service. - - Changing the `patroni.yml` file and restarting the service will not have any effect here because the bootstrap section specifies the configuration to apply when PostgreSQL is first started on the node. It will not repeat the process even if the Patroni configuration file is modified and the service is restarted. - -8. Check the cluster. Run the following command on any node: - - ```{.bash data-prompt="$"} - $ patronictl -c /etc/patroni/patroni.yml list $SCOPE - ``` - - The output resembles the following: - - ```{.text .no-copy} - + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+------------+---------+-----------+----+-----------+ - | node1 | 10.0.100.1 | Leader | running | 1 | | - | node2 | 10.0.100.2 | Replica | streaming | 1 | 0 | - | node3 | 10.0.100.3 | Replica | streaming | 1 | 0 | - +--------+------------+---------+-----------+----+-----------+ - ``` - -If Patroni has started properly, you should be able to connect locally to a PostgreSQL node using the following command: - -```{.bash data-prompt="$"} -$ sudo psql -U postgres -``` - -The command output is the following: - -``` -psql ({{pgversion}}) -Type "help" for help. - -postgres=# -``` - -## Configure HAProxy - -HAProxy is the load balancer and the single point of entry to your PostgreSQL cluster for client applications. A client application accesses the HAProxy URL and sends its read/write requests there. Behind the scenes, HAProxy routes write requests to the primary node and read requests to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide different ports in the HAProxy configuration file. 
In this deployment, writes are routed to port 5000 and reads - to port 5001 - -This way, a client application doesn’t know what node in the underlying cluster is the current primary. HAProxy sends connections to a healthy node (as long as there is at least one healthy node available) and ensures that client application requests are never rejected. - -1. Install HAProxy on the `HAProxy-demo` node: - - ```{.bash data-prompt="$"} - $ sudo apt install percona-haproxy - ``` - -2. The HAProxy configuration file path is: `/etc/haproxy/haproxy.cfg`. Specify the following configuration in this file. - - ``` - global - maxconn 100 - - defaults - log global - mode tcp - retries 2 - timeout client 30m - timeout connect 4s - timeout server 30m - timeout check 5s - - listen stats - mode http - bind *:7000 - stats enable - stats uri / - - listen primary - bind *:5000 - option httpchk /primary - http-check expect status 200 - default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions - server node1 node1:5432 maxconn 100 check port 8008 - server node2 node2:5432 maxconn 100 check port 8008 - server node3 node3:5432 maxconn 100 check port 8008 - - listen standbys - balance roundrobin - bind *:5001 - option httpchk /replica - http-check expect status 200 - default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions - server node1 node1:5432 maxconn 100 check port 8008 - server node2 node2:5432 maxconn 100 check port 8008 - server node3 node3:5432 maxconn 100 check port 8008 - ``` - - - HAProxy will use the REST APIs hosted by Patroni to check the health status of each PostgreSQL node and route the requests appropriately. - -3. Restart HAProxy: - - ```{.bash data-prompt="$"} - $ sudo systemctl restart haproxy - ``` - -4. 
Check the HAProxy logs to see if there are any errors: - - ```{.bash data-prompt="$"} - $ sudo journalctl -u haproxy.service -n 100 -f - ``` - -## Next steps - -[Configure pgBackRest](pgbackrest.md){.md-button} diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md deleted file mode 100644 index b42b32d38..000000000 --- a/docs/solutions/ha-setup-yum.md +++ /dev/null @@ -1,584 +0,0 @@ -# Deploying PostgreSQL for high availability with Patroni on RHEL or CentOS - -This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni on Red Hat Enterprise Linux or CentOS. - - -## Considerations - -1. This is an example deployment where etcd runs on the same host machines as the Patroni and PostgreSQL and there is a single dedicated HAProxy host. Alternatively etcd can run on different set of nodes. - - If etcd is deployed on the same host machine as Patroni and PostgreSQL, separate disk system for etcd and PostgreSQL is recommended due to performance reasons. - -2. For this setup, we use the nodes running on Red Hat Enterprise Linux 8 as the base operating system: - - | Node name | Application | IP address - |---------------|-------------------|-------------------- - | node1 | Patroni, PostgreSQL, etcd | 10.104.0.1 - | node2 | Patroni, PostgreSQL, etcd | 10.104.0.2 - | node3 | Patroni, PostgreSQL, etcd | 10.104.0.3 - | HAProxy-demo | HAProxy | 10.104.0.6 - - -!!! note - - We recommend not to expose the hosts/nodes where Patroni / etcd / PostgreSQL are running to public networks due to security risks. Use Firewalls, Virtual networks, subnets or the like to protect the database hosts from any kind of attack. - -## Initial setup - -### Set up hostnames in the `/etc/hosts` file - -It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. 
By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. - -=== "node1" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node1 - ``` - - 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: - - ```text hl_lines="3 4" - # Cluster IP and names - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -=== "node2" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node2 - ``` - - 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: - - ```text hl_lines="2 4" - # Cluster IP and names - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -=== "node3" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node3 - ``` - - 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: - - ```text hl_lines="2 3" - # Cluster IP and names - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -=== "HAproxy-demo" - - 1. Set up the hostname for the node - - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname HAProxy-demo - ``` - - 2. Modify the `/etc/hosts` file. The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: - - ```text hl_lines="3 4 5" - # Cluster IP and names - 10.104.0.6 HAProxy-demo - 10.104.0.1 node1 - 10.104.0.2 node2 - 10.104.0.3 node3 - ``` - -### Install the software - -Run the following commands on `node1`, `node2` and `node3`: - -1. 
Install Percona Distribution for PostgreSQL: - - * Check the [platform-specific notes](../yum.md#for-percona-distribution-for-postgresql-packages) - * Install the `percona-release` repository management tool - - --8<-- "percona-release-yum.md" - - * Enable the repository: - - ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg16 - ``` - - * Install the Percona Distribution for PostgreSQL package: - - ```{.bash data-prompt="$"} - $ sudo yum install percona-postgresql{{pgversion}}-server - ``` - - !!! important - - **Don't** initialize the cluster or start the `postgresql` service. The cluster initialization and setup are handled by Patroni during the bootstrapping stage. - -2. Install some Python and auxiliary packages to help with Patroni and etcd: - - ```{.bash data-prompt="$"} - $ sudo yum install python3-pip python3-devel binutils - ``` - -3. Install the etcd, Patroni, and pgBackRest packages. Check the [platform-specific notes for Patroni](../yum.md#for-percona-patroni-package): - - ```{.bash data-prompt="$"} - $ sudo yum install percona-patroni \ - etcd python3-python-etcd \ - percona-pgbackrest - ``` - -4. Stop and disable all installed services: - - ```{.bash data-prompt="$"} - $ sudo systemctl stop {etcd,patroni,postgresql-{{pgversion}}} - $ sudo systemctl disable {etcd,patroni,postgresql-{{pgversion}}} - ``` - -## Configure etcd distributed store - -In our implementation, we use etcd as the distributed configuration store. [Refresh your knowledge about etcd](high-availability.md#etcd). - -!!! note - - If you [installed the software from tarballs](../tarball.md), you must first [enable etcd](../enable-extensions.md#etcd) before configuring it. - -To get started with an `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. 
There are two bootstrapping mechanisms: - -* Static - when the IP addresses of the cluster nodes are known. -* Discovery service - when the IP addresses of the cluster nodes are not known ahead of time. - -Since we know the IP addresses of the nodes, we will use the static method. To use the discovery service, refer to the [etcd documentation :octicons-external-link-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}. - -We will configure and start all etcd nodes in parallel. This can be done either by modifying each node's configuration file or by using command-line options. Use the method you prefer. - -### Method 1. Modify the configuration file - -1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. - - === "node1" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.1:2380 - listen-peer-urls: http://10.104.0.1:2380 - advertise-client-urls: http://10.104.0.1:2379 - listen-client-urls: http://10.104.0.1:2379 - ``` - - === "node2" - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node2' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.2:2380 - listen-peer-urls: http://10.104.0.2:2380 - advertise-client-urls: http://10.104.0.2:2379 - listen-client-urls: http://10.104.0.2:2379 - ``` - - === "node3" - - ```yaml 
title="/etc/etcd/etcd.conf.yaml" - name: 'node3' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: new - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.3:2380 - listen-peer-urls: http://10.104.0.3:2380 - advertise-client-urls: http://10.104.0.3:2379 - listen-client-urls: http://10.104.0.3:2379 - ``` - -2. Enable and start the `etcd` service on all nodes: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl start etcd - $ sudo systemctl status etcd - ``` - - During startup, etcd searches for the other cluster nodes defined in the configuration. If the other nodes are not yet running, the start may fail with a quorum timeout. This is expected behavior. Start all nodes again at the same time for the etcd cluster to be created. - ---8<-- "check-etcd.md" - -### Method 2. Start etcd nodes with command line options - -1. On each etcd node, set the environment variables for the cluster members, the cluster token, and state: - - ``` - TOKEN=PostgreSQL_HA_Cluster_1 - CLUSTER_STATE=new - NAME_1=node1 - NAME_2=node2 - NAME_3=node3 - HOST_1=10.104.0.1 - HOST_2=10.104.0.2 - HOST_3=10.104.0.3 - CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380 - ``` - -2. 
Start each etcd node in parallel using the following command: - - === "node1" - - ```{.bash data-prompt="$"} - THIS_NAME=${NAME_1} - THIS_IP=${HOST_1} - etcd --data-dir=data.etcd --name ${THIS_NAME} \ - --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ - --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ - --initial-cluster ${CLUSTER} \ - --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} - ``` - - === "node2" - - ```{.bash data-prompt="$"} - THIS_NAME=${NAME_2} - THIS_IP=${HOST_2} - etcd --data-dir=data.etcd --name ${THIS_NAME} \ - --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ - --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ - --initial-cluster ${CLUSTER} \ - --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} - ``` - - === "node3" - - ```{.bash data-prompt="$"} - THIS_NAME=${NAME_3} - THIS_IP=${HOST_3} - etcd --data-dir=data.etcd --name ${THIS_NAME} \ - --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ - --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ - --initial-cluster ${CLUSTER} \ - --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} - ``` - ---8<-- "check-etcd.md" - -## Configure Patroni - -Run the following commands on all nodes. You can do this in parallel: - -1. 
Export and create environment variables to simplify the config file creation: - - * Node name: - - ```{.bash data-prompt="$"} - $ export NODE_NAME=`hostname -f` - ``` - - * Node IP: - - ```{.bash data-prompt="$"} - $ export NODE_IP=`hostname -i | awk '{print $1}'` - ``` - - * Create variables to store the paths to the data and binary directories: - - ```bash - DATA_DIR="/data/pgsql" - PG_BIN_DIR="/usr/pgsql-16/bin" - ``` - - **NOTE**: Check the path to the data and bin folders on your operating system and change the variables accordingly. - - * Patroni information: - - ```bash - NAMESPACE="percona_lab" - SCOPE="cluster_1" - ``` - -2. Create the directories required by Patroni: - - * Create the directory to store the configuration file and make it owned by the `postgres` user: - - ```{.bash data-prompt="$"} - $ sudo mkdir -p /etc/patroni/ - $ sudo chown -R postgres:postgres /etc/patroni/ - ``` - - * Create the data directory to store PostgreSQL data. Change its ownership to the `postgres` user and restrict access to it: - - ```{.bash data-prompt="$"} - $ sudo mkdir /data/pgsql -p - $ sudo chown -R postgres:postgres /data/pgsql - $ sudo chmod 700 /data/pgsql - ``` - -3. 
Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`: - - ```bash - echo " - namespace: ${NAMESPACE} - scope: ${SCOPE} - name: ${NODE_NAME} - - restapi: - listen: 0.0.0.0:8008 - connect_address: ${NODE_IP}:8008 - - etcd3: - host: ${NODE_IP}:2379 - - bootstrap: - # this section will be written into Etcd:///config after initializing new cluster - dcs: - ttl: 30 - loop_wait: 10 - retry_timeout: 10 - maximum_lag_on_failover: 1048576 - - postgresql: - use_pg_rewind: true - use_slots: true - parameters: - wal_level: replica - hot_standby: "on" - wal_keep_segments: 10 - max_wal_senders: 5 - max_replication_slots: 10 - wal_log_hints: "on" - logging_collector: 'on' - max_wal_size: '10GB' - archive_mode: "on" - archive_timeout: 600s - archive_command: "cp -f %p /home/postgres/archived/%f" - pg_hba: - - local all all peer - - host replication replicator 127.0.0.1/32 trust - - host replication replicator 192.0.0.0/8 scram-sha-256 - - host all all 0.0.0.0/0 scram-sha-256 - recovery_conf: - restore_command: cp /home/postgres/archived/%f %p - - # some desired options for 'initdb' - initdb: # Note: It needs to be a list (some options need values, others are switches) - - encoding: UTF8 - - data-checksums - - postgresql: - cluster_name: cluster_1 - listen: 0.0.0.0:5432 - connect_address: ${NODE_IP}:5432 - data_dir: ${DATA_DIR} - bin_dir: ${PG_BIN_DIR} - pgpass: /tmp/pgpass0 - authentication: - replication: - username: replicator - password: replPasswd - superuser: - username: postgres - password: qaz123 - parameters: - unix_socket_directories: "/var/run/postgresql/" - create_replica_methods: - - basebackup - basebackup: - checkpoint: 'fast' - - watchdog: - mode: required # Allowed values: off, automatic, required - device: /dev/watchdog - safety_margin: 5 - - - tags: - nofailover: false - noloadbalance: false - clonefrom: false - nosync: false - " | sudo tee -a /etc/patroni/patroni.yml - ``` - -4. 
Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. - - If it's **not created**, create it manually and specify the following contents within: - - ```ini title="/etc/systemd/system/percona-patroni.service" - [Unit] - Description=Runners to orchestrate a high-availability PostgreSQL - After=syslog.target network.target - - [Service] - Type=simple - - User=postgres - Group=postgres - - # Start the patroni process - ExecStart=/bin/patroni /etc/patroni/patroni.yml - - # Send HUP to reload from patroni.yml - ExecReload=/bin/kill -s HUP $MAINPID - - # only kill the patroni process, not its children, so it will gracefully stop postgres - KillMode=process - - # Give a reasonable amount of time for the server to start up/shut down - TimeoutSec=30 - - # Do not restart the service if it crashes, we want to manually inspect database on failure - Restart=no - - [Install] - WantedBy=multi-user.target - ``` - -5. Make `systemd` aware of the new service: - - ```{.bash data-prompt="$"} - $ sudo systemctl daemon-reload - ``` - -6. Repeat steps 1-5 on the remaining nodes. In the end, you must have the configuration file and the systemd unit file created on every node. -7. Now it's time to start Patroni. Run the following commands on all nodes, but not in parallel. Start with `node1` first, wait for the service to come to life, and then proceed with the other nodes one by one, always waiting for them to sync with the primary node: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now patroni - $ sudo systemctl restart patroni - ``` - - When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file. - -8. 
Check the service to see if there are errors: - - ```{.bash data-prompt="$"} - $ sudo journalctl -fu patroni - ``` - - A common error is Patroni complaining about the lack of proper entries in the `pg_hba.conf` file. If you see such errors, you must manually add or fix the entries in that file and then restart the service. - - Changing the `patroni.yml` file and restarting the service will not have any effect here because the bootstrap section specifies the configuration to apply when PostgreSQL is first started on the node. It will not repeat the process even if the Patroni configuration file is modified and the service is restarted. - - If Patroni has started properly, you should be able to connect locally to a PostgreSQL node using the following command: - - ```{.bash data-prompt="$"} - $ sudo psql -U postgres - - psql (16.0) - Type "help" for help. - - postgres=# - ``` - -9. When all nodes are up and running, you can check the cluster status using the following command: - - ```{.bash data-prompt="$"} - $ sudo patronictl -c /etc/patroni/patroni.yml list - ``` - - The output resembles the following: - - ```{.text .no-copy} - + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+------------+---------+-----------+----+-----------+ - | node1 | 10.0.100.1 | Leader | running | 1 | | - | node2 | 10.0.100.2 | Replica | streaming | 1 | 0 | - | node3 | 10.0.100.3 | Replica | streaming | 1 | 0 | - +--------+------------+---------+-----------+----+-----------+ - ``` - -## Configure HAProxy - -HAProxy is the load balancer and the single point of entry to your PostgreSQL cluster for client applications. A client application accesses the HAProxy URL and sends its read/write requests there. Behind the scenes, HAProxy routes write requests to the primary node and read requests to the secondaries in a round-robin fashion so that no secondary instance is unnecessarily loaded. 
To make this happen, provide different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads - to port 5001 - -This way, a client application doesn’t know what node in the underlying cluster is the current primary. HAProxy sends connections to a healthy node (as long as there is at least one healthy node available) and ensures that client application requests are never rejected. - -1. Install HAProxy on the `HAProxy-demo` node: - - ```{.bash data-prompt="$"} - $ sudo yum install percona-haproxy - ``` - -2. The HAProxy configuration file path is: `/etc/haproxy/haproxy.cfg`. Specify the following configuration in this file. - - ``` - global - maxconn 100 - - defaults - log global - mode tcp - retries 2 - timeout client 30m - timeout connect 4s - timeout server 30m - timeout check 5s - - listen stats - mode http - bind *:7000 - stats enable - stats uri / - - listen primary - bind *:5000 - option httpchk /primary - http-check expect status 200 - default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions - server node1 node1:5432 maxconn 100 check port 8008 - server node2 node2:5432 maxconn 100 check port 8008 - server node3 node3:5432 maxconn 100 check port 8008 - - listen standbys - balance roundrobin - bind *:5001 - option httpchk /replica - http-check expect status 200 - default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions - server node1 node1:5432 maxconn 100 check port 8008 - server node2 node2:5432 maxconn 100 check port 8008 - server node3 node3:5432 maxconn 100 check port 8008 - ``` - - - HAProxy will use the REST APIs hosted by Patroni to check the health status of each PostgreSQL node and route the requests appropriately. - -3. Enable a SELinux boolean to allow HAProxy to bind to non standard ports: - - ```{.bash data-prompt="$"} - $ sudo setsebool -P haproxy_connect_any on - ``` - -4. Restart HAProxy: - - ```{.bash data-prompt="$"} - $ sudo systemctl restart haproxy - ``` - -5. 
Check the HAProxy logs to see if there are any errors: - - ```{.bash data-prompt="$"} - $ sudo journalctl -u haproxy.service -n 100 -f - ``` - -## Next steps - -[Configure pgBackRest](pgbackrest.md){.md-button} - - diff --git a/docs/solutions/haproxy-info.md b/docs/solutions/haproxy-info.md new file mode 100644 index 000000000..8c2ae5c89 --- /dev/null +++ b/docs/solutions/haproxy-info.md @@ -0,0 +1,77 @@ +# HAProxy + +HAProxy (High Availability Proxy) is a powerful, open-source load balancer and +proxy server used to improve the performance and reliability of web services by +distributing network traffic across multiple servers. It is widely used to enhance the scalability, availability, and reliability of web applications by balancing client requests among backend servers. + +HAProxy architecture is +optimized to move data as fast as possible with the least possible operations. +It focuses on optimizing the CPU cache's efficiency by sticking connections to +the same CPU as long as possible. + +## How HAProxy works + +HAProxy operates as a reverse proxy, which means it accepts client requests and distributes them to one or more backend servers using the configured load-balancing algorithm. This ensures efficient use of server resources and prevents any single server from becoming overloaded. + +- **Client request processing**: + + 1. A client application connects to HAProxy instead of directly to the server. + 2. HAProxy analyzes the requests and determines what server to route it to for further processing. + 3. HAProxy forwards the request to the selected server using the routing algorithm defined in its configuration. It can be round robin, least connections, and others. + 4. HAProxy receives the response from the server and forwards it back to the client. + 5. After sending the response, HAProxy either closes the connection or keeps it open, depending on the configuration. 
+ +- **Load balancing**: HAProxy distributes incoming traffic using various algorithms such as round-robin, least connections, and IP hash. +- **Health checks**: HAProxy continuously monitors the health of backend servers to ensure requests are only routed to healthy servers. +- **SSL termination**: HAProxy offloads SSL/TLS encryption and decryption, reducing the workload on backend servers. +- **Session persistence**: HAProxy ensures that requests from the same client are routed to the same server for session consistency. +- **Traffic management**: HAProxy supports rate limiting, request queuing, and connection pooling for optimal resource utilization. +- **Security**: HAProxy supports SSL/TLS, IP filtering, and integration with Web Application Firewalls (WAF). + +## Role in a HA Patroni cluster + +HAProxy plays a crucial role in managing PostgreSQL high availability in a Patroni cluster. Patroni is an open-source tool that automates PostgreSQL cluster management, including failover and replication. HAProxy acts as a load balancer and proxy, distributing client connections across the cluster nodes. + +Client applications connect to HAProxy, which transparently forwards their requests to the appropriate PostgreSQL node. This ensures that clients always connect to the active primary node without needing to know the cluster's internal state and topology. + +HAProxy monitors the health of PostgreSQL nodes using Patroni's API and routes traffic to the primary node. If the primary node fails, Patroni promotes a secondary node to a new primary, and HAProxy updates its routing to reflect the change. You can configure HAProxy to route write requests to the primary node and read requests - to the secondary nodes. + +## Redundancy for HAProxy + +A single HAProxy node creates a single point of failure. If HAProxy goes down, clients lose connection to the cluster. To prevent this, set up multiple HAProxy instances with a failover mechanism. 
This way, if one instance fails, another takes over automatically. + +To implement HAProxy redundancy: + +1. Set up a virtual IP address that can move between HAProxy instances. + +2. Install and configure a failover mechanism to monitor HAProxy instances and move the virtual IP to a backup if the primary fails. + +3. Keep HAProxy configurations synchronized across all instances. + +!!! note + + In this reference architecture we focus on the on-premises deployment and use Keepalived as the failover mechanism. + + If you use a cloud infrastructure, it may be easier to use the load balancer provided by the cloud provider to achieve high availability for HAProxy. + +## How Keepalived works + +Keepalived manages failover by moving the virtual IP to a backup HAProxy node when the primary fails. + +No matter how many HAProxy nodes you have, only one of them can be a primary and have the MASTER state. All other nodes are BACKUP nodes. They monitor the MASTER state and take over when it is down. + +To determine the MASTER, Keepalived uses the `priority` setting. Every node must have a different priority. + +The node with the highest priority becomes the MASTER. Keepalived periodically checks every node's health. + +When the MASTER node is down or unavailable, its priority is lowered so that the next highest priority node becomes the new MASTER and takes over. The priority is adjusted by the value you define in the `weight` setting. + +You must carefully define the `priority` and `weight` values in the configuration. When a primary node is down, its priority must be adjusted to be lower than that of the active node with the lowest priority by at least 1. + +For example, your nodes have priority 110 and 100. The node with priority 110 is MASTER. When it is down, its priority must become lower than the priority of the remaining node (100). + +When a failed node recovers, its priority adjusts again.
If it is the highest one among the nodes, this node restores its MASTER state, holds the virtual IP address and handles the client connections. + +## Next steps + +[pgBackRest](pgbackrest-info.md){.md-button} \ No newline at end of file diff --git a/docs/solutions/high-availability.md b/docs/solutions/high-availability.md index e6118b3fc..b9bc79502 100644 --- a/docs/solutions/high-availability.md +++ b/docs/solutions/high-availability.md @@ -1,110 +1,119 @@ # High Availability in PostgreSQL with Patroni -PostgreSQL has been widely adopted as a modern, high-performance transactional database. A highly available PostgreSQL cluster can withstand failures caused by network outages, resource saturation, hardware failures, operating system crashes or unexpected reboots. Such cluster is often a critical component of the enterprise application landscape, where [four nines of availability :octicons-link-external-16:](https://en.wikipedia.org/wiki/High_availability#Percentage_calculation) is a minimum requirement. +Whether you are a small startup or a big enterprise, downtime of your services may cause severe consequences, such as loss of customers, impact on your reputation, and penalties for not meeting the Service Level Agreements (SLAs). That’s why ensuring a highly-available deployment is crucial. -There are several methods to achieve high availability in PostgreSQL. This solution document provides [Patroni](#patroni) - the open-source extension to facilitate and manage the deployment of high availability in PostgreSQL. +But what does it mean, high availability (HA)? And how to achieve it? This document answers these questions. -??? 
admonition "High availability methods" +After reading this document, you will learn the following: - There are several native methods for achieving high availability with PostgreSQL: +* [what is high availability](#what-is-high-availability) +* the recommended [reference architecture](ha-architecture.md) to achieve it +* how to deploy it using our step-by-step deployment guides for each component. The deployment instructions focus on the minimalistic approach to high availability that we recommend. They also explain how to deploy additional components that you can add when your infrastructure grows. +* how to verify that your high availability deployment works as expected, providing replication and failover, with the [testing guidelines](ha-test.md) +* additional components that you can add to address existing limitations in your infrastructure. Examples of such limitations are constraints of application drivers/connectors, or the lack of a connection pooler in the application framework. - - shared disk failover, - - file system replication, - - trigger-based replication, - - statement-based replication, - - logical replication, - - Write-Ahead Log (WAL) shipping, and - - [streaming replication](#streaming-replication) +## What is high availability +High availability (HA) is the ability of a system to operate continuously without interruption of services. During an outage, the system must be able to transfer the services from the failed component to the healthy ones so that they can take over its responsibility. The system must have sufficient automation to perform this transfer, minimizing disruption and avoiding the need for human intervention. - - ## Streaming replication +Overall, high availability is about: - Streaming replication is part of Write-Ahead Log shipping, where changes to the WALs are immediately made available to standby replicas.
With this approach, a standby instance is always up-to-date with changes from the primary node and can assume the role of primary in case of a failover. +1. Reducing the chance of failures +2. Elimination of single-point-of-failure (SPOF) +3. Automatic detection of failures +4. Automatic action to reduce the impact +### How to achieve it? +A short answer is: add redundancy to your deployment, eliminate a single point of failure (SPOF) and have the mechanism to transfer the services from a failed member to the healthy one. - ### Why native streaming replication is not enough - Although the native streaming replication in PostgreSQL supports failing over to the primary node, it lacks some key features expected from a truly highly-available solution. These include: +For a long answer, let's break it down into steps. +#### Step 1. Replication - * No consensus-based promotion of a “leader” node during a failover - * No decent capability for monitoring cluster status - * No automated way to bring back the failed primary node to the cluster - * A manual or scheduled switchover is not easy to manage +First, you should have more than one copy of your data. This means you need to have several instances of your database where one is the primary instance that accepts reads and writes. Other instances are replicas – they must have an up-to-date copy of the data from the primary and remain in sync with it. They may also accept reads to offload your primary. - To address these shortcomings, there are a multitude of third-party, open-source extensions for PostgreSQL. The challenge for a database administrator here is to select the right utility for the current scenario. +You must deploy these instances on separate hardware (servers or nodes) and use separate storage for storing the data.
This way you eliminate a single point of failure for your database. - Percona Distribution for PostgreSQL solves this challenge by providing the [Patroni :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/) extension for achieving PostgreSQL high availability. +The minimum number of database nodes is two: one primary and one replica. -## Patroni +The recommended deployment is a three-instance cluster consisting of one primary and two replica nodes. The replicas receive the data via the replication mechanism. -[Patroni :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/) is an open-source tool that helps to deploy, manage, and monitor highly available PostgreSQL clusters using physical streaming replication. Patroni relies on a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes to store the cluster configuration. +![Primary-replica setup](../_images/diagrams/ha-overview-replication.svg) -### Key benefits of Patroni: +PostgreSQL natively supports logical and streaming replication. To achieve high availability, use streaming replication to ensure an exact copy of data is maintained and is ready to take over, while reducing the delay between primary and replica nodes to prevent data loss. -* Continuous monitoring and automatic failover -* Manual/scheduled switchover with a single command -* Built-in automation for bringing back a failed node to the cluster again. -* REST APIs for entire cluster configuration and further tooling. -* Provides infrastructure for transparent application failover -* Distributed consensus for every action and configuration. -* Integration with Linux watchdog for avoiding split-brain syndrome. +#### Step 2. Switchover and Failover -## etcd +You may want to transfer the primary role from one machine to another. This action is called a **manual switchover**.
A reason for that could be the following: -As stated before, Patroni uses a distributed configuration store to store the cluster configuration, health and status. The most popular implementation of the distributed configuration store is etcd due to its simplicity, consistency and reliability. Etcd not only stores the cluster data, it also handles the election of a new primary node (a leader in etcd terminology). +* planned maintenance on the OS level, like applying quarterly security updates or replacing some of the end-of-life components from the server +* troubleshooting some of the problems, like high network latency. -etcd is deployed as a cluster for fault-tolerance. An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. +Switchover is a manual action performed when you decide to transfer the primary role to another node. The high-availability framework makes this process easier and helps minimize downtime during maintenance, thereby improving overall availability. -The recommended approach is to deploy an odd-sized cluster (e.g. 3, 5 or 7 nodes). The odd number of nodes ensures that there is always a majority of nodes available to make decisions and keep the cluster running smoothly. This majority is crucial for maintaining consistency and availability, even if one node fails. For a cluster with n members, the majority is (n/2)+1. +There could be an unexpected situation where a primary node is down or not responding. Reasons for that can be different, from hardware or network issues to software failures, power outages and the like. In such situations, the high-availability solution should automatically detect the problem, find a suitable candidate from the remaining nodes and transfer the primary role to the best candidate (promote a new node to become a primary). Such automatic remediation is called **Failover**. -To better illustrate this concept, let's take an example of clusters with 3 nodes and 4 nodes. + +![Failover](../_images/diagrams/ha-overview-failover.svg) -In a 3-node cluster, if one node fails, the remaining 2 nodes still form a majority (2 out of 3), and the cluster can continue to operate. +You can do a manual failover when automatic remediation fails, for example, due to: -In a 4-node cluster, if one node fails, there are only 3 nodes left, which is not enough to form a majority (3 out of 4). The cluster stops functioning. +* a complete network partitioning +* the high-availability framework not being able to find a good candidate +* the insufficient number of nodes remaining for a new primary election. -In this solution we use a 3-node etcd cluster that resides on the same hosts as PostgreSQL and Patroni. +The high-availability framework allows a human operator/administrator to take control and do a manual failover. -!!! admonition "See also" +#### Step 3. Connection routing and load balancing - - [Patroni documentation :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/SETTINGS.html#settings) +Instead of a single node you now have a cluster. How do you enable users to connect to the cluster and ensure they always connect to the correct node, especially when the primary node changes? - - Percona Blog: +One option is to configure DNS resolution that resolves the IPs of all cluster nodes. A drawback here is that only the primary node accepts all requests. When your system grows, so does the load, and it may lead to overloading the primary node and result in performance degradation. - - [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios :octicons-link-external-16:](https://www.percona.com/blog/2021/06/11/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/) +You can write your application to send read/write requests to the primary and read-only requests to the secondary nodes. This requires significant programming experience.
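If you do implement routing on the application side, the PostgreSQL client library (libpq) can take over part of this work: since PostgreSQL 10, a connection string may list several hosts together with the required session properties. The host, user, and database names below are placeholders, not values from this guide:

```{.bash data-prompt="$"}
$ psql "postgresql://app@node1:5432,node2:5432,node3:5432/appdb?target_session_attrs=read-write"
```

With `target_session_attrs=read-write`, libpq tries the listed hosts in order until it finds one that accepts writes, that is, the current primary. PostgreSQL 14 and later also accept values such as `read-only` or `standby` for directing read traffic to replicas.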
-## Architecture layout +![Load-balancer](../_images/diagrams/ha-overview-load-balancer.svg) -The following diagram shows the architecture of a three-node PostgreSQL cluster with a single-leader node. +Another option is to use a load-balancing proxy. Instead of connecting directly to the IP address of the primary node, which can change during a failover, you use a proxy that acts as a single point of entry for the entire cluster. This proxy provides the IP address visible for user applications. It also knows which node is currently the primary and directs all incoming write requests to it. At the same time, it can distribute read requests among the replicas to evenly spread the load and improve performance. -![Architecture of the three-node, single primary PostgreSQL cluster](../_images/diagrams/ha-architecture-patroni.png) +To eliminate a single point of failure for a load balancer, we recommend to deploy multiple connection routers/proxies for redundancy. Each application server can have its own connection router whose task is to identify the cluster topology and route the traffic to the current primary node. -### Components +Alternatively you can deploy a redundant load balancer for the whole cluster. The load balancer instances share the public IP address so that it can "float" from one instance to another in the case of a failure. To control the load balancer's state and transfer the IP address to the active instance, you also need the failover solution for load balancers. -The components in this architecture are: +The use of a load balancer is optional. If your application implements the logic of connection routing and load-balancing, it is a highly-recommended approach. -- PostgreSQL nodes -- Patroni - a template for configuring a highly available PostgreSQL cluster. +#### Step 4. Backups -- etcd - a Distributed Configuration store that stores the state of the PostgreSQL cluster. 
+Even with replication and failover mechanisms in place, it’s crucial to have regular backups of your data. Backups provide a safety net for catastrophic failures that affect both the primary and replica nodes. While replication ensures data is synchronized across multiple nodes, it does not protect against data corruption, accidental deletions, or malicious attacks that can affect all nodes. -- HAProxy - the load balancer for the cluster and is the single point of entry to client applications. +![Backup tool](../_images/diagrams/ha-overview-backup.svg) -- pgBackRest - the backup and restore solution for PostgreSQL +Having regular backups ensures that you can restore your data to a previous state, preserving data integrity and availability even in the worst-case scenarios. Store your backups in separate, secure locations and regularly test them to ensure that you can quickly and accurately restore them when needed. This additional layer of protection is essential to maintaining continuous operation and minimizing data loss. -- Percona Monitoring and Management (PMM) - the solution to monitor the health of your cluster +The backup tool is optional but highly-recommended for data corruption recovery. Additionally, backups protect against human error, when a user can accidentally drop a table or make another mistake. -### How components work together +As a result, you end up with the following components for a minimalistic highly-available deployment: -Each PostgreSQL instance in the cluster maintains consistency with other members through streaming replication. Each instance hosts Patroni - a cluster manager that monitors the cluster health. Patroni relies on the operational etcd cluster to store the cluster configuration and sensitive data about the cluster health there. +* A minimum two-node PostgreSQL cluster with the replication configured among nodes. The recommended minimalistic cluster is a three-node one. 
+* A solution to manage the cluster and perform automatic failover when the primary node is down. +* (Optional but recommended) A load-balancing proxy that provides a single point of entry to your cluster and distributes the load across cluster nodes. You need at least two instances of a load-balancing proxy and a failover tool to eliminate a single point of failure. +* (Optional but recommended) A backup and restore solution to protect data against loss, corruption and human error. -Patroni periodically sends heartbeat requests with the cluster status to etcd. etcd writes this information to disk and sends the response back to Patroni. If the current primary fails to renew its status as leader within the specified timeout, Patroni updates the state change in etcd, which uses this information to elect the new primary and keep the cluster up and running. +Optionally, you can add a monitoring tool to observe the health of your deployment, receive alerts about performance issues and timely react to them. -The connections to the cluster do not happen directly to the database nodes but are routed via a connection proxy like HAProxy. This proxy determines the active node by querying the Patroni REST API. +### What tools to use? -## Next steps +The PostgreSQL ecosystem offers many tools for high availability, but choosing the right ones can be challenging. At Percona, we have carefully selected and tested open-source tools to ensure they work well together and help you achieve high availability. + +In our [reference architecture](ha-architecture.md) section we recommend a combination of open-source tools, focusing on a minimalistic three-node PostgreSQL cluster. + +Note that the tools are recommended but not mandatory. You can use your own solutions and alternatives if they better meet your business needs. However, in this case, we cannot guarantee their compatibility and smooth operation. 
-[Deploy on Debian or Ubuntu](ha-setup-apt.md){.md-button} -[Deploy on RHEL or derivatives](ha-setup-yum.md){.md-button} +### Additional reading + +[Measuring high availability](ha-measure.md){.md-button} + +## Next steps +[Architecture :material-arrow-right:](ha-architecture.md){.md-button} diff --git a/docs/solutions/patroni-info.md b/docs/solutions/patroni-info.md new file mode 100644 index 000000000..b88d0cfa7 --- /dev/null +++ b/docs/solutions/patroni-info.md @@ -0,0 +1,84 @@ +# Patroni + +Patroni is an open-source tool designed to manage and automate the high availability (HA) of PostgreSQL clusters. It ensures that your PostgreSQL database remains available even in the event of hardware failures, network issues or other disruptions. Patroni achieves this by using distributed consensus stores like ETCD, Consul, or ZooKeeper to manage cluster state and automate failover processes. We'll use [`etcd`](etcd-info.md) in our architecture. + +## Key benefits of Patroni for high availability + +- Automated failover and promotion of a new primary in case of a failure; +- Prevention and protection from split-brain scenarios (where two nodes believe they are the primary and both accept transactions). Split-brain can lead to serious logical corruptions such as wrong, duplicate data or data loss, and to associated business loss and risk of litigation; +- Simplifying the management of PostgreSQL clusters across multiple data centers; +- Self-healing via automatic restarts of failed PostgreSQL instances or reinitialization of broken replicas. +- Integration with tools like `pgBackRest`, `HAProxy`, and monitoring systems for a complete HA solution. + +## How Patroni works + +Patroni uses the `etcd` distributed consensus store to coordinate the state of a PostgreSQL cluster for the following operations: + +1. 
Cluster state management: + + - After a user installs and configures Patroni, Patroni takes over the PostgreSQL service administration and configuration; + - Patroni maintains the cluster state data such as PostgreSQL configuration, information about which node is the primary and which are replicas, and their health status. + - Patroni manages PostgreSQL configuration files such as `postgresql.conf` and `pg_hba.conf` dynamically, ensuring consistency across the cluster. + - A Patroni agent runs on each cluster node and communicates with `etcd` and other nodes. + +2. Primary node election: + + - Patroni initiates a primary election process after the cluster is initialized; + - Patroni initiates a failover process if the primary node fails; + - When the old primary is recovered, it rejoins the cluster as a new replica; + - Every new node added to the cluster joins it as a new replica; + - `etcd` and the Raft consensus algorithm ensure that only one node is elected as the new primary, preventing split-brain scenarios. + +3. Automatic failover: + + - If the primary node becomes unavailable, Patroni initiates a new primary election process with the most up-to-date replicas; + - When a node is elected, it is automatically promoted to primary; + - Patroni updates the `etcd` consensus store and reconfigures the remaining replicas to follow the new primary. + +4. Health checks: + + - Patroni continuously monitors the health of all PostgreSQL instances; + - If a node fails or becomes unreachable, Patroni takes corrective actions by restarting PostgreSQL or initiating a failover process. + +## Split-brain prevention + +Split-brain is an issue that occurs when two or more nodes believe they are the primary, leading to data inconsistencies.
+ +Patroni prevents split-brain by using a three-layer protection and prevention mechanism where the `etcd` distributed locking mechanism plays a key role: + +* At the Patroni layer, a node needs to acquire a leader key in the race before promoting itself as the primary. If the node cannot renew its leader key, Patroni demotes it to a replica. +* The `etcd` layer uses the Raft consensus algorithm to allow only one node to acquire the leader key. +* At the OS and hardware layers, Patroni uses the Linux watchdog to perform [STONITH](https://en.wikipedia.org/wiki/Fencing_(computing)#STONITH)/fencing and terminate a PostgreSQL instance to prevent a split-brain scenario. + +One important aspect of how Patroni works is that it requires a quorum (the majority) of nodes to agree on the cluster state, preventing isolated nodes from becoming a primary. The quorum strengthens Patroni's capabilities of preventing split-brain. + +## Watchdog + +Patroni can use a watchdog mechanism to improve resilience. But what is a watchdog? + +A watchdog is a mechanism that ensures a system can recover from critical failures. In the context of Patroni, a watchdog is used to forcibly restart the node and terminate a failed primary node to prevent split-brain scenarios. + +While Patroni itself is designed for high availability, a watchdog provides an extra layer of protection against system-level failures that Patroni might not be able to detect, such as kernel panics or hardware lockups. If the entire operating system becomes unresponsive, Patroni might not be able to function correctly. The watchdog operates independently so it can detect that the server is unresponsive and reset it, bringing it back to a known good state. + +Watchdog adds an extra layer of safety because it helps protect against scenarios where the `etcd` consensus store is unavailable or network partitions occur.
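Watchdog support is enabled in the Patroni configuration file. The fragment below is a sketch with example values; refer to the Patroni documentation for the complete list of watchdog settings:

```
# Illustrative fragment of /etc/patroni/patroni.yml
watchdog:
  mode: automatic        # off | automatic | required
  device: /dev/watchdog
  safety_margin: 5       # seconds reserved for the watchdog to fire before the leader key expires
```

On servers without a hardware watchdog, load the kernel softdog module first (for example, with `sudo modprobe softdog`) so that the `/dev/watchdog` device exists. With `mode: required`, a node does not become the leader unless the watchdog can be successfully activated.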
+ +There are two types of watchdogs: + +- Hardware watchdog: A physical device that reboots the server if the operating system becomes unresponsive. +- Software watchdog (also called a softdog): A software-based watchdog timer that emulates the functionality of a hardware watchdog but is implemented entirely in software. It is part of the Linux kernel's watchdog infrastructure and is useful in systems that lack dedicated hardware watchdog timers. The softdog monitors the system and takes corrective actions such as killing processes or rebooting the node. + +Most of the servers in the cloud nowadays use a softdog. + +## Integration with other tools + +Patroni integrates well with other tools to create a comprehensive high-availability solution. In our architecture, such tools are: + +* HAProxy to check the current topology and route the traffic to both the primary and replica nodes, balancing the load among them, +* pgBackRest to ensure robust backup and restore, +* PMM for monitoring. + +Patroni provides hooks that allow you to customize its behavior. You can use hooks to execute custom scripts or commands at various stages of the Patroni lifecycle, such as before and after failover, or when a new instance joins the cluster. This way you can integrate Patroni with other systems and automate various tasks. For example, use a hook to update the monitoring system when a failover occurs. + +## Next steps + +[HAProxy](haproxy-info.md){.md-button} \ No newline at end of file diff --git a/docs/solutions/pgbackrest-info.md b/docs/solutions/pgbackrest-info.md new file mode 100644 index 000000000..e94d1d9c5 --- /dev/null +++ b/docs/solutions/pgbackrest-info.md @@ -0,0 +1,41 @@ +# pgBackRest + +`pgBackRest` is an advanced backup and restore tool designed specifically for PostgreSQL databases. `pgBackRest` emphasizes simplicity, speed, and scalability. Its architecture is focused on minimizing the time and resources required for both backup and restoration processes.
+ +`pgBackRest` uses a custom protocol, which allows for more flexibility compared to traditional tools like `tar` and `rsync` and limits the types of connections that are required to perform a backup, thereby increasing security. `pgBackRest` is a simple, but feature-rich, reliable backup and restore system that can seamlessly scale up to the largest databases and workloads. + +## Key features of `pgBackRest` + +1. **Full, differential, and incremental backups (at file or block level)**: `pgBackRest` supports various types of backups, including full, differential, and incremental, providing efficient storage and recovery options. Block-level backups save space by only copying the parts of files that have changed. + +2. **Point-in-Time recovery (PITR)**: `pgBackRest` enables restoring a PostgreSQL database to a specific point in time, crucial for disaster recovery scenarios. + +3. **Parallel backup and restore**: `pgBackRest` can perform backups and restores in parallel, utilizing multiple CPU cores to significantly reduce the time required for these operations. + +4. **Local or remote operation**: A custom protocol allows `pgBackRest` to backup, restore, and archive locally or remotely via TLS/SSH with minimal configuration. This allows for flexible deployment options. + +5. **Backup rotation and archive expiration**: You can set retention policies to manage backup rotation and WAL archive expiration automatically. + +6. **Backup integrity and verification**: `pgBackRest` performs integrity checks on backup files, ensuring they are consistent and reliable for recovery. + +7. **Backup resume**: `pgBackRest` can resume an interrupted backup from the point where it was stopped. Files that were already copied are compared with the checksums in the manifest to ensure integrity. 
This operation can take place entirely on the repository host; therefore, it reduces load on the PostgreSQL host and saves time since checksum calculation is faster than compressing and retransmitting data. + +8. **Delta restore**: This feature allows pgBackRest to quickly apply incremental changes to an existing database, reducing restoration time. + +9. **Compression and encryption**: `pgBackRest` offers options for compressing and encrypting backup data, enhancing security and reducing storage requirements. + +## How `pgBackRest` works + +`pgBackRest` supports a backup server (or a dedicated repository host in `pgBackRest` terminology). This repository host acts as the centralized backup storage. Multiple PostgreSQL clusters can use the same repository host. + +In addition to a repository host with `pgBackRest` installed, you also need `pgBackRest` agents running on the database nodes. The backup server has the information about a PostgreSQL cluster, where it is located, how to back it up and where to store backup files. This information is defined within a configuration section called a *stanza*. + +The storage location where `pgBackRest` stores backup data and WAL archives is called the repository. It can be a local directory, a remote server, or a cloud storage service like AWS S3, S3-compatible storage or Azure Blob Storage. `pgBackRest` supports up to 4 repositories, allowing for redundancy and flexibility in backup storage. + +When you create a stanza, it initializes the repository and prepares it for storing backups. During the backup process, `pgBackRest` reads the data from the PostgreSQL cluster and writes it to the repository. It also performs integrity checks and compresses the data if configured. + +Similarly, during the restore process, `pgBackRest` reads the backup data from the repository and writes it to the PostgreSQL data directory. It also verifies the integrity of the restored data.
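The workflow described above maps onto a handful of `pgBackRest` commands. The sketch below assumes a hypothetical stanza named `mycluster` and an already configured repository; the actual configuration is covered in the setup instructions:

```{.bash data-prompt="$"}
$ sudo -u postgres pgbackrest --stanza=mycluster stanza-create
$ sudo -u postgres pgbackrest --stanza=mycluster check
$ sudo -u postgres pgbackrest --stanza=mycluster --type=full backup
$ sudo -u postgres pgbackrest --stanza=mycluster info
```

`stanza-create` initializes the repository for the cluster, `check` validates the configuration and WAL archiving, `backup` takes a backup of the requested type (`full`, `diff`, or `incr`), and `info` lists the backups available for restore.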
+ +## Next steps + +[How components work together :material-arrow-right:](ha-components.md){.md-button} \ No newline at end of file diff --git a/docs/solutions/pgbackrest.md b/docs/solutions/pgbackrest.md index 4481874cb..e921ce709 100644 --- a/docs/solutions/pgbackrest.md +++ b/docs/solutions/pgbackrest.md @@ -1,49 +1,45 @@ # pgBackRest setup -[pgBackRest :octicons-link-external-16:](https://pgbackrest.org/) is a backup tool used to perform PostgreSQL database backup, archiving, restoration, and point-in-time recovery. While it can be used for local backups, this procedure shows how to deploy a [pgBackRest server running on a dedicated host :octicons-link-external-16:](https://pgbackrest.org/user-guide-rhel.html#repo-host) and how to configure PostgreSQL servers to use it for backups and archiving. +[pgBackRest :octicons-link-external-16:](https://pgbackrest.org/) is a backup tool used to perform PostgreSQL database backup, archiving, restoration, and point-in-time recovery. -You also need a backup storage to store the backups. It can either be a remote storage such as AWS S3, S3-compatible storages or Azure blob storage, or a filesystem-based one. +In our solution we deploy a [pgBackRest server on a dedicated host :octicons-link-external-16:](https://pgbackrest.org/user-guide-rhel.html#repo-host) and also deploy pgBackRest on the PostgreSQL servers. Then we configure PostgreSQL servers to use it for backups and archiving. -## Configure backup server +You also need a backup storage to store the backups. It can either be a remote storage such as AWS S3, S3-compatible storages or Azure blob storage, or a filesystem-based one. -To make things easier when working with some templates, run the commands below as the root user. Run the following command to switch to the root user: - -```{.bash data-prompt="$"} -$ sudo su - -``` +## Preparation + +Make sure to complete the [initial setup](ha-init-setup.md) steps. -### Install pgBackRest +## Install pgBackRest -1.
Enable the repository with [percona-release :octicons-link-external-16:](https://www.percona.com/doc/percona-repo-config/index.html) +Install pgBackRest on the following nodes: `node1`, `node2`, `node3`, `backup` + +=== ":material-debian: On Debian/Ubuntu" ```{.bash data-prompt="$"} - $ percona-release setup ppg-16 + $ sudo apt install percona-pgbackrest ``` -2. Install pgBackRest package - - === ":material-debian: On Debian/Ubuntu" +=== ":material-redhat: On RHEL/derivatives" - ```{.bash data-prompt="$"} - $ apt install percona-pgbackrest - ``` + ```{.bash data-prompt="$"} + $ sudo yum install percona-pgbackrest + ``` - === ":material-redhat: On RHEL/derivatives" +## Configure a backup server - ```{.bash data-prompt="$"} - $ yum install percona-pgbackrest - ``` +Do the following steps on the `backup` node. ### Create the configuration file 1. Create environment variables to simplify the config file creation: ```{.bash data-prompt="$"} - export SRV_NAME="bkp-srv" - export NODE1_NAME="node-1" - export NODE2_NAME="node-2" - export NODE3_NAME="node-3" - export CA_PATH="/etc/ssl/certs/pg_ha" + $ export SRV_NAME="backup" + $ export NODE1_NAME="node1" + $ export NODE2_NAME="node2" + $ export NODE3_NAME="node3" + $ export CA_PATH="/etc/ssl/certs/pg_ha" ``` 2. Create the `pgBackRest` repository, *if necessary* @@ -53,25 +49,25 @@ $ sudo su - This directory is usually created during pgBackRest's installation process. If it's not there already, create it as follows: ```{.bash data-prompt="$"} - $ mkdir -p /var/lib/pgbackrest - $ chmod 750 /var/lib/pgbackrest - $ chown postgres:postgres /var/lib/pgbackrest + $ sudo mkdir -p /var/lib/pgbackrest + $ sudo chmod 750 /var/lib/pgbackrest + $ sudo chown postgres:postgres /var/lib/pgbackrest ``` 3. The default `pgBackRest` configuration file location is `/etc/pgbackrest/pgbackrest.conf`, but some systems continue to use the old path, `/etc/pgbackrest.conf`, which remains a valid alternative. 
If the former is not present in your system, create the latter. - Access the file's parent directory (either `cd /etc/` or `cd /etc/pgbackrest/`), and make a backup copy of it: + Go to the file's parent directory (either `cd /etc/` or `cd /etc/pgbackrest/`), and make a backup copy of it: ```{.bash data-prompt="$"} - $ cp pgbackrest.conf pgbackrest.conf.bak + $ sudo cp pgbackrest.conf pgbackrest.conf.orig ``` - Then use the following command to create a basic configuration file using the environment variables we created in a previous step: +4. Then use the following command to create a basic configuration file using the environment variables we created in a previous step. This example command adds the configuration file at the path `/etc/pgbackrest.conf`. Make sure to specify the correct path for the configuration file on your system: === ":material-debian: On Debian/Ubuntu" ``` - cat < pgbackrest.conf + echo " [global] # Server repo details @@ -96,7 +92,7 @@ $ sudo su - repo1-retention-full=4 # Server general options - process-max=12 + process-max=4 # This depends on the number of CPU resources your server has. The recommended value should equal or be less than the number of CPUs. While more processes can speed up backups, they will also consume additional system resources. 
log-level-console=info #log-level-file=debug log-level-file=info @@ -120,7 +116,7 @@ $ sudo su - pg1-host=${NODE1_NAME} pg1-host-port=8432 pg1-port=5432 - pg1-path=/var/lib/postgresql/16/main + pg1-path=/var/lib/postgresql/{{pgversion}}/main pg1-host-type=tls pg1-host-cert-file=${CA_PATH}/${SRV_NAME}.crt pg1-host-key-file=${CA_PATH}/${SRV_NAME}.key @@ -130,7 +126,7 @@ $ sudo su - pg2-host=${NODE2_NAME} pg2-host-port=8432 pg2-port=5432 - pg2-path=/var/lib/postgresql/16/main + pg2-path=/var/lib/postgresql/{{pgversion}}/main pg2-host-type=tls pg2-host-cert-file=${CA_PATH}/${SRV_NAME}.crt pg2-host-key-file=${CA_PATH}/${SRV_NAME}.key @@ -140,19 +136,20 @@ $ sudo su - pg3-host=${NODE3_NAME} pg3-host-port=8432 pg3-port=5432 - pg3-path=/var/lib/postgresql/16/main + pg3-path=/var/lib/postgresql/{{pgversion}}/main pg3-host-type=tls pg3-host-cert-file=${CA_PATH}/${SRV_NAME}.crt pg3-host-key-file=${CA_PATH}/${SRV_NAME}.key pg3-host-ca-file=${CA_PATH}/ca.crt pg3-socket-path=/var/run/postgresql - EOF + + " | sudo tee /etc/pgbackrest.conf ``` === ":material-redhat: On RHEL/derivatives" ``` - cat < pgbackrest.conf + echo " [global] # Server repo details @@ -177,7 +174,7 @@ $ sudo su - repo1-retention-full=4 # Server general options - process-max=12 + process-max=4 # This depends on the number of CPU resources your server has. The recommended value should equal or be less than the number of CPUs. While more processes can speed up backups, they will also consume additional system resources. 
log-level-console=info #log-level-file=debug log-level-file=info @@ -201,7 +198,7 @@ $ sudo su - pg1-host=${NODE1_NAME} pg1-host-port=8432 pg1-port=5432 - pg1-path=/var/lib/pgsql/16/data + pg1-path=/var/lib/postgresql/{{pgversion}}/main pg1-host-type=tls pg1-host-cert-file=${CA_PATH}/${SRV_NAME}.crt pg1-host-key-file=${CA_PATH}/${SRV_NAME}.key @@ -211,7 +208,7 @@ $ sudo su - pg2-host=${NODE2_NAME} pg2-host-port=8432 pg2-port=5432 - pg2-path=/var/lib/pgsql/16/data + pg2-path=/var/lib/postgresql/{{pgversion}}/main pg2-host-type=tls pg2-host-cert-file=${CA_PATH}/${SRV_NAME}.crt pg2-host-key-file=${CA_PATH}/${SRV_NAME}.key @@ -221,56 +218,70 @@ $ sudo su - pg3-host=${NODE3_NAME} pg3-host-port=8432 pg3-port=5432 - pg3-path=/var/lib/pgsql/16/data + pg3-path=/var/lib/postgresql/{{pgversion}}/main pg3-host-type=tls pg3-host-cert-file=${CA_PATH}/${SRV_NAME}.crt pg3-host-key-file=${CA_PATH}/${SRV_NAME}.key pg3-host-ca-file=${CA_PATH}/ca.crt pg3-socket-path=/var/run/postgresql - EOF + + " | sudo tee /etc/pgbackrest.conf ``` *NOTE*: The option `backup-standby=y` above indicates the backups should be taken from a standby server. If you are operating with a primary only, or if your secondaries are not configured with `pgBackRest`, set this option to `n`. ### Create the certificate files - + +Run the following commands as a root user or with `sudo` privileges + 1. Create the folder to store the certificates: ```{.bash data-prompt="$"} - $ mkdir -p ${CA_PATH} + $ sudo mkdir -p /etc/ssl/certs/pg_ha ``` - -2. Create the certificates and keys + +2. Create the environment variable to simplify further configuration + + ```{.bash data-prompt="$"} + $ export CA_PATH="/etc/ssl/certs/pg_ha" + ``` + +3. 
Create the CA certificates and keys ```{.bash data-prompt="$"} - $ openssl req -new -x509 -days 365 -nodes -out ${CA_PATH}/ca.crt -keyout ${CA_PATH}/ca.key -subj "/CN=root-ca" + $ sudo openssl req -new -x509 -days 365 -nodes -out ${CA_PATH}/ca.crt -keyout ${CA_PATH}/ca.key -subj "/CN=root-ca" ``` -3. Create the certificate for the backup and the PostgreSQL servers +3. Create the certificate and keys for the backup server ```{.bash data-prompt="$"} - $ for node in ${SRV_NAME} ${NODE1_NAME} ${NODE2_NAME} ${NODE3_NAME} - do - openssl req -new -nodes -out ${CA_PATH}/$node.csr -keyout ${CA_PATH}/$node.key -subj "/CN=$node"; - done + $ sudo openssl req -new -nodes -out ${CA_PATH}/${SRV_NAME}.csr -keyout ${CA_PATH}/${SRV_NAME}.key -subj "/CN=${SRV_NAME}" ``` -4. Sign the certificates with the `root-ca` key +4. Create the certificates and keys for each PostgreSQL node ```{.bash data-prompt="$"} - $ for node in ${SRV_NAME} ${NODE1_NAME} ${NODE2_NAME} ${NODE3_NAME} - do - openssl x509 -req -in ${CA_PATH}/$node.csr -days 365 -CA ${CA_PATH}/ca.crt -CAkey ${CA_PATH}/ca.key -CAcreateserial -out ${CA_PATH}/$node.crt; - done + $ sudo openssl req -new -nodes -out ${CA_PATH}/${NODE1_NAME}.csr -keyout ${CA_PATH}/${NODE1_NAME}.key -subj "/CN=${NODE1_NAME}" + $ sudo openssl req -new -nodes -out ${CA_PATH}/${NODE2_NAME}.csr -keyout ${CA_PATH}/${NODE2_NAME}.key -subj "/CN=${NODE2_NAME}" + $ sudo openssl req -new -nodes -out ${CA_PATH}/${NODE3_NAME}.csr -keyout ${CA_PATH}/${NODE3_NAME}.key -subj "/CN=${NODE3_NAME}" + ``` + +4. 
Sign all certificates with the `root-ca` key + + ```{.bash data-prompt="$"} + $ sudo openssl x509 -req -in ${CA_PATH}/${SRV_NAME}.csr -days 365 -CA ${CA_PATH}/ca.crt -CAkey ${CA_PATH}/ca.key -CAcreateserial -out ${CA_PATH}/${SRV_NAME}.crt + $ sudo openssl x509 -req -in ${CA_PATH}/${NODE1_NAME}.csr -days 365 -CA ${CA_PATH}/ca.crt -CAkey ${CA_PATH}/ca.key -CAcreateserial -out ${CA_PATH}/${NODE1_NAME}.crt + $ sudo openssl x509 -req -in ${CA_PATH}/${NODE2_NAME}.csr -days 365 -CA ${CA_PATH}/ca.crt -CAkey ${CA_PATH}/ca.key -CAcreateserial -out ${CA_PATH}/${NODE2_NAME}.crt + $ sudo openssl x509 -req -in ${CA_PATH}/${NODE3_NAME}.csr -days 365 -CA ${CA_PATH}/ca.crt -CAkey ${CA_PATH}/ca.key -CAcreateserial -out ${CA_PATH}/${NODE3_NAME}.crt ``` 5. Remove temporary files, set ownership of the remaining files to the `postgres` user, and restrict their access: ```{.bash data-prompt="$"} - $ rm -f ${CA_PATH}/*.csr - $ chown postgres:postgres -R ${CA_PATH} - $ chmod 0600 ${CA_PATH}/* - ``` + $ sudo rm -f ${CA_PATH}/*.csr + $ sudo chown postgres:postgres -R ${CA_PATH} + $ sudo chmod 0600 ${CA_PATH}/* + ``` ### Create the `pgbackrest` daemon service @@ -294,60 +305,71 @@ $ sudo su - [Install] WantedBy=multi-user.target ``` - -2. Reload, start, and enable the service + +2. Make `systemd` aware of the new service: ```{.bash data-prompt="$"} - $ systemctl daemon-reload - $ systemctl start pgbackrest.service - $ systemctl enable pgbackrest.service + $ sudo systemctl daemon-reload + ``` + +3. Enable `pgBackRest`: + + ```{.bash data-prompt="$"} + $ sudo systemctl enable --now pgbackrest.service ``` ## Configure database servers Run the following commands on `node1`, `node2`, and `node3`. -1. Install pgBackRest package +1. 
Install `pgBackRest` package === ":material-debian: On Debian/Ubuntu" ```{.bash data-prompt="$"} - $ apt install percona-pgbackrest + $ sudo apt install percona-pgbackrest ``` === ":material-redhat: On RHEL/derivatives" ```{.bash data-prompt="$"} - $ yum install percona-pgbackrest - + $ sudo yum install percona-pgbackrest + ``` + 2. Export environment variables to simplify the config file creation: ```{.bash data-prompt="$"} $ export NODE_NAME=`hostname -f` - $ export SRV_NAME="bkp-srv" + $ export SRV_NAME="backup" $ export CA_PATH="/etc/ssl/certs/pg_ha" ``` - + 3. Create the certificates folder: ```{.bash data-prompt="$"} - $ mkdir -p ${CA_PATH} + $ sudo mkdir -p ${CA_PATH} ``` 4. Copy the `.crt`, `.key` certificate files and the `ca.crt` file from the backup server where they were created to every respective node. Then change the ownership to the `postgres` user and restrict their access. Use the following commands to achieve this: ```{.bash data-prompt="$"} - $ scp ${SRV_NAME}:${CA_PATH}/{$NODE_NAME.crt,$NODE_NAME.key,ca.crt} ${CA_PATH}/ - $ chown postgres:postgres -R ${CA_PATH} - $ chmod 0600 ${CA_PATH}/* + $ sudo scp ${SRV_NAME}:${CA_PATH}/{$NODE_NAME.crt,$NODE_NAME.key,ca.crt} ${CA_PATH}/ + $ sudo chown postgres:postgres -R ${CA_PATH} + $ sudo chmod 0600 ${CA_PATH}/* ``` - -5. Edit or create the configuration file which, as explained above, can be either at the `/etc/pgbackrest/pgbackrest.conf` or `/etc/pgbackrest.conf` path: + +5. Make a copy of the configuration file. The path to it can be either `/etc/pgbackrest/pgbackrest.conf` or `/etc/pgbackrest.conf`: + + ```{.bash data-prompt="$"} + $ sudo cp pgbackrest.conf pgbackrest.conf.orig + ``` + +6. Create the configuration file. This example command adds the configuration file at the path `/etc/pgbackrest.conf`. 
Make sure to specify the correct path for the configuration file on your system: === ":material-debian: On Debian/Ubuntu" ```ini title="pgbackrest.conf" - cat < pgbackrest.conf + echo " [global] repo1-host=${SRV_NAME} repo1-host-user=postgres @@ -357,7 +379,7 @@ Run the following commands on `node1`, `node2`, and `node3`. repo1-host-ca-file=${CA_PATH}/ca.crt # general options - process-max=16 + process-max=6 log-level-console=info log-level-file=debug @@ -369,15 +391,14 @@ Run the following commands on `node1`, `node2`, and `node3`. tls-server-auth=${SRV_NAME}=cluster_1 [cluster_1] - pg1-path=/var/lib/postgresql/16/main - EOF + pg1-path=/var/lib/postgresql/{{pgversion}}/main + " | sudo tee /etc/pgbackrest.conf ``` - === ":material-redhat: On RHEL/derivatives" ```ini title="pgbackrest.conf" - cat < pgbackrest.conf + echo " [global] repo1-host=${SRV_NAME} repo1-host-user=postgres @@ -387,7 +408,7 @@ Run the following commands on `node1`, `node2`, and `node3`. repo1-host-ca-file=${CA_PATH}/ca.crt # general options - process-max=16 + process-max=6 log-level-console=info log-level-file=debug @@ -399,11 +420,11 @@ Run the following commands on `node1`, `node2`, and `node3`. tls-server-auth=${SRV_NAME}=cluster_1 [cluster_1] - pg1-path=/var/lib/pgsql/16/data - EOF + pg1-path=/var/lib/pgsql/{{pgversion}}/data + " | sudo tee /etc/pgbackrest.conf ``` -6. Create the pgbackrest `systemd` unit file at the path `/etc/systemd/system/pgbackrest.service` +7. Create the pgbackrest `systemd` unit file at the path `/etc/systemd/system/pgbackrest.service` ```ini title="/etc/systemd/system/pgbackrest.service" [Unit] @@ -424,71 +445,79 @@ Run the following commands on `node1`, `node2`, and `node3`. WantedBy=multi-user.target ``` -7. Reload, start, and enable the service +8. 
Reload `systemd`, then start the service + + ```{.bash data-prompt="$"} - $ systemctl daemon-reload - $ systemctl start pgbackrest - $ systemctl enable pgbackrest + $ sudo systemctl daemon-reload + $ sudo systemctl enable --now pgbackrest ``` The pgBackRest daemon listens on port `8432` by default: ```{.bash data-prompt="$"} - $ netstat -taunp - Active Internet connections (servers and established) - Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name - tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd - tcp 0 0 0.0.0.0:8432 0.0.0.0:* LISTEN 40224/pgbackrest + $ netstat -taunp | grep '8432' ``` -8. If you are using Patroni, change its configuration to use `pgBackRest` for archiving and restoring WAL files. Run this command only on one node, for example, on `node1`: + ??? example "Sample output" + + ```{text .no-copy} + Active Internet connections (servers and established) + Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name + tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd + tcp 0 0 0.0.0.0:8432 0.0.0.0:* LISTEN 40224/pgbackrest + ``` + +9. If you are using Patroni, change its configuration to use `pgBackRest` for archiving and restoring WAL files. Run this command only on one node, for example, on `node1`: ```{.bash data-prompt="$"} $ patronictl -c /etc/patroni/patroni.yml edit-config ``` - - === ":material-debian: On Debian/Ubuntu" - - ```yaml title="/etc/patroni/patroni.yml" - postgresql: - (...) - parameters: - (...) - archive_command: pgbackrest --stanza=cluster_1 archive-push /var/lib/postgresql/{{pgversion}}/main/pg_wal/%f - (...) - recovery_conf: - (...) - restore_command: pgbackrest --config=/etc/pgbackrest.conf --stanza=cluster_1 archive-get %f %p - (...) - ``` - === ":material-redhat: On RHEL/derivatives" + This opens the editor for you. + +10.
Change the configuration as follows: + + ```yaml title="/etc/patroni/patroni.yml" + postgresql: + parameters: + archive_command: pgbackrest --stanza=cluster_1 archive-push /var/lib/postgresql/{{pgversion}}/main/pg_wal/%f + archive_mode: true + archive_timeout: 600s + hot_standby: true + logging_collector: 'on' + max_replication_slots: 10 + max_wal_senders: 5 + max_wal_size: 10GB + wal_keep_segments: 10 + wal_level: logical + wal_log_hints: true + recovery_conf: + recovery_target_timeline: latest + restore_command: pgbackrest --config=/etc/pgbackrest.conf --stanza=cluster_1 archive-get %f "%p" + use_pg_rewind: true + use_slots: true + retry_timeout: 10 + slots: + percona_cluster_1: + type: physical + ttl: 30 + ``` - ```yaml title="/etc/patroni/patroni.yml" - postgresql: - (...) - parameters: - archive_command: pgbackrest --stanza=cluster_1 archive-push /var/lib/pgsql/{{pgversion}}/data/pg_wal/%f - (...) - recovery_conf: - restore_command: pgbackrest --config=/etc/pgbackrest.conf --stanza=cluster_1 archive-get %f %p - (...) - ``` - - Reload the changed configurations. Specify either the cluster name or a node name for the following command: +11. Restart the cluster to apply the changed configuration. Provide the cluster name or the node name for the following command. In our example we use the `cluster_1` cluster name: ```{.bash data-prompt="$"} - $ patronictl -c /etc/patroni/patroni.yml reload cluster_name node_name + $ patronictl -c /etc/patroni/patroni.yml restart cluster_1 ``` - :material-information: Note: When configuring a PostgreSQL server that is not managed by Patroni to archive/restore WALs from the `pgBackRest` server, edit the server's main configuration file directly and adjust the `archive_command` and `restore_command` variables as shown above. + It may take a while to apply the new configuration.
+ + *NOTE*: When configuring a PostgreSQL server that is not managed by Patroni to archive/restore WALs from the `pgBackRest` server, edit the server's main configuration file directly and adjust the `archive_command` and `restore_command` variables as shown above. ## Create backups Run the following commands on the **backup server**: -1. Create the stanza. A stanza is the configuration for a PostgreSQL database cluster that defines where it is located, how it will be backed up, archiving options, etc. +1. Create the stanza. A stanza is the configuration for a PostgreSQL database cluster that defines where it is located, how it will be backed up, archiving options, etc. ```{.bash data-prompt="$"} $ sudo -iu postgres pgbackrest --stanza=cluster_1 stanza-create @@ -501,7 +530,7 @@ Run the following commands on the **backup server**: ``` 3. Check backup info - + ```{.bash data-prompt="$"} $ sudo -iu postgres pgbackrest --stanza=cluster_1 info ``` @@ -512,4 +541,6 @@ Run the following commands on the **backup server**: $ sudo -iu postgres pgbackrest --stanza=cluster_1 expire --set= ``` -[Test PostgreSQL cluster](ha-test.md){.md-button} +## Next steps + +[Configure HAProxy :material-arrow-right:](ha-haproxy.md){.md-button} diff --git a/mkdocs-base.yml b/mkdocs-base.yml index 352b491c9..dd2614a84 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -179,11 +179,22 @@ nav: - Solutions: - Overview: solutions.md - High availability: - - 'High availability': 'solutions/high-availability.md' - - 'Deploying on Debian or Ubuntu': 'solutions/ha-setup-apt.md' - - 'Deploying on RHEL or derivatives': 'solutions/ha-setup-yum.md' - - solutions/pgbackrest.md - - solutions/ha-test.md + - 'Overview': 'solutions/high-availability.md' + - solutions/ha-measure.md + - 'Architecture': solutions/ha-architecture.md + - Components: + - 'ETCD': 'solutions/etcd-info.md' + - 'Patroni': 'solutions/patroni-info.md' + - 'HAProxy': 'solutions/haproxy-info.md' + - 'pgBackRest': 
'solutions/pgbackrest-info.md' + - solutions/ha-components.md + - Deployment: + - 'Initial setup': 'solutions/ha-init-setup.md' + - 'etcd setup': 'solutions/ha-etcd-config.md' + - 'Patroni setup': 'solutions/ha-patroni.md' + - solutions/pgbackrest.md + - 'HAProxy setup': 'solutions/ha-haproxy.md' + - 'Testing': solutions/ha-test.md - Backup and disaster recovery: - 'Overview': 'solutions/backup-recovery.md' - solutions/dr-pgbackrest-setup.md diff --git a/mkdocs.yml b/mkdocs.yml index 15d93b41b..86a0ec8a1 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -51,11 +51,22 @@ nav: - Solutions: - Overview: solutions.md - High availability: - - 'High availability': 'solutions/high-availability.md' - - 'Deploying on Debian or Ubuntu': 'solutions/ha-setup-apt.md' - - 'Deploying on RHEL or derivatives': 'solutions/ha-setup-yum.md' - - solutions/pgbackrest.md - - solutions/ha-test.md + - 'Overview': 'solutions/high-availability.md' + - solutions/ha-measure.md + - 'Architecture': solutions/ha-architecture.md + - Components: + - 'ETCD': 'solutions/etcd-info.md' + - 'Patroni': 'solutions/patroni-info.md' + - 'HAProxy': 'solutions/haproxy-info.md' + - 'pgBackRest': 'solutions/pgbackrest-info.md' + - solutions/ha-components.md + - Deployment: + - 'Initial setup': 'solutions/ha-init-setup.md' + - 'etcd setup': 'solutions/ha-etcd-config.md' + - 'Patroni setup': 'solutions/ha-patroni.md' + - solutions/pgbackrest.md + - 'HAProxy setup': 'solutions/ha-haproxy.md' + - 'Testing': solutions/ha-test.md - Backup and disaster recovery: - 'Overview': 'solutions/backup-recovery.md' - solutions/dr-pgbackrest-setup.md From a03769ada93339f4ead8c72c22dac9e67dc58c63 Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Thu, 10 Jul 2025 21:17:00 +0300 Subject: [PATCH 36/41] PG-1691-Release-Notes-16.9 (#809) * Update variables.yml * pgsm to 2.2.0 updated pgsm version to 2.2.0 * fixed repeating paragraph * update the release date * add update release note correctly * revert pgsm to 2.1.1 for rn 16.9 --- 
docs/release-notes-v16.9.md | 3 ++- docs/release-notes-v16.9.upd.md | 7 +++++++ docs/release-notes.md | 2 ++ mkdocs.yml | 1 + variables.yml | 1 + 5 files changed, 13 insertions(+), 1 deletion(-) create mode 100644 docs/release-notes-v16.9.upd.md diff --git a/docs/release-notes-v16.9.md b/docs/release-notes-v16.9.md index 124ac1673..32e799920 100644 --- a/docs/release-notes-v16.9.md +++ b/docs/release-notes-v16.9.md @@ -14,7 +14,7 @@ The [Upgrading Percona Distribution for PostgreSQL from 15 to 16](major-upgrade. ## Supplied third-party extensions -Review each extension’s release notes for What’s new, improvements, or bug fixes. The following is the list of extensions available in Percona Distribution for PostgreSQL. +Review each extension’s release notes for What’s new, improvements, or bug fixes. The following is the list of extensions available in Percona Distribution for PostgreSQL. @@ -31,6 +31,7 @@ The following is the list of extensions available in Percona Distribution for Po | [pg_gather](https://github.com/jobinau/pg_gather) | v30 | an SQL script for running the diagnostics of the health of PostgreSQL cluster | | [pgpool2](https://git.postgresql.org/gitweb/?p=pgpool2.git;a=summary) | 4.6.0 | a middleware between PostgreSQL server and client for high availability, connection pooling, and load balancing. | | [pg_repack](https://github.com/reorg/pg_repack) | 1.5.2 | rebuilds PostgreSQL database objects | +| [pg_stat_monitor](https://github.com/percona/pg_stat_monitor) | 2.1.1 | collects and aggregates statistics for PostgreSQL and provides histogram information. | | [pgvector](https://github.com/pgvector/pgvector) | v0.8.0 | A vector similarity search for PostgreSQL | | [PostGIS](https://github.com/postgis/postgis) | 3.3.8 | a spatial extension for PostgreSQL. | | [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common) | 277 | PostgreSQL database-cluster manager. 
It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time. | diff --git a/docs/release-notes-v16.9.upd.md b/docs/release-notes-v16.9.upd.md new file mode 100644 index 000000000..3b77be174 --- /dev/null +++ b/docs/release-notes-v16.9.upd.md @@ -0,0 +1,7 @@ +# Percona Distribution for PostgreSQL 16.9 Update ({{date.16_9_1}}) + +[Installation](installing.md){.md-button} + +--8<-- "release-notes-intro.md" + +This update of Percona Distribution for PostgreSQL includes the new version of [`pg_stat_monitor` 2.2.0 :octicons-link-external-16:](https://docs.percona.com/pg-stat-monitor/release-notes/2.2.0.html) that improves query annotation parsing, enhances SQL error visibility, and fixes diagnostic issues with command types, improving performance. diff --git a/docs/release-notes.md b/docs/release-notes.md index ab7d0b530..bc282ff25 100644 --- a/docs/release-notes.md +++ b/docs/release-notes.md @@ -1,5 +1,7 @@ # Percona Distribution for PostgreSQL release notes +* [Percona Distribution for PostgreSQL 16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) + * [Percona Distribution for PostgreSQL 16.9](release-notes-v16.9.md) ({{date.16_9}}) * [Percona Distribution for PostgreSQL 16.8](release-notes-v16.8.md) ({{date.16_8}}) diff --git a/mkdocs.yml b/mkdocs.yml index 86a0ec8a1..d8f008506 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -85,6 +85,7 @@ nav: - Uninstall: uninstalling.md - Release Notes: - "Release notes index": "release-notes.md" + - release-notes-v16.9.upd.md - release-notes-v16.9.md - release-notes-v16.8.md - release-notes-v16.6.md diff --git a/variables.yml b/variables.yml index f616153d1..add67c4de 100644 --- a/variables.yml +++ b/variables.yml @@ -8,6 +8,7 @@ pgsmversion: '2.1.1' date: + 16_9_1: 2025-07-10 16_9: 2025-05-29 16_8: 2025-02-27 16_6: 2024-12-03 From 76e407814dbee1f5e728dad2bd0e614ba3e93fe5 Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Tue, 15 Jul 2025 15:16:14 
+0300 Subject: [PATCH 37/41] reorganize rn structure, improve text and put them into rn folder --- docs/release-notes.md | 23 -------------- .../release-notes-v16.0.md | 0 .../release-notes-v16.0.upd.md | 0 .../release-notes-v16.1.md | 0 .../release-notes-v16.1.upd.md | 0 .../release-notes-v16.2.md | 0 .../release-notes-v16.3.md | 0 .../release-notes-v16.4.md | 0 .../release-notes-v16.6.md | 0 .../release-notes-v16.8.md | 0 .../release-notes-v16.9.md | 0 .../release-notes-v16.9.upd.md | 0 docs/release-notes/release-notes.md | 31 +++++++++++++++++++ mkdocs.yml | 29 +++++++++-------- 14 files changed, 47 insertions(+), 36 deletions(-) delete mode 100644 docs/release-notes.md rename docs/{ => release-notes}/release-notes-v16.0.md (100%) rename docs/{ => release-notes}/release-notes-v16.0.upd.md (100%) rename docs/{ => release-notes}/release-notes-v16.1.md (100%) rename docs/{ => release-notes}/release-notes-v16.1.upd.md (100%) rename docs/{ => release-notes}/release-notes-v16.2.md (100%) rename docs/{ => release-notes}/release-notes-v16.3.md (100%) rename docs/{ => release-notes}/release-notes-v16.4.md (100%) rename docs/{ => release-notes}/release-notes-v16.6.md (100%) rename docs/{ => release-notes}/release-notes-v16.8.md (100%) rename docs/{ => release-notes}/release-notes-v16.9.md (100%) rename docs/{ => release-notes}/release-notes-v16.9.upd.md (100%) create mode 100644 docs/release-notes/release-notes.md diff --git a/docs/release-notes.md b/docs/release-notes.md deleted file mode 100644 index bc282ff25..000000000 --- a/docs/release-notes.md +++ /dev/null @@ -1,23 +0,0 @@ -# Percona Distribution for PostgreSQL release notes - -* [Percona Distribution for PostgreSQL 16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) - -* [Percona Distribution for PostgreSQL 16.9](release-notes-v16.9.md) ({{date.16_9}}) - -* [Percona Distribution for PostgreSQL 16.8](release-notes-v16.8.md) ({{date.16_8}}) - -* [Percona Distribution for PostgreSQL 
16.6](release-notes-v16.6.md) ({{date.16_6}}) - -* [Percona Distribution for PostgreSQL 16.4](release-notes-v16.4.md) ({{date.16_4}}) - -* [Percona Distribution for PostgreSQL 16.3](release-notes-v16.3.md) (2024-06-06) - -* [Percona Distribution for PostgreSQL 16.2](release-notes-v16.2.md) (2024-02-27) - -* [Percona Distribution for PostgreSQL 16.1 Update](release-notes-v16.1.upd.md) (2024-01-18) - -* [Percona Distribution for PostgreSQL 16.1](release-notes-v16.1.md) (2023-11-29) - -* [Percona Distribution for PostgreSQL 16.0 Update](release-notes-v16.0.upd.md) (2023-11-02) - -* [Percona Distribution for PostgreSQL 16](release-notes-v16.0.md) (2023-09-19) diff --git a/docs/release-notes-v16.0.md b/docs/release-notes/release-notes-v16.0.md similarity index 100% rename from docs/release-notes-v16.0.md rename to docs/release-notes/release-notes-v16.0.md diff --git a/docs/release-notes-v16.0.upd.md b/docs/release-notes/release-notes-v16.0.upd.md similarity index 100% rename from docs/release-notes-v16.0.upd.md rename to docs/release-notes/release-notes-v16.0.upd.md diff --git a/docs/release-notes-v16.1.md b/docs/release-notes/release-notes-v16.1.md similarity index 100% rename from docs/release-notes-v16.1.md rename to docs/release-notes/release-notes-v16.1.md diff --git a/docs/release-notes-v16.1.upd.md b/docs/release-notes/release-notes-v16.1.upd.md similarity index 100% rename from docs/release-notes-v16.1.upd.md rename to docs/release-notes/release-notes-v16.1.upd.md diff --git a/docs/release-notes-v16.2.md b/docs/release-notes/release-notes-v16.2.md similarity index 100% rename from docs/release-notes-v16.2.md rename to docs/release-notes/release-notes-v16.2.md diff --git a/docs/release-notes-v16.3.md b/docs/release-notes/release-notes-v16.3.md similarity index 100% rename from docs/release-notes-v16.3.md rename to docs/release-notes/release-notes-v16.3.md diff --git a/docs/release-notes-v16.4.md b/docs/release-notes/release-notes-v16.4.md similarity index 100% 
rename from docs/release-notes-v16.4.md rename to docs/release-notes/release-notes-v16.4.md diff --git a/docs/release-notes-v16.6.md b/docs/release-notes/release-notes-v16.6.md similarity index 100% rename from docs/release-notes-v16.6.md rename to docs/release-notes/release-notes-v16.6.md diff --git a/docs/release-notes-v16.8.md b/docs/release-notes/release-notes-v16.8.md similarity index 100% rename from docs/release-notes-v16.8.md rename to docs/release-notes/release-notes-v16.8.md diff --git a/docs/release-notes-v16.9.md b/docs/release-notes/release-notes-v16.9.md similarity index 100% rename from docs/release-notes-v16.9.md rename to docs/release-notes/release-notes-v16.9.md diff --git a/docs/release-notes-v16.9.upd.md b/docs/release-notes/release-notes-v16.9.upd.md similarity index 100% rename from docs/release-notes-v16.9.upd.md rename to docs/release-notes/release-notes-v16.9.upd.md diff --git a/docs/release-notes/release-notes.md b/docs/release-notes/release-notes.md new file mode 100644 index 000000000..bfffa6b66 --- /dev/null +++ b/docs/release-notes/release-notes.md @@ -0,0 +1,31 @@ +# Percona Distribution for PostgreSQL release notes + +This page lists all release notes for Percona Distribution for PostgreSQL 16, organized by year and version. Use it to track new features, fixes, and updates across major and minor versions. 
+ +## 2025 + +* [16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) + +* [16.9](release-notes-v16.9.md) ({{date.16_9}}) + +* [16.8](release-notes-v16.8.md) ({{date.16_8}}) + +## 2024 + +* [16.6](release-notes-v16.6.md) ({{date.16_6}}) + +* [16.4](release-notes-v16.4.md) ({{date.16_4}}) + +* [16.3](release-notes-v16.3.md) (2024-06-06) + +* [16.2](release-notes-v16.2.md) (2024-02-27) + +* [16.1 Update](release-notes-v16.1.upd.md) (2024-01-18) + +## 2023 + +* [16.1](release-notes-v16.1.md) (2023-11-29) + +* [16.0 Update](release-notes-v16.0.upd.md) (2023-11-02) + +* [16](release-notes-v16.0.md) (2023-09-19) diff --git a/mkdocs.yml b/mkdocs.yml index d8f008506..b839a860f 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -83,19 +83,22 @@ nav: - migration.md - Troubleshooting guide: troubleshooting.md - Uninstall: uninstalling.md - - Release Notes: - - "Release notes index": "release-notes.md" - - release-notes-v16.9.upd.md - - release-notes-v16.9.md - - release-notes-v16.8.md - - release-notes-v16.6.md - - release-notes-v16.4.md - - release-notes-v16.3.md - - release-notes-v16.2.md - - release-notes-v16.1.upd.md - - release-notes-v16.1.md - - release-notes-v16.0.upd.md - - release-notes-v16.0.md + - Release notes: + - "Release notes index": release-notes/release-notes.md + - "2025": + - "16.9 Update": release-notes/release-notes-v16.9.upd.md + - "16.9": release-notes/release-notes-v16.9.md + - "16.8": release-notes/release-notes-v16.8.md + - "2024 (versions 16.6 - 16.1 Update)": + - "16.6": release-notes/release-notes-v16.6.md + - "16.4": release-notes/release-notes-v16.4.md + - "16.3": release-notes/release-notes-v16.3.md + - "16.2": release-notes/release-notes-v16.2.md + - "16.1 Update": release-notes/release-notes-v16.1.upd.md + - "2023 (versions 16.1 - 16.0)": + - "16.1": release-notes/release-notes-v16.1.md + - "16.0 Update": release-notes/release-notes-v16.0.upd.md + - "16.0": release-notes/release-notes-v16.0.md - Reference: - Telemetry: telemetry.md - 
Licensing: licensing.md From b16363efd95c60d013ef1871e45bbeb4ee17d549 Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Tue, 15 Jul 2025 15:19:10 +0300 Subject: [PATCH 38/41] Revert "reorganize rn structure, improve text and put them into rn folder" This reverts commit 76e407814dbee1f5e728dad2bd0e614ba3e93fe5. --- .../release-notes-v16.0.md | 0 .../release-notes-v16.0.upd.md | 0 .../release-notes-v16.1.md | 0 .../release-notes-v16.1.upd.md | 0 .../release-notes-v16.2.md | 0 .../release-notes-v16.3.md | 0 .../release-notes-v16.4.md | 0 .../release-notes-v16.6.md | 0 .../release-notes-v16.8.md | 0 .../release-notes-v16.9.md | 0 .../release-notes-v16.9.upd.md | 0 docs/release-notes.md | 23 ++++++++++++++ docs/release-notes/release-notes.md | 31 ------------------- mkdocs.yml | 29 ++++++++--------- 14 files changed, 36 insertions(+), 47 deletions(-) rename docs/{release-notes => }/release-notes-v16.0.md (100%) rename docs/{release-notes => }/release-notes-v16.0.upd.md (100%) rename docs/{release-notes => }/release-notes-v16.1.md (100%) rename docs/{release-notes => }/release-notes-v16.1.upd.md (100%) rename docs/{release-notes => }/release-notes-v16.2.md (100%) rename docs/{release-notes => }/release-notes-v16.3.md (100%) rename docs/{release-notes => }/release-notes-v16.4.md (100%) rename docs/{release-notes => }/release-notes-v16.6.md (100%) rename docs/{release-notes => }/release-notes-v16.8.md (100%) rename docs/{release-notes => }/release-notes-v16.9.md (100%) rename docs/{release-notes => }/release-notes-v16.9.upd.md (100%) create mode 100644 docs/release-notes.md delete mode 100644 docs/release-notes/release-notes.md diff --git a/docs/release-notes/release-notes-v16.0.md b/docs/release-notes-v16.0.md similarity index 100% rename from docs/release-notes/release-notes-v16.0.md rename to docs/release-notes-v16.0.md diff --git a/docs/release-notes/release-notes-v16.0.upd.md b/docs/release-notes-v16.0.upd.md similarity index 100% rename from 
docs/release-notes/release-notes-v16.0.upd.md rename to docs/release-notes-v16.0.upd.md diff --git a/docs/release-notes/release-notes-v16.1.md b/docs/release-notes-v16.1.md similarity index 100% rename from docs/release-notes/release-notes-v16.1.md rename to docs/release-notes-v16.1.md diff --git a/docs/release-notes/release-notes-v16.1.upd.md b/docs/release-notes-v16.1.upd.md similarity index 100% rename from docs/release-notes/release-notes-v16.1.upd.md rename to docs/release-notes-v16.1.upd.md diff --git a/docs/release-notes/release-notes-v16.2.md b/docs/release-notes-v16.2.md similarity index 100% rename from docs/release-notes/release-notes-v16.2.md rename to docs/release-notes-v16.2.md diff --git a/docs/release-notes/release-notes-v16.3.md b/docs/release-notes-v16.3.md similarity index 100% rename from docs/release-notes/release-notes-v16.3.md rename to docs/release-notes-v16.3.md diff --git a/docs/release-notes/release-notes-v16.4.md b/docs/release-notes-v16.4.md similarity index 100% rename from docs/release-notes/release-notes-v16.4.md rename to docs/release-notes-v16.4.md diff --git a/docs/release-notes/release-notes-v16.6.md b/docs/release-notes-v16.6.md similarity index 100% rename from docs/release-notes/release-notes-v16.6.md rename to docs/release-notes-v16.6.md diff --git a/docs/release-notes/release-notes-v16.8.md b/docs/release-notes-v16.8.md similarity index 100% rename from docs/release-notes/release-notes-v16.8.md rename to docs/release-notes-v16.8.md diff --git a/docs/release-notes/release-notes-v16.9.md b/docs/release-notes-v16.9.md similarity index 100% rename from docs/release-notes/release-notes-v16.9.md rename to docs/release-notes-v16.9.md diff --git a/docs/release-notes/release-notes-v16.9.upd.md b/docs/release-notes-v16.9.upd.md similarity index 100% rename from docs/release-notes/release-notes-v16.9.upd.md rename to docs/release-notes-v16.9.upd.md diff --git a/docs/release-notes.md b/docs/release-notes.md new file mode 100644 index 
000000000..bc282ff25 --- /dev/null +++ b/docs/release-notes.md @@ -0,0 +1,23 @@ +# Percona Distribution for PostgreSQL release notes + +* [Percona Distribution for PostgreSQL 16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) + +* [Percona Distribution for PostgreSQL 16.9](release-notes-v16.9.md) ({{date.16_9}}) + +* [Percona Distribution for PostgreSQL 16.8](release-notes-v16.8.md) ({{date.16_8}}) + +* [Percona Distribution for PostgreSQL 16.6](release-notes-v16.6.md) ({{date.16_6}}) + +* [Percona Distribution for PostgreSQL 16.4](release-notes-v16.4.md) ({{date.16_4}}) + +* [Percona Distribution for PostgreSQL 16.3](release-notes-v16.3.md) (2024-06-06) + +* [Percona Distribution for PostgreSQL 16.2](release-notes-v16.2.md) (2024-02-27) + +* [Percona Distribution for PostgreSQL 16.1 Update](release-notes-v16.1.upd.md) (2024-01-18) + +* [Percona Distribution for PostgreSQL 16.1](release-notes-v16.1.md) (2023-11-29) + +* [Percona Distribution for PostgreSQL 16.0 Update](release-notes-v16.0.upd.md) (2023-11-02) + +* [Percona Distribution for PostgreSQL 16](release-notes-v16.0.md) (2023-09-19) diff --git a/docs/release-notes/release-notes.md b/docs/release-notes/release-notes.md deleted file mode 100644 index bfffa6b66..000000000 --- a/docs/release-notes/release-notes.md +++ /dev/null @@ -1,31 +0,0 @@ -# Percona Distribution for PostgreSQL release notes - -This page lists all release notes for Percona Distribution for PostgreSQL 16, organized by year and version. Use it to track new features, fixes, and updates across major and minor versions. 
- -## 2025 - -* [16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) - -* [16.9](release-notes-v16.9.md) ({{date.16_9}}) - -* [16.8](release-notes-v16.8.md) ({{date.16_8}}) - -## 2024 - -* [16.6](release-notes-v16.6.md) ({{date.16_6}}) - -* [16.4](release-notes-v16.4.md) ({{date.16_4}}) - -* [16.3](release-notes-v16.3.md) (2024-06-06) - -* [16.2](release-notes-v16.2.md) (2024-02-27) - -* [16.1 Update](release-notes-v16.1.upd.md) (2024-01-18) - -## 2023 - -* [16.1](release-notes-v16.1.md) (2023-11-29) - -* [16.0 Update](release-notes-v16.0.upd.md) (2023-11-02) - -* [16](release-notes-v16.0.md) (2023-09-19) diff --git a/mkdocs.yml b/mkdocs.yml index b839a860f..d8f008506 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -83,22 +83,19 @@ nav: - migration.md - Troubleshooting guide: troubleshooting.md - Uninstall: uninstalling.md - - Release notes: - - "Release notes index": release-notes/release-notes.md - - "2025": - - "16.9 Update": release-notes/release-notes-v16.9.upd.md - - "16.9": release-notes/release-notes-v16.9.md - - "16.8": release-notes/release-notes-v16.8.md - - "2024 (versions 16.6 - 16.1 Update)": - - "16.6": release-notes/release-notes-v16.6.md - - "16.4": release-notes/release-notes-v16.4.md - - "16.3": release-notes/release-notes-v16.3.md - - "16.2": release-notes/release-notes-v16.2.md - - "16.1 Update": release-notes/release-notes-v16.1.upd.md - - "2023 (versions 16.1 - 16.0)": - - "16.1": release-notes/release-notes-v16.1.md - - "16.0 Update": release-notes/release-notes-v16.0.upd.md - - "16.0": release-notes/release-notes-v16.0.md + - Release Notes: + - "Release notes index": "release-notes.md" + - release-notes-v16.9.upd.md + - release-notes-v16.9.md + - release-notes-v16.8.md + - release-notes-v16.6.md + - release-notes-v16.4.md + - release-notes-v16.3.md + - release-notes-v16.2.md + - release-notes-v16.1.upd.md + - release-notes-v16.1.md + - release-notes-v16.0.upd.md + - release-notes-v16.0.md - Reference: - Telemetry: telemetry.md - 
Licensing: licensing.md From bc53dcfdc8ef93dce7ac0dc161d05035e29e415a Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Tue, 15 Jul 2025 16:09:05 +0300 Subject: [PATCH 39/41] Reapply "reorganize rn structure, improve text and put them into rn folder" This reverts commit b16363efd95c60d013ef1871e45bbeb4ee17d549. --- docs/release-notes.md | 23 -------------- .../release-notes-v16.0.md | 0 .../release-notes-v16.0.upd.md | 0 .../release-notes-v16.1.md | 0 .../release-notes-v16.1.upd.md | 0 .../release-notes-v16.2.md | 0 .../release-notes-v16.3.md | 0 .../release-notes-v16.4.md | 0 .../release-notes-v16.6.md | 0 .../release-notes-v16.8.md | 0 .../release-notes-v16.9.md | 0 .../release-notes-v16.9.upd.md | 0 docs/release-notes/release-notes.md | 31 +++++++++++++++++++ mkdocs.yml | 29 +++++++++-------- 14 files changed, 47 insertions(+), 36 deletions(-) delete mode 100644 docs/release-notes.md rename docs/{ => release-notes}/release-notes-v16.0.md (100%) rename docs/{ => release-notes}/release-notes-v16.0.upd.md (100%) rename docs/{ => release-notes}/release-notes-v16.1.md (100%) rename docs/{ => release-notes}/release-notes-v16.1.upd.md (100%) rename docs/{ => release-notes}/release-notes-v16.2.md (100%) rename docs/{ => release-notes}/release-notes-v16.3.md (100%) rename docs/{ => release-notes}/release-notes-v16.4.md (100%) rename docs/{ => release-notes}/release-notes-v16.6.md (100%) rename docs/{ => release-notes}/release-notes-v16.8.md (100%) rename docs/{ => release-notes}/release-notes-v16.9.md (100%) rename docs/{ => release-notes}/release-notes-v16.9.upd.md (100%) create mode 100644 docs/release-notes/release-notes.md diff --git a/docs/release-notes.md b/docs/release-notes.md deleted file mode 100644 index bc282ff25..000000000 --- a/docs/release-notes.md +++ /dev/null @@ -1,23 +0,0 @@ -# Percona Distribution for PostgreSQL release notes - -* [Percona Distribution for PostgreSQL 16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) - -* [Percona 
Distribution for PostgreSQL 16.9](release-notes-v16.9.md) ({{date.16_9}}) - -* [Percona Distribution for PostgreSQL 16.8](release-notes-v16.8.md) ({{date.16_8}}) - -* [Percona Distribution for PostgreSQL 16.6](release-notes-v16.6.md) ({{date.16_6}}) - -* [Percona Distribution for PostgreSQL 16.4](release-notes-v16.4.md) ({{date.16_4}}) - -* [Percona Distribution for PostgreSQL 16.3](release-notes-v16.3.md) (2024-06-06) - -* [Percona Distribution for PostgreSQL 16.2](release-notes-v16.2.md) (2024-02-27) - -* [Percona Distribution for PostgreSQL 16.1 Update](release-notes-v16.1.upd.md) (2024-01-18) - -* [Percona Distribution for PostgreSQL 16.1](release-notes-v16.1.md) (2023-11-29) - -* [Percona Distribution for PostgreSQL 16.0 Update](release-notes-v16.0.upd.md) (2023-11-02) - -* [Percona Distribution for PostgreSQL 16](release-notes-v16.0.md) (2023-09-19) diff --git a/docs/release-notes-v16.0.md b/docs/release-notes/release-notes-v16.0.md similarity index 100% rename from docs/release-notes-v16.0.md rename to docs/release-notes/release-notes-v16.0.md diff --git a/docs/release-notes-v16.0.upd.md b/docs/release-notes/release-notes-v16.0.upd.md similarity index 100% rename from docs/release-notes-v16.0.upd.md rename to docs/release-notes/release-notes-v16.0.upd.md diff --git a/docs/release-notes-v16.1.md b/docs/release-notes/release-notes-v16.1.md similarity index 100% rename from docs/release-notes-v16.1.md rename to docs/release-notes/release-notes-v16.1.md diff --git a/docs/release-notes-v16.1.upd.md b/docs/release-notes/release-notes-v16.1.upd.md similarity index 100% rename from docs/release-notes-v16.1.upd.md rename to docs/release-notes/release-notes-v16.1.upd.md diff --git a/docs/release-notes-v16.2.md b/docs/release-notes/release-notes-v16.2.md similarity index 100% rename from docs/release-notes-v16.2.md rename to docs/release-notes/release-notes-v16.2.md diff --git a/docs/release-notes-v16.3.md b/docs/release-notes/release-notes-v16.3.md similarity index 
100% rename from docs/release-notes-v16.3.md rename to docs/release-notes/release-notes-v16.3.md diff --git a/docs/release-notes-v16.4.md b/docs/release-notes/release-notes-v16.4.md similarity index 100% rename from docs/release-notes-v16.4.md rename to docs/release-notes/release-notes-v16.4.md diff --git a/docs/release-notes-v16.6.md b/docs/release-notes/release-notes-v16.6.md similarity index 100% rename from docs/release-notes-v16.6.md rename to docs/release-notes/release-notes-v16.6.md diff --git a/docs/release-notes-v16.8.md b/docs/release-notes/release-notes-v16.8.md similarity index 100% rename from docs/release-notes-v16.8.md rename to docs/release-notes/release-notes-v16.8.md diff --git a/docs/release-notes-v16.9.md b/docs/release-notes/release-notes-v16.9.md similarity index 100% rename from docs/release-notes-v16.9.md rename to docs/release-notes/release-notes-v16.9.md diff --git a/docs/release-notes-v16.9.upd.md b/docs/release-notes/release-notes-v16.9.upd.md similarity index 100% rename from docs/release-notes-v16.9.upd.md rename to docs/release-notes/release-notes-v16.9.upd.md diff --git a/docs/release-notes/release-notes.md b/docs/release-notes/release-notes.md new file mode 100644 index 000000000..bfffa6b66 --- /dev/null +++ b/docs/release-notes/release-notes.md @@ -0,0 +1,31 @@ +# Percona Distribution for PostgreSQL release notes + +This page lists all release notes for Percona Distribution for PostgreSQL 16, organized by year and version. Use it to track new features, fixes, and updates across major and minor versions. 
+ +## 2025 + +* [16.9 Update](release-notes-v16.9.upd.md) ({{date.16_9_1}}) + +* [16.9](release-notes-v16.9.md) ({{date.16_9}}) + +* [16.8](release-notes-v16.8.md) ({{date.16_8}}) + +## 2024 + +* [16.6](release-notes-v16.6.md) ({{date.16_6}}) + +* [16.4](release-notes-v16.4.md) ({{date.16_4}}) + +* [16.3](release-notes-v16.3.md) (2024-06-06) + +* [16.2](release-notes-v16.2.md) (2024-02-27) + +* [16.1 Update](release-notes-v16.1.upd.md) (2024-01-18) + +## 2023 + +* [16.1](release-notes-v16.1.md) (2023-11-29) + +* [16.0 Update](release-notes-v16.0.upd.md) (2023-11-02) + +* [16](release-notes-v16.0.md) (2023-09-19) diff --git a/mkdocs.yml b/mkdocs.yml index d8f008506..b839a860f 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -83,19 +83,22 @@ nav: - migration.md - Troubleshooting guide: troubleshooting.md - Uninstall: uninstalling.md - - Release Notes: - - "Release notes index": "release-notes.md" - - release-notes-v16.9.upd.md - - release-notes-v16.9.md - - release-notes-v16.8.md - - release-notes-v16.6.md - - release-notes-v16.4.md - - release-notes-v16.3.md - - release-notes-v16.2.md - - release-notes-v16.1.upd.md - - release-notes-v16.1.md - - release-notes-v16.0.upd.md - - release-notes-v16.0.md + - Release notes: + - "Release notes index": release-notes/release-notes.md + - "2025": + - "16.9 Update": release-notes/release-notes-v16.9.upd.md + - "16.9": release-notes/release-notes-v16.9.md + - "16.8": release-notes/release-notes-v16.8.md + - "2024 (versions 16.6 - 16.1 Update)": + - "16.6": release-notes/release-notes-v16.6.md + - "16.4": release-notes/release-notes-v16.4.md + - "16.3": release-notes/release-notes-v16.3.md + - "16.2": release-notes/release-notes-v16.2.md + - "16.1 Update": release-notes/release-notes-v16.1.upd.md + - "2023 (versions 16.1 - 16.0)": + - "16.1": release-notes/release-notes-v16.1.md + - "16.0 Update": release-notes/release-notes-v16.0.upd.md + - "16.0": release-notes/release-notes-v16.0.md - Reference: - Telemetry: telemetry.md - 
Licensing: licensing.md From 0c3a1bc0cebc404e034ddc44b832a8118db6834d Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Wed, 16 Jul 2025 12:53:19 +0300 Subject: [PATCH 40/41] fix links post rn folder creation (#823) --- docs/index.md | 7 +------ docs/release-notes/release-notes-v16.0.md | 4 ++-- docs/release-notes/release-notes-v16.0.upd.md | 7 +++---- docs/release-notes/release-notes-v16.1.md | 8 +++----- docs/release-notes/release-notes-v16.1.upd.md | 2 +- docs/release-notes/release-notes-v16.2.md | 6 ++---- docs/release-notes/release-notes-v16.3.md | 6 ++---- docs/release-notes/release-notes-v16.4.md | 4 ++-- docs/release-notes/release-notes-v16.6.md | 14 +++++++------- docs/release-notes/release-notes-v16.8.md | 13 ++++++------- docs/release-notes/release-notes-v16.9.md | 4 ++-- docs/release-notes/release-notes-v16.9.upd.md | 2 +- 12 files changed, 32 insertions(+), 45 deletions(-) diff --git a/docs/index.md b/docs/index.md index ffaa24341..b468921a9 100644 --- a/docs/index.md +++ b/docs/index.md @@ -47,11 +47,6 @@ Our comprehensive resources will help you overcome challenges, from everyday iss Learn about the releases and changes in the Distribution. 
-[Release notes :material-arrow-right:]({{release}}.md){.md-button} +[Release notes :material-arrow-right:](release-notes/{{release}}.md){.md-button} - - - - - diff --git a/docs/release-notes/release-notes-v16.0.md b/docs/release-notes/release-notes-v16.0.md index 844c5a685..8e0889a3e 100644 --- a/docs/release-notes/release-notes-v16.0.md +++ b/docs/release-notes/release-notes-v16.0.md @@ -1,7 +1,7 @@ # Percona Distribution for PostgreSQL 16.0 (2023-09-19) -[Installation](installing.md){.md-button} -[Upgrade](major-upgrade.md){.md-button} +[Installation](../installing.md){.md-button} +[Upgrade](../major-upgrade.md){.md-button} We are pleased to announce the launch of Percona Distribution for PostgreSQL 16.0 - a solution with the collection of tools from PostgreSQL community that are tested to work together and serve to assist you in deploying and managing PostgreSQL. The aim of Percona Distribution for PostgreSQL is to address the operational issues like High-Availability, Disaster Recovery, Security, Observability, Spatial data handling, Performance and Scalability and others that enterprises are facing. diff --git a/docs/release-notes/release-notes-v16.0.upd.md b/docs/release-notes/release-notes-v16.0.upd.md index 61f2b8415..80e60b0a1 100644 --- a/docs/release-notes/release-notes-v16.0.upd.md +++ b/docs/release-notes/release-notes-v16.0.upd.md @@ -1,7 +1,6 @@ # Percona Distribution for PostgreSQL 16.0 Update (2023-11-02) -[Installation](installing.md){.md-button} -[Upgrade](major-upgrade.md){.md-button} +[Installation](../installing.md){.md-button} +[Upgrade](../major-upgrade.md){.md-button} - -This update to the release of Percona Distribution for PostgreSQL 16.0 includes the Docker images for x86_64 architectures. It aims to simplify the developers' experience with the Distribution. Refer to the [Docker guide](docker.md) for how to run Percona Distribution for PostgreSQL in Docker. 
\ No newline at end of file +This update to the release of Percona Distribution for PostgreSQL 16.0 includes the Docker images for x86_64 architectures. It aims to simplify the developers' experience with the Distribution. Refer to the [Docker guide](../docker.md) for how to run Percona Distribution for PostgreSQL in Docker. diff --git a/docs/release-notes/release-notes-v16.1.md b/docs/release-notes/release-notes-v16.1.md index 900f15e4d..a147948cd 100644 --- a/docs/release-notes/release-notes-v16.1.md +++ b/docs/release-notes/release-notes-v16.1.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.1 (2023-11-29) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" @@ -8,7 +8,7 @@ This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.1 ## Release Highlights -* Telemetry is now enabled in Percona Distribution for PostgreSQL to fill in the gaps in our understanding of how you use it and help us improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Find more information in the [Telemetry on Percona Distribution for PostgreSQL](telemetry.md) document. +* Telemetry is now enabled in Percona Distribution for PostgreSQL to fill in the gaps in our understanding of how you use it and help us improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Find more information in the [Telemetry on Percona Distribution for PostgreSQL](../telemetry.md) document. 
* The `percona-postgis33` and `percona-pgaudit` packages on YUM-based operating systems are renamed `percona-postgis33_{{pgversion}}` and `percona-pgaudit{{pgversion}}` respectively @@ -41,8 +41,6 @@ Percona Distribution for PostgreSQL also includes the following packages: | RHEL 8 | `etcd` | 3.3.11 | A consistent, distributed key-value store| | | `python3-etcd`| 0.4.5 | A Python client for etcd | - - Percona Distribution for PostgreSQL is also shipped with the [libpq :octicons-link-external-16:](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of library functions that allow client programs to pass queries to the PostgreSQL -backend server and to receive the results of these queries." +backend server and to receive the results of these queries." diff --git a/docs/release-notes/release-notes-v16.1.upd.md b/docs/release-notes/release-notes-v16.1.upd.md index 6f6788c68..7bddf0ef2 100644 --- a/docs/release-notes/release-notes-v16.1.upd.md +++ b/docs/release-notes/release-notes-v16.1.upd.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.1 Update (2024-01-18) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" diff --git a/docs/release-notes/release-notes-v16.2.md b/docs/release-notes/release-notes-v16.2.md index d8ce16b83..152ade38f 100644 --- a/docs/release-notes/release-notes-v16.2.md +++ b/docs/release-notes/release-notes-v16.2.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.2 (2024-02-27) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" @@ -40,8 +40,6 @@ Percona Distribution for PostgreSQL also includes the following packages: | RHEL 8 and derivatives| `etcd` | 3.5.12 | A consistent, distributed key-value store| | | `python3-etcd`| 0.4.5 | A Python client for etcd | - - Percona Distribution for PostgreSQL is also shipped with the [libpq 
:octicons-link-external-16:](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of library functions that allow client programs to pass queries to the PostgreSQL -backend server and to receive the results of these queries." +backend server and to receive the results of these queries." diff --git a/docs/release-notes/release-notes-v16.3.md b/docs/release-notes/release-notes-v16.3.md index 790949c67..deff6a4bb 100644 --- a/docs/release-notes/release-notes-v16.3.md +++ b/docs/release-notes/release-notes-v16.3.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.3 (2024-06-06) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" @@ -41,8 +41,6 @@ Percona Distribution for PostgreSQL Red Hat Enterprise Linux 8 and compatible de * `llvm` 16.0.6 packages. This fixes compatibility issues with LLVM from upstream. * supplemental `python3-etcd` packages, which can be used for setting up Patroni clusters. - - Percona Distribution for PostgreSQL is also shipped with the [libpq](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of library functions that allow client programs to pass queries to the PostgreSQL -backend server and to receive the results of these queries." +backend server and to receive the results of these queries." 
diff --git a/docs/release-notes/release-notes-v16.4.md b/docs/release-notes/release-notes-v16.4.md index a8de8257e..09756d608 100644 --- a/docs/release-notes/release-notes-v16.4.md +++ b/docs/release-notes/release-notes-v16.4.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.4 ({{date.16_4}}) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" @@ -20,7 +20,7 @@ This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.4 * Debian 11 * Debian 12 -* Percona Distribution for PostgreSQL includes the enhanced telemetry feature and provides comprehensive information about how telemetry works, its components and metrics as well as updated methods how to disable telemetry. Read more in [Telemetry and data collection](telemetry.md) +* Percona Distribution for PostgreSQL includes the enhanced telemetry feature and provides comprehensive information about how telemetry works, its components and metrics as well as updated methods how to disable telemetry. Read more in [Telemetry and data collection](../telemetry.md) * Percona Distribution for PostgreSQL includes `pg_stat_monitor` 2.1.0 that provides the ability to [disable the application name tracking for a query](https://docs.percona.com/pg-stat-monitor/configuration.html#pg_stat_monitorpgsm_track_application_names). This way you can optimize query execution performance. diff --git a/docs/release-notes/release-notes-v16.6.md b/docs/release-notes/release-notes-v16.6.md index 26cb3e934..c94d9a3f9 100644 --- a/docs/release-notes/release-notes-v16.6.md +++ b/docs/release-notes/release-notes-v16.6.md @@ -1,16 +1,16 @@ # Percona Distribution for PostgreSQL 16.6 ({{date.16_6}}) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" -This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.6](https://www.postgresql.org/docs/16/release-16-6.html). 
+This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.6](https://www.postgresql.org/docs/16/release-16-6.html). ## Release Highlights -* This release includes fixes for [CVE-2024-10978](https://www.postgresql.org/support/security/CVE-2024-10978/) and for certain PostgreSQL extensions that break because they depend on the modified Application Binary Interface (ABI). These regressions were introduced in PostgreSQL 17.1, 16.5, 15.9, 14.14, 13.17, and 12.21. For this reason, the release of Percona Distribution for PostgreSQL 16.5 has been skipped. +* This release includes fixes for [CVE-2024-10978](https://www.postgresql.org/support/security/CVE-2024-10978/) and for certain PostgreSQL extensions that break because they depend on the modified Application Binary Interface (ABI). These regressions were introduced in PostgreSQL 17.1, 16.5, 15.9, 14.14, 13.17, and 12.21. For this reason, the release of Percona Distribution for PostgreSQL 16.5 has been skipped. -* Percona Distribution for PostgreSQL includes [`pgvector` :octicons-link-external-16:](https://github.com/pgvector/pgvector) - an open source extension that enables you to use PostgreSQL as a vector database. It brings vector data type and vector operations (mainly similarity search) to PosgreSQL. You can install `pgvector` from repositories, tarballs, and it is also available as a Docker image. +* Percona Distribution for PostgreSQL includes [`pgvector` :octicons-link-external-16:](https://github.com/pgvector/pgvector) - an open source extension that enables you to use PostgreSQL as a vector database. It brings vector data type and vector operations (mainly similarity search) to PosgreSQL. You can install `pgvector` from repositories, tarballs, and it is also available as a Docker image. * Percona Distribution for PostgreSQL now statically links `llvmjit.so` library for Red Hat Enterprise Linux 8 and 9 and compatible derivatives. 
This resolves the conflict between the LLVM version required by Percona Distribution for PostgreSQL and the one supplied with the operating system. This also enables you to use the LLVM modules supplied with the operating system for other software you require. @@ -39,8 +39,8 @@ The following is the list of extensions available in Percona Distribution for Po | [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common)| 266 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time.| | [wal2json](https://github.com/eulerto/wal2json) |2.6 | a PostgreSQL logical decoding JSON output plugin| -For Red Hat Enterprise Linux 8 and 9 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters. - +For Red Hat Enterprise Linux 8 and 9 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters. + Percona Distribution for PostgreSQL is also shipped with the [libpq](https://www.postgresql.org/docs/16/libpq.html) library. It contains "a set of library functions that allow client programs to pass queries to the PostgreSQL -backend server and to receive the results of these queries." +backend server and to receive the results of these queries." 
diff --git a/docs/release-notes/release-notes-v16.8.md b/docs/release-notes/release-notes-v16.8.md index 15541f804..2c2a27c95 100644 --- a/docs/release-notes/release-notes-v16.8.md +++ b/docs/release-notes/release-notes-v16.8.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.8 ({{date.16_8}}) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" @@ -14,15 +14,15 @@ This release fixes [CVE-2025-1094](https://www.postgresql.org/support/security/C * Percona Distribution for PostgreSQL Docker image is now based on Universal Base Image (UBI) version 9, which includes the latest security fixes. This makes the image compliant with the Red Hat certification and ensures the seamless work of containers on Red Hat OpenShift Container Platform. -* You no longer have to specify the `{{dockertag}}-multi` tag when you run Percona Distribution for PostgreSQL in Docker. Instead, use the `percona/percona-distribution-postgresql:{{dockertag}}`. Docker automatically identifies the architecture of your operating system and pulls the corresponding image. Refer to [Run in Docker](docker.md) for how to get started. +* You no longer have to specify the `{{dockertag}}-multi` tag when you run Percona Distribution for PostgreSQL in Docker. Instead, use the `percona/percona-distribution-postgresql:{{dockertag}}`. Docker automatically identifies the architecture of your operating system and pulls the corresponding image. Refer to [Run in Docker](../docker.md) for how to get started. ### PostGIS is included into tarballs -We have extended Percona Distribution for PostgreSQL tarballs with PostGIS - an open-source extension to handle spacial data. This way you can install and run PostgreSQL as a geospatial database on hosts without a direct access to the Internet. 
Learn more about [installing from tarballs](tarball.md) and [Spacial data manipulation](solutions/postgis.md) +We have extended Percona Distribution for PostgreSQL tarballs with PostGIS - an open-source extension to handle spacial data. This way you can install and run PostgreSQL as a geospatial database on hosts without a direct access to the Internet. Learn more about [installing from tarballs](../tarball.md) and [Spacial data manipulation](../solutions/postgis.md) ### Deprecation of meta packages -[Meta-packages for Percona Distribution for PostgreSQL](repo-overview.md#repository-contents) are deprecated and will be removed in future releases. +[Meta-packages for Percona Distribution for PostgreSQL](../repo-overview.md#repository-contents) are deprecated and will be removed in future releases. ## Supplied third-party extensions @@ -49,9 +49,8 @@ The following is the list of extensions available in Percona Distribution for Po | [PostgreSQL Commons](https://salsa.debian.org/postgresql/postgresql-common) | 267 | PostgreSQL database-cluster manager. It provides a structure under which multiple versions of PostgreSQL may be installed and/or multiple clusters maintained at one time. | | [wal2json](https://github.com/eulerto/wal2json) | 2.6 | a PostgreSQL logical decoding JSON output plugin | +For Red Hat Enterprise Linux 8 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters. -For Red Hat Enterprise Linux 8 and compatible derivatives, Percona Distribution for PostgreSQL also includes the supplemental `python3-etcd` 0.4.5 packages, which are used for setting up Patroni clusters. - Percona Distribution for PostgreSQL is also shipped with the [libpq](https://www.postgresql.org/docs/16/libpq.html) library. 
It contains "a set of library functions that allow client programs to pass queries to the PostgreSQL -backend server and to receive the results of these queries." +backend server and to receive the results of these queries." diff --git a/docs/release-notes/release-notes-v16.9.md b/docs/release-notes/release-notes-v16.9.md index 32e799920..65ed3d161 100644 --- a/docs/release-notes/release-notes-v16.9.md +++ b/docs/release-notes/release-notes-v16.9.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.9 ({{date.16_9}}) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" @@ -10,7 +10,7 @@ This release of Percona Distribution for PostgreSQL is based on [PostgreSQL 16.9 ### Updated Major upgrade topic in documentation -The [Upgrading Percona Distribution for PostgreSQL from 15 to 16](major-upgrade.md) guide has been updated with revised steps for the [On Debian and Ubuntu using `apt`](major-upgrade.md/#on-debian-and-ubuntu-using-apt) section, improving clarity and reliability of the upgrade process. +The [Upgrading Percona Distribution for PostgreSQL from 15 to 16](../major-upgrade.md) guide has been updated with revised steps for the [On Debian and Ubuntu using `apt`](../major-upgrade.md/#on-debian-and-ubuntu-using-apt) section, improving clarity and reliability of the upgrade process. 
## Supplied third-party extensions diff --git a/docs/release-notes/release-notes-v16.9.upd.md b/docs/release-notes/release-notes-v16.9.upd.md index 3b77be174..b44578904 100644 --- a/docs/release-notes/release-notes-v16.9.upd.md +++ b/docs/release-notes/release-notes-v16.9.upd.md @@ -1,6 +1,6 @@ # Percona Distribution for PostgreSQL 16.9 Update ({{date.16_9_1}}) -[Installation](installing.md){.md-button} +[Installation](../installing.md){.md-button} --8<-- "release-notes-intro.md" From 60b1bf7fcf184381c0ad8156ffbb33ab79e1b79a Mon Sep 17 00:00:00 2001 From: Dragos Andriciuc Date: Fri, 25 Jul 2025 14:28:24 +0300 Subject: [PATCH 41/41] add updated steps for postgis (#831) --- docs/yum.md | 136 +++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 123 insertions(+), 13 deletions(-) diff --git a/docs/yum.md b/docs/yum.md index 2772af85b..3d7b60409 100644 --- a/docs/yum.md +++ b/docs/yum.md @@ -97,43 +97,95 @@ The following are commands for Red Hat Enterprise Linux 9 and derivatives. For R $ sudo dnf config-manager --set-enabled ol9_codeready_builder ``` -### For PostGIS +### For PostGIS For Red Hat Enterprise Linux 8 and derivatives, replace the operating system version in the following commands accordingly. +=== "RHEL 8" + + Run the following commands: + {.power-number} + + 1. Install DNF plugin utilities + + ```{.bash data-prompt="$"} + $ sudo dnf install dnf-plugins-core + ``` + + 2. Install the EPEL repository + + ```{.bash data-prompt="$"} + $ sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm + ``` + + 3. Enable the CodeReady Builder repository to resolve dependency conflicts + + ```{.bash data-prompt="$"} + $ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-8-rhui-rpms + ``` + + 4. Disable the default PostgreSQL module + + ```{.bash data-prompt="$"} + $ sudo dnf module disable postgresql + ``` + === "RHEL 9" Run the following commands: {.power-number} - 1. Install `epel` repository + 1. 
Install DNF plugin utilities ```{.bash data-prompt="$"} - $ sudo yum install epel-release + $ sudo dnf install dnf-plugins-core ``` - 2. Enable the codeready builder repository to resolve dependencies conflict. + 2. Install the EPEL repository ```{.bash data-prompt="$"} - $ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-9-x86_64-rpms + $ sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm ``` + + 3. Enable the CodeReady Builder repository to resolve dependency conflicts -=== "Rocky Linux 9" + ```{.bash data-prompt="$"} + $ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-9-rhui-rpms + ``` + +=== "Oracle Linux 8" Run the following commands: {.power-number} - 1. Install `epel` repository + 1. Install the EPEL repository ```{.bash data-prompt="$"} - $ sudo yum install epel-release + $ sudo dnf install -y epel-release ``` - 2. Enable the codeready builder repository to resolve dependencies conflict. + 2. Install DNF plugin utilities ```{.bash data-prompt="$"} $ sudo dnf install dnf-plugins-core - $ sudo dnf config-manager --set-enabled crb + ``` + + 3. Enable the CodeReady Builder repository to resolve dependency conflicts + + ```{.bash data-prompt="$"} + $ sudo dnf config-manager --set-enabled ol8_codeready_builder + ``` + + 4. (Alternative) Install the latest EPEL release + + ```{.bash data-prompt="$"} + $ sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm + ``` + + 5. Disable the default PostgreSQL module + + ```{.bash data-prompt="$"} + $ sudo dnf module disable postgresql ``` === "Oracle Linux 9" @@ -141,18 +193,76 @@ For Red Hat Enterprise Linux 8 and derivatives, replace the operating system ver Run the following commands: {.power-number} - 1. Install `epel` repository + 1. Install the EPEL repository ```{.bash data-prompt="$"} - $ sudo yum install epel-release + $ sudo dnf install -y epel-release ``` - 2. 
Enable the codeready builder repository to resolve dependencies conflict. + 2. Install DNF plugin utilities + + ```{.bash data-prompt="$"} + $ sudo dnf install dnf-plugins-core + ``` + + 3. Enable the CodeReady Builder repository to resolve dependency conflicts ```{.bash data-prompt="$"} $ sudo dnf config-manager --set-enabled ol9_codeready_builder ``` +=== "Rocky Linux 8" + + Run the following commands: + {.power-number} + + 1. Install the EPEL release package + + ```{.bash data-prompt="$"} + $ sudo dnf install -y epel-release + ``` + + 2. Install DNF plugin utilities + + ```{.bash data-prompt="$"} + $ sudo dnf install dnf-plugins-core + ``` + + 3. Enable the PowerTools repository + + ```{.bash data-prompt="$"} + $ sudo dnf config-manager --set-enabled powertools + ``` + + 4. Disable the default PostgreSQL module + + ```{.bash data-prompt="$"} + $ sudo dnf module disable postgresql + ``` + +=== "Rocky Linux 9" + + Run the following commands: + {.power-number} + + 1. Install the EPEL repository + + ```{.bash data-prompt="$"} + $ sudo dnf install -y epel-release + ``` + + 2. Install DNF plugin utilities + + ```{.bash data-prompt="$"} + $ sudo dnf install dnf-plugins-core + ``` + + 3. Enable the CodeReady Builder repository to resolve dependency conflicts + + ```{.bash data-prompt="$"} + $ sudo dnf config-manager --set-enabled crb + ``` + === "RHEL UBI 9" Run the following commands: