diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/migration/dha_bulk_migration.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/migration/dha_bulk_migration.mdx index 8b7c43c8ec6..f4226841d7a 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/migration/dha_bulk_migration.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/migration/dha_bulk_migration.mdx @@ -157,7 +157,7 @@ cluster: Save it as `pgd-cli-config.yml`. -See also [Installing PGD CLI](/pgd/latest/cli/installing/). +See also [Installing PGD CLI](/pgd/latest/reference/cli/installing/). #### Installing Migration Toolkit diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx index 0b03d06aed0..64ce1358436 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx @@ -4,4 +4,4 @@ navTitle: Distributed High Availability description: The PGD defaults and commands for Distributed high availability on EDB Postgres AI Cloud Service. --- -When running a distributed high-availability cluster on Cloud Service, you can use the [PGD CLI](/pgd/latest/cli/) to manage cluster operations. Examples of these operations include switching over write leaders, performing cluster health checks, and viewing various details about nodes, groups, or other aspects of the cluster. +When running a distributed high-availability cluster on Cloud Service, you can use the [PGD CLI](/pgd/latest/reference/cli/) to manage cluster operations. Examples of these operations include switching over write leaders, performing cluster health checks, and viewing various details about nodes, groups, or other aspects of the cluster. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx index 5aa4bbbcb22..07affb881ab 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx @@ -6,12 +6,12 @@ redirects: - /biganimal/latest/using_cluster/pgd_cli_ba/ #generated for BigAnimal URL path removal branch --- -When running a distributed high-availability cluster on Cloud Service, you can use the [PGD CLI](/pgd/latest/cli/) to manage cluster operations. +When running a distributed high-availability cluster on Cloud Service, you can use the [PGD CLI](/pgd/latest/reference/cli/) to manage cluster operations. Examples of these operations include switching over write leaders, performing cluster health checks, and viewing various details about nodes, groups, or other aspects of the cluster. 
## Installing the PGD CLI

-To [install the PGD CLI](/pgd/latest/cli/installing/), for Debian and Ubuntu machines, replace `` with your EDB subscription token in the following command:
+To [install the PGD CLI](/pgd/latest/reference/cli/installing/), for Debian and Ubuntu machines, replace `` with your EDB subscription token in the following command:

```bash
curl -1sLf 'https://downloads.enterprisedb.com//postgres_distributed/setup.deb.sh' | sudo -E bash
@@ -31,11 +31,11 @@ sudo yum install edb-pgd5-cli

-To connect to your distributed high-availability Cloud Service cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/cli/discover_connections/). From your Console:
+To connect to your distributed high-availability Cloud Service cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/reference/cli/discover_connections/). From your Console:

-1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view.
-2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**.
-3. Select your cluster.
-4. In the view of your cluster, select the **Connect** tab.
-5. Copy the read/write URI from the connection info. This is your connection string.
+1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view.
+2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**.
+3. Select your cluster.
+4. In the view of your cluster, select the **Connect** tab.
+5. Copy the read/write URI from the connection info. This is your connection string.

### Using the PGD CLI with your database connection string

@@ -109,4 +109,4 @@ __OUTPUT__
Command executed successfully
```

-See the [PGD CLI command reference](/pgd/latest/cli/command_ref/) for the full range of PGD CLI commands and their descriptions.
+See the [PGD CLI command reference](/pgd/latest/reference/cli/command_ref/) for the full range of PGD CLI commands and their descriptions.
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_defaults_ba.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_defaults_ba.mdx
index 648dd3f7594..cf68a25f1fb 100644
--- a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_defaults_ba.mdx
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_defaults_ba.mdx
@@ -8,9 +8,9 @@ redirects:

## Commit scope

-[Commit scopes](/pgd/latest/commit-scopes/commit-scopes/) in PGD are a set of rules that describe the behavior of the system as transactions are committed. Because they define how transactions are replicated across a distributed database, they have an effect on consistency, durability, and performance.
+[Commit scopes](/pgd/latest/reference/commit-scopes/commit-scopes/) in PGD are a set of rules that describe the behavior of the system as transactions are committed. Because they define how transactions are replicated across a distributed database, they have an effect on consistency, durability, and performance.

-The actual behavior depends on the kind of commit scope a commit scope's rule uses: [Group Commit](/pgd/latest/commit-scopes/group-commit/), [Commit At Most Once](/pgd/latest/commit-scopes/camo/), [Lag Control](/pgd/latest/commit-scopes/lag-control/), [Synchronous Commit](/pgd/latest/commit-scopes/synchronous_commit/), or a combination of these.
+The actual behavior depends on the kind of commit scope a commit scope's rule uses: [Group Commit](/pgd/latest/reference/commit-scopes/group-commit/), [Commit At Most Once](/pgd/latest/reference/commit-scopes/camo/), [Lag Control](/pgd/latest/reference/commit-scopes/lag-control/), [Synchronous Commit](/pgd/latest/reference/commit-scopes/synchronous_commit/), or a combination of these. This flexibility means that selecting a balanced combination of rules can take time. To speed up deployment, Cloud Service's PGD has a preset selection of commit scopes for typical user requirements. These presets don't prevent you from creating and applying your own commit scopes as needed. @@ -137,4 +137,4 @@ SELECT bdr.alter_node_group_option( Commit scopes can be applied per transaction. In that case, they override a Cloud Service preset. !!! -For more information, see [Commit scopes](/pgd/latest/commit-scopes/commit-scopes/) and [Commit scope rules](/pgd/latest/commit-scopes/commit-scope-rules/) in the PGD documentation. +For more information, see [Commit scopes](/pgd/latest/reference/commit-scopes/commit-scopes/) and [Commit scope rules](/pgd/latest/reference/commit-scopes/commit-scope-rules/) in the PGD documentation. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability.mdx index c9610372858..2e68f2d7660 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability.mdx @@ -11,7 +11,7 @@ This configuration provides a true active-active solution as each data group is Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions. -Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes in each group is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/commit-scopes/commit-scopes/). +Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes in each group is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/reference/commit-scopes/commit-scopes/). [PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. 
Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover. @@ -19,7 +19,7 @@ The witness node/witness group doesn't host data but exists for management purpo !!!Note - Operations against a distributed high-availability cluster leverage the [EDB Postgres Distributed set-leader](/pgd/latest/cli/command_ref/group/set-leader) feature, which provides subsecond interruptions during planned lifecycle operations. + Operations against a distributed high-availability cluster leverage the [EDB Postgres Distributed set-leader](/pgd/latest/reference/cli/command_ref/group/set-leader) feature, which provides subsecond interruptions during planned lifecycle operations. ## Single data location diff --git a/advocacy_docs/edb-postgres-ai/overview/latest-release-news/2025q1release.mdx b/advocacy_docs/edb-postgres-ai/overview/latest-release-news/2025q1release.mdx index f660c0a4681..5b08ce87ca1 100644 --- a/advocacy_docs/edb-postgres-ai/overview/latest-release-news/2025q1release.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/latest-release-news/2025q1release.mdx @@ -67,11 +67,11 @@ Refer to the [Rubrik Partner Page](https://www.enterprisedb.com/partners/rubrik EDB Postgres AI leads the way for applications that require high availability for critical business continuity. With [EDB Postgres Distributed (PGD)](https://www.enterprisedb.com/products/edb-postgres-distributed), customers can build [geo-distributed](https://www.enterprisedb.com/use-case/geo-distributed) architectures that ensure continuous availability, improve performance by placing data closer to users, and enable safe, zero-downtime software deployments. -The latest version, [PGD 5.7.0](https://www.enterprisedb.com/docs/pgd/latest/), is now generally available. It delivers up to 99.999% availability with improved reliability and streamlined operations through enhanced integration with [third party change data capture (CDC) functionality](https://www.enterprisedb.com/docs/pgd/latest/cdc-failover/). This allows seamless failover of logical slots for common CDC plugins like `test_decoding` and `pgoutput`, eliminating the need for third party subscribers to reseed tables during lead primary changes and ensuring continuous data replication. +The latest version, [PGD 5.7.0](https://www.enterprisedb.com/docs/pgd/latest/), is now generally available. It delivers up to 99.999% availability with improved reliability and streamlined operations through enhanced integration with [third party change data capture (CDC) functionality](https://www.enterprisedb.com/docs/pgd/5.7/cdc-failover/). This allows seamless failover of logical slots for common CDC plugins like `test_decoding` and `pgoutput`, eliminating the need for third party subscribers to reseed tables during lead primary changes and ensuring continuous data replication. -Additionally, the new [`Assess`](https://www.enterprisedb.com/docs/pgd/latest/cli/command_ref/assess/) command in the PGD CLI ensures seamless migrations to PGD. The tool proactively identifies PostgreSQL incompatibilities before upgrades, especially those impacting logical replication, so you can address them before upgrading to PGD. +Additionally, the new `Assess` command in the PGD CLI ensures seamless migrations to PGD. The tool proactively identifies PostgreSQL incompatibilities before upgrades, especially those impacting logical replication, so you can address them before upgrading to PGD. 
-PGD 5.7.0 also introduces the [`pgd node upgrade`](https://www.enterprisedb.com/docs/pgd/latest/cli/command_ref/node/upgrade/) command, which enables upgrades to the latest versions of PGD and PostgreSQL with a single command, limiting the manual work required for maintenance and reducing complexity and potential errors. These updates collectively enhance the robustness and usability of PGD to provide users with a more reliable and efficient data management experience. +PGD 5.7.0 also introduces the `pgd node upgrade` command, which enables upgrades to the latest versions of PGD and PostgreSQL with a single command, limiting the manual work required for maintenance and reducing complexity and potential errors. These updates collectively enhance the robustness and usability of PGD to provide users with a more reliable and efficient data management experience. Learn more about how [PGD](https://www.enterprisedb.com/products/edb-postgres-distributed) enables high availability for enterprise applications. diff --git a/advocacy_docs/edb-postgres-ai/overview/latest-release-news/index.mdx b/advocacy_docs/edb-postgres-ai/overview/latest-release-news/index.mdx index 6eba5b63541..addc73752d9 100644 --- a/advocacy_docs/edb-postgres-ai/overview/latest-release-news/index.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/latest-release-news/index.mdx @@ -73,11 +73,11 @@ Refer to the [Rubrik Partner Page](https://www.enterprisedb.com/partners/rubrik EDB Postgres AI leads the way for applications that require high availability for critical business continuity. With [EDB Postgres Distributed (PGD)](https://www.enterprisedb.com/products/edb-postgres-distributed), customers can build [geo-distributed](https://www.enterprisedb.com/use-case/geo-distributed) architectures that ensure continuous availability, improve performance by placing data closer to users, and enable safe, zero-downtime software deployments. -The latest version, [PGD 5.7.0](https://www.enterprisedb.com/docs/pgd/latest/), is now generally available. It delivers up to 99.999% availability with improved reliability and streamlined operations through enhanced integration with [third party change data capture (CDC) functionality](https://www.enterprisedb.com/docs/pgd/latest/cdc-failover/). This allows seamless failover of logical slots for common CDC plugins like `test_decoding` and `pgoutput`, eliminating the need for third party subscribers to reseed tables during lead primary changes and ensuring continuous data replication. +The latest version, [PGD 5.7.0](https://www.enterprisedb.com/docs/pgd/5.7/), is now generally available. It delivers up to 99.999% availability with improved reliability and streamlined operations through enhanced integration with third party change data capture (CDC) functionality. This allows seamless failover of logical slots for common CDC plugins like `test_decoding` and `pgoutput`, eliminating the need for third party subscribers to reseed tables during lead primary changes and ensuring continuous data replication. -Additionally, the new [`Assess`](https://www.enterprisedb.com/docs/pgd/latest/cli/command_ref/assess/) command in the PGD CLI ensures seamless migrations to PGD. The tool proactively identifies PostgreSQL incompatibilities before upgrades, especially those impacting logical replication, so you can address them before upgrading to PGD. +Additionally, the new `Assess` command in the PGD CLI ensures seamless migrations to PGD. 
The tool proactively identifies PostgreSQL incompatibilities before upgrades, especially those impacting logical replication, so you can address them before upgrading to PGD. -PGD 5.7.0 also introduces the [`pgd node upgrade`](https://www.enterprisedb.com/docs/pgd/latest/cli/command_ref/node/upgrade/) command, which enables upgrades to the latest versions of PGD and PostgreSQL with a single command, limiting the manual work required for maintenance and reducing complexity and potential errors. These updates collectively enhance the robustness and usability of PGD to provide users with a more reliable and efficient data management experience. +PGD 5.7.0 also introduces the `pgd node upgrade` command, which enables upgrades to the latest versions of PGD and PostgreSQL with a single command, limiting the manual work required for maintenance and reducing complexity and potential errors. These updates collectively enhance the robustness and usability of PGD to provide users with a more reliable and efficient data management experience. Learn more about how [PGD](https://www.enterprisedb.com/products/edb-postgres-distributed) enables high availability for enterprise applications. diff --git a/gatsby-node.js b/gatsby-node.js index ce1942262cb..e09522d9563 100644 --- a/gatsby-node.js +++ b/gatsby-node.js @@ -108,6 +108,12 @@ exports.onCreateNode = async ({ }); } + if (node.extension === "sh") { + await makeFileNodePublic(node, createNodeId, actions, { + mimeType: "text/plain", + }); + } + // these are a template of sorts, used to generate a function index for a reference section // see tools/automation/generators/refbuilder/refbuilder.js for details if (node.absolutePath.endsWith("index.mdx.src")) { diff --git a/product_docs/docs/livecompare/2/bdr_support.mdx b/product_docs/docs/livecompare/2/bdr_support.mdx index 8419b847762..4cc28b9fe9f 100644 --- a/product_docs/docs/livecompare/2/bdr_support.mdx +++ b/product_docs/docs/livecompare/2/bdr_support.mdx @@ -103,7 +103,7 @@ To enable pglogical metadata fetch instead of PGD, set `logical_replication_mode Using replication sets in PGD, you can configure specific tables to include in the PGD replication. You can also specify the nodes to receive data from these tables by configuring the node to subscribe to the replication set the table belongs to. This setting allows for different architectures such as PGD sharding and the use of PGD witness nodes. -A PGD witness is a regular PGD node that doesn't replicate any DML from other nodes. The purpose of the witness is to provide quorum in Raft Consensus voting. (For details on the PGD witness node, see [Witness nodes](/pgd/latest/nodes/witness_nodes/) in the PGD documentation.) Replication set configuration determines whether the witness replicates DDLs. This means that there are two types of PGD witnesses: +A PGD witness is a regular PGD node that doesn't replicate any DML from other nodes. The purpose of the witness is to provide quorum in Raft Consensus voting. (For details on the PGD witness node, see [Witness nodes](/pgd/latest/reference/nodes/witness_nodes/) in the PGD documentation.) Replication set configuration determines whether the witness replicates DDLs. 
This means that there are two types of PGD witnesses: - A completely empty node, without any data nor tables - A node that replicates DDL from other nodes, so it has empty tables diff --git a/product_docs/docs/livecompare/3/bdr_support.mdx b/product_docs/docs/livecompare/3/bdr_support.mdx index 8419b847762..4cc28b9fe9f 100644 --- a/product_docs/docs/livecompare/3/bdr_support.mdx +++ b/product_docs/docs/livecompare/3/bdr_support.mdx @@ -103,7 +103,7 @@ To enable pglogical metadata fetch instead of PGD, set `logical_replication_mode Using replication sets in PGD, you can configure specific tables to include in the PGD replication. You can also specify the nodes to receive data from these tables by configuring the node to subscribe to the replication set the table belongs to. This setting allows for different architectures such as PGD sharding and the use of PGD witness nodes. -A PGD witness is a regular PGD node that doesn't replicate any DML from other nodes. The purpose of the witness is to provide quorum in Raft Consensus voting. (For details on the PGD witness node, see [Witness nodes](/pgd/latest/nodes/witness_nodes/) in the PGD documentation.) Replication set configuration determines whether the witness replicates DDLs. This means that there are two types of PGD witnesses: +A PGD witness is a regular PGD node that doesn't replicate any DML from other nodes. The purpose of the witness is to provide quorum in Raft Consensus voting. (For details on the PGD witness node, see [Witness nodes](/pgd/latest/reference/nodes/witness_nodes/) in the PGD documentation.) Replication set configuration determines whether the witness replicates DDLs. This means that there are two types of PGD witnesses: - A completely empty node, without any data nor tables - A node that replicates DDL from other nodes, so it has empty tables diff --git a/product_docs/docs/pgd/3.6/index.mdx b/product_docs/docs/pgd/3.6/index.mdx index 379d661f8db..c6bc1f2e92c 100644 --- a/product_docs/docs/pgd/3.6/index.mdx +++ b/product_docs/docs/pgd/3.6/index.mdx @@ -24,7 +24,7 @@ Two different Postgres distributions can be used: - [EDB Postgres Extended Server](/pge/latest) - PostgreSQL compatible and optimized for replication What Postgres distribution and version is right for you depends on the features you need. -See the feature matrix in [Choosing a Postgres distribution](/pgd/latest/planning/choosing_server/) for detailed comparison. +See the feature matrix in [Choosing a Postgres distribution](/pgd/5.7/planning/choosing_server/) for detailed comparison. ## BDR diff --git a/product_docs/docs/pgd/5.6/appusage/timing.mdx b/product_docs/docs/pgd/5.6/appusage/timing.mdx index 62300ef3c8f..6515d73a0df 100644 --- a/product_docs/docs/pgd/5.6/appusage/timing.mdx +++ b/product_docs/docs/pgd/5.6/appusage/timing.mdx @@ -8,7 +8,7 @@ possible for a client connected to multiple PGD nodes or switching between them to read stale data. A [queue wait -function](/pgd/latest/reference/functions/#bdrwait_for_apply_queue) is provided +function](/pgd/5.6/reference/functions/#bdrwait_for_apply_queue) is provided for clients or proxies to prevent such stale reads. The synchronous replication features of Postgres are available to PGD as well. 
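As an aside on the queue wait function in the timing hunk above, here's a minimal sketch of how a client session might guard against stale reads after reconnecting to a different node. It assumes a peer node named `node_a` and an `orders` table, both illustrative and not part of the patch:

```sql
-- After switching to another PGD node, wait until all transactions
-- already received from the peer (the hypothetical node_a) are
-- applied locally, so this session doesn't observe stale data.
SELECT bdr.wait_for_apply_queue('node_a', NULL);

-- Reads from this point reflect everything node_a had shipped
-- at the time of the call.
SELECT * FROM orders WHERE order_id = 42;
```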
diff --git a/product_docs/docs/pgd/5.6/backup.mdx b/product_docs/docs/pgd/5.6/backup.mdx
index 60af885fca3..3c20928da55 100644
--- a/product_docs/docs/pgd/5.6/backup.mdx
+++ b/product_docs/docs/pgd/5.6/backup.mdx
@@ -233,7 +233,7 @@ of a single PGD node, optionally plus WAL archives:

To clean up leftover PGD metadata:

-1. Drop the PGD node using [`bdr.drop_node`](/pgd/latest/reference/functions-internal#bdrdrop_node).
+1. Drop the PGD node using [`bdr.drop_node`](/pgd/5.6/reference/functions-internal#bdrdrop_node).
2. Fully stop and restart PostgreSQL (important!).

#### Cleanup of replication origins
diff --git a/product_docs/docs/pgd/5.6/cli/installing/index.mdx b/product_docs/docs/pgd/5.6/cli/installing/index.mdx
index af78ef30dde..9fa0b2fd3db 100644
--- a/product_docs/docs/pgd/5.6/cli/installing/index.mdx
+++ b/product_docs/docs/pgd/5.6/cli/installing/index.mdx
@@ -2,7 +2,7 @@ title: "Installing PGD CLI"
navTitle: "Installing PGD CLI"
redirects:
- - /pgd/latest/cli/installing_cli
+ - /pgd/5.6/cli/installing_cli
deepToC: true
indexCards: simple
description: Installing the PGD CLI on various systems.
diff --git a/product_docs/docs/pgd/5.6/commit-scopes/camo.mdx b/product_docs/docs/pgd/5.6/commit-scopes/camo.mdx
index 63db329783b..346d39e6870 100644
--- a/product_docs/docs/pgd/5.6/commit-scopes/camo.mdx
+++ b/product_docs/docs/pgd/5.6/commit-scopes/camo.mdx
@@ -2,7 +2,7 @@ title: Commit At Most Once
navTitle: Commit At Most Once
redirects:
- - /pgd/latest/bdr/camo/
+ - /pgd/5.6/bdr/camo/
---

Commit scope kind: `CAMO`
@@ -43,7 +43,7 @@ To use CAMO, an application must issue an explicit `COMMIT` message as a separat

## Configuration

-See the[`CAMO`](/pgd/latest/reference/commit-scopes/#camo) commit scope reference for configuration parameters.
+See the [`CAMO`](/pgd/5.6/reference/commit-scopes/#camo) commit scope reference for configuration parameters.

## Confirmation

@@ -76,7 +76,7 @@ When the `DEGRADE ON ... TO ASYNC` clause is used in the commit scope, a node de
This doesn't allow COMMIT status to be retrieved, but it does let you choose availability over consistency. This mode can tolerate a single-node failure. In case both nodes of a CAMO pair fail, they might choose incongruent commit decisions to maintain availability, leading to data inconsistencies.

-For a CAMO partner to switch to ready, it needs to be connected, and the estimated catchup interval needs to drop below the `timeout` value of `TO ASYNC`. You can check the current readiness status of a CAMO partner with [`bdr.is_camo_partner_ready()`](/pgd/latest/reference/functions#bdris_camo_partner_ready), while [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) provides the current estimate of the catchup time.
+For a CAMO partner to switch to ready, it needs to be connected, and the estimated catchup interval needs to drop below the `timeout` value of `TO ASYNC`. You can check the current readiness status of a CAMO partner with [`bdr.is_camo_partner_ready()`](/pgd/5.6/reference/functions#bdris_camo_partner_ready), while [`bdr.node_replication_rates`](/pgd/5.6/reference/catalogs-visible#bdrnode_replication_rates) provides the current estimate of the catchup time.

The switch from CAMO-protected to asynchronous mode is only ever triggered by an actual CAMO transaction. This is true either because the commit exceeds the `timeout` value of `TO ASYNC` or, in case the CAMO partner is already known, disconnected at the time of commit.
This switch is independent of the estimated catchup interval. If the CAMO pair is configured to require the current node to be the write lead of a group as configured through the `enable_proxy_routing` node group option. See [Commit scopes](commit-scopes) for syntax. This can prevent a split brain situation due to an isolated node from switching to asynchronous mode. If `enable_proxy_routing` isn't set for the CAMO group, the origin node switches to asynchronous mode immediately.
@@ -85,7 +85,7 @@ The switch from asynchronous mode to CAMO mode depends on the CAMO partner
node, the CAMO partner further delays the switch back to CAMO protected mode.

Unlike during normal CAMO operation, in asynchronous mode there's no added commit overhead. This can be problematic, as it allows the node to continuously process more transactions than the CAMO pair can normally process. Even if the CAMO partner eventually reconnects and applies transactions, its lag only ever increases
-in such a situation, preventing reestablishing the CAMO protection. To artificially throttle transactional throughput, PGD provides the [`bdr.camo_local_mode_delay`](/pgd/latest/reference/pgd-settings#bdrcamo_local_mode_delay) setting, which allows you to delay a `COMMIT` in local mode by an arbitrary amount of time. We recommend measuring commit times in normal CAMO mode during expected workloads and configuring this delay accordingly. The default is 5 ms, which reflects a asynchronous network and a relatively quick CAMO partner response.
+in such a situation, preventing reestablishment of the CAMO protection. To artificially throttle transactional throughput, PGD provides the [`bdr.camo_local_mode_delay`](/pgd/5.6/reference/pgd-settings#bdrcamo_local_mode_delay) setting, which allows you to delay a `COMMIT` in local mode by an arbitrary amount of time. We recommend measuring commit times in normal CAMO mode during expected workloads and configuring this delay accordingly. The default is 5 ms, which reflects an asynchronous network and a relatively quick CAMO partner response.

Consider the choice of whether to allow asynchronous mode in view of the architecture and the availability requirements. The following examples provide some detail.

@@ -184,7 +184,7 @@ If it was a bad connection, then you can check on the CAMO partner node to see i

If you can't connect to the partner node, there's not a lot you can do. In this case, panic, or take similar actions.

-But if you can connect, you can use [`bdr.logical_transaction_status()`](/pgd/latest/reference/functions#bdrlogical_transaction_status) to find out how the transaction did. The code recorded the required values, node_id and xid (the transaction id), just before committing the transaction.
+But if you can connect, you can use [`bdr.logical_transaction_status()`](/pgd/5.6/reference/functions#bdrlogical_transaction_status) to find out how the transaction did. The code recorded the required values, node_id and xid (the transaction id), just before committing the transaction.

```
sql = "SELECT bdr.logical_transaction_status($node_id, $xid)";
@@ -224,24 +224,24 @@ must have at least the [bdr_application](../security/pgd-predefined-roles/#bdr_a
role assigned to them.
!!!

-The function [`bdr.is_camo_partner_connected()`](/pgd/latest/reference/functions#bdris_camo_partner_connected) allows checking the connection status of a CAMO partner node configured in pair mode. There currently is no equivalent for CAMO used with Eager Replication.
+The function [`bdr.is_camo_partner_connected()`](/pgd/5.6/reference/functions#bdris_camo_partner_connected) allows checking the connection status of a CAMO partner node configured in pair mode. There currently is no equivalent for CAMO used with Eager Replication.

-To check that the CAMO partner is ready, use the function [`bdr.is_camo_partner_ready`](/pgd/latest/reference/functions#bdris_camo_partner_ready). Underneath, this triggers the switch to and from local mode.
+To check that the CAMO partner is ready, use the function [`bdr.is_camo_partner_ready`](/pgd/5.6/reference/functions#bdris_camo_partner_ready). Underneath, this triggers the switch to and from local mode.

-To find out more about the configured CAMO partner, use [`bdr.get_configured_camo_partner()`](/pgd/latest/reference/functions#bdrget_configured_camo_partner). This function returns the local node's CAMO partner.
+To find out more about the configured CAMO partner, use [`bdr.get_configured_camo_partner()`](/pgd/5.6/reference/functions#bdrget_configured_camo_partner). This function returns the local node's CAMO partner.

You can wait on the CAMO partner to process the queue with the function
-[`bdr.wait_for_camo_partner_queue()`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue).
+[`bdr.wait_for_camo_partner_queue()`](/pgd/5.6/reference/functions#bdrwait_for_camo_partner_queue).
This function is a wrapper of
-[`bdr.wait_for_apply_queue`](/pgd/latest/reference/functions#bdrwait_for_apply_queue).
+[`bdr.wait_for_apply_queue`](/pgd/5.6/reference/functions#bdrwait_for_apply_queue).
The difference is that
-[`bdr.wait_for_camo_partner_queue()`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue)
+[`bdr.wait_for_camo_partner_queue()`](/pgd/5.6/reference/functions#bdrwait_for_camo_partner_queue)
defaults to querying the CAMO partner node. It returns an error if the
local node isn't part of a CAMO pair.

To check the status of a transaction that was being committed when the node
failed, the application must use the function
-[`bdr.logical_transaction_status()`](/pgd/latest/reference/functions#bdrlogical_transaction_status).
+[`bdr.logical_transaction_status()`](/pgd/5.6/reference/functions#bdrlogical_transaction_status).
You pass this function the node_id and transaction_id of the transaction you want to check on. With CAMO used in pair mode, you can use this function only on a node that's part of a CAMO pair. Along with Eager Replication, you can use it on all nodes.
diff --git a/product_docs/docs/pgd/5.6/commit-scopes/commit-scope-rules.mdx b/product_docs/docs/pgd/5.6/commit-scopes/commit-scope-rules.mdx
index e636ae25fbd..d3a88f7fd69 100644
--- a/product_docs/docs/pgd/5.6/commit-scopes/commit-scope-rules.mdx
+++ b/product_docs/docs/pgd/5.6/commit-scopes/commit-scope-rules.mdx
@@ -12,7 +12,7 @@ Each operation is made up of two or three parts: the commit scope group, an opti
commit_scope_group [ confirmation_level ] commit_scope_kind
```

-A full formal syntax diagram is available in the [Commit scopes](/pgd/latest/reference/commit-scopes/#commit-scope-syntax) reference.
+A full formal syntax diagram is available in the [Commit scopes](/pgd/5.6/reference/commit-scopes/#commit-scope-syntax) reference.

A typical commit scope rule, such as `ANY 2 (group) GROUP COMMIT`, can be broken down into its components. `ANY 2 (group)` is the commit scope group specifying, for the rule, which nodes need to respond and confirm they processed the transaction.
In this example, any two nodes from the named group must confirm.
diff --git a/product_docs/docs/pgd/5.6/commit-scopes/degrading.mdx b/product_docs/docs/pgd/5.6/commit-scopes/degrading.mdx
index 0c3980b09fc..3bdd2812ba5 100644
--- a/product_docs/docs/pgd/5.6/commit-scopes/degrading.mdx
+++ b/product_docs/docs/pgd/5.6/commit-scopes/degrading.mdx
@@ -22,7 +22,7 @@ Once during the commit, while the commit being processed is waiting for response

This mechanism alone is insufficient for the intended behavior, as this alone would mean that every transaction—even those that were certain to degrade due to connectivity issues—must wait for the timeout to expire before degraded mode kicks in, which would severely affect performance in such degrading-cluster scenarios.

-To avoid this, the PGD manager process also periodically (every 5s) checks the connectivity and apply rate (the one in [bdr.node_replication_rates](/pgd/latest/reference/catalogs-visible/#bdrnode_replication_rates)) and if there are commit scopes that would degrade at that point based on the current state of replication, they will be automatically degraded—such that any transaction using that commit scope when processing after that uses the degraded rule instead of waiting for timeout—until the manager process detects that replication is moving swiftly enough again.
+To avoid this, the PGD manager process also checks connectivity and the apply rate (as reported in [bdr.node_replication_rates](/pgd/5.6/reference/catalogs-visible/#bdrnode_replication_rates)) every 5 seconds. If any commit scopes would degrade based on the current state of replication, they're degraded automatically, so transactions that use those commit scopes apply the degraded rule immediately instead of waiting for the timeout. Normal behavior resumes once the manager process detects that replication is moving swiftly enough again.

## SYNCHRONOUS COMMIT and GROUP COMMIT
diff --git a/product_docs/docs/pgd/5.6/commit-scopes/group-commit.mdx b/product_docs/docs/pgd/5.6/commit-scopes/group-commit.mdx
index 470ba63294e..380c3fc083c 100644
--- a/product_docs/docs/pgd/5.6/commit-scopes/group-commit.mdx
+++ b/product_docs/docs/pgd/5.6/commit-scopes/group-commit.mdx
@@ -1,7 +1,7 @@
---
title: Group Commit
redirects:
- - /pgd/latest/bdr/group-commit/
+ - /pgd/5.6/bdr/group-commit/
deepToC: true
---

@@ -58,7 +58,7 @@ See the Group Commit section of [Limitations](limitations#group-commit).

## Configuration

-`GROUP_COMMIT` supports optional `GROUP COMMIT` parameters, as well as `ABORT ON` and `DEGRADE ON` clauses. For a full description of configuration parameters, see the [GROUP_COMMIT](/pgd/latest/reference/commit-scopes/#group-commit) commit scope reference or for more regarding `DEGRADE ON` options in general, see the [Degrade options](degrading) section.
+`GROUP_COMMIT` supports optional `GROUP COMMIT` parameters, as well as `ABORT ON` and `DEGRADE ON` clauses. For a full description of configuration parameters, see the [GROUP_COMMIT](/pgd/5.6/reference/commit-scopes/#group-commit) commit scope reference. For more about `DEGRADE ON` options in general, see the [Degrade options](degrading) section.
## Confirmation diff --git a/product_docs/docs/pgd/5.6/commit-scopes/index.mdx b/product_docs/docs/pgd/5.6/commit-scopes/index.mdx index 7c7710869d0..680018df663 100644 --- a/product_docs/docs/pgd/5.6/commit-scopes/index.mdx +++ b/product_docs/docs/pgd/5.6/commit-scopes/index.mdx @@ -20,9 +20,9 @@ navigation: - limitations description: Durability options, commit scopes, and lag control in PGD. redirects: - - /pgd/latest/bdr/durability/ - - /pgd/latest/choosing_durability/ - - /pgd/latest/durability/ + - /pgd/5.6/bdr/durability/ + - /pgd/5.6/choosing_durability/ + - /pgd/5.6/durability/ --- EDB Postgres Distributed (PGD) offers a range of synchronous modes to complement its diff --git a/product_docs/docs/pgd/5.6/commit-scopes/lag-control.mdx b/product_docs/docs/pgd/5.6/commit-scopes/lag-control.mdx index 8bfad64a166..8983b9a493b 100644 --- a/product_docs/docs/pgd/5.6/commit-scopes/lag-control.mdx +++ b/product_docs/docs/pgd/5.6/commit-scopes/lag-control.mdx @@ -1,7 +1,7 @@ --- title: Lag Control redirects: - - /pgd/latest/bdr/lag-control/ + - /pgd/5.6/bdr/lag-control/ --- Commit scope kind: `LAG CONTROL` diff --git a/product_docs/docs/pgd/5.6/commit-scopes/limitations.mdx b/product_docs/docs/pgd/5.6/commit-scopes/limitations.mdx index a67b71db097..b52470f084b 100644 --- a/product_docs/docs/pgd/5.6/commit-scopes/limitations.mdx +++ b/product_docs/docs/pgd/5.6/commit-scopes/limitations.mdx @@ -43,7 +43,7 @@ nodes in a group. If you use this feature, take the following limitations into a ## Eager -[Eager](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit. +[Eager](/pgd/5.6/commit-scopes/group-commit/#eager-conflict-resolution) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit. Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It also doesn't allow `LISTEN` or `UNLISTEN`. diff --git a/product_docs/docs/pgd/5.6/commit-scopes/synchronous_commit.mdx b/product_docs/docs/pgd/5.6/commit-scopes/synchronous_commit.mdx index 668c0a13808..70e8067cb34 100644 --- a/product_docs/docs/pgd/5.6/commit-scopes/synchronous_commit.mdx +++ b/product_docs/docs/pgd/5.6/commit-scopes/synchronous_commit.mdx @@ -26,7 +26,7 @@ SELECT bdr.create_commit_scope( ## Configuration -`SYNCHRONOUS COMMIT` supports the optional `DEGRADE ON` clause. See the [`SYNCHRONOUS COMMIT`](/pgd/latest/reference/commit-scopes/#synchronous-commit) commit scope reference for specific configuration parameters or see [this section](degrading) regarding Degrade on options. +`SYNCHRONOUS COMMIT` supports the optional `DEGRADE ON` clause. See the [`SYNCHRONOUS COMMIT`](/pgd/5.6/reference/commit-scopes/#synchronous-commit) commit scope reference for specific configuration parameters or see [this section](degrading) regarding Degrade on options. 
## Confirmation diff --git a/product_docs/docs/pgd/5.6/compatibility.mdx b/product_docs/docs/pgd/5.6/compatibility.mdx index 14cd8df9c61..e1d72fc538a 100644 --- a/product_docs/docs/pgd/5.6/compatibility.mdx +++ b/product_docs/docs/pgd/5.6/compatibility.mdx @@ -11,12 +11,12 @@ The following table shows the major versions of PostgreSQL and each version of E | Postgres Version | PGD 5 | PGD 4 | |----------------------|------------------------|--------------| -| 17 | [5.6.1+](/pgd/latest/) | | -| 16 | [5.3+](/pgd/latest/) | | -| 15 | [5](/pgd/latest/) | | -| 14 | [5](/pgd/latest/) | [4](/pgd/4/) | -| 13 | [5](/pgd/latest/) | [4](/pgd/4/) | -| 12 | [5](/pgd/latest/) | [4](/pgd/4/) | +| 17 | [5.6.1+](/pgd/5.6/) | | +| 16 | [5.3+](/pgd/5.6/) | | +| 15 | [5](/pgd/5.6/) | | +| 14 | [5](/pgd/5.6/) | [4](/pgd/4/) | +| 13 | [5](/pgd/5.6/) | [4](/pgd/4/) | +| 12 | [5](/pgd/5.6/) | [4](/pgd/4/) | diff --git a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/01_overview_clcd.mdx b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/01_overview_clcd.mdx index 0c810550011..63e91e284ca 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/01_overview_clcd.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/01_overview_clcd.mdx @@ -35,7 +35,7 @@ Applied to the previous example, the result is `(100,100)` on both nodes, despit When thinking about column-level conflict resolution, it can be useful to see tables as vertically partitioned, so that each update affects data in only one slice. This approach eliminates conflicts between changes to different subsets of columns. In fact, vertical partitioning can even be a practical alternative to column-level conflict resolution. -Column-level conflict resolution requires the table to have `REPLICA IDENTITY FULL`. The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/latest/reference/conflict_functions#bdralter_table_conflict_detection) function checks that and fails with an error if this setting is missing. +Column-level conflict resolution requires the table to have `REPLICA IDENTITY FULL`. The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/5.6/reference/conflict_functions#bdralter_table_conflict_detection) function checks that and fails with an error if this setting is missing. ## Special problems for column-level conflict resolution diff --git a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/02_enabling_disabling.mdx b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/02_enabling_disabling.mdx index a145d1d67a7..6f5d5eeae27 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/02_enabling_disabling.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/02_enabling_disabling.mdx @@ -8,11 +8,11 @@ deepToC: true Column-level conflict detection uses the `column_timestamps` type. This type requires any user needing to detect column-level conflicts to have at least the [bdr_application](../../security/pgd-predefined-roles/#bdr_application) role assigned. !!! -The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/latest/reference/conflict_functions/#bdralter_table_conflict_detection) function manages column-level conflict resolution. 
+The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/5.6/reference/conflict_functions/#bdralter_table_conflict_detection) function manages column-level conflict resolution. ## Using bdr.alter_table_conflict_detection to enable column-level conflict resolution -The [bdr.alter_table_conflict_detection](https://www.enterprisedb.com/docs/pgd/latest/reference/conflict_functions/#bdralter_table_conflict_detection) function takes a table name and column name as its arguments. The column is added to the table as a `column_modify_timestamp` column. The function also adds two triggers (BEFORE INSERT and BEFORE UPDATE) that are responsible for maintaining timestamps in the new column before each change. +The [bdr.alter_table_conflict_detection](https://www.enterprisedb.com/docs/pgd/5.6/reference/conflict_functions/#bdralter_table_conflict_detection) function takes a table name and column name as its arguments. The column is added to the table as a `column_modify_timestamp` column. The function also adds two triggers (BEFORE INSERT and BEFORE UPDATE) that are responsible for maintaining timestamps in the new column before each change. ```sql db=# CREATE TABLE my_app.test_table (id SERIAL PRIMARY KEY, val INT); diff --git a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/03_timestamps.mdx b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/03_timestamps.mdx index 1e20d619aad..30803a62804 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/03_timestamps.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/03_timestamps.mdx @@ -21,7 +21,7 @@ This approach is simple and, for many cases, it's correct, for example, when the For example, if an `UPDATE` affects multiple rows, the clock continues ticking while the `UPDATE` runs. So each row gets a slightly different timestamp, even if they're being modified concurrently by the one `UPDATE`. This behavior, in turn, means that the effects of concurrent changes might get "mixed" in various ways, depending on how the changes performed on different nodes interleaves. -Another possible issue is clock skew. When the clocks on different nodes drift, the timestamps generated by those nodes also drift. This clock skew can induce unexpected behavior such as newer changes being discarded because the timestamps are apparently switched around. However, you can manage clock skew between nodes using the parameters [bdr.maximum_clock_skew](/pgd/latest/reference/pgd-settings/#bdrmaximum_clock_skew) and [bdr.maximum_clock_skew_action](/pgd/latest/reference/pgd-settings/#bdrmaximum_clock_skew_action). +Another possible issue is clock skew. When the clocks on different nodes drift, the timestamps generated by those nodes also drift. This clock skew can induce unexpected behavior such as newer changes being discarded because the timestamps are apparently switched around. However, you can manage clock skew between nodes using the parameters [bdr.maximum_clock_skew](/pgd/5.6/reference/pgd-settings/#bdrmaximum_clock_skew) and [bdr.maximum_clock_skew_action](/pgd/5.6/reference/pgd-settings/#bdrmaximum_clock_skew_action). As the current timestamp is unrelated to the commit timestamp, using it to resolve conflicts means that the result isn't equivalent to the commit order, which means it probably can't be serialized. 
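To make the enablement step in the column-level conflict hunks above concrete, here's a minimal sketch using the `my_app.test_table` table from the documentation's own example; the `cts` column name is illustrative:

```sql
-- Column-level conflict resolution requires REPLICA IDENTITY FULL;
-- bdr.alter_table_conflict_detection fails with an error without it.
ALTER TABLE my_app.test_table REPLICA IDENTITY FULL;

-- Enable column-level conflict detection. This adds the timestamp
-- column (here named cts) and the BEFORE INSERT and BEFORE UPDATE
-- triggers that maintain it.
SELECT bdr.alter_table_conflict_detection(
    'my_app.test_table',
    'column_modify_timestamp',
    'cts'
);
```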
diff --git a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/index.mdx b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/index.mdx index 5d379171bae..e178877bd1f 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/index.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/column-level-conflicts/index.mdx @@ -2,7 +2,7 @@ navTitle: Column-level conflict resolution title: Column-level conflict detection redirects: - - /pgd/latest/bdr/column-level-conflicts/ + - /pgd/5.6/bdr/column-level-conflicts/ --- By default, conflicts are resolved at row level. When changes from two nodes conflict, either the local or remote tuple is selected and the other is discarded. For example, commit timestamps for the two conflicting changes might be compared and the newer one kept. This approach ensures that all nodes converge to the same result and establishes commit-order-like semantics on the whole cluster. diff --git a/product_docs/docs/pgd/5.6/conflict-management/conflicts/00_conflicts_overview.mdx b/product_docs/docs/pgd/5.6/conflict-management/conflicts/00_conflicts_overview.mdx index da19837bec6..167f2f5dcfb 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/conflicts/00_conflicts_overview.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/conflicts/00_conflicts_overview.mdx @@ -15,7 +15,7 @@ Conflict handling is configurable, as described in [Conflict resolution](04_conf Column-level conflict detection and resolution is available with PGD, as described in [CLCD](../column-level-conflicts). -By default, all conflicts are logged to [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible/#bdrconflict_history). If conflicts are possible, then table owners must monitor for them and analyze how to avoid them or make plans to handle them regularly as an application task. The [LiveCompare](/livecompare/latest) tool is also available to scan regularly for divergence. +By default, all conflicts are logged to [`bdr.conflict_history`](/pgd/5.6/reference/catalogs-visible/#bdrconflict_history). If conflicts are possible, then table owners must monitor for them and analyze how to avoid them or make plans to handle them regularly as an application task. The [LiveCompare](/livecompare/latest) tool is also available to scan regularly for divergence. Some clustering systems use distributed lock mechanisms to prevent concurrent access to data. These can perform reasonably when servers are very close to each other but can't support geographically distributed applications where very low latency is critical for acceptable performance. diff --git a/product_docs/docs/pgd/5.6/conflict-management/conflicts/02_types_of_conflict.mdx b/product_docs/docs/pgd/5.6/conflict-management/conflicts/02_types_of_conflict.mdx index d9f1cca3255..cb6b2a10d3e 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/conflicts/02_types_of_conflict.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/conflicts/02_types_of_conflict.mdx @@ -50,7 +50,7 @@ The deletion tries to preserve the row with the correct `PRIMARY KEY` and delete In case of multiple rows conflicting this way, if the result of conflict resolution is to proceed with the insert operation, some of the data is always deleted. !!! -You can also define a different behavior using a [conflict trigger](/pgd/latest/striggers/#conflict-triggers). +You can also define a different behavior using a [conflict trigger](/pgd/5.6/striggers/#conflict-triggers). 
### UPDATE/UPDATE conflicts diff --git a/product_docs/docs/pgd/5.6/conflict-management/conflicts/index.mdx b/product_docs/docs/pgd/5.6/conflict-management/conflicts/index.mdx index 85a6b5ec93e..1df20df56b4 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/conflicts/index.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/conflicts/index.mdx @@ -1,7 +1,7 @@ --- title: Conflicts redirects: - - /pgd/latest/bdr/conflicts/ + - /pgd/5.6/bdr/conflicts/ --- EDB Postgres Distributed is an active/active or multi-master DBMS. If used asynchronously, writes to the same or related rows from multiple different nodes can result in data conflicts when using standard data types. diff --git a/product_docs/docs/pgd/5.6/conflict-management/crdt/index.mdx b/product_docs/docs/pgd/5.6/conflict-management/crdt/index.mdx index 70292e7018c..157875552f0 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/crdt/index.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/crdt/index.mdx @@ -2,7 +2,7 @@ navTitle: CRDTs title: Conflict-free replicated data types redirects: - - /pgd/latest/bdr/crdt/ + - /pgd/5.6/bdr/crdt/ --- Conflict-free replicated data types (CRDTs) support merging values from concurrently modified rows instead of discarding one of the rows as the traditional resolution does. diff --git a/product_docs/docs/pgd/5.6/conflict-management/index.mdx b/product_docs/docs/pgd/5.6/conflict-management/index.mdx index 3d337bf87a2..0c67162687b 100644 --- a/product_docs/docs/pgd/5.6/conflict-management/index.mdx +++ b/product_docs/docs/pgd/5.6/conflict-management/index.mdx @@ -18,4 +18,4 @@ By default, conflicts are resolved at the row level. When changes from two nodes Column-level conflict detection and resolution is available with PGD, described in [CLCD](column-level-conflicts). -If you want to avoid conflicts, you can use [Group Commit](/pgd/latest/commit-scopes/group-commit/) with [Eager conflict resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) or conflict-free data types (CRDTs), described in [CRDT](crdt). You can also use PGD Proxy and route all writes to one write-leader, eliminating the chance for inter-nodal conflicts. +If you want to avoid conflicts, you can use [Group Commit](/pgd/5.6/commit-scopes/group-commit/) with [Eager conflict resolution](/pgd/5.6/commit-scopes/group-commit/#eager-conflict-resolution) or conflict-free data types (CRDTs), described in [CRDT](crdt). You can also use PGD Proxy and route all writes to one write-leader, eliminating the chance for inter-nodal conflicts. diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-locking.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-locking.mdx index d44fc6b9985..c3daeca5d39 100644 --- a/product_docs/docs/pgd/5.6/ddl/ddl-locking.mdx +++ b/product_docs/docs/pgd/5.6/ddl/ddl-locking.mdx @@ -73,7 +73,7 @@ Witness and subscriber-only nodes aren't eligible to participate. If a DDL statement isn't replicated, no global locks are acquired. -Specify locking behavior with the [`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) parameter, as +Specify locking behavior with the [`bdr.ddl_locking`](/pgd/5.6/reference/pgd-settings#bdrddl_locking) parameter, as explained in [Executing DDL on PGD systems](ddl-overview#executing-ddl-on-pgd-systems): - `ddl_locking = all` takes global DDL lock and, if needed, takes relation DML lock. 
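To illustrate the `bdr.ddl_locking` setting documented in the hunk above, here's a hedged sketch of scoping it to a single session; the table change is a placeholder, and `dml` is one of the documented values alongside `all` and `off`:

```sql
-- Take only the relation DML lock, not the global DDL lock, for DDL
-- known to be safe to run this way in your cluster.
SET bdr.ddl_locking = 'dml';

ALTER TABLE my_app.orders ADD COLUMN note text;

-- Return to the strictest setting afterward.
SET bdr.ddl_locking = 'all';
```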
diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-managing-with-pgd-replication.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-managing-with-pgd-replication.mdx
index c1fe779c19e..0d66318dce8 100644
--- a/product_docs/docs/pgd/5.6/ddl/ddl-managing-with-pgd-replication.mdx
+++ b/product_docs/docs/pgd/5.6/ddl/ddl-managing-with-pgd-replication.mdx
@@ -32,7 +32,7 @@ SELECT bdr.run_on_all_nodes($ddl$
$ddl$);
```

-We recommend using the [`bdr.run_on_all_nodes()`](/pgd/latest/reference/functions#bdrrun_on_all_nodes) technique with `CREATE
+We recommend using the [`bdr.run_on_all_nodes()`](/pgd/5.6/reference/functions#bdrrun_on_all_nodes) technique with `CREATE
INDEX CONCURRENTLY`, noting that DDL replication must be disabled for the
whole session because `CREATE INDEX CONCURRENTLY` is a multi-transaction command.
Avoid `CREATE INDEX` on production systems
@@ -60,10 +60,10 @@ cancel the DDL on the originating node with **Control-C** in psql or with `pg_ca
You can't cancel a DDL lock from any other node.

You can control how long the global lock takes with optional global locking
-timeout settings. [`bdr.global_lock_timeout`](/pgd/latest/reference/pgd-settings#bdrglobal_lock_timeout) limits how long the wait for
+timeout settings. [`bdr.global_lock_timeout`](/pgd/5.6/reference/pgd-settings#bdrglobal_lock_timeout) limits how long the wait for
acquiring the global lock can take before it's canceled.
-[`bdr.global_lock_statement_timeout`](/pgd/latest/reference/pgd-settings#bdrglobal_lock_statement_timeout) limits the runtime length of any statement
-in transaction that holds global locks, and [`bdr.global_lock_idle_timeout`](/pgd/latest/reference/pgd-settings#bdrglobal_lock_idle_timeout) sets
+[`bdr.global_lock_statement_timeout`](/pgd/5.6/reference/pgd-settings#bdrglobal_lock_statement_timeout) limits the runtime length of any statement
+in a transaction that holds global locks, and [`bdr.global_lock_idle_timeout`](/pgd/5.6/reference/pgd-settings#bdrglobal_lock_idle_timeout) sets
the maximum allowed idle time (time between statements) for a transaction
holding any global locks. You can disable all of these timeouts by setting their
values to zero.

@@ -84,7 +84,7 @@ locks that it holds. If it stays down for a long time or indefinitely,
remove the node from the PGD group to release the global locks. This
is one reason for executing emergency DDL using the `SET` command as
-the bdr_superuser to update the [`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) value.
+the bdr_superuser to update the [`bdr.ddl_locking`](/pgd/5.6/reference/pgd-settings#bdrddl_locking) value.

If one of the other nodes goes down after it confirmed the global lock
but before the command acquiring it executed, the execution of
@@ -102,7 +102,7 @@ command continues normally, and the lock is released.

Not all commands can be replicated automatically. Such commands
are generally disallowed, unless DDL replication is turned off
-by turning [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) off.
+by turning [`bdr.ddl_replication`](/pgd/5.6/reference/pgd-settings#bdrddl_replication) off.

PGD prevents some DDL statements from running when it's active on a database.
This protects the consistency of the system by disallowing diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-overview.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-overview.mdx index aec5291d0ed..b51aa1dfb4e 100644 --- a/product_docs/docs/pgd/5.6/ddl/ddl-overview.mdx +++ b/product_docs/docs/pgd/5.6/ddl/ddl-overview.mdx @@ -71,7 +71,7 @@ it a useful option when creating a new and empty database schema. These options can be set only by the bdr_superuser, by the superuser, or in the `postgres.conf` configuration file. -When using the [`bdr.replicate_ddl_command`](/pgd/latest/reference/functions#bdrreplicate_ddl_command), you can set this +When using the [`bdr.replicate_ddl_command`](/pgd/5.6/reference/functions#bdrreplicate_ddl_command), you can set this parameter directly with the third argument, using the specified -[`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) setting only for the DDL commands passed to that +[`bdr.ddl_locking`](/pgd/5.6/reference/pgd-settings#bdrddl_locking) setting only for the DDL commands passed to that function. diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-pgd-functions-like-ddl.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-pgd-functions-like-ddl.mdx index 0f9aa5d00e3..d4f8aa46393 100644 --- a/product_docs/docs/pgd/5.6/ddl/ddl-pgd-functions-like-ddl.mdx +++ b/product_docs/docs/pgd/5.6/ddl/ddl-pgd-functions-like-ddl.mdx @@ -10,13 +10,13 @@ information, see the documentation for the individual functions. Replication set management: -- [`bdr.create_replication_set`](/pgd/latest/reference/repsets-management#bdrcreate_replication_set) -- [`bdr.alter_replication_set`](/pgd/latest/reference/repsets-management#bdralter_replication_set) -- [`bdr.drop_replication_set`](/pgd/latest/reference/repsets-management#bdrdrop_replication_set) -- [`bdr.replication_set_add_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_add_table) -- [`bdr.replication_set_remove_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_remove_table) -- [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) -- [`bdr.replication_set_remove_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) +- [`bdr.create_replication_set`](/pgd/5.6/reference/repsets-management#bdrcreate_replication_set) +- [`bdr.alter_replication_set`](/pgd/5.6/reference/repsets-management#bdralter_replication_set) +- [`bdr.drop_replication_set`](/pgd/5.6/reference/repsets-management#bdrdrop_replication_set) +- [`bdr.replication_set_add_table`](/pgd/5.6/reference/repsets-membership#bdrreplication_set_add_table) +- [`bdr.replication_set_remove_table`](/pgd/5.6/reference/repsets-membership#bdrreplication_set_remove_table) +- [`bdr.replication_set_add_ddl_filter`](/pgd/5.6/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) +- [`bdr.replication_set_remove_ddl_filter`](/pgd/5.6/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) Conflict management: @@ -26,10 +26,10 @@ Conflict management: Sequence management: -- [`bdr.alter_sequence_set_kind`](/pgd/latest/reference/sequences#bdralter_sequence_set_kind) +- [`bdr.alter_sequence_set_kind`](/pgd/5.6/reference/sequences#bdralter_sequence_set_kind) Stream triggers: -- [`bdr.create_conflict_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) -- [`bdr.create_transform_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) -- 
[`bdr.drop_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrdrop_trigger) +- [`bdr.create_conflict_trigger`](/pgd/5.6/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) +- [`bdr.create_transform_trigger`](/pgd/5.6/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) +- [`bdr.drop_trigger`](/pgd/5.6/reference/streamtriggers/interfaces#bdrdrop_trigger) diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-replication-options.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-replication-options.mdx index cd37f8f04ef..14bc2ca8964 100644 --- a/product_docs/docs/pgd/5.6/ddl/ddl-replication-options.mdx +++ b/product_docs/docs/pgd/5.6/ddl/ddl-replication-options.mdx @@ -3,7 +3,7 @@ title: DDL replication options navTitle: Options --- -The [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) parameter specifies replication behavior. +The [`bdr.ddl_replication`](/pgd/5.6/reference/pgd-settings#bdrddl_replication) parameter specifies replication behavior. `bdr.ddl_replication = on` is the default. This setting replicates DDL to the default replication set, which by default means all nodes. Non-default @@ -12,7 +12,7 @@ replication sets don't replicate DDL unless they have a defined for them. You can also replicate DDL to specific replication sets using the -function [`bdr.replicate_ddl_command()`](/pgd/latest/reference/functions#bdrreplicate_ddl_command). This function can be helpful if you +function [`bdr.replicate_ddl_command()`](/pgd/5.6/reference/functions#bdrreplicate_ddl_command). This function can be helpful if you want to run DDL commands when a node is down. It's also helpful if you want to have indexes or partitions that exist on a subset of nodes or rep sets, for example, all nodes at site1. @@ -26,7 +26,7 @@ SELECT bdr.replicate_ddl_command( ``` While we don't recommend it, you can skip automatic DDL replication and -execute it manually on each node using the [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) configuration +execute it manually on each node using the [`bdr.ddl_replication`](/pgd/5.6/reference/pgd-settings#bdrddl_replication) configuration parameter. ``` diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-role-manipulation.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-role-manipulation.mdx index 37cff6150aa..648e51b2184 100644 --- a/product_docs/docs/pgd/5.6/ddl/ddl-role-manipulation.mdx +++ b/product_docs/docs/pgd/5.6/ddl/ddl-role-manipulation.mdx @@ -11,7 +11,7 @@ PGD requires that any roles that are referenced by any replicated DDL must exist on all nodes. The roles don't have to have the same grants, password, and so on, but they must exist. -PGD replicates role manipulation statements if [`bdr.role_replication`](/pgd/latest/reference/pgd-settings#bdrrole_replication) is +PGD replicates role manipulation statements if [`bdr.role_replication`](/pgd/5.6/reference/pgd-settings#bdrrole_replication) is enabled (default) and role manipulation statements are run in a PGD-enabled database. diff --git a/product_docs/docs/pgd/5.6/ddl/ddl-workarounds.mdx b/product_docs/docs/pgd/5.6/ddl/ddl-workarounds.mdx index 921110deb78..cd54c360b85 100644 --- a/product_docs/docs/pgd/5.6/ddl/ddl-workarounds.mdx +++ b/product_docs/docs/pgd/5.6/ddl/ddl-workarounds.mdx @@ -130,7 +130,7 @@ The `ALTER TYPE` statement is replicated, but affected tables aren't locked. When you use this DDL, ensure that the statement has successfully executed on all nodes before using the new type. 
You can achieve this using -the [`bdr.wait_slot_confirm_lsn()`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn) function. +the [`bdr.wait_slot_confirm_lsn()`](/pgd/5.6/reference/functions#bdrwait_slot_confirm_lsn) function. This example ensures that the DDL is written to all nodes before using the new value in DML statements: diff --git a/product_docs/docs/pgd/5.6/decoding_worker.mdx b/product_docs/docs/pgd/5.6/decoding_worker.mdx index d73a8c6c171..e3daa03bff5 100644 --- a/product_docs/docs/pgd/5.6/decoding_worker.mdx +++ b/product_docs/docs/pgd/5.6/decoding_worker.mdx @@ -24,8 +24,8 @@ subscribing nodes received data. LCR files are stored under the size of the LCR files varies as replication lag increases, so this process also needs monitoring. The LCRs that aren't required by any of the PGD nodes are cleaned periodically. The interval between two consecutive cleanups is controlled by -[`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval), which defaults to 3 minutes. The cleanup is -disabled when [`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval) is 0. +[`bdr.lcr_cleanup_interval`](/pgd/5.6/reference/pgd-settings#bdrlcr_cleanup_interval), which defaults to 3 minutes. The cleanup is +disabled when [`bdr.lcr_cleanup_interval`](/pgd/5.6/reference/pgd-settings#bdrlcr_cleanup_interval) is 0. ## Disabling @@ -37,11 +37,11 @@ GUCs control the production and use of LCR per node. By default these are `false`. For production and use of LCRs, enable the decoding worker for the PGD group and set these GUCs to `true` on each of the nodes in the PGD group. -- [`bdr.enable_wal_decoder`](/pgd/latest/reference/pgd-settings#bdrenable_wal_decoder) — When `false`, all WAL +- [`bdr.enable_wal_decoder`](/pgd/5.6/reference/pgd-settings#bdrenable_wal_decoder) — When `false`, all WAL senders using LCRs restart to use WAL directly. When `true` along with the PGD group config, a decoding worker process is started to produce LCR and WAL senders that use LCR. -- [`bdr.receive_lcr`](/pgd/latest/reference/pgd-settings#bdrreceive_lcr) — When `true` on the subscribing node, it requests WAL +- [`bdr.receive_lcr`](/pgd/5.6/reference/pgd-settings#bdrreceive_lcr) — When `true` on the subscribing node, it requests WAL sender on the publisher node to use LCRs if available. @@ -82,7 +82,7 @@ The WAL decoder always streams the transactions to LCRs but based on downstream To support this feature, the system creates additional streaming files. These files have names that begin with `STR_TXN_` and `CAS_TXN_`, and each streamed transaction creates its own pair. -To enable transaction streaming with the WAL decoder, set the PGD group's `bdr.streaming_mode` set to ‘default’ using [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option). +To enable transaction streaming with the WAL decoder, set the PGD group's `bdr.streaming_mode` option to `default` using [`bdr.alter_node_group_option`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_node_group_option).
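As a rough sketch of the steps described above (the group name `pgd` is illustrative, and the option keys are assumed to match the GUC names without the `bdr.` prefix):

```sql
-- Enable the decoding worker for the whole PGD group.
SELECT bdr.alter_node_group_option('pgd', 'enable_wal_decoder', 'true');

-- Then set the per-node GUCs described above, for example in postgresql.conf:
--   bdr.enable_wal_decoder = true   -- produce LCRs on this node
--   bdr.receive_lcr = true          -- request LCRs from publishers

-- Enable transaction streaming with the WAL decoder.
SELECT bdr.alter_node_group_option('pgd', 'streaming_mode', 'default');
```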
diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-cloudservice/index.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-cloudservice/index.mdx index 9d17e513485..552384fc36a 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-cloudservice/index.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-cloudservice/index.mdx @@ -2,9 +2,9 @@ title: Deploying and configuring PGD on EDB Postgres AI Cloud Service navTitle: On EDB Cloud Service redirects: - - /pgd/latest/deploy-config/deploy-biganimal/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/install-admin/admin-biganimal/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-biganimal/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/deploy-config/deploy-biganimal/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-biganimal/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-biganimal/ #generated for pgd deploy-config-planning reorg --- EDB Postgres AI Cloud Service is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account where it's operated by our Postgres experts. EDB Postgres AI Cloud Service makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single and multi-region Always-on clusters. diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-kubernetes/index.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-kubernetes/index.mdx index f0a8813151a..9a1b070cd0e 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-kubernetes/index.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-kubernetes/index.mdx @@ -2,8 +2,8 @@ title: Deploying and configuring PGD on Kubernetes navTitle: With Kubernetes redirects: - - /pgd/latest/install-admin/admin-kubernetes/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-kubernetes/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-kubernetes/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-kubernetes/ #generated for pgd deploy-config-planning reorg --- EDB CloudNativePG Global Cluster is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift. 
diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx index 1c9b8d291ff..24376ab0465 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx @@ -3,8 +3,8 @@ title: Step 1 - Provisioning hosts navTitle: Provisioning hosts deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg --- ## Provisioning hosts diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/02-install-postgres.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/02-install-postgres.mdx index dcc6a02b9c8..044c66a186c 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/02-install-postgres.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/02-install-postgres.mdx @@ -3,8 +3,8 @@ title: Step 2 - Installing Postgres navTitle: Installing Postgres deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg --- ## Installing Postgres diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx index 2f908694bab..54b47e96ef8 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx @@ -3,15 +3,15 @@ title: Step 3 - Configuring PGD repositories navTitle: Configuring PGD repositories deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg --- ## Configuring PGD repositories To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages. -Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](/pgd/latest/nodes/overview), such as witness or subscriber-only nodes. 
+Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](/pgd/5.6/nodes/overview), such as witness or subscriber-only nodes. * Use your EDB account. * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/04-installing-software.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/04-installing-software.mdx index 1384438cf71..39ed6634bde 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/04-installing-software.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/04-installing-software.mdx @@ -3,8 +3,8 @@ title: Step 4 - Installing the PGD software navTitle: Installing PGD software deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg --- ## Installing the PGD software @@ -28,7 +28,7 @@ You must perform these steps on each host before proceeding to the next step. * Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

!!! Note The `max_worker_processes` value The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. - To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings). + To calculate the needed value, see [Postgres configuration/settings](/pgd/5.6/postgres-configuration/#postgres-settings). The value of 16 was calculated for the size of the cluster being deployed in this example. It must be increased for larger clusters. !!! * Set a password on the enterprisedb/Postgres user. diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx index 009e2486ecf..de44cc26671 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx @@ -3,8 +3,8 @@ title: Step 5 - Creating the PGD cluster navTitle: Creating the cluster deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg --- ## Creating the PGD cluster @@ -81,7 +81,7 @@ sudo -iu enterprisedb psql bdrdb ### Create the first node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); @@ -89,7 +89,7 @@ select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); #### Create the top-level group -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`. +Call the [`bdr.create_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`. ``` select bdr.create_node_group('pgd'); @@ -101,7 +101,7 @@ Using subgroups to organize your nodes is preferred, as it allows services like In a larger PGD installation, multiple subgroups can exist. These subgroups provide organizational grouping that enables geographical mapping of clusters and localized resilience. For that reason, this example creates a subgroup for the first nodes to enable simpler expansion and the use of PGD Proxy.
-Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group. +Call the [`bdr.create_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group. The subgroup name is the first parameter, and the parent group is the second parameter. This example creates a subgroup `dc1` as a child of `pgd`. @@ -121,7 +121,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the second node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444'); @@ -129,7 +129,7 @@ select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444'); #### Join the second node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. +Using [`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. ``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); @@ -146,7 +146,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the third node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); @@ -154,7 +154,7 @@ select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); #### Join the third node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. +Using [`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. 
``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/06-check-cluster.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/06-check-cluster.mdx index fc7938ce85c..365e6ee2ff4 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/06-check-cluster.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/06-check-cluster.mdx @@ -3,8 +3,8 @@ title: Step 6 - Checking the cluster navTitle: Checking the cluster deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg --- ## Checking the cluster diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx index 0bee3e06b9a..5505c0c5243 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx @@ -3,8 +3,8 @@ title: Step 7 - Configure proxies navTitle: Configure proxies deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg --- ## Configure proxies @@ -21,9 +21,9 @@ It's best practice to configure PGD Proxy for clusters to enable this behavior. To set up a proxy, you need to first prepare the cluster and subgroup the proxies will be working with by: -* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters. -* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) does not create a proxy, but creates a space for a proxy to register itself with the cluster. The space contains configuration values which can be modified later. Initially it is configured with default proxy options such as setting the `listen_address` to `0.0.0.0`. -* Configure proxy routes to each node by setting route_dsn for each node in the subgroup. The route_dsn is the connection string that the proxy should use to connect to that node. Use [`bdr.alter_node_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option) to set the route_dsn for each node in the subgroup. 
+* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters. +* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/5.6/reference/routing#bdrcreate_proxy) and passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/5.6/reference/routing#bdrcreate_proxy) does not create a proxy, but creates a space for a proxy to register itself with the cluster. The space contains configuration values which can be modified later. Initially it is configured with default proxy options such as setting the `listen_address` to `0.0.0.0`. +* Configure proxy routes to each node by setting route_dsn for each node in the subgroup. The route_dsn is the connection string that the proxy should use to connect to that node. Use [`bdr.alter_node_option`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_node_option) to set the route_dsn for each node in the subgroup. * Create a pgdproxy user on the cluster with a password or other authentication. ### Configure each host as a proxy @@ -53,7 +53,7 @@ SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true'); SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true'); ``` -You can use the [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`: +You can use the [`bdr.node_group_summary`](/pgd/5.6/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`: ```sql SELECT node_group_name, enable_proxy_routing, enable_raft @@ -80,7 +80,7 @@ SELECT bdr.create_proxy('pgd-proxy-two','dc1'); SELECT bdr.create_proxy('pgd-proxy-three','dc1'); ``` -You can use the [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created: +You can use the [`bdr.proxy_config_summary`](/pgd/5.6/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created: ```sql SELECT proxy_name, node_group_name diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx index ca05ecab43a..bc29265e4ff 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx @@ -3,8 +3,8 @@ title: Step 8 - Using PGD CLI navTitle: Using PGD CLI deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg --- ## Using PGD CLI diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/index.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/index.mdx index 8fff69ab309..1f21981b24d 100644 --- 
a/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/index.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-manual/deploying/index.mdx @@ -11,8 +11,8 @@ navigation: - 07-configure-proxies - 08-using-pgd-cli redirects: - - /pgd/latest/install-admin/admin-manual/installing/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-manual/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-manual/installing/ #generated for pgd deploy-config-planning reorg --- EDB offers automated PGD deployment using Trusted Postgres Architect (TPA) because it's generally more reliable than manual processes. diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/01-configuring.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/01-configuring.mdx index bedd73aec82..0519a62c15d 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/01-configuring.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/01-configuring.mdx @@ -2,8 +2,8 @@ title: Configuring a PGD cluster with TPA navTitle: Configuring redirects: - - /pgd/latest/install-admin/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg --- The `tpaexec configure` command generates a simple YAML configuration file to describe a cluster, based on the options you select. The configuration is ready for immediate use, and you can modify it to better suit your needs. Editing the configuration file is the usual way to make any configuration changes to your cluster both before and after it's created. 
diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/02-deploying.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/02-deploying.mdx index 51a0136d452..3931bf1f622 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/02-deploying.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/02-deploying.mdx @@ -2,8 +2,8 @@ title: Provisioning, deploying, and testing navTitle: Deploying redirects: - - /pgd/latest/install-admin/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg --- ## Provision diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/index.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/index.mdx index 31738207df9..c83176d1d59 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/index.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/deploying/index.mdx @@ -4,15 +4,15 @@ navTitle: Deploying with TPA description: > Detailed reference and examples for using TPA to configure and deploy PGD redirects: - - /pgd/latest/tpa/ - - /pgd/latest/deployments/tpaexec/using_tpaexec/ - - /pgd/latest/tpa/using_tpa/ + - /pgd/5.6/tpa/ + - /pgd/5.6/deployments/tpaexec/using_tpaexec/ + - /pgd/5.6/tpa/using_tpa/ - ../deployments/tpaexec - ../deployments/tpaexec/installing_tpaexec - ../deployments/using_tpa/ - ../tpa - - /pgd/latest/install-admin/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg --- The standard way of automatically deploying EDB Postgres Distributed in a self-managed setting is to use EDB's deployment tool: [Trusted Postgres Architect](/tpa/latest/) (TPA). @@ -22,11 +22,11 @@ This applies to physical and virtual machines, both self-hosted and in the cloud !!! Note Get started with TPA and PGD quickly - If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker. + If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/5.6/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker. - If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account. + If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/5.6/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account. - If you want to run on your own Linux systems or VMs, you can also use TPA to [deploy EDB Postgres Distributed directly to your own Linux hosts](/pgd/latest/quickstart/quick_start_linux). 
+ If you want to run on your own Linux systems or VMs, you can also use TPA to [deploy EDB Postgres Distributed directly to your own Linux hosts](/pgd/5.6/quickstart/quick_start_linux). ## Prerequisite: Install TPA diff --git a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/index.mdx b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/index.mdx index e69081153d9..7cb74cf1196 100644 --- a/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/index.mdx +++ b/product_docs/docs/pgd/5.6/deploy-config/deploy-tpa/index.mdx @@ -2,8 +2,8 @@ title: Deployment and management with TPA navTitle: Using TPA redirects: - - /pgd/latest/install-admin/admin-tpa/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-tpa/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-tpa/ #generated for pgd deploy-config-planning reorg --- TPA (Trusted Postgres Architect) is a standard automated way of installing PGD and Postgres on physical and virtual machines, diff --git a/product_docs/docs/pgd/5.6/index.mdx b/product_docs/docs/pgd/5.6/index.mdx index 3f41f908605..8a46597ab13 100644 --- a/product_docs/docs/pgd/5.6/index.mdx +++ b/product_docs/docs/pgd/5.6/index.mdx @@ -45,7 +45,7 @@ navigation: pdf: true directoryDefaults: version: "5.6.1" - displayBanner: 'Warning: You are not reading the most recent version of this documentation.
Documentation improvements are made only to the latest version.
As per semantic versioning, PGD minor releases remain backward compatible and may include important bug fixes and enhancements.
We recommend upgrading the latest minor release as soon as possible.
If you want up-to-date information, read the latest PGD documentation.' + displayBanner: 'Warning: You are not reading the most recent version of this documentation.
Documentation improvements are made only to the latest version.
As per semantic versioning, PGD minor releases remain backward compatible and may include important bug fixes and enhancements.
We recommend upgrading to the latest minor release as soon as possible.
If you want up-to-date information, read the latest PGD documentation.' --- @@ -70,7 +70,7 @@ Read about why PostgreSQL is better when it’s distributed with EDB Postgres Di By default, EDB Postgres Distributed uses asynchronous replication, applying changes on the peer nodes only after the local commit. You can configure additional levels of synchronicity between different nodes, groups of nodes, or all nodes by configuring -[Synchronous Commit](/pgd/latest/commit-scopes/synchronous_commit/), [Group Commit](commit-scopes/group-commit) (optionally with [Eager Conflict Resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution)), or [CAMO](commit-scopes/camo). +[Synchronous Commit](/pgd/5.6/commit-scopes/synchronous_commit/), [Group Commit](commit-scopes/group-commit) (optionally with [Eager Conflict Resolution](/pgd/5.6/commit-scopes/group-commit/#eager-conflict-resolution)), or [CAMO](commit-scopes/camo). ## Compatibility diff --git a/product_docs/docs/pgd/5.6/known_issues.mdx b/product_docs/docs/pgd/5.6/known_issues.mdx index 92f945eac79..071fe7efdc8 100644 --- a/product_docs/docs/pgd/5.6/known_issues.mdx +++ b/product_docs/docs/pgd/5.6/known_issues.mdx @@ -38,7 +38,7 @@ Adding or removing a pair doesn't require a restart of Postgres or even a reload - Transactions using Eager Replication can't yet execute DDL. The TRUNCATE command is allowed. -- Parallel Apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either (a) Setting `num_writers` to 1 for the node group using [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) or (b) using the GUC [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). See [Configuration of generic replication](/pgd/latest/reference/pgd-settings#generic-replication). +- Parallel Apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either (a) Setting `num_writers` to 1 for the node group using [`bdr.alter_node_group_option`](/pgd/5.6/reference/nodes-management-interfaces/#bdralter_node_group_option) or (b) using the GUC [`bdr.writers_per_subscription`](/pgd/5.6/reference/pgd-settings#bdrwriters_per_subscription). See [Configuration of generic replication](/pgd/5.6/reference/pgd-settings#generic-replication). - There currently is no protection against altering or removing a commit scope. Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. @@ -47,7 +47,7 @@ Make sure that any transactions using a specific commit scope have finished befo - The [PGD CLI](cli) can return stale data on the state of the cluster if it's still connecting to nodes that were previously parted from the cluster. Edit the [`pgd-cli-config.yml`](cli/configuring_cli/#using-a-configuration-file) file, or change your [`--dsn`](cli/configuring_cli/#using-database-connection-strings-in-the-command-line) settings to ensure only active nodes in the cluster are listed for connection. -To modify a commit scope safely, use [`bdr.alter_commit_scope`](/pgd/latest/reference/functions#bdralter_commit_scope). +To modify a commit scope safely, use [`bdr.alter_commit_scope`](/pgd/5.6/reference/functions#bdralter_commit_scope). 
- DDL run in serializable transactions can face the error: `ERROR: could not serialize access due to read/write dependencies among transactions`. A workaround is to run the DDL outside serializable transactions. diff --git a/product_docs/docs/pgd/5.6/monitoring/sql.mdx b/product_docs/docs/pgd/5.6/monitoring/sql.mdx index 4f5d18f198f..3cd7c776fd0 100644 --- a/product_docs/docs/pgd/5.6/monitoring/sql.mdx +++ b/product_docs/docs/pgd/5.6/monitoring/sql.mdx @@ -74,7 +74,7 @@ node_seq_id | 3 node_local_dbname | postgres ``` -Also, the table [`bdr.node_catchup_info`](/pgd/latest/reference/catalogs-visible/#bdrnode_catchup_info) gives information +Also, the table [`bdr.node_catchup_info`](/pgd/5.6/reference/catalogs-visible/#bdrnode_catchup_info) gives information on the catch-up state, which can be relevant to joining nodes or parting nodes. When a node is parted, some nodes in the cluster might not receive @@ -94,7 +94,7 @@ The `catchup_state` can be one of the following: The manager worker is responsible for many background tasks, including managing all the other workers. As such, it's important to know what it's doing, especially in cases where it might seem stuck. -Accordingly, the [`bdr.stat_worker`](/pgd/latest/reference/catalogs-visible/#bdrstat_worker) view provides per worker statistics for PGD workers, including manager workers. With respect to ensuring manager workers do not get stuck, the current task they are executing would be reported in their `query` field prefixed by "pgd manager:". +Accordingly, the [`bdr.stat_worker`](/pgd/5.6/reference/catalogs-visible/#bdrstat_worker) view provides per-worker statistics for PGD workers, including manager workers. To check that a manager worker isn't stuck, look at its `query` field, which reports the worker's current task prefixed by "pgd manager:". The `worker_backend_state` field for manager workers also reports whether the manager is idle or busy. @@ -104,15 +104,15 @@ Routing is a critical part of PGD for ensuring a seamless application experience. Monitoring all of these is important for noticing and debugging issues, as well as informing more optimal configurations. Accordingly, there are two main views for monitoring routing statistics: -- [`bdr.stat_routing_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_routing_state) for monitoring the state of the connection routing with PGD Proxy uses to route the connections. -- [`bdr.stat_routing_candidate_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_routing_candidate_state) for information about routing candidate nodes from the point of view of the Raft leader (the view is empty on other nodes). +- [`bdr.stat_routing_state`](/pgd/5.6/reference/catalogs-visible/#bdrstat_routing_state) for monitoring the state of the connection routing that PGD Proxy uses to route connections. +- [`bdr.stat_routing_candidate_state`](/pgd/5.6/reference/catalogs-visible/#bdrstat_routing_candidate_state) for information about routing candidate nodes from the point of view of the Raft leader (the view is empty on other nodes).
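For example, a quick psql check of both views might look like this (selecting all columns, since the exact column set is version-dependent):

```sql
-- On any node: the routing state that PGD Proxy uses.
SELECT * FROM bdr.stat_routing_state;

-- On the Raft leader only (empty elsewhere): routing candidate nodes.
SELECT * FROM bdr.stat_routing_candidate_state;
```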
## Monitoring Replication Peers You use two main views for monitoring of replication activity: -- [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible/#bdrnode_slots) for monitoring outgoing replication -- [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible/#bdrsubscription_summary) for monitoring incoming replication +- [`bdr.node_slots`](/pgd/5.6/reference/catalogs-visible/#bdrnode_slots) for monitoring outgoing replication +- [`bdr.subscription_summary`](/pgd/5.6/reference/catalogs-visible/#bdrsubscription_summary) for monitoring incoming replication You can also obtain most of the information provided by `bdr.node_slots` by querying the standard PostgreSQL replication monitoring views @@ -128,9 +128,9 @@ something is down or disconnected. See [Replication slots](../node_management/re You can use another view for monitoring of outgoing replication activity: -- [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible/#bdrnode_replication_rates) for monitoring outgoing replication +- [`bdr.node_replication_rates`](/pgd/5.6/reference/catalogs-visible/#bdrnode_replication_rates) for monitoring outgoing replication -The [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible/#bdrnode_replication_rates) view gives an overall picture of the outgoing +The [`bdr.node_replication_rates`](/pgd/5.6/reference/catalogs-visible/#bdrnode_replication_rates) view gives an overall picture of the outgoing replication activity along with the catchup estimates for peer nodes, specifically. @@ -163,10 +163,10 @@ at which the peer is consuming data from the local node. The `replay_lag` when a node reconnects to the cluster is immediately set to zero. This information will be fixed in a future release. As a workaround, we recommend using the `catchup_interval` column that refers to the time required for the peer node to catch up to the -local node data. The other fields are also available from the [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible/#bdrnode_slots) +local node data. The other fields are also available from the [`bdr.node_slots`](/pgd/5.6/reference/catalogs-visible/#bdrnode_slots) view. -Administrators can query [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible/#bdrnode_slots) for outgoing replication from the +Administrators can query [`bdr.node_slots`](/pgd/5.6/reference/catalogs-visible/#bdrnode_slots) for outgoing replication from the local node. It shows information about replication status of all other nodes in the group that are known to the current node as well as any additional replication slots created by PGD on the current node. @@ -283,13 +283,13 @@ sub_slot_name | bdr_postgres_bdrgroup_node1 subscription_status | replicating ``` -You can further monitor subscriptions by monitoring subscription summary statistics through [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible/#bdrstat_subscription), and by monitoring the subscription replication receivers and subscription replication writers, using [`bdr.stat_receiver`](/pgd/latest/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/latest/reference/catalogs-visible/#bdrstat_writer), respectively. 
+You can further monitor subscriptions by monitoring subscription summary statistics through [`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible/#bdrstat_subscription), and by monitoring the subscription replication receivers and subscription replication writers, using [`bdr.stat_receiver`](/pgd/5.6/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/5.6/reference/catalogs-visible/#bdrstat_writer), respectively. ### Monitoring WAL senders using LCR If the [decoding worker](../decoding_worker/) is enabled, you can monitor information about the current logical change record (LCR) file for each WAL sender -using the function [`bdr.wal_sender_stats()`](/pgd/latest/reference/functions/#bdrwal_sender_stats). For example: +using the function [`bdr.wal_sender_stats()`](/pgd/5.6/reference/functions/#bdrwal_sender_stats). For example: ``` postgres=# SELECT * FROM bdr.wal_sender_stats(); @@ -306,7 +306,7 @@ This is the case if the decoding worker isn't enabled or the WAL sender is serving a [logical standby](../nodes/logical_standby_nodes/). Also, you can monitor information about the decoding worker using the function -[`bdr.get_decoding_worker_stat()`](/pgd/latest/reference/functions/#bdrget_decoding_worker_stat). For example: +[`bdr.get_decoding_worker_stat()`](/pgd/5.6/reference/functions/#bdrget_decoding_worker_stat). For example: ``` postgres=# SELECT * FROM bdr.get_decoding_worker_stat(); @@ -365,9 +365,9 @@ Commit scopes are our durability and consistency configuration framework. As suc Accordingly, these two views show relevant statistics about commit scopes: -- [bdr.stat_commit_scope](/pgd/latest/reference/catalogs-visible/#bdrstat_commit_scope) for cumulative statistics for each commit scope. +- [bdr.stat_commit_scope](/pgd/5.6/reference/catalogs-visible/#bdrstat_commit_scope) for cumulative statistics for each commit scope. -- [bdr.stat_commit_scope_state](/pgd/latest/reference/catalogs-visible/#bdrstat_commit_scope_state) for information about the current use of commit scopes by backend processes. +- [bdr.stat_commit_scope_state](/pgd/5.6/reference/catalogs-visible/#bdrstat_commit_scope_state) for information about the current use of commit scopes by backend processes. ## Monitoring global locks @@ -384,7 +384,7 @@ There are currently two types of global locks: You can create either or both entry types for the same transaction, depending on the type of DDL operation and the value of the `bdr.ddl_locking` setting. -Global locks held on the local node are visible in the [`bdr.global_locks`](/pgd/latest/reference/catalogs-visible/#bdrglobal_locks) view. +Global locks held on the local node are visible in the [`bdr.global_locks`](/pgd/5.6/reference/catalogs-visible/#bdrglobal_locks) view. This view shows the type of the lock. For relation locks, it shows the relation that's being locked, the PID holding the lock (if local), and whether the lock was globally granted. In case @@ -406,7 +406,7 @@ relation | someschema.sometable pid | 15534 ``` -See [Catalogs](/pgd/latest/reference/catalogs-visible/) for details on all fields, including lock +See [Catalogs](/pgd/5.6/reference/catalogs-visible/) for details on all fields, including lock timing information. ## Monitoring conflicts @@ -421,7 +421,7 @@ row-level security to ensure they're visible only by owners of replicated tables. Owners should expect conflicts and analyze them to see which, if any, might be considered as problems to resolve. 
-For monitoring purposes, use [`bdr.conflict_history_summary`](/pgd/latest/reference/catalogs-visible#bdrconflict_history_summary), which doesn't +For monitoring purposes, use [`bdr.conflict_history_summary`](/pgd/5.6/reference/catalogs-visible#bdrconflict_history_summary), which doesn't contain user data. This example shows a query to count the number of conflicts seen in the current day using an efficient query plan: @@ -437,8 +437,8 @@ WHERE local_time > date_trunc('day', current_timestamp) PGD collects statistics about replication apply, both for each subscription and for each table. -Two monitoring views exist: [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) for subscription statistics -and [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) for relation statistics. These views both provide: +Two monitoring views exist: [`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible#bdrstat_subscription) for subscription statistics +and [`bdr.stat_relation`](/pgd/5.6/reference/catalogs-visible#bdrstat_relation) for relation statistics. These views both provide: - Number of INSERTs/UPDATEs/DELETEs/TRUNCATEs replicated - Block accesses and cache hit ratio @@ -447,18 +447,18 @@ and [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relatio - Number of in-progress transactions streamed to writers - Number of in-progress streamed transactions committed/aborted -For relations only, [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) also includes: +For relations only, [`bdr.stat_relation`](/pgd/5.6/reference/catalogs-visible#bdrstat_relation) also includes: - Total time spent processing replication for the relation - Total lock wait time to acquire lock (if any) for the relation (only) -For subscriptions only, [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) includes: +For subscriptions only, [`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible#bdrstat_subscription) includes: - Number of COMMITs/DDL replicated for the subscription - Number of times this subscription has connected upstream Tracking of these statistics is controlled by the PGD GUCs -[`bdr.track_subscription_apply`](/pgd/latest/reference/pgd-settings#bdrtrack_subscription_apply) and [`bdr.track_relation_apply`](/pgd/latest/reference/pgd-settings#bdrtrack_relation_apply), +[`bdr.track_subscription_apply`](/pgd/5.6/reference/pgd-settings#bdrtrack_subscription_apply) and [`bdr.track_relation_apply`](/pgd/5.6/reference/pgd-settings#bdrtrack_relation_apply), respectively. The following shows the example output from these: @@ -480,9 +480,9 @@ nddl | 2 In this case, the subscription connected three times to the upstream, inserted 10 rows, and performed two DDL commands inside five transactions. -You can reset the stats counters for these views to zero using the functions [`bdr.reset_subscription_stats`](/pgd/latest/reference/functions-internal#bdrreset_subscription_stats) and [`bdr.reset_relation_stats`](/pgd/latest/reference/functions-internal#bdrreset_relation_stats). +You can reset the stats counters for these views to zero using the functions [`bdr.reset_subscription_stats`](/pgd/5.6/reference/functions-internal#bdrreset_subscription_stats) and [`bdr.reset_relation_stats`](/pgd/5.6/reference/functions-internal#bdrreset_relation_stats). 
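As a sketch, resetting those counters is a pair of function calls (assuming both functions take no arguments, per their use here):

```sql
-- Zero the cumulative counters behind bdr.stat_subscription and bdr.stat_relation.
SELECT bdr.reset_subscription_stats();
SELECT bdr.reset_relation_stats();
```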
-PGD also monitors statistics regarding subscription replication receivers and subscription replication writers for each subscription, using [`bdr.stat_receiver`](/pgd/latest/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/latest/reference/catalogs-visible/#bdrstat_writer), respectively. +PGD also monitors statistics regarding subscription replication receivers and subscription replication writers for each subscription, using [`bdr.stat_receiver`](/pgd/5.6/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/5.6/reference/catalogs-visible/#bdrstat_writer), respectively. ## Standard PostgreSQL statistics views @@ -524,8 +524,8 @@ PGD allows running different Postgres versions as well as different BDR extension versions across the nodes in the same cluster. This capability is useful for upgrading. -The view [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) uses the function -[`bdr.run_on_all_nodes()`](/pgd/latest/reference/functions#bdrrun_on_all_nodes) to retrieve Postgres and BDR extension versions from all +The view [`bdr.group_versions_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_versions_details) uses the function +[`bdr.run_on_all_nodes()`](/pgd/5.6/reference/functions#bdrrun_on_all_nodes) to retrieve Postgres and BDR extension versions from all nodes at the same time. For example: ```sql @@ -550,7 +550,7 @@ For monitoring purposes, we recommend the following alert levels: when compared to other nodes The described behavior is implemented in the function -[`bdr.monitor_group_versions()`](/pgd/latest/reference/functions#bdrmonitor_group_versions), which uses PGD version +[`bdr.monitor_group_versions()`](/pgd/5.6/reference/functions#bdrmonitor_group_versions), which uses PGD version information returned from the view `bdr.group_version_details` to provide a cluster-wide version check. For example: @@ -577,8 +577,8 @@ follows: - PGD group replication slot doesn't advance LSN and thus keeps WAL files on disk. -The view [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) uses the functions -[`bdr.run_on_all_nodes()`](/pgd/latest/reference/functions#bdrrun_on_all_nodes) and [`bdr.get_raft_status()`](/pgd/latest/reference/functions#bdrget_raft_status) to retrieve Raft +The view [`bdr.group_raft_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_raft_details) uses the functions +[`bdr.run_on_all_nodes()`](/pgd/5.6/reference/functions#bdrrun_on_all_nodes) and [`bdr.get_raft_status()`](/pgd/5.6/reference/functions#bdrget_raft_status) to retrieve Raft consensus status from all nodes at the same time. For example: ```sql @@ -645,8 +645,8 @@ monitoring alert levels are defined as follows: than the node set as RAFT_LEADER The described behavior is implemented in the function -[`bdr.monitor_group_raft()`](/pgd/latest/reference/functions#bdrmonitor_group_raft), which uses Raft consensus status -information returned from the view [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) +[`bdr.monitor_group_raft()`](/pgd/5.6/reference/functions#bdrmonitor_group_raft), which uses Raft consensus status +information returned from the view [`bdr.group_raft_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_raft_details) to provide a cluster-wide Raft check. 
For example: ```sql @@ -656,7 +656,7 @@ node_group_name | status | message mygroup | OK | Raft Consensus is working correctly ``` -Two further views that can give a finer-grained look at the state of Raft consensus are [`bdr.stat_raft_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_raft_state), which provides the state of the Raft consensus on the local node, and [`bdr.stat_raft_followers_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_raft_followers_state), which provides a view when on the Raft leader (it is empty on other nodes) regarding the state of the followers of that Raft leader. +Two further views that can give a finer-grained look at the state of Raft consensus are [`bdr.stat_raft_state`](/pgd/5.6/reference/catalogs-visible/#bdrstat_raft_state), which provides the state of the Raft consensus on the local node, and [`bdr.stat_raft_followers_state`](/pgd/5.6/reference/catalogs-visible/#bdrstat_raft_followers_state), which, on the Raft leader, provides a view of the state of that leader's followers (it's empty on other nodes). ## Monitoring replication slots @@ -681,7 +681,7 @@ FROM pg_replication_slots ORDER BY slot_name; Peer slot names follow the convention `bdr___`, while the PGD group slot name follows the convention `bdr__`. You can access the group slot using the function -[`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name). +[`bdr.local_group_slot_name()`](/pgd/5.6/reference/functions#bdrlocal_group_slot_name). Peer replication slots must be active on all nodes at all times. If a peer replication slot isn't active, then it might mean either: @@ -698,7 +698,7 @@ maintains this slot and advances its LSN when all other peers already consumed the corresponding transactions. Consequently, it's not necessary to monitor the status of the group slot. -The function [`bdr.monitor_local_replslots()`](/pgd/latest/reference/functions#bdrmonitor_local_replslots) provides a summary of whether all +The function [`bdr.monitor_local_replslots()`](/pgd/5.6/reference/functions#bdrmonitor_local_replslots) provides a summary of whether all PGD node replication slots are working as expected. This summary is also available on subscriber-only nodes that are operating as subscriber-only group leaders in a PGD cluster when [optimized topology](../nodes/subscriber_only/optimizing-so) is enabled. For example: ```sql @@ -724,6 +724,6 @@ One of the following status summaries is returned: By default, PGD transactions are committed only to the local node. In that case, a transaction's `COMMIT` is processed quickly. PGD's [Commit Scopes](../commit-scopes/commit-scopes) feature offers a range of synchronous transaction commit scopes that allow you to balance durability, consistency, and performance for your particular queries. -You can monitor these transactions by examining the [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) catalog. The processes report different `wait_event` states as a transaction is committed. This monitoring only covers transactions in progress and doesn't provide historical timing information. +You can monitor these transactions by examining the [`bdr.stat_activity`](/pgd/5.6/reference/catalogs-visible#bdrstat_activity) catalog. The processes report different `wait_event` states as a transaction is committed. This monitoring only covers transactions in progress and doesn't provide historical timing information.
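+
+As a hedged example (assuming `bdr.stat_activity` mirrors the `pid`, `state`, `wait_event_type`, and `wait_event` columns of `pg_stat_activity`, which its name suggests):
+
+```sql
+-- Show in-flight sessions currently waiting while a COMMIT is processed
+SELECT pid, state, wait_event_type, wait_event
+FROM bdr.stat_activity
+WHERE wait_event IS NOT NULL;
+```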
diff --git a/product_docs/docs/pgd/5.6/node_management/creating_and_joining.mdx b/product_docs/docs/pgd/5.6/node_management/creating_and_joining.mdx index 91d36338d36..fc9a4e404a5 100644 --- a/product_docs/docs/pgd/5.6/node_management/creating_and_joining.mdx +++ b/product_docs/docs/pgd/5.6/node_management/creating_and_joining.mdx @@ -18,7 +18,7 @@ format, like `host=myhost port=5432 dbname=mydb`, or URI format, like `postgresql://myhost:5432/mydb`. The SQL function -[`bdr.create_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) +[`bdr.create_node_group()`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node_group) creates the PGD group from the local node. Doing so activates PGD on that node and allows other nodes to join the PGD group, which consists of only one node at that point. At the time of creation, you must specify the connection string for @@ -26,11 +26,11 @@ other nodes to use to connect to this node. Once the node group is created, every further node can join the PGD group using the -[`bdr.join_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +[`bdr.join_node_group()`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group) function. Alternatively, use the command line utility -[bdr_init_physical](/pgd/latest/reference/nodes/#bdr_init_physical) to create a +[bdr_init_physical](/pgd/5.6/reference/nodes/#bdr_init_physical) to create a new node, using `pg_basebackup`. If using `pg_basebackup`, the bdr_init_physical utility can optionally specify the base backup of only the target database. The earlier behavior was to back up the entire database cluster. With this utility, @@ -62,7 +62,7 @@ more details, see [Connections and roles](../security/role-management#connection Optionally, you can skip the schema synchronization using the `synchronize_structure` parameter of the -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +[`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group) function. In this case, the schema must already exist on the newly joining node. We recommend that you select the source node that has the best connection (logically close, ideally with low latency and high bandwidth) @@ -73,7 +73,7 @@ Coordinate the join procedure using the Raft consensus algorithm, which requires most existing nodes to be online and reachable. The logical join procedure (which uses the -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +[`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group) function) performs data sync doing `COPY` operations and uses multiple writers (parallel apply) if those are enabled. @@ -99,6 +99,6 @@ If this is necessary, run LiveCompare on the newly joined node to correct any data divergence once all nodes are available and caught up. `pg_dump` can fail when there's concurrent DDL activity on the source node -because of cache-lookup failures. Since [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) uses pg_dump +because of cache-lookup failures. Since [`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group) uses pg_dump internally, it might fail if there's concurrent DDL activity on the source node. Retrying the join works in that case. 
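+
+As a minimal sketch of the sequence described above (node names and DSNs are hypothetical, and each node is assumed to be registered first with `bdr.create_node`):
+
+```sql
+-- On the first node: register it, then create the PGD group
+SELECT bdr.create_node('node1', 'host=node1 port=5432 dbname=mydb');
+SELECT bdr.create_node_group('mygroup');
+
+-- On each additional node: register it, then join through an existing member
+SELECT bdr.create_node('node2', 'host=node2 port=5432 dbname=mydb');
+SELECT bdr.join_node_group('host=node1 port=5432 dbname=mydb');
+```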
diff --git a/product_docs/docs/pgd/5.6/node_management/creating_nodes.mdx b/product_docs/docs/pgd/5.6/node_management/creating_nodes.mdx index e2fbbee1bc3..d46c68f8ff0 100644 --- a/product_docs/docs/pgd/5.6/node_management/creating_nodes.mdx +++ b/product_docs/docs/pgd/5.6/node_management/creating_nodes.mdx @@ -13,7 +13,7 @@ That means, in the most general terms, you can create a PGD node by installing P ## Which Postgres version? -PGD is built on top of Postgres, so the distribution and version of Postgres you use for your PGD nodes is important. The version of Postgres you use must be compatible with the version of PGD you are using. You can find the compatibility matrix in the [release notes](/pgd/latest/rel_notes). Features and functionality in PGD may depend on the distribution of Postgres you are using. The [EDB Postgres Advanced Server](/epas/latest/) is the recommended distribution for PGD. PGD also supports [EDB Postgres Extended Server](/pge/latest/) and [Community Postgres](https://www.postgresql.org/). You can find out what features are available in each distribution in the Planning section's [Choosing a server](../planning/choosing_server) page. +PGD is built on top of Postgres, so the distribution and version of Postgres you use for your PGD nodes is important. The version of Postgres you use must be compatible with the version of PGD you are using. You can find the compatibility matrix in the [release notes](/pgd/5.6/rel_notes). Features and functionality in PGD may depend on the distribution of Postgres you are using. The [EDB Postgres Advanced Server](/epas/latest/) is the recommended distribution for PGD. PGD also supports [EDB Postgres Extended Server](/pge/latest/) and [Community Postgres](https://www.postgresql.org/). You can find out what features are available in each distribution in the Planning section's [Choosing a server](../planning/choosing_server) page. ## Installing Postgres @@ -35,7 +35,7 @@ This process is specific to PGD and involves configuring the Postgres instance t * Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

!!! Note The `max_worker_processes` value The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. - To calculate the needed value, see [Postgres configuration/settings](/pgd/latest/postgres-configuration/#postgres-settings). + To calculate the needed value, see [Postgres configuration/settings](/pgd/5.6/postgres-configuration/#postgres-settings). The value of 16 was calculated for the size of the cluster being deployed in this example. It must be increased for larger clusters. !!! * Set a password on the EnterpriseDB/Postgres user. diff --git a/product_docs/docs/pgd/5.6/node_management/heterogeneous_clusters.mdx b/product_docs/docs/pgd/5.6/node_management/heterogeneous_clusters.mdx index 62a3feed9c2..f28f10488a0 100644 --- a/product_docs/docs/pgd/5.6/node_management/heterogeneous_clusters.mdx +++ b/product_docs/docs/pgd/5.6/node_management/heterogeneous_clusters.mdx @@ -22,7 +22,7 @@ join the cluster. Don't run any DDLs that might not be available on the older versions and vice versa. A node joining with a different major PostgreSQL release can't use -physical backup taken with [`bdr_init_physical`](/pgd/latest/reference/nodes#bdr_init_physical), and the node must join +physical backup taken with [`bdr_init_physical`](/pgd/5.6/reference/nodes#bdr_init_physical), and the node must join using the logical join method. Using this method is necessary because the major PostgreSQL releases aren't on-disk compatible with each other. diff --git a/product_docs/docs/pgd/5.6/node_management/maintainance_with_proxies.mdx b/product_docs/docs/pgd/5.6/node_management/maintainance_with_proxies.mdx index 242d1436ab6..5101440baa0 100644 --- a/product_docs/docs/pgd/5.6/node_management/maintainance_with_proxies.mdx +++ b/product_docs/docs/pgd/5.6/node_management/maintainance_with_proxies.mdx @@ -39,7 +39,7 @@ select node_name from bdr.node; ``` !!! Tip -For more details, see the [`bdr.node`](/pgd/latest/reference/catalogs-visible#bdrnode) table. +For more details, see the [`bdr.node`](/pgd/5.6/reference/catalogs-visible#bdrnode) table. !!! This command lists just the node names. If you need to know the group they are a member of, use: @@ -49,7 +49,7 @@ select node_name, node_group_name from bdr.node_summary; ``` !!! Tip -For more details, see the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary) table. +For more details, see the [`bdr.node_summary`](/pgd/5.6/reference/catalogs-visible#bdrnode_summary) table. !!! ## Finding the write leader diff --git a/product_docs/docs/pgd/5.6/node_management/node_recovery.mdx b/product_docs/docs/pgd/5.6/node_management/node_recovery.mdx index b05ac8daaea..69efdd1ed35 100644 --- a/product_docs/docs/pgd/5.6/node_management/node_recovery.mdx +++ b/product_docs/docs/pgd/5.6/node_management/node_recovery.mdx @@ -7,7 +7,7 @@ PGD is designed to recover from node restart or node disconnection. The disconnected node rejoins the group by reconnecting to each peer node and then replicating any missing data from that node. -When a node starts up, each connection begins showing up in [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) with +When a node starts up, each connection begins showing up in [`bdr.node_slots`](/pgd/5.6/reference/catalogs-visible#bdrnode_slots) with `bdr.node_slots.state = catchup` and begins replicating missing data.
Catching up continues for a period of time that depends on the amount of missing data from each peer node and will likely increase diff --git a/product_docs/docs/pgd/5.6/node_management/removing_nodes_and_groups.mdx b/product_docs/docs/pgd/5.6/node_management/removing_nodes_and_groups.mdx index 1ba218e14c8..70375e74a19 100644 --- a/product_docs/docs/pgd/5.6/node_management/removing_nodes_and_groups.mdx +++ b/product_docs/docs/pgd/5.6/node_management/removing_nodes_and_groups.mdx @@ -10,9 +10,9 @@ permanently. If you permanently shut down a node and don't tell the other nodes, then performance suffers and eventually the whole system stops working. -Node removal, also called *parting*, is done using the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) +Node removal, also called *parting*, is done using the [`bdr.part_node()`](/pgd/5.6/reference/nodes-management-interfaces#bdrpart_node) function. You must specify the node name (as passed during node creation) -to remove a node. You can call the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) function from any active +to remove a node. You can call the [`bdr.part_node()`](/pgd/5.6/reference/nodes-management-interfaces#bdrpart_node) function from any active node in the PGD group, including the node that you're removing. Just like the join procedure, parting is done using Raft consensus and requires a @@ -26,7 +26,7 @@ most recent node to allow them to catch up any missing data. A parted node still is known to PGD but doesn't consume resources. A node might be added again under the same name as a parted node. In rare cases, you might want to clear all metadata of a parted -node by using the function [`bdr.drop_node()`](/pgd/latest/reference/functions-internal#bdrdrop_node). +node by using the function [`bdr.drop_node()`](/pgd/5.6/reference/functions-internal#bdrdrop_node). ## Removing a whole PGD group diff --git a/product_docs/docs/pgd/5.6/node_management/replication_slots.mdx b/product_docs/docs/pgd/5.6/node_management/replication_slots.mdx index 8fbe149f7ff..d734c6c9c1a 100644 --- a/product_docs/docs/pgd/5.6/node_management/replication_slots.mdx +++ b/product_docs/docs/pgd/5.6/node_management/replication_slots.mdx @@ -42,7 +42,7 @@ The group slot is an internal slot used by PGD primarily to track the oldest safe position that any node in the PGD group (including all logical standbys) has caught up to, for any outbound replication from this node. -The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name). +The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/5.6/reference/functions#bdrlocal_group_slot_name). The group slot can: diff --git a/product_docs/docs/pgd/5.6/node_management/viewing_topology.mdx b/product_docs/docs/pgd/5.6/node_management/viewing_topology.mdx index e9732a872ff..9b5d54f288a 100644 --- a/product_docs/docs/pgd/5.6/node_management/viewing_topology.mdx +++ b/product_docs/docs/pgd/5.6/node_management/viewing_topology.mdx @@ -26,7 +26,7 @@ pgd show-groups The following simple query lists all the PGD node groups of which the current node is a member. It currently returns only one row from -[`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary). +[`bdr.local_node_summary`](/pgd/5.6/reference/catalogs-visible#bdrlocal_node_summary). 
```sql SELECT node_group_name @@ -85,7 +85,7 @@ pgd show-nodes | grep group_b ### Using SQL You can extract the list of all nodes in a given node group (such as `mygroup`) -from the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary)` view. For example: +from the [`bdr.node_summary`](/pgd/5.6/reference/catalogs-visible#bdrnode_summary) view. For example: ```sql SELECT node_name AS name diff --git a/product_docs/docs/pgd/5.6/nodes/logical_standby_nodes.mdx b/product_docs/docs/pgd/5.6/nodes/logical_standby_nodes.mdx index a18d28fe430..44cc1e4ce39 100644 --- a/product_docs/docs/pgd/5.6/nodes/logical_standby_nodes.mdx +++ b/product_docs/docs/pgd/5.6/nodes/logical_standby_nodes.mdx @@ -14,17 +14,17 @@ A master node can have zero, one, or more logical standby nodes. location is always preferred. Logical standby nodes are nodes that are held in a state of continual recovery, -constantly updating until they're required. This behavior is similar to how Postgres physical standbys operate while using logical replication for better performance. [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) has the `pause_in_standby` +constantly updating until they're required. This behavior is similar to how Postgres physical standbys operate while using logical replication for better performance. [`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group) has the `pause_in_standby` option to make the node stay halfway-joined, as a logical standby node. Logical standby nodes receive changes but don't send changes made locally to other nodes. Later, if you want, use -[`bdr.promote_node`](/pgd/latest/reference/nodes-management-interfaces#bdrpromote_node) +[`bdr.promote_node`](/pgd/5.6/reference/nodes-management-interfaces#bdrpromote_node) to move the logical standby into a full, normal send/receive node. A logical standby is sent data by one source node, defined by the DSN in -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group). +[`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group). Changes from all other nodes are received from this one source node, minimizing bandwidth between multiple sites. diff --git a/product_docs/docs/pgd/5.6/nodes/subscriber_only/creating-so.mdx b/product_docs/docs/pgd/5.6/nodes/subscriber_only/creating-so.mdx index b95a5b31985..323b953891a 100644 --- a/product_docs/docs/pgd/5.6/nodes/subscriber_only/creating-so.mdx +++ b/product_docs/docs/pgd/5.6/nodes/subscriber_only/creating-so.mdx @@ -28,7 +28,7 @@ This creates a Subscriber-only group named `sogroup` which is a child of the `to ## Adding a node to a new Subscriber-only group manually -You can now initialize a new data node and then add it to the Subscriber-only group. Create a data node and configure the bdr extension on it as you would for any other data node. If you deployed manually, see the [manual install guide](/pgd/latest/deploy-config/deploy-manual/deploying/04-installing-software/) for instructions on how to install and deploy a data node. +You can now initialize a new data node and then add it to the Subscriber-only group. Create a data node and configure the bdr extension on it as you would for any other data node. If you deployed manually, see the [manual install guide](/pgd/5.6/deploy-config/deploy-manual/deploying/04-installing-software/) for instructions on how to install and deploy a data node.
You now have to create this new node as a `subscriber-only` node. To do this, log into the new node and run the following SQL command: diff --git a/product_docs/docs/pgd/5.6/nodes/subscriber_only/optimizing-so.mdx b/product_docs/docs/pgd/5.6/nodes/subscriber_only/optimizing-so.mdx index fbe4bd5416f..7ddaa7c19dc 100644 --- a/product_docs/docs/pgd/5.6/nodes/subscriber_only/optimizing-so.mdx +++ b/product_docs/docs/pgd/5.6/nodes/subscriber_only/optimizing-so.mdx @@ -56,7 +56,7 @@ The subscriber-only node and group form the building block for PGD tree topologi By default, PGD 5.6 forces the full mesh topology. This means the optimization described here is off. To enable the optimized topology, you must have your data nodes in subgroups, with proxy routing enabled on the subgroups. -You can then set the GUC [`bdr.force_full_mesh`](/pgd/latest/reference/pgd-settings#bdrforce_full_mesh) to `off` to allow the optimization to be activated. +You can then set the GUC [`bdr.force_full_mesh`](/pgd/5.6/reference/pgd-settings#bdrforce_full_mesh) to `off` to allow the optimization to be activated. !!! Note This GUC needs to be set in the `postgresql.conf` file on each data node and each node restarted for the change to take effect. diff --git a/product_docs/docs/pgd/5.6/parallelapply.mdx b/product_docs/docs/pgd/5.6/parallelapply.mdx index 726f0b79f61..1964444d8cb 100644 --- a/product_docs/docs/pgd/5.6/parallelapply.mdx +++ b/product_docs/docs/pgd/5.6/parallelapply.mdx @@ -13,9 +13,9 @@ subscription and improves replication performance. ### Configuring Parallel Apply Two variables control Parallel Apply in PGD 5: -[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) +[`bdr.max_writers_per_subscription`](/pgd/5.6/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and -[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) +[`bdr.writers_per_subscription`](/pgd/5.6/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain @@ -26,18 +26,18 @@ bdr.writers_per_subscription = 2 This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to eight writers for a subscription. -Changing [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) +Changing [`bdr.max_writers_per_subscription`](/pgd/5.6/reference/pgd-settings#bdrmax_writers_per_subscription) requires a server restart to take effect. You can change -[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) +[`bdr.writers_per_subscription`](/pgd/5.6/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription without a restart by: 1. Halting the subscription using - [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable). + [`bdr.alter_subscription_disable`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_subscription_disable). 1. Setting the new value. 1. Resuming the subscription using - [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). + [`bdr.alter_subscription_enable`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_subscription_enable). 
First though, establish the name of the subscription using `select * from @@ -61,7 +61,7 @@ Parallel Apply is always on by default and, for most operations, we recommend le ### Monitoring Parallel Apply To support Parallel Apply's deadlock mitigation, PGD 5.2 adds columns to -[`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription). +[`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible#bdrstat_subscription). The new columns are `nprovisional_waits`, `ntuple_waits`, and `ncommit_waits`. These are metrics that indicate how well Parallel Apply is managing what previously would have been deadlocks. They don't reflect overall system @@ -77,7 +77,7 @@ are counted in `ncommit_waits`. ### Disabling Parallel Apply -To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`. +To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/5.6/reference/pgd-settings#bdrwriters_per_subscription) to `1`. ### Deadlock mitigation diff --git a/product_docs/docs/pgd/5.6/planning/architectures.mdx b/product_docs/docs/pgd/5.6/planning/architectures.mdx index 45171806f3f..6d9091c1b95 100644 --- a/product_docs/docs/pgd/5.6/planning/architectures.mdx +++ b/product_docs/docs/pgd/5.6/planning/architectures.mdx @@ -1,11 +1,11 @@ --- title: "Choosing your architecture" redirects: - - /pgd/latest/architectures/bronze/ - - /pgd/latest/architectures/gold/ - - /pgd/latest/architectures/platinum/ - - /pgd/latest/architectures/silver/ - - /pgd/latest/architectures/ + - /pgd/5.6/architectures/bronze/ + - /pgd/5.6/architectures/gold/ + - /pgd/5.6/architectures/platinum/ + - /pgd/5.6/architectures/silver/ + - /pgd/5.6/architectures/ --- Always-on architectures reflect EDB’s Trusted Postgres architectures. They diff --git a/product_docs/docs/pgd/5.6/planning/choosing_server.mdx b/product_docs/docs/pgd/5.6/planning/choosing_server.mdx index 77fd46b098f..172740ff553 100644 --- a/product_docs/docs/pgd/5.6/planning/choosing_server.mdx +++ b/product_docs/docs/pgd/5.6/planning/choosing_server.mdx @@ -1,7 +1,7 @@ --- title: "Choosing a Postgres distribution" redirects: - - /pgd/latest/choosing_server/ + - /pgd/5.6/choosing_server/ --- EDB Postgres Distributed can be deployed with three different Postgres distributions: PostgreSQL, EDB Postgres Extended Server, or EDB Postgres Advanced Server. The availability of particular EDB Postgres Distributed features depends on the Postgres distribution being used. Therefore, it's essential to adopt the Postgres distribution best suited to your business needs. For example, if having the Commit At Most Once (CAMO) feature is mission critical to your use case, don't adopt open source PostgreSQL, which doesn't have the core capabilities required to handle CAMO.
@@ -10,28 +10,28 @@ The following table lists features of EDB Postgres Distributed that are dependen | Feature | PostgreSQL | EDB Postgres Extended | EDB Postgres Advanced | | ----------------------------------------------------------------------------------------------------------------------- | ---------- | --------------------- | --------------------- | -| [Rolling application and database upgrades](/pgd/latest/upgrades/) | Y | Y | Y | -| [Row-level last-update wins conflict resolution](/pgd/latest/conflict-management/conflicts/) | Y | Y | Y | -| [DDL replication](/pgd/latest/ddl/) | Y | Y | Y | -| [Granular DDL Locking](/pgd/latest/ddl/ddl-locking/) | Y | Y | Y | -| [Streaming of large transactions](/pgd/latest/transaction-streaming/) | v14+ | v13+ | v14+ | -| [Distributed sequences](/pgd/latest/sequences/#pgd-global-sequences) | Y | Y | Y | -| [Subscriber-only nodes](/pgd/latest/nodes/subscriber_only/) | Y | Y | Y | -| [Monitoring](/pgd/latest/monitoring/) | Y | Y | Y | -| [OpenTelemetry support](/pgd/latest/monitoring/otel/) | Y | Y | Y | -| [Parallel apply](/pgd/latest/parallelapply) | Y | Y | Y | -| [Conflict-free replicated data types (CRDTs)](/pgd/latest/conflict-management/crdt/) | Y | Y | Y | -| [Column-level conflict resolution](/pgd/latest/conflict-management/column-level-conflicts/) | Y | Y | Y | -| [Transform triggers](/pgd/latest/striggers/#transform-triggers) | Y | Y | Y | -| [Conflict triggers](/pgd/latest/striggers/#conflict-triggers) | Y | Y | Y | -| [Asynchronous replication](/pgd/latest/commit-scopes/) | Y | Y | Y | -| [Legacy synchronous replication](/pgd/latest/commit-scopes/legacy-sync/) | Y | Y | Y | -| [Group Commit](/pgd/latest/commit-scopes/group-commit/) | N | Y | 14+ | -| [Commit At Most Once (CAMO)](/pgd/latest/commit-scopes/camo/) | N | Y | 14+ | -| [Eager Conflict Resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) | N | Y | 14+ | -| [Lag Control](/pgd/latest/commit-scopes/lag-control/) | N | Y | 14+ | -| [Decoding Worker](/pgd/latest/decoding_worker) | N | 13+ | 14+ | -| [Lag tracker](/pgd/latest/monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | +| [Rolling application and database upgrades](/pgd/5.6/upgrades/) | Y | Y | Y | +| [Row-level last-update wins conflict resolution](/pgd/5.6/conflict-management/conflicts/) | Y | Y | Y | +| [DDL replication](/pgd/5.6/ddl/) | Y | Y | Y | +| [Granular DDL Locking](/pgd/5.6/ddl/ddl-locking/) | Y | Y | Y | +| [Streaming of large transactions](/pgd/5.6/transaction-streaming/) | v14+ | v13+ | v14+ | +| [Distributed sequences](/pgd/5.6/sequences/#pgd-global-sequences) | Y | Y | Y | +| [Subscriber-only nodes](/pgd/5.6/nodes/subscriber_only/) | Y | Y | Y | +| [Monitoring](/pgd/5.6/monitoring/) | Y | Y | Y | +| [OpenTelemetry support](/pgd/5.6/monitoring/otel/) | Y | Y | Y | +| [Parallel apply](/pgd/5.6/parallelapply) | Y | Y | Y | +| [Conflict-free replicated data types (CRDTs)](/pgd/5.6/conflict-management/crdt/) | Y | Y | Y | +| [Column-level conflict resolution](/pgd/5.6/conflict-management/column-level-conflicts/) | Y | Y | Y | +| [Transform triggers](/pgd/5.6/striggers/#transform-triggers) | Y | Y | Y | +| [Conflict triggers](/pgd/5.6/striggers/#conflict-triggers) | Y | Y | Y | +| [Asynchronous replication](/pgd/5.6/commit-scopes/) | Y | Y | Y | +| [Legacy synchronous replication](/pgd/5.6/commit-scopes/legacy-sync/) | Y | Y | Y | +| [Group Commit](/pgd/5.6/commit-scopes/group-commit/) | N | Y | 14+ | +| [Commit At Most Once (CAMO)](/pgd/5.6/commit-scopes/camo/) | N | Y | 
14+ | +| [Eager Conflict Resolution](/pgd/5.6/commit-scopes/group-commit/#eager-conflict-resolution) | N | Y | 14+ | +| [Lag Control](/pgd/5.6/commit-scopes/lag-control/) | N | Y | 14+ | +| [Decoding Worker](/pgd/5.6/decoding_worker) | N | 13+ | 14+ | +| [Lag tracker](/pgd/5.6/monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | | [Missing partition conflict](../reference/conflicts/#target_table_note) | N | Y | 14+ | | [No need for UPDATE Trigger on tables with TOAST](../conflict-management/conflicts/02_types_of_conflict/#toast-support-details) | N | Y | 14+ | | [Automatically hold back FREEZE](../conflict-management/conflicts/03_conflict_detection/#origin-conflict-detection) | N | Y | 14+ | diff --git a/product_docs/docs/pgd/5.6/planning/deployments.mdx b/product_docs/docs/pgd/5.6/planning/deployments.mdx index d5116ed8e8a..f8c09038a8c 100644 --- a/product_docs/docs/pgd/5.6/planning/deployments.mdx +++ b/product_docs/docs/pgd/5.6/planning/deployments.mdx @@ -2,7 +2,7 @@ title: "Choosing your deployment method" indexCards: simple redirects: -- /pgd/latest/deployments +- /pgd/5.6/deployments --- You can deploy and install EDB Postgres Distributed products using the following methods: diff --git a/product_docs/docs/pgd/5.6/planning/index.mdx b/product_docs/docs/pgd/5.6/planning/index.mdx index 3415e3e20b6..40dfdfa7ac4 100644 --- a/product_docs/docs/pgd/5.6/planning/index.mdx +++ b/product_docs/docs/pgd/5.6/planning/index.mdx @@ -3,11 +3,11 @@ title: Planning your PGD deployment navTitle: Planning description: Understand the requirements of your application and the capabilities of PGD to plan your deployment. navigation: - - architectures - - choosing_server - - deployments - - other_considerations - - limitations +- architectures +- choosing_server +- deployments +- other_considerations +- limitations --- Planning your PGD deployment involves understanding the requirements of your application and the capabilities of PGD. This section provides an overview of the key considerations for planning your PGD deployment. diff --git a/product_docs/docs/pgd/5.6/planning/limitations.mdx b/product_docs/docs/pgd/5.6/planning/limitations.mdx index c2c490c0c42..f1f183e76b7 100644 --- a/product_docs/docs/pgd/5.6/planning/limitations.mdx +++ b/product_docs/docs/pgd/5.6/planning/limitations.mdx @@ -1,7 +1,7 @@ --- title: "Limitations" redirects: -- /pgd/latest/limitations +- /pgd/5.6/limitations --- Take these EDB Postgres Distributed (PGD) design limitations @@ -71,37 +71,14 @@ Also, there are limitations on interoperability with legacy synchronous replicat interoperability with explicit two-phase commit, and unsupported combinations within commit scope rules. -See [Durability limitations](/pgd/latest/commit-scopes/limitations/) for a full +See [Durability limitations](/pgd/5.6/commit-scopes/limitations/) for a full and current listing. ## Mixed PGD versions -While PGD was developed to [enable rolling upgrades of -PGD](/pgd/latest/upgrades) by allowing mixed versions of PGD to operate during -the upgrade process, we expect users to run mixed versions only during upgrades -and for users to complete their upgrades as quickly as possible. We also -recommend that you test any rolling upgrade process in a non-production -environment before attempting it in production. - -When a node is upgraded, it returns to the cluster and communicates with the -other nodes in the cluster using the lowest version of the inter-node protocol -that is supported by all the other nodes in the cluster. 
This means that the -upgraded node will be able to communicate with all other nodes in the cluster, -but it will not be able to take advantage of any new features or improvements -that were introduced in the newer version of PGD. - -That will stay the case until all nodes in the cluster have been upgraded to the -same newer version. The longer you run mixed versions, the longer you will be -without the benefits of the new version, and the longer you will be exposed to -any potential interoperability issues that might arise from running mixed -versions. Mixed version clusters are not supported for extended periods of time. - -Therefore, once an PGD cluster upgrade has begun, you should complete the whole -cluster upgrade as quickly as possible. - -We don't support running mixed versions of PGD except during an upgrade, and we don't support clusters running mixed versions even while being upgraded, for extended periods. - -For more information on rolling upgrades and mixed versions, see [Rolling upgrade considerations](/pgd/latest/upgrades/manual_overview#rolling-upgrade-considerations). +PGD was developed to [enable rolling upgrades of PGD](/pgd/5.6/upgrades) by allowing mixed versions of PGD to operate during the upgrade process. +We expect users to run mixed versions only during upgrades and to complete an upgrade promptly once it has started. +We don't support running mixed versions of PGD except during an upgrade. ## Other limitations diff --git a/product_docs/docs/pgd/5.6/planning/other_considerations.mdx b/product_docs/docs/pgd/5.6/planning/other_considerations.mdx index 7c1025cab20..963a763a1b2 100644 --- a/product_docs/docs/pgd/5.6/planning/other_considerations.mdx +++ b/product_docs/docs/pgd/5.6/planning/other_considerations.mdx @@ -1,14 +1,14 @@ --- title: "Other considerations" redirects: -- /pgd/latest/other_considerations +- /pgd/5.6/other_considerations --- Review these other considerations when planning your deployment. ## Data consistency -Read about [Conflicts](/pgd/latest/conflict-management/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency. +Read about [Conflicts](/pgd/5.6/conflict-management/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency. ## Deployment @@ -32,4 +32,4 @@ EDB Postgres Distributed is designed to operate with nodes in multiple timezones Synchronize server clocks using NTP or other solutions. -Clock synchronization isn't critical to performance, as it is with some other solutions. Clock skew can affect origin conflict detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides row-version conflict detection, as described in [Conflict detection](/pgd/latest/conflict-management/conflicts/). +Clock synchronization isn't critical to performance, as it is with some other solutions. Clock skew can affect origin conflict detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides row-version conflict detection, as described in [Conflict detection](/pgd/5.6/conflict-management/conflicts/).
diff --git a/product_docs/docs/pgd/5.6/quickstart/quick_start_aws.mdx b/product_docs/docs/pgd/5.6/quickstart/quick_start_aws.mdx index c120a37cfe9..e37d550c1c3 100644 --- a/product_docs/docs/pgd/5.6/quickstart/quick_start_aws.mdx +++ b/product_docs/docs/pgd/5.6/quickstart/quick_start_aws.mdx @@ -4,9 +4,9 @@ navTitle: "Deploying on AWS" description: > A quick demonstration of deploying a PGD architecture using TPA on Amazon EC2 redirects: - - /pgd/latest/deployments/tpaexec/quick_start/ - - /pgd/latest/tpa/quick_start/ - - /pgd/latest/quick_start_aws/ + - /pgd/5.6/deployments/tpaexec/quick_start/ + - /pgd/5.6/tpa/quick_start/ + - /pgd/5.6/quick_start_aws/ --- diff --git a/product_docs/docs/pgd/5.6/quickstart/quick_start_cloud.mdx b/product_docs/docs/pgd/5.6/quickstart/quick_start_cloud.mdx index 2e01eee8bf3..26068f9a9e7 100644 --- a/product_docs/docs/pgd/5.6/quickstart/quick_start_cloud.mdx +++ b/product_docs/docs/pgd/5.6/quickstart/quick_start_cloud.mdx @@ -4,7 +4,7 @@ navTitle: "Deploying on Azure and Google" description: > A quick guide to deploying a PGD architecture using TPA on Azure and Google clouds redirects: - - /pgd/latest/quick_start_cloud/ + - /pgd/5.6/quick_start_cloud/ hideToC: True --- diff --git a/product_docs/docs/pgd/5.6/quickstart/quick_start_docker.mdx b/product_docs/docs/pgd/5.6/quickstart/quick_start_docker.mdx index 61e9c0b885e..69ffe00ebc5 100644 --- a/product_docs/docs/pgd/5.6/quickstart/quick_start_docker.mdx +++ b/product_docs/docs/pgd/5.6/quickstart/quick_start_docker.mdx @@ -4,7 +4,7 @@ navTitle: "Deploying on Docker" description: > A quick demonstration of deploying a PGD architecture using TPA on Docker redirects: - - /pgd/latest/quick_start_docker/ + - /pgd/5.6/quick_start_docker/ --- diff --git a/product_docs/docs/pgd/5.6/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5.6/quickstart/quick_start_linux.mdx index 1e5fbe0f393..c7ff9ccf2ef 100644 --- a/product_docs/docs/pgd/5.6/quickstart/quick_start_linux.mdx +++ b/product_docs/docs/pgd/5.6/quickstart/quick_start_linux.mdx @@ -4,7 +4,7 @@ navTitle: "Deploying on Linux hosts" description: > A quick demonstration of deploying a PGD architecture using TPA on Linux hosts redirects: - - /pgd/latest/quick_start_bare/ + - /pgd/5.6/quick_start_bare/ --- ## Introducing TPA and PGD diff --git a/product_docs/docs/pgd/5.6/reference/catalogs-internal.mdx b/product_docs/docs/pgd/5.6/reference/catalogs-internal.mdx index df6820f9b89..0fe425eb082 100644 --- a/product_docs/docs/pgd/5.6/reference/catalogs-internal.mdx +++ b/product_docs/docs/pgd/5.6/reference/catalogs-internal.mdx @@ -71,7 +71,7 @@ node. Specifically, it tracks: * Node joins (to the cluster) * Raft state changes (that is, whenever the node changes its role in the consensus protocol - leader, follower, or candidate to leader); see [Monitoring Raft consensus](../monitoring/sql#monitoring-raft-consensus) -* Whenever a worker has errored out (see [bdr.workers](/pgd/latest/reference/catalogs-visible/#bdrworkers) +* Whenever a worker has errored out (see [bdr.workers](/pgd/5.6/reference/catalogs-visible/#bdrworkers) and [Monitoring PGD replication workers](../monitoring/sql#monitoring-pgd-replication-workers)) #### `bdr.event_history` columns @@ -94,7 +94,7 @@ as textual representations rather than integers. ### `bdr.local_leader_change` -This is a local cache of the recent portion of leader change history. 
It has the same fields as [`bdr.leader`](/pgd/latest/reference/catalogs-visible#bdrleader), except that it is an ordered set of (node_group_id, leader_kind, generation) instead of a map tracking merely the current version. +This is a local cache of the recent portion of leader change history. It has the same fields as [`bdr.leader`](/pgd/5.6/reference/catalogs-visible#bdrleader), except that it is an ordered set of (node_group_id, leader_kind, generation) instead of a map tracking merely the current version. diff --git a/product_docs/docs/pgd/5.6/reference/catalogs-visible.mdx b/product_docs/docs/pgd/5.6/reference/catalogs-visible.mdx index 44355815073..be2d2b9363e 100644 --- a/product_docs/docs/pgd/5.6/reference/catalogs-visible.mdx +++ b/product_docs/docs/pgd/5.6/reference/catalogs-visible.mdx @@ -969,7 +969,7 @@ A view containing all the necessary info about the replication subscription rece | sub_slot_name | name | Replication slot name used by the receiver | source_name | name | Source node for this receiver (the one it connects to), this is normally the same as the origin node, but is different for forward mode subscriptions | origin_name | name | The origin node for this receiver (the one it receives forwarded changes from), this is normally the same as the source node, but is different for forward mode subscriptions -| subscription_mode | char | Mode of the subscription, see [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible/#bdrsubscription_summary) for more details +| subscription_mode | char | Mode of the subscription, see [`bdr.subscription_summary`](/pgd/5.6/reference/catalogs-visible/#bdrsubscription_summary) for more details | sub_replication_sets| text[] | Replication sets this receiver is subscribed to | sub_apply_delay | interval | Apply delay interval | receive_lsn | pg_lsn | LSN of the last change received so far diff --git a/product_docs/docs/pgd/5.6/reference/commit-scopes.mdx b/product_docs/docs/pgd/5.6/reference/commit-scopes.mdx index 1dfcedd54a9..c202d0a7cf4 100644 --- a/product_docs/docs/pgd/5.6/reference/commit-scopes.mdx +++ b/product_docs/docs/pgd/5.6/reference/commit-scopes.mdx @@ -7,13 +7,13 @@ rootisheading: false deepToC: true --- -Commit scopes are rules that determine how transaction commits and conflicts are handled within a PGD system. You can read more about them in [Commit Scopes](/pgd/latest/commit-scopes/). +Commit scopes are rules that determine how transaction commits and conflicts are handled within a PGD system. You can read more about them in [Commit Scopes](/pgd/5.6/commit-scopes/). You can manipulate commit scopes using the following functions: -- [`bdr.create_commit_scope`](/pgd/latest/reference/functions#bdrcreate_commit_scope) -- [`bdr.alter_commit_scope`](/pgd/latest/reference/functions#bdralter_commit_scope) -- [`bdr.drop_commit_scope`](/pgd/latest/reference/functions#bdrdrop_commit_scope) +- [`bdr.create_commit_scope`](/pgd/5.6/reference/functions#bdrcreate_commit_scope) +- [`bdr.alter_commit_scope`](/pgd/5.6/reference/functions#bdralter_commit_scope) +- [`bdr.drop_commit_scope`](/pgd/5.6/reference/functions#bdrdrop_commit_scope) ## Commit scope syntax @@ -55,7 +55,7 @@ Where `node_group` is the name of a PGD data node group. The `commit_scope_degrade_operation` is either the same commit scope kind with a less restrictive commit scope group as the overall rule being defined, or is asynchronous (`ASYNC`). 
-For instance, [you can degrade](/pgd/latest/commit-scopes/degrading/) from an `ALL SYNCHRONOUS COMMIT` to a `MAJORITY SYNCHRONOUS COMMIT` or a `MAJORITY SYNCHRONOUS COMMIT` to an `ANY 3 SYNCHRONOUS COMMIT` or even an `ANY 3 SYNCHRONOUS COMMIT` to an `ANY 2 SYNCHRONOUS COMMIT`. You can also degrade from `SYNCHRONOUS COMMIT` to `ASYNC`. However, you cannot degrade from `SYNCHRONOUS COMMIT` to `GROUP COMMIT` or the other way around, regardless of the commit scope groups involved. +For instance, [you can degrade](/pgd/5.6/commit-scopes/degrading/) from an `ALL SYNCHRONOUS COMMIT` to a `MAJORITY SYNCHRONOUS COMMIT` or a `MAJORITY SYNCHRONOUS COMMIT` to an `ANY 3 SYNCHRONOUS COMMIT` or even an `ANY 3 SYNCHRONOUS COMMIT` to an `ANY 2 SYNCHRONOUS COMMIT`. You can also degrade from `SYNCHRONOUS COMMIT` to `ASYNC`. However, you cannot degrade from `SYNCHRONOUS COMMIT` to `GROUP COMMIT` or the other way around, regardless of the commit scope groups involved. It is also possible to combine rules using `AND`, each with their own degradation clause: diff --git a/product_docs/docs/pgd/5.6/reference/functions-internal.mdx b/product_docs/docs/pgd/5.6/reference/functions-internal.mdx index b275c29e62c..8ef4c635e9c 100644 --- a/product_docs/docs/pgd/5.6/reference/functions-internal.mdx +++ b/product_docs/docs/pgd/5.6/reference/functions-internal.mdx @@ -177,7 +177,7 @@ Use of this internal function is limited to: * When you're instructed to by EDB Technical Support. * Where you're specifically instructed to in the documentation. -Use [`bdr.part_node`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) to remove a node from a PGD group. That function sets the node to `PARTED` state and enables reuse of the node name. +Use [`bdr.part_node`](/pgd/5.6/reference/nodes-management-interfaces#bdrpart_node) to remove a node from a PGD group. That function sets the node to `PARTED` state and enables reuse of the node name. !!! @@ -492,40 +492,40 @@ Internal function intended for use by PGD-CLI. ### `bdr.stat_get_activity` -Internal function underlying view `bdr.stat_activity`. Do not use directly. Use the [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) view instead. +Internal function underlying view `bdr.stat_activity`. Do not use directly. Use the [`bdr.stat_activity`](/pgd/5.6/reference/catalogs-visible#bdrstat_activity) view instead. ### `bdr.worker_role_id_name` -Internal helper function used when generating view `bdr.worker_tasks`. Do not use directly. Use the [`bdr.worker_tasks`](/pgd/latest/reference/catalogs-visible#bdrworker_tasks) view instead. +Internal helper function used when generating view `bdr.worker_tasks`. Do not use directly. Use the [`bdr.worker_tasks`](/pgd/5.6/reference/catalogs-visible#bdrworker_tasks) view instead. ### `bdr.lag_history` -Internal function used when generating view `bdr.node_replication_rates`. Do not use directly. Use the [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) view instead. +Internal function used when generating view `bdr.node_replication_rates`. Do not use directly. Use the [`bdr.node_replication_rates`](/pgd/5.6/reference/catalogs-visible#bdrnode_replication_rates) view instead. ### `bdr.get_raft_instance_by_nodegroup` -Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) view instead. 
+Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_raft_details) view instead. ### `bdr.monitor_camo_on_all_nodes` -Internal function used when generating view `bdr.group_camo_details`. Do not use directly. Use the [`bdr.group_camo_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_camo_details) view instead. +Internal function used when generating view `bdr.group_camo_details`. Do not use directly. Use the [`bdr.group_camo_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_camo_details) view instead. ### `bdr.monitor_raft_details_on_all_nodes` -Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) view instead. +Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_raft_details) view instead. ### `bdr.monitor_replslots_details_on_all_nodes` -Internal function used when generating view `bdr.group_replslots_details`. Do not use directly. Use the [`bdr.group_replslots_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_replslots_details) view instead. +Internal function used when generating view `bdr.group_replslots_details`. Do not use directly. Use the [`bdr.group_replslots_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_replslots_details) view instead. ### `bdr.monitor_subscription_details_on_all_nodes` -Internal function used when generating view `bdr.group_subscription_summary`. Do not use directly. Use the [`bdr.group_subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrgroup_subscription_summary) view instead. +Internal function used when generating view `bdr.group_subscription_summary`. Do not use directly. Use the [`bdr.group_subscription_summary`](/pgd/5.6/reference/catalogs-visible#bdrgroup_subscription_summary) view instead. ### `bdr.monitor_version_details_on_all_nodes` -Internal function used when generating view `bdr.group_versions_details`. Do not use directly. Use the [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) view instead. +Internal function used when generating view `bdr.group_versions_details`. Do not use directly. Use the [`bdr.group_versions_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_versions_details) view instead. ### `bdr.node_group_member_info` -Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) view instead. \ No newline at end of file +Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_raft_details) view instead. \ No newline at end of file diff --git a/product_docs/docs/pgd/5.6/reference/functions.mdx b/product_docs/docs/pgd/5.6/reference/functions.mdx index 20308f1d296..6701b927f8e 100644 --- a/product_docs/docs/pgd/5.6/reference/functions.mdx +++ b/product_docs/docs/pgd/5.6/reference/functions.mdx @@ -281,7 +281,7 @@ If a slot is dropped concurrently, the wait ends for that slot. If a node is currently down and isn't updating its slot, then the wait continues. You might want to set `statement_timeout` to complete earlier in that case. 
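+
+For instance (a sketch; passing `NULL` for both arguments to wait on all slots up to the current flush position is an assumption based on the defaults described for this function):
+
+```sql
+-- Stop waiting after a minute in case a peer node is down
+SET statement_timeout = '60s';
+-- Wait until every PGD slot passes the current WAL flush position
+SELECT bdr.wait_slot_confirm_lsn(NULL, NULL);
+```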
-If you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so), we recommend using [`bdr.wait_node_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_node_confirm_lsn) instead. +If you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so), we recommend using [`bdr.wait_node_confirm_lsn`](/pgd/5.6/reference/functions#bdrwait_node_confirm_lsn) instead. ) #### Synopsis @@ -312,7 +312,7 @@ If no LSN is supplied, the current wal_flush_lsn (using the `pg_current_wal_flus Supplying a node name parameter tells the function to wait for that node to pass the LSN. If no node name is supplied (by passing NULL), the function waits until all the nodes pass the LSN. -We recommend using this function if you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so) instead of [`bdr.wait_slot_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn). +We recommend using this function if you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so) instead of [`bdr.wait_slot_confirm_lsn`](/pgd/5.6/reference/functions#bdrwait_slot_confirm_lsn). This is because in an Optimized Topology, not all nodes have replication slots, so the function `bdr.wait_slot_confirm_lsn` might not work as expected. `bdr.wait_node_confirm_lsn` is designed to work with nodes that don't have replication slots, using alternative strategies to determine the progress of a node. @@ -433,7 +433,7 @@ bdr.replicate_ddl_command(ddl_cmd text, | --------- | ----------- | | `ddl_cmd` | DDL command to execute. | | `replication_sets` | An array of replication set names to apply the `ddlcommand` to. If NULL (or the function is passed only the `ddlcommand`), this parameter is set to the active PGD groups's default replication set. | -| `ddl_locking` | A string that sets the [`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) value while replicating. Defaults to the GUC value for `bdr.ddl_locking` on the local system that's running `replicate_ddl_command`. | +| `ddl_locking` | A string that sets the [`bdr.ddl_locking`](/pgd/5.6/reference/pgd-settings#bdrddl_locking) value while replicating. Defaults to the GUC value for `bdr.ddl_locking` on the local system that's running `replicate_ddl_command`. | | `execute_locally` | A Boolean that determines whether the DDL command executes locally. Defaults to true. | #### Notes @@ -1054,7 +1054,7 @@ bdr.lag_control() | Column name | Description | |----------------------------|---------------------------------------------------------------------------------------------------------------------------| -| `commit_scope_id` | OID of the commit scope (see [`bdr.commit_scopes`](/pgd/latest/reference/catalogs-visible#bdrcommit_scopes)). | +| `commit_scope_id` | OID of the commit scope (see [`bdr.commit_scopes`](/pgd/5.6/reference/catalogs-visible#bdrcommit_scopes)). | | `sessions` | Number of sessions referencing the lag control entry. | | `current_commit_delay` | Current runtime commit delay, in fractional milliseconds. | | `maximum_commit_delay` | Configured maximum commit delay, in fractional milliseconds. | @@ -1174,7 +1174,7 @@ The client must be prepared to retry the function call on error. ### `bdr.add_commit_scope` -**Deprecated**. Use [`bdr.create_commit_scope`](/pgd/latest/reference/functions#bdrcreate_commit_scope) instead. Previously, this function was used to add a commit scope to a node group. 
It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. +**Deprecated**. Use [`bdr.create_commit_scope`](/pgd/5.6/reference/functions#bdrcreate_commit_scope) instead. Previously, this function was used to add a commit scope to a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. ### `bdr.create_commit_scope` @@ -1194,7 +1194,7 @@ bdr.create_commit_scope( #### Note -`bdr.create_commit_scope` replaces the deprecated [`bdr.add_commit_scope`](/pgd/latest/reference/functions#bdradd_commit_scope) function. Unlike `add_commit_scope`, it does not silently overwrite existing commit scopes when the same name is used. Instead, an error is reported. +`bdr.create_commit_scope` replaces the deprecated [`bdr.add_commit_scope`](/pgd/5.6/reference/functions#bdradd_commit_scope) function. Unlike `add_commit_scope`, it does not silently overwrite existing commit scopes when the same name is used. Instead, an error is reported. ### `bdr.alter_commit_scope` @@ -1226,4 +1226,4 @@ bdr.drop_commit_scope( ### `bdr.remove_commit_scope` -**Deprecated**. Use [`bdr.drop_commit_scope`](/pgd/latest/reference/functions#bdrdrop_commit_scope) instead. Previously, this function was used to remove a commit scope from a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. +**Deprecated**. Use [`bdr.drop_commit_scope`](/pgd/5.6/reference/functions#bdrdrop_commit_scope) instead. Previously, this function was used to remove a commit scope from a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. diff --git a/product_docs/docs/pgd/5.6/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5.6/reference/nodes-management-interfaces.mdx index 669df80c07e..56458f9be2e 100644 --- a/product_docs/docs/pgd/5.6/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5.6/reference/nodes-management-interfaces.mdx @@ -317,7 +317,7 @@ bdr.join_node_group ( If `wait_for_completion` is specified as `false`, the function call returns as soon as the joining procedure starts. You can see the progress of the join in -the log files and the [`bdr.event_summary`](/pgd/latest/reference/catalogs-internal#bdrevent_summary) +the log files and the [`bdr.event_summary`](/pgd/5.6/reference/catalogs-internal#bdrevent_summary) information view. You can call the function [`bdr.wait_for_join_completion()`](#bdrwait_for_join_completion) after `bdr.join_node_group()` to wait for the join operation to complete. It can emit progress information if called with `verbose_progress` set to `true`. diff --git a/product_docs/docs/pgd/5.6/reference/pgd-settings.mdx b/product_docs/docs/pgd/5.6/reference/pgd-settings.mdx index 4c2bf954370..3130f39dcfb 100644 --- a/product_docs/docs/pgd/5.6/reference/pgd-settings.mdx +++ b/product_docs/docs/pgd/5.6/reference/pgd-settings.mdx @@ -489,15 +489,15 @@ archival, and rotation to prevent disk space exhaustion. ### `bdr.track_subscription_apply` -Tracks apply statistics for each subscription with [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) (default is `on`). +Tracks apply statistics for each subscription with [`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible#bdrstat_subscription) (default is `on`). 
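+
+A small sketch of changing this setting (assuming it can be set with `ALTER SYSTEM` and applied on reload, like other server parameters):
+
+```sql
+ALTER SYSTEM SET bdr.track_subscription_apply = off;
+SELECT pg_reload_conf();
+SHOW bdr.track_subscription_apply;
+```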
### `bdr.track_relation_apply` -Tracks apply statistics for each relation with [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) (default is `off`). +Tracks apply statistics for each relation with [`bdr.stat_relation`](/pgd/5.6/reference/catalogs-visible#bdrstat_relation) (default is `off`). ### `bdr.track_apply_lock_timing` -Tracks lock timing when tracking statistics for relations with [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) (default is `off`). +Tracks lock timing when tracking statistics for relations with [`bdr.stat_relation`](/pgd/5.6/reference/catalogs-visible#bdrstat_relation) (default is `off`). ## Decoding worker diff --git a/product_docs/docs/pgd/5.6/rel_notes/pgd_5.4.0_rel_notes.mdx b/product_docs/docs/pgd/5.6/rel_notes/pgd_5.4.0_rel_notes.mdx index 2ba18fc19cf..bd294263980 100644 --- a/product_docs/docs/pgd/5.6/rel_notes/pgd_5.4.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5.6/rel_notes/pgd_5.4.0_rel_notes.mdx @@ -17,7 +17,7 @@ We recommend that all users of PGD 5 upgrade to PGD 5.4. See [PGD/TPA upgrades]( Highlights of this 5.4.0 release include improvements to: * Group Commit, aiming to optimize performance by minimizing the effect of a node's downtime and simplifying overall operating of PGD clusters. -* `apply_delay`, enabling the creation of a delayed read-only [replica](/pgd/latest/nodes/subscriber_only/overview/) for additional options for disaster recovery and to mitigate the impact of human error, such as accidental DROP table statements. +* `apply_delay`, enabling the creation of a delayed read-only [replica](/pgd/5.6/nodes/subscriber_only/overview/) for additional options for disaster recovery and to mitigate the impact of human error, such as accidental DROP table statements. ## Compatibility diff --git a/product_docs/docs/pgd/5.6/repsets.mdx b/product_docs/docs/pgd/5.6/repsets.mdx index afcf3cf7bab..19009009ffb 100644 --- a/product_docs/docs/pgd/5.6/repsets.mdx +++ b/product_docs/docs/pgd/5.6/repsets.mdx @@ -17,7 +17,7 @@ In other words, by default, all user tables are replicated between all nodes. ## Using replication sets -You can create replication sets using [`bdr.create_replication_set`](/pgd/latest/reference/repsets-management#bdrcreate_replication_set), +You can create replication sets using [`bdr.create_replication_set`](/pgd/5.6/reference/repsets-management#bdrcreate_replication_set), specifying whether to include insert, update, delete, or truncate actions. One option lets you add existing tables to the set, and a second option defines whether to add tables when they're @@ -33,12 +33,12 @@ Once the node is joined, you can still remove tables from the replication set, but you must add new tables using a resync operation. By default, a newly defined replication set doesn't replicate DDL or PGD -administration function calls. Use [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) +administration function calls. Use [`bdr.replication_set_add_ddl_filter`](/pgd/5.6/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) to define the commands to replicate. PGD creates replication set definitions on all nodes. Each node can then be defined to publish or subscribe to each replication set using -[`bdr.alter_node_replication_sets`](/pgd/latest/reference/repsets-management#bdralter_node_replication_sets). +[`bdr.alter_node_replication_sets`](/pgd/5.6/reference/repsets-management#bdralter_node_replication_sets). 
You can use functions to alter these definitions later or to drop the replication set. @@ -146,7 +146,7 @@ of replication set A that replicates only INSERT actions and replication set B t replicates only UPDATE actions. Both INSERT and UPDATE actions are replicated if the target node is also subscribed to both replication set A and B. -You can control membership using [`bdr.replication_set_add_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_add_table) and [`bdr.replication_set_remove_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_remove_table). +You can control membership using [`bdr.replication_set_add_table`](/pgd/5.6/reference/repsets-membership#bdrreplication_set_add_table) and [`bdr.replication_set_remove_table`](/pgd/5.6/reference/repsets-membership#bdrreplication_set_remove_table). ## Listing replication sets @@ -245,7 +245,7 @@ filter, the regular expression applied to the command tag and to the role name: SELECT * FROM bdr.ddl_replication; ``` -You can use [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) and [`bdr.replication_set_remove_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) to manipulate DDL filters. +You can use [`bdr.replication_set_add_ddl_filter`](/pgd/5.6/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) and [`bdr.replication_set_remove_ddl_filter`](/pgd/5.6/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) to manipulate DDL filters. They're considered to be `DDL` and are therefore subject to DDL replication and global locking. diff --git a/product_docs/docs/pgd/5.6/routing/administering.mdx b/product_docs/docs/pgd/5.6/routing/administering.mdx index fe989098721..7610765d870 100644 --- a/product_docs/docs/pgd/5.6/routing/administering.mdx +++ b/product_docs/docs/pgd/5.6/routing/administering.mdx @@ -22,7 +22,7 @@ The switchover operation is not a guaranteed operation. If, due to a timeout or You can perform a switchover operation that explicitly changes the node that's the write leader to another node. -Use the [`bdr.routing_leadership_transfer()`](/pgd/latest/reference/routing#bdrrouting_leadership_transfer) function. +Use the [`bdr.routing_leadership_transfer()`](/pgd/5.6/reference/routing#bdrrouting_leadership_transfer) function. For example, to switch the write leader to node `node1` in group `group1`, use the following SQL command: diff --git a/product_docs/docs/pgd/5.6/routing/configuration.mdx b/product_docs/docs/pgd/5.6/routing/configuration.mdx index 3e11ee0964c..957bc3bd4aa 100644 --- a/product_docs/docs/pgd/5.6/routing/configuration.mdx +++ b/product_docs/docs/pgd/5.6/routing/configuration.mdx @@ -8,7 +8,7 @@ navTitle: "Configuration" Configuring the routing is done either through SQL interfaces or through PGD CLI. -You can enable routing decisions by calling the [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option) function. +You can enable routing decisions by calling the [`bdr.alter_node_group_option()`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_node_group_option) function. For example: ```text @@ -27,7 +27,7 @@ Additional group-level options affect the routing decisions: ## Node-level configuration -Set per-node configuration of routing using [`bdr.alter_node_option()`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option). 
The +Set per-node configuration of routing using [`bdr.alter_node_option()`](/pgd/5.6/reference/nodes-management-interfaces#bdralter_node_option). The available options that affect routing are: - `route_dsn` — The dsn used by proxy to connect to this node. @@ -45,7 +45,7 @@ You can configure the proxies using SQL interfaces. ### Creating and dropping proxy configurations -You can add a proxy configuration using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy). +You can add a proxy configuration using [`bdr.create_proxy`](/pgd/5.6/reference/routing#bdrcreate_proxy). For example, `SELECT bdr.create_proxy('region1-proxy1', 'region1-group');` creates the default configuration for a proxy named `region1-proxy1` in the PGD group `region1-group`. @@ -56,7 +56,7 @@ Dropping a proxy deactivates it. ### Altering proxy configurations -You can configure options for each proxy using the [`bdr.alter_proxy_option()`](/pgd/latest/reference/routing#bdralter_proxy_option) function. +You can configure options for each proxy using the [`bdr.alter_proxy_option()`](/pgd/5.6/reference/routing#bdralter_proxy_option) function. The available options are: diff --git a/product_docs/docs/pgd/5.6/routing/index.mdx b/product_docs/docs/pgd/5.6/routing/index.mdx index 27308c310ab..ae297f30029 100644 --- a/product_docs/docs/pgd/5.6/routing/index.mdx +++ b/product_docs/docs/pgd/5.6/routing/index.mdx @@ -15,16 +15,16 @@ navigation: Managing application connections is an important part of high availability. PGD Proxy offers a way to manage connections to the EDB Postgres Distributed cluster. It acts as a proxy layer between the client application and the Postgres database. -* [PGD Proxy overview](/pgd/latest/routing/proxy) provides an overview of the PGD Proxy, its processes, and how it interacts with the EDB Postgres Distributed cluster. +* [PGD Proxy overview](/pgd/5.6/routing/proxy) provides an overview of the PGD Proxy, its processes, and how it interacts with the EDB Postgres Distributed cluster. -* [Installing the PGD Proxy service](/pgd/latest/routing/installing_proxy) covers installation of the PGD Proxy service on a host. +* [Installing the PGD Proxy service](/pgd/5.6/routing/installing_proxy) covers installation of the PGD Proxy service on a host. -* [Configuring PGD Proxy](/pgd/latest/routing/configuration) details the three levels (group, node, and proxy) of configuration on a cluster that control how the PGD Proxy service behaves. +* [Configuring PGD Proxy](/pgd/5.6/routing/configuration) details the three levels (group, node, and proxy) of configuration on a cluster that control how the PGD Proxy service behaves. -* [Administering PGD Proxy](/pgd/latest/routing/administering) shows how to switch the write leader and manage the PGD Proxy. +* [Administering PGD Proxy](/pgd/5.6/routing/administering) shows how to switch the write leader and manage the PGD Proxy. -* [Monitoring PGD Proxy](/pgd/latest/routing/monitoring) looks at how to monitor PGD Proxy through the cluster and at a service level. +* [Monitoring PGD Proxy](/pgd/5.6/routing/monitoring) looks at how to monitor PGD Proxy through the cluster and at a service level. -* [Read-only routing](/pgd/latest/routing/readonly) explains how the read-only routing feature in PGD Proxy enables read scalability. +* [Read-only routing](/pgd/5.6/routing/readonly) explains how the read-only routing feature in PGD Proxy enables read scalability. 
-* [Raft](/pgd/latest/routing/raft) provides an overview of the Raft consensus mechanism used to coordinate PGD Proxy.
+* [Raft](/pgd/5.6/routing/raft) provides an overview of the Raft consensus mechanism used to coordinate PGD Proxy.
diff --git a/product_docs/docs/pgd/5.6/routing/monitoring.mdx b/product_docs/docs/pgd/5.6/routing/monitoring.mdx
index d8383e276be..d92962a6cbf 100644
--- a/product_docs/docs/pgd/5.6/routing/monitoring.mdx
+++ b/product_docs/docs/pgd/5.6/routing/monitoring.mdx
@@ -9,11 +9,11 @@ You can monitor proxies at the cluster and group level or at the process level.
 ### Using SQL
-The current configuration of every group is visible in the [`bdr.node_group_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_group_routing_config_summary) view.
+The current configuration of every group is visible in the [`bdr.node_group_routing_config_summary`](/pgd/5.6/reference/catalogs-internal#bdrnode_group_routing_config_summary) view.
-The [`bdr.node_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_routing_config_summary) view shows current per-node routing configuration.
+The [`bdr.node_routing_config_summary`](/pgd/5.6/reference/catalogs-internal#bdrnode_routing_config_summary) view shows current per-node routing configuration.
-[`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) shows per-proxy configuration.
+[`bdr.proxy_config_summary`](/pgd/5.6/reference/catalogs-internal#bdrproxy_config_summary) shows per-proxy configuration.
 ## Monitoring at the process level
diff --git a/product_docs/docs/pgd/5.6/routing/proxy.mdx b/product_docs/docs/pgd/5.6/routing/proxy.mdx
index 3ea91560070..f3a2f54a88a 100644
--- a/product_docs/docs/pgd/5.6/routing/proxy.mdx
+++ b/product_docs/docs/pgd/5.6/routing/proxy.mdx
@@ -69,7 +69,7 @@ Upon starting, PGD Proxy connects to one of the endpoints given in the local con
- Proxy options like listen address, listen port.
- Routing details including the current write leader in default mode, read nodes in read-only mode, or both in any mode.
-The endpoints given in the config file are used only at startup. After that, actual endpoints are taken from the PGD catalog's `route_dsn` field in [`bdr.node_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_routing_config_summary).
+The endpoints given in the config file are used only at startup. After that, actual endpoints are taken from the PGD catalog's `route_dsn` field in [`bdr.node_routing_config_summary`](/pgd/5.6/reference/catalogs-internal#bdrnode_routing_config_summary).
 PGD manages write leader election. PGD Proxy interacts with PGD to get write leader change event notifications on Postgres notify/listen channels and routes client traffic to the current write leader. PGD Proxy disconnects all existing client connections on write leader change or when the write leader is unavailable. Write leader election is a Raft-backed activity and is subject to Raft leader availability. PGD Proxy closes the new client connections if the write leader is unavailable.
diff --git a/product_docs/docs/pgd/5.6/scaling.mdx b/product_docs/docs/pgd/5.6/scaling.mdx
index 49bb625b580..d29985e5d01 100644
--- a/product_docs/docs/pgd/5.6/scaling.mdx
+++ b/product_docs/docs/pgd/5.6/scaling.mdx
@@ -19,7 +19,7 @@ your search_path, you need to schema qualify the name of each function.
## Auto creation of partitions -PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) +PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/5.6/reference/autopartition#bdrautopartition) function to create or alter the definition of automatic range partitioning for a table. If no definition exists, it's created. Otherwise, later executions will alter the definition. @@ -42,7 +42,7 @@ case, all partitions are managed locally on each node. Managing partitions locally is useful when the partitioned table isn't a replicated table. In that case, you might not need or want to have all partitions on all nodes. For example, the built-in -[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) +[`bdr.conflict_history`](/pgd/5.6/reference/catalogs-visible#bdrconflict_history) table isn't a replicated table. It's managed by AutoPartition locally. Each node creates partitions for this table locally and drops them once they're old enough. @@ -145,7 +145,7 @@ upper bound. ## Stopping automatic creation of partitions Use -[`bdr.drop_autopartition()`](/pgd/latest/reference/autopartition#bdrdrop_autopartition) +[`bdr.drop_autopartition()`](/pgd/5.6/reference/autopartition#bdrdrop_autopartition) to drop the autopartitioning rule for the given relation. All pending work items for the relation are deleted, and no new work items are created. @@ -155,7 +155,7 @@ Partition creation is an asynchronous process. AutoPartition provides a set of functions to wait for the partition to be created, locally or on all nodes. Use -[`bdr.autopartition_wait_for_partitions()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions) +[`bdr.autopartition_wait_for_partitions()`](/pgd/5.6/reference/autopartition#bdrautopartition_wait_for_partitions) to wait for the creation of partitions on the local node. The function takes the partitioned table name and a partition key column value and waits until the partition that holds that value is created. @@ -164,14 +164,14 @@ The function waits only for the partitions to be created locally. It doesn't guarantee that the partitions also exist on the remote nodes. To wait for the partition to be created on all PGD nodes, use the -[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes) +[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/5.6/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes) function. This function internally checks local as well as all remote nodes and waits until the partition is created everywhere. ## Finding a partition Use the -[`bdr.autopartition_find_partition()`](/pgd/latest/reference/autopartition#bdrautopartition_find_partition) +[`bdr.autopartition_find_partition()`](/pgd/5.6/reference/autopartition#bdrautopartition_find_partition) function to find the partition for the given partition key value. If a partition to hold that value doesn't exist, then the function returns NULL. Otherwise it returns the Oid of the partition. @@ -179,10 +179,10 @@ of the partition. ## Enabling or disabling autopartitioning Use -[`bdr.autopartition_enable()`](/pgd/latest/reference/autopartition#bdrautopartition_enable) +[`bdr.autopartition_enable()`](/pgd/5.6/reference/autopartition#bdrautopartition_enable) to enable autopartitioning on the given table. If autopartitioning is already enabled, then no action occurs. 
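For example, here's a minimal sketch; the table name `measurement` is hypothetical and assumes an autopartition rule was already defined for it:

```sql
-- Sketch: turn automatic partition creation back on for one table.
-- Safe to repeat; a second call makes no change.
SELECT bdr.autopartition_enable('public.measurement');
```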
Similarly, use -[`bdr.autopartition_disable()`](/pgd/latest/reference/autopartition#bdrautopartition_disable) +[`bdr.autopartition_disable()`](/pgd/5.6/reference/autopartition#bdrautopartition_disable) to disable autopartitioning on the given table. ## Restrictions on EDB Postgres Advanced Server-native automatic partitioning diff --git a/product_docs/docs/pgd/5.6/security/pgd-predefined-roles.mdx b/product_docs/docs/pgd/5.6/security/pgd-predefined-roles.mdx index 97e74fae2a0..e358af4d02b 100644 --- a/product_docs/docs/pgd/5.6/security/pgd-predefined-roles.mdx +++ b/product_docs/docs/pgd/5.6/security/pgd-predefined-roles.mdx @@ -25,71 +25,71 @@ This role provides read access to most of the tables, views, and functions that `SELECT` privilege on: -- [`bdr.autopartition_partitions`](/pgd/latest/reference/catalogs-internal#bdrautopartition_partitions) -- [`bdr.autopartition_rules`](/pgd/latest/reference/catalogs-internal#bdrautopartition_rules) -- [`bdr.ddl_epoch`](/pgd/latest/reference/catalogs-internal#bdrddl_epoch) -- [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) -- [`bdr.global_consensus_journal_details`](/pgd/latest/reference/catalogs-visible#bdrglobal_consensus_journal_details) -- [`bdr.global_lock`](/pgd/latest/reference/catalogs-visible#bdrglobal_lock) -- [`bdr.global_locks`](/pgd/latest/reference/catalogs-visible#bdrglobal_locks) -- [`bdr.group_camo_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_camo_details) -- [`bdr.local_consensus_state`](/pgd/latest/reference/catalogs-visible#bdrlocal_consensus_state) -- [`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary) -- [`bdr.node`](/pgd/latest/reference/catalogs-visible#bdrnode) -- [`bdr.node_catchup_info`](/pgd/latest/reference/catalogs-visible#bdrnode_catchup_info) -- [`bdr.node_catchup_info_details`](/pgd/latest/reference/catalogs-visible#bdrnode_catchup_info_details) -- [`bdr.node_conflict_resolvers`](/pgd/latest/reference/catalogs-visible#bdrnode_conflict_resolvers) -- [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group) -- [`bdr.node_local_info`](/pgd/latest/reference/catalogs-visible#bdrnode_local_info) -- [`bdr.node_peer_progress`](/pgd/latest/reference/catalogs-visible#bdrnode_peer_progress) -- [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) -- [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) -- [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary) -- [`bdr.replication_sets`](/pgd/latest/reference/catalogs-visible#bdrreplication_sets) +- [`bdr.autopartition_partitions`](/pgd/5.6/reference/catalogs-internal#bdrautopartition_partitions) +- [`bdr.autopartition_rules`](/pgd/5.6/reference/catalogs-internal#bdrautopartition_rules) +- [`bdr.ddl_epoch`](/pgd/5.6/reference/catalogs-internal#bdrddl_epoch) +- [`bdr.ddl_replication`](/pgd/5.6/reference/pgd-settings#bdrddl_replication) +- [`bdr.global_consensus_journal_details`](/pgd/5.6/reference/catalogs-visible#bdrglobal_consensus_journal_details) +- [`bdr.global_lock`](/pgd/5.6/reference/catalogs-visible#bdrglobal_lock) +- [`bdr.global_locks`](/pgd/5.6/reference/catalogs-visible#bdrglobal_locks) +- [`bdr.group_camo_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_camo_details) +- [`bdr.local_consensus_state`](/pgd/5.6/reference/catalogs-visible#bdrlocal_consensus_state) +- [`bdr.local_node_summary`](/pgd/5.6/reference/catalogs-visible#bdrlocal_node_summary) +- 
[`bdr.node`](/pgd/5.6/reference/catalogs-visible#bdrnode) +- [`bdr.node_catchup_info`](/pgd/5.6/reference/catalogs-visible#bdrnode_catchup_info) +- [`bdr.node_catchup_info_details`](/pgd/5.6/reference/catalogs-visible#bdrnode_catchup_info_details) +- [`bdr.node_conflict_resolvers`](/pgd/5.6/reference/catalogs-visible#bdrnode_conflict_resolvers) +- [`bdr.node_group`](/pgd/5.6/reference/catalogs-visible#bdrnode_group) +- [`bdr.node_local_info`](/pgd/5.6/reference/catalogs-visible#bdrnode_local_info) +- [`bdr.node_peer_progress`](/pgd/5.6/reference/catalogs-visible#bdrnode_peer_progress) +- [`bdr.node_replication_rates`](/pgd/5.6/reference/catalogs-visible#bdrnode_replication_rates) +- [`bdr.node_slots`](/pgd/5.6/reference/catalogs-visible#bdrnode_slots) +- [`bdr.node_summary`](/pgd/5.6/reference/catalogs-visible#bdrnode_summary) +- [`bdr.replication_sets`](/pgd/5.6/reference/catalogs-visible#bdrreplication_sets) - `bdr.replication_status` -- [`bdr.sequences`](/pgd/latest/reference/catalogs-visible#bdrsequences) -- [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) -- [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) -- [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) _deprecated_ -- [`bdr.state_journal_details`](/pgd/latest/reference/catalogs-visible#) -- [`bdr.subscription`](/pgd/latest/reference/catalogs-visible#bdrsubscription) -- [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrsubscription_summary) -- [`bdr.tables`](/pgd/latest/reference/catalogs-visible#bdrtables) -- [`bdr.taskmgr_local_work_queue`](/pgd/latest/reference/catalogs-visible#bdrtaskmgr_local_work_queue) -- [`bdr.taskmgr_work_queue`](/pgd/latest/reference/catalogs-visible#bdrtaskmgr_work_queue) -- [`bdr.worker_errors`](/pgd/latest/reference/catalogs-visible#) _deprecated_ -- [`bdr.workers`](/pgd/latest/reference/catalogs-visible#bdrworkers) -- [`bdr.writers`](/pgd/latest/reference/catalogs-visible#bdrwriters) +- [`bdr.sequences`](/pgd/5.6/reference/catalogs-visible#bdrsequences) +- [`bdr.stat_activity`](/pgd/5.6/reference/catalogs-visible#bdrstat_activity) +- [`bdr.stat_relation`](/pgd/5.6/reference/catalogs-visible#bdrstat_relation) +- [`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible#bdrstat_subscription) _deprecated_ +- [`bdr.state_journal_details`](/pgd/5.6/reference/catalogs-visible#) +- [`bdr.subscription`](/pgd/5.6/reference/catalogs-visible#bdrsubscription) +- [`bdr.subscription_summary`](/pgd/5.6/reference/catalogs-visible#bdrsubscription_summary) +- [`bdr.tables`](/pgd/5.6/reference/catalogs-visible#bdrtables) +- [`bdr.taskmgr_local_work_queue`](/pgd/5.6/reference/catalogs-visible#bdrtaskmgr_local_work_queue) +- [`bdr.taskmgr_work_queue`](/pgd/5.6/reference/catalogs-visible#bdrtaskmgr_work_queue) +- [`bdr.worker_errors`](/pgd/5.6/reference/catalogs-visible#) _deprecated_ +- [`bdr.workers`](/pgd/5.6/reference/catalogs-visible#bdrworkers) +- [`bdr.writers`](/pgd/5.6/reference/catalogs-visible#bdrwriters) - `bdr.xid_peer_progress` EXECUTE privilege on: - `bdr.bdr_edition` _deprecated_ -- [`bdr.bdr_version`](/pgd/latest/reference/functions#bdrbdr_version) -- [`bdr.bdr_version_num`](/pgd/latest/reference/functions#bdrbdr_version_num) -- [`bdr.decode_message_payload`](/pgd/latest/reference/functions-internal#bdrdecode_message_payload) -- [`bdr.get_consensus_status`](/pgd/latest/reference/functions#bdrget_consensus_status) -- 
[`bdr.get_decoding_worker_stat`](/pgd/latest/reference/functions#bdrget_decoding_worker_stat) -- [`bdr.get_global_locks`](/pgd/latest/reference/functions-internal#bdrget_global_locks) -- [`bdr.get_min_required_replication_slots`](/pgd/latest/reference/functions-internal#bdrget_min_required_replication_slots) -- [`bdr.get_min_required_worker_processes`](/pgd/latest/reference/functions-internal#bdrget_min_required_worker_processes) -- [`bdr.get_raft_status`](/pgd/latest/reference/functions#bdrget_raft_status) -- [`bdr.get_relation_stats`](/pgd/latest/reference/functions#bdrget_relation_stats) -- [`bdr.get_slot_flush_timestamp`](/pgd/latest/reference/functions-internal#bdrget_slot_flush_timestamp) +- [`bdr.bdr_version`](/pgd/5.6/reference/functions#bdrbdr_version) +- [`bdr.bdr_version_num`](/pgd/5.6/reference/functions#bdrbdr_version_num) +- [`bdr.decode_message_payload`](/pgd/5.6/reference/functions-internal#bdrdecode_message_payload) +- [`bdr.get_consensus_status`](/pgd/5.6/reference/functions#bdrget_consensus_status) +- [`bdr.get_decoding_worker_stat`](/pgd/5.6/reference/functions#bdrget_decoding_worker_stat) +- [`bdr.get_global_locks`](/pgd/5.6/reference/functions-internal#bdrget_global_locks) +- [`bdr.get_min_required_replication_slots`](/pgd/5.6/reference/functions-internal#bdrget_min_required_replication_slots) +- [`bdr.get_min_required_worker_processes`](/pgd/5.6/reference/functions-internal#bdrget_min_required_worker_processes) +- [`bdr.get_raft_status`](/pgd/5.6/reference/functions#bdrget_raft_status) +- [`bdr.get_relation_stats`](/pgd/5.6/reference/functions#bdrget_relation_stats) +- [`bdr.get_slot_flush_timestamp`](/pgd/5.6/reference/functions-internal#bdrget_slot_flush_timestamp) - `bdr.get_sub_progress_timestamp` -- [`bdr.get_subscription_stats`](/pgd/latest/reference/functions#bdrget_subscription_stats) -- [`bdr.lag_control`](/pgd/latest/reference/functions#bdrlag_control) -- [`bdr.lag_history`](/pgd/latest/reference/functions-internal#bdrlag_history) -- [`bdr.node_catchup_state_name`](/pgd/latest/reference/functions-internal#bdrnode_catchup_state_name) -- [`bdr.node_kind_name`](/pgd/latest/reference/functions-internal#bdrnode_kind_name) -- [`bdr.peer_state_name`](/pgd/latest/reference/functions-internal#bdrpeer_state_name) -- [`bdr.pglogical_proto_version_ranges`](/pgd/latest/reference/functions-internal#bdrpglogical_proto_version_ranges) -- [`bdr.show_subscription_status`](/pgd/latest/reference/functions-internal#bdrshow_subscription_status) -- [`bdr.show_workers`](/pgd/latest/reference/functions-internal#bdrshow_workers) -- [`bdr.show_writers`](/pgd/latest/reference/functions-internal#bdrshow_writers) -- [`bdr.stat_get_activity`](/pgd/latest/reference/functions-internal#bdrstat_get_activity) -- [`bdr.wal_sender_stats`](/pgd/latest/reference/functions#bdrwal_sender_stats) -- [`bdr.worker_role_id_name`](/pgd/latest/reference/functions-internal#bdrworker_role_id_name) +- [`bdr.get_subscription_stats`](/pgd/5.6/reference/functions#bdrget_subscription_stats) +- [`bdr.lag_control`](/pgd/5.6/reference/functions#bdrlag_control) +- [`bdr.lag_history`](/pgd/5.6/reference/functions-internal#bdrlag_history) +- [`bdr.node_catchup_state_name`](/pgd/5.6/reference/functions-internal#bdrnode_catchup_state_name) +- [`bdr.node_kind_name`](/pgd/5.6/reference/functions-internal#bdrnode_kind_name) +- [`bdr.peer_state_name`](/pgd/5.6/reference/functions-internal#bdrpeer_state_name) +- [`bdr.pglogical_proto_version_ranges`](/pgd/5.6/reference/functions-internal#bdrpglogical_proto_version_ranges) +- 
[`bdr.show_subscription_status`](/pgd/5.6/reference/functions-internal#bdrshow_subscription_status) +- [`bdr.show_workers`](/pgd/5.6/reference/functions-internal#bdrshow_workers) +- [`bdr.show_writers`](/pgd/5.6/reference/functions-internal#bdrshow_writers) +- [`bdr.stat_get_activity`](/pgd/5.6/reference/functions-internal#bdrstat_get_activity) +- [`bdr.wal_sender_stats`](/pgd/5.6/reference/functions#bdrwal_sender_stats) +- [`bdr.worker_role_id_name`](/pgd/5.6/reference/functions-internal#bdrworker_role_id_name) ### bdr_monitor @@ -101,24 +101,24 @@ All privileges from [`bdr_read_all_stats`](#bdr_read_all_stats) plus the followi `SELECT` privilege on: -- [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) -- [`bdr.group_replslots_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_replslots_details) -- [`bdr.group_subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrgroup_subscription_summary) -- [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) +- [`bdr.group_raft_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_raft_details) +- [`bdr.group_replslots_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_replslots_details) +- [`bdr.group_subscription_summary`](/pgd/5.6/reference/catalogs-visible#bdrgroup_subscription_summary) +- [`bdr.group_versions_details`](/pgd/5.6/reference/catalogs-visible#bdrgroup_versions_details) - `bdr.raft_instances` `EXECUTE` privilege on: -- [`bdr.get_raft_instance_by_nodegroup`](/pgd/latest/reference/functions-internal#bdrget_raft_instance_by_nodegroup) -- [`bdr.monitor_camo_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_camo_on_all_nodes) -- [`bdr.monitor_group_raft`](/pgd/latest/reference/functions#bdrmonitor_group_raft) -- [`bdr.monitor_group_versions`](/pgd/latest/reference/functions#bdrmonitor_group_versions) -- [`bdr.monitor_local_replslots`](/pgd/latest/reference/functions#bdrmonitor_local_replslots) -- [`bdr.monitor_raft_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_raft_details_on_all_nodes) -- [`bdr.monitor_replslots_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_replslots_details_on_all_nodes) -- [`bdr.monitor_subscription_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_subscription_details_on_all_nodes) -- [`bdr.monitor_version_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_version_details_on_all_nodes) -- [`bdr.node_group_member_info`](/pgd/latest/reference/functions-internal#bdrnode_group_member_info) +- [`bdr.get_raft_instance_by_nodegroup`](/pgd/5.6/reference/functions-internal#bdrget_raft_instance_by_nodegroup) +- [`bdr.monitor_camo_on_all_nodes`](/pgd/5.6/reference/functions-internal#bdrmonitor_camo_on_all_nodes) +- [`bdr.monitor_group_raft`](/pgd/5.6/reference/functions#bdrmonitor_group_raft) +- [`bdr.monitor_group_versions`](/pgd/5.6/reference/functions#bdrmonitor_group_versions) +- [`bdr.monitor_local_replslots`](/pgd/5.6/reference/functions#bdrmonitor_local_replslots) +- [`bdr.monitor_raft_details_on_all_nodes`](/pgd/5.6/reference/functions-internal#bdrmonitor_raft_details_on_all_nodes) +- [`bdr.monitor_replslots_details_on_all_nodes`](/pgd/5.6/reference/functions-internal#bdrmonitor_replslots_details_on_all_nodes) +- [`bdr.monitor_subscription_details_on_all_nodes`](/pgd/5.6/reference/functions-internal#bdrmonitor_subscription_details_on_all_nodes) +- 
[`bdr.monitor_version_details_on_all_nodes`](/pgd/5.6/reference/functions-internal#bdrmonitor_version_details_on_all_nodes) +- [`bdr.node_group_member_info`](/pgd/5.6/reference/functions-internal#bdrnode_group_member_info) ### bdr_application @@ -130,28 +130,28 @@ This role is designed for applications that require access to PGD features, obje - All functions for column_timestamps datatypes - All functions for CRDT datatypes -- [`bdr.alter_sequence_set_kind`](/pgd/latest/reference/sequences#bdralter_sequence_set_kind) -- [`bdr.create_conflict_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) -- [`bdr.create_transform_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) -- [`bdr.drop_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrdrop_trigger) -- [`bdr.get_configured_camo_partner`](/pgd/latest/reference/functions#bdrget_configured_camo_partner) -- [`bdr.global_lock_table`](/pgd/latest/reference/functions#bdrglobal_lock_table) -- [`bdr.is_camo_partner_connected`](/pgd/latest/reference/functions#bdris_camo_partner_connected) -- [`bdr.is_camo_partner_ready`](/pgd/latest/reference/functions#bdris_camo_partner_ready) -- [`bdr.logical_transaction_status`](/pgd/latest/reference/functions#bdrlogical_transaction_status) +- [`bdr.alter_sequence_set_kind`](/pgd/5.6/reference/sequences#bdralter_sequence_set_kind) +- [`bdr.create_conflict_trigger`](/pgd/5.6/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) +- [`bdr.create_transform_trigger`](/pgd/5.6/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) +- [`bdr.drop_trigger`](/pgd/5.6/reference/streamtriggers/interfaces#bdrdrop_trigger) +- [`bdr.get_configured_camo_partner`](/pgd/5.6/reference/functions#bdrget_configured_camo_partner) +- [`bdr.global_lock_table`](/pgd/5.6/reference/functions#bdrglobal_lock_table) +- [`bdr.is_camo_partner_connected`](/pgd/5.6/reference/functions#bdris_camo_partner_connected) +- [`bdr.is_camo_partner_ready`](/pgd/5.6/reference/functions#bdris_camo_partner_ready) +- [`bdr.logical_transaction_status`](/pgd/5.6/reference/functions#bdrlogical_transaction_status) - `bdr.ri_fkey_trigger` -- [`bdr.seq_nextval`](/pgd/latest/reference/functions-internal#bdrseq_nextval) -- [`bdr.seq_currval`](/pgd/latest/reference/functions-internal#bdrseq_currval) -- [`bdr.seq_lastval`](/pgd/latest/reference/functions-internal#bdrseq_lastval) -- [`bdr.trigger_get_committs`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_committs) -- [`bdr.trigger_get_conflict_type`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_conflict_type) -- [`bdr.trigger_get_origin_node_id`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_origin_node_id) -- [`bdr.trigger_get_row`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_row) -- [`bdr.trigger_get_type`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_type) -- [`bdr.trigger_get_xid`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_xid) -- [`bdr.wait_for_camo_partner_queue`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue) -- [`bdr.wait_slot_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn) -- [`bdr.wait_node_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_node_confirm_lsn) +- [`bdr.seq_nextval`](/pgd/5.6/reference/functions-internal#bdrseq_nextval) +- [`bdr.seq_currval`](/pgd/5.6/reference/functions-internal#bdrseq_currval) +- 
[`bdr.seq_lastval`](/pgd/5.6/reference/functions-internal#bdrseq_lastval) +- [`bdr.trigger_get_committs`](/pgd/5.6/reference/streamtriggers/rowfunctions#bdrtrigger_get_committs) +- [`bdr.trigger_get_conflict_type`](/pgd/5.6/reference/streamtriggers/rowfunctions#bdrtrigger_get_conflict_type) +- [`bdr.trigger_get_origin_node_id`](/pgd/5.6/reference/streamtriggers/rowfunctions#bdrtrigger_get_origin_node_id) +- [`bdr.trigger_get_row`](/pgd/5.6/reference/streamtriggers/rowfunctions#bdrtrigger_get_row) +- [`bdr.trigger_get_type`](/pgd/5.6/reference/streamtriggers/rowfunctions#bdrtrigger_get_type) +- [`bdr.trigger_get_xid`](/pgd/5.6/reference/streamtriggers/rowfunctions#bdrtrigger_get_xid) +- [`bdr.wait_for_camo_partner_queue`](/pgd/5.6/reference/functions#bdrwait_for_camo_partner_queue) +- [`bdr.wait_slot_confirm_lsn`](/pgd/5.6/reference/functions#bdrwait_slot_confirm_lsn) +- [`bdr.wait_node_confirm_lsn`](/pgd/5.6/reference/functions#bdrwait_node_confirm_lsn) Many of these functions require additional privileges before you can use them. For example, you must be the table owner to successfully execute @@ -161,7 +161,7 @@ specific function. ### bdr_read_all_conflicts PGD logs conflicts into the -[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) +[`bdr.conflict_history`](/pgd/5.6/reference/catalogs-visible#bdrconflict_history) table. Conflicts are visible only to table owners, so no extra privileges are required for the owners to read the conflict history. @@ -170,4 +170,4 @@ you can optionally grant the role `bdr_read_all_conflicts` to that user. #### Privileges -An explicit policy is set on [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) that allows this role to read the `bdr.conflict_history` table. +An explicit policy is set on [`bdr.conflict_history`](/pgd/5.6/reference/catalogs-visible#bdrconflict_history) that allows this role to read the `bdr.conflict_history` table. diff --git a/product_docs/docs/pgd/5.6/security/role-management.mdx b/product_docs/docs/pgd/5.6/security/role-management.mdx index cc32a697155..f387d3e8da2 100644 --- a/product_docs/docs/pgd/5.6/security/role-management.mdx +++ b/product_docs/docs/pgd/5.6/security/role-management.mdx @@ -12,12 +12,12 @@ Remember that a user in Postgres terms is simply a role with login privileges. If you do create a role or user in a non-PGD, unreplicated database, it's especially important that you do not make an object in the PGD-replicated database rely on that role. It will break the replication process, as PGD cannot replicate a role that is not in the PGD-replicated database. -You can disable this automatic replication behavior by turning off the [`bdr.role_replication`](https://www.enterprisedb.com/docs/pgd/latest/reference/pgd-settings/#bdrrole_replication) setting, but we don't recommend that. +You can disable this automatic replication behavior by turning off the [`bdr.role_replication`](https://www.enterprisedb.com/docs/pgd/5.6/reference/pgd-settings/#bdrrole_replication) setting, but we don't recommend that. ## Roles for new nodes -New PGD nodes that are added using [`bdr_init_physical`](https://www.enterprisedb.com/docs/pgd/latest/reference/nodes/#bdr_init_physical) will automatically replicate the roles from other nodes of the PGD cluster. +New PGD nodes that are added using [`bdr_init_physical`](https://www.enterprisedb.com/docs/pgd/5.6/reference/nodes/#bdr_init_physical) will automatically replicate the roles from other nodes of the PGD cluster. 
If a PGD node is joined to a PGD group manually, without using `bdr_init_physical`, existing roles aren't copied to the newly joined node. This is intentional behavior to ensure that access isn't accidentally granted. @@ -37,7 +37,7 @@ When joining a new node, the “No unreplicated roles” rule also applies. If a ## Connections and roles When allocating a new PGD node, the user supplied in the DSN for the `local_dsn` -argument of [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) and the `join_target_dsn` of [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +argument of [`bdr.create_node`](/pgd/5.6/reference/nodes-management-interfaces#bdrcreate_node) and the `join_target_dsn` of [`bdr.join_node_group`](/pgd/5.6/reference/nodes-management-interfaces#bdrjoin_node_group) are used frequently to refer to, create, and manage database objects. PGD is carefully written to prevent privilege escalation attacks even when using diff --git a/product_docs/docs/pgd/5.6/security/roles.mdx b/product_docs/docs/pgd/5.6/security/roles.mdx index f058d2567bb..aebeee9b706 100644 --- a/product_docs/docs/pgd/5.6/security/roles.mdx +++ b/product_docs/docs/pgd/5.6/security/roles.mdx @@ -12,7 +12,7 @@ PGD are split across the following predefined roles. | [**bdr_read_all_stats**](pgd-predefined-roles/#bdr_read_all_stats) | The role having read-only access to the tables, views, and functions, sufficient to understand the state of PGD. | | [**bdr_monitor**](pgd-predefined-roles/#bdr_monitor) | Includes the privileges of bdr_read_all_stats, with some extra privileges for monitoring. | | [**bdr_application**](pgd-predefined-roles/#bdr_application) | The minimal privileges required by applications running PGD. | - | [**bdr_read_all_conflicts**](pgd-predefined-roles/#bdr_read_all_conflicts) | Can view all conflicts in [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history). | + | [**bdr_read_all_conflicts**](pgd-predefined-roles/#bdr_read_all_conflicts) | Can view all conflicts in [`bdr.conflict_history`](/pgd/5.6/reference/catalogs-visible#bdrconflict_history). | These roles are named to be analogous to PostgreSQL's `pg_` [predefined @@ -25,9 +25,9 @@ role has. Managing PGD doesn't require that administrators have access to user data. Arrangements for securing information about conflicts are discussed in -[Logging conflicts to a table](/pgd/latest/reference/conflict_functions#logging-conflicts-to-a-table). +[Logging conflicts to a table](/pgd/5.6/reference/conflict_functions#logging-conflicts-to-a-table). -You can monitor conflicts using the [`bdr.conflict_history_summary`](/pgd/latest/reference/catalogs-visible#bdrconflict_history_summary) view. +You can monitor conflicts using the [`bdr.conflict_history_summary`](/pgd/5.6/reference/catalogs-visible#bdrconflict_history_summary) view. !!! Note The BDR extension and superuser access The one exception to the rule of not needing superuser access is in the diff --git a/product_docs/docs/pgd/5.6/sequences.mdx b/product_docs/docs/pgd/5.6/sequences.mdx index 78e9dfd457d..a9b9f47be77 100644 --- a/product_docs/docs/pgd/5.6/sequences.mdx +++ b/product_docs/docs/pgd/5.6/sequences.mdx @@ -66,7 +66,7 @@ function. This function takes a standard PostgreSQL sequence and marks it as a PGD global sequence. It can also convert the sequence back to the standard PostgreSQL sequence. 
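For example, here's a minimal sketch of such a conversion; the sequence name is hypothetical, and `galloc` is one of the global sequence kinds described below:

```sql
-- Sketch: promote a local sequence to a globally allocated ('galloc')
-- PGD sequence, then convert it back to a plain local sequence.
SELECT bdr.alter_sequence_set_kind('public.user_id_seq', 'galloc');
SELECT bdr.alter_sequence_set_kind('public.user_id_seq', 'local');
```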
-PGD also provides the configuration variable [`bdr.default_sequence_kind`](/pgd/latest/reference/pgd-settings/#bdrdefault_sequence_kind). This variable +PGD also provides the configuration variable [`bdr.default_sequence_kind`](/pgd/5.6/reference/pgd-settings/#bdrdefault_sequence_kind). This variable determines the kind of sequence to create when the `CREATE SEQUENCE` command is executed or when a `serial`, `bigserial`, or `GENERATED BY DEFAULT AS IDENTITY` column is created. Valid settings are: @@ -84,7 +84,7 @@ command is executed or when a `serial`, `bigserial`, or sequences (that is, `bigserial`) and `galloc` sequence for `int4` (that is, `serial`) and `int2` sequences. -The [`bdr.sequences`](/pgd/latest/reference/catalogs-visible/#bdrsequences) view shows information about individual sequence kinds. +The [`bdr.sequences`](/pgd/5.6/reference/catalogs-visible/#bdrsequences) view shows information about individual sequence kinds. `currval()` and `lastval()` work correctly for all types of global sequence. @@ -220,7 +220,7 @@ to or more than the above ranges assigned for each sequence datatype. `setval()` doesn't reset the global state for `galloc` sequences. Don't use it. A few limitations apply to `galloc` sequences. PGD tracks `galloc` sequences in a -special PGD catalog [bdr.sequence_alloc](/pgd/latest/reference/catalogs-visible/#bdrsequence_alloc). This +special PGD catalog [bdr.sequence_alloc](/pgd/5.6/reference/catalogs-visible/#bdrsequence_alloc). This catalog is required to track the currently allocated chunks for the `galloc` sequences. The sequence name and namespace is stored in this catalog. The sequence chunk allocation is managed by Raft, whereas any changes to the diff --git a/product_docs/docs/pgd/5.6/testingandtuning.mdx b/product_docs/docs/pgd/5.6/testingandtuning.mdx index e1965318d0d..f1905c09161 100644 --- a/product_docs/docs/pgd/5.6/testingandtuning.mdx +++ b/product_docs/docs/pgd/5.6/testingandtuning.mdx @@ -33,7 +33,7 @@ The Postgres benchmarking application [`pgbench`](https://www.postgresql.org/docs/current/pgbench.html) was extended in PGD 5.0 in the form of a new application: pgd_bench. -[pgd_bench](/pgd/latest/reference/testingandtuning#pgd_bench) is a regular command-line utility that's added to the PostgreSQL bin +[pgd_bench](/pgd/5.6/reference/testingandtuning#pgd_bench) is a regular command-line utility that's added to the PostgreSQL bin directory. The utility is based on the PostgreSQL pgbench tool but supports benchmarking CAMO transactions and PGD-specific workloads. diff --git a/product_docs/docs/pgd/5.6/transaction-streaming.mdx b/product_docs/docs/pgd/5.6/transaction-streaming.mdx index 8e6e53288fa..c1edfb191b0 100644 --- a/product_docs/docs/pgd/5.6/transaction-streaming.mdx +++ b/product_docs/docs/pgd/5.6/transaction-streaming.mdx @@ -56,8 +56,8 @@ processes on each subscriber. 
This capability is leveraged to provide the follow Configure transaction streaming in two locations: -- At node level, using the GUC [`bdr.default_streaming_mode`](/pgd/latest/reference/pgd-settings/#transaction-streaming) -- At group level, using the function [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) +- At node level, using the GUC [`bdr.default_streaming_mode`](/pgd/5.6/reference/pgd-settings/#transaction-streaming) +- At group level, using the function [`bdr.alter_node_group_option()`](/pgd/5.6/reference/nodes-management-interfaces/#bdralter_node_group_option) ### Node configuration using bdr.default_streaming_mode @@ -81,7 +81,7 @@ provided can also depend on the group configuration setting. See ### Group configuration using bdr.alter_node_group_option() -You can use the parameter `streaming_mode` in the function [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) +You can use the parameter `streaming_mode` in the function [`bdr.alter_node_group_option()`](/pgd/5.6/reference/nodes-management-interfaces/#bdralter_node_group_option) to set the group transaction streaming configuration. Permitted values are: @@ -95,7 +95,7 @@ Permitted values are: The default value is `default`. The value of the current setting is contained in the column `node_group_streaming_mode` -from the view [`bdr.node_group`](/pgd/latest/reference/catalogs-visible/#bdrnode_group). The value returned is +from the view [`bdr.node_group`](/pgd/5.6/reference/catalogs-visible/#bdrnode_group). The value returned is a single char type, and the possible values are `D` (`default`), `W` (`writer`), `F` (`file`), `A` (`auto`), and `O` (`off`). @@ -151,7 +151,7 @@ and can be safely handled by the writer. ## Monitoring -You can monitor the use of transaction streaming using the [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible/#bdrstat_subscription) +You can monitor the use of transaction streaming using the [`bdr.stat_subscription`](/pgd/5.6/reference/catalogs-visible/#bdrstat_subscription) function on the subscriber node. - `nstream_writer` — Number of transactions streamed to a writer. diff --git a/product_docs/docs/pgd/5.6/upgrades/compatibility.mdx b/product_docs/docs/pgd/5.6/upgrades/compatibility.mdx index ac30086652a..88cfb6a4a5b 100644 --- a/product_docs/docs/pgd/5.6/upgrades/compatibility.mdx +++ b/product_docs/docs/pgd/5.6/upgrades/compatibility.mdx @@ -66,6 +66,6 @@ Similarly to CAMO and Eager, Lag Control configuration was also moved to - `bdr.network_monitoring` view was removed along with underlying tables and functions. - Many catalogs were added and some have new columns, as described in - [Catalogs](/pgd/latest/reference/catalogs-visible/). These + [Catalogs](/pgd/5.6/reference/catalogs-visible/). These aren't breaking changes strictly speaking, but we recommend reviewing them when upgrading. 
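Tying the transaction-streaming monitoring note above to a query, here's a minimal sketch; only the `nstream_writer` counter is named in the text, so any other columns you add are assumptions:

```sql
-- Sketch: per subscription, count transactions that were streamed
-- directly to a writer rather than fully spooled before apply.
SELECT sub_name, nstream_writer
FROM bdr.stat_subscription;
```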
diff --git a/product_docs/docs/pgd/5.6/upgrades/upgrading_major_rolling.mdx b/product_docs/docs/pgd/5.6/upgrades/upgrading_major_rolling.mdx index 140bd3aa3a1..eab13212cd4 100644 --- a/product_docs/docs/pgd/5.6/upgrades/upgrading_major_rolling.mdx +++ b/product_docs/docs/pgd/5.6/upgrades/upgrading_major_rolling.mdx @@ -3,8 +3,8 @@ title: Performing a Postgres major version rolling upgrade on a PGD cluster buil navTitle: Upgrading Postgres major versions deepToC: true redirects: - - /pgd/latest/install-admin/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/install-admin/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg + - /pgd/5.6/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg --- ## Upgrading Postgres major versions @@ -169,7 +169,7 @@ The worked example that follows shows upgrading the Postgres major version from ## Worked example -This worked example starts with a TPA-managed PGD cluster deployed using the [AWS quick start](/pgd/latest/quickstart/quick_start_aws/). The cluster has three nodes: kaboom, kaolin, and kaftan, all running Postgres 15. +This worked example starts with a TPA-managed PGD cluster deployed using the [AWS quick start](/pgd/5.6/quickstart/quick_start_aws/). The cluster has three nodes: kaboom, kaolin, and kaftan, all running Postgres 15. This example starts with kaboom. diff --git a/product_docs/docs/pgd/5.7/appusage/timing.mdx b/product_docs/docs/pgd/5.7/appusage/timing.mdx index 62300ef3c8f..74598063d5c 100644 --- a/product_docs/docs/pgd/5.7/appusage/timing.mdx +++ b/product_docs/docs/pgd/5.7/appusage/timing.mdx @@ -8,7 +8,7 @@ possible for a client connected to multiple PGD nodes or switching between them to read stale data. A [queue wait -function](/pgd/latest/reference/functions/#bdrwait_for_apply_queue) is provided +function](/pgd/5.7/reference/functions/#bdrwait_for_apply_queue) is provided for clients or proxies to prevent such stale reads. The synchronous replication features of Postgres are available to PGD as well. diff --git a/product_docs/docs/pgd/5.7/backup.mdx b/product_docs/docs/pgd/5.7/backup.mdx index 6d5fdc5c381..5c62cf787a8 100644 --- a/product_docs/docs/pgd/5.7/backup.mdx +++ b/product_docs/docs/pgd/5.7/backup.mdx @@ -237,7 +237,7 @@ of a single PGD node, optionally plus WAL archives: To clean up leftover PGD metadata: -1. Drop the PGD node using [`bdr.drop_node`](/pgd/latest/reference/functions-internal#bdrdrop_node). +1. Drop the PGD node using [`bdr.drop_node`](/pgd/5.7/reference/functions-internal#bdrdrop_node). 2. Fully stop and restart PostgreSQL (important!). 
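Step 1 might look like the following sketch; the node name is hypothetical, and the `force` parameter is an assumption based on the `bdr.drop_node` reference, since the restored copy can't reach the rest of the old cluster:

```sql
-- Sketch: remove leftover PGD metadata on a node restored from a
-- single-node backup. Remember to fully restart PostgreSQL afterward.
SELECT bdr.drop_node('node-a', force := true);
```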
#### Cleanup of replication origins
diff --git a/product_docs/docs/pgd/5.7/cdc-failover.mdx b/product_docs/docs/pgd/5.7/cdc-failover.mdx
index a50e78aefbc..0c5aa904914 100644
--- a/product_docs/docs/pgd/5.7/cdc-failover.mdx
+++ b/product_docs/docs/pgd/5.7/cdc-failover.mdx
@@ -43,7 +43,7 @@ Currently, there's no way to ensure exactly-once delivery, and we expect consumi
 ## Enabling CDC Failover support
-To enable CDC Failover support run the SQL command and call the [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option) function with the following parameters:
+To enable CDC Failover support, call the [`bdr.alter_node_group_option`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_node_group_option) function with the following parameters:
 ```sql
select bdr.alter_node_group_option(,
,
)
```
-Replace `` with the name of your cluster’s top-level group. If you don't know the name, it's the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group).
+Replace `` with the name of your cluster’s top-level group. If you don't know the name, it's the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/5.7/reference/catalogs-visible#bdrnode_group).
-If you do not know the name, it is the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group). You can also use:
+You can also use:
 ```sql
SELECT bdr.alter_node_group_option(
@@ -78,7 +78,7 @@ Logical replication slots created before the option was set to `global` aren't r
 Failover slots can also be created with the `CREATE_REPLICATION_SLOT` command on a replication connection.
-The status of failover slots is tracked in the [`bdr.failover_replication_slots`](/pgd/latest/reference/catalogs-visible#bdrfailover_replication_slots) table.
+The status of failover slots is tracked in the [`bdr.failover_replication_slots`](/pgd/5.7/reference/catalogs-visible#bdrfailover_replication_slots) table.
 ## CDC Failover support with Postgres 17+
diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/assess/index.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/assess/index.mdx
index 26c5dd16e85..9097d2aa1d5 100644
--- a/product_docs/docs/pgd/5.7/cli/command_ref/assess/index.mdx
+++ b/product_docs/docs/pgd/5.7/cli/command_ref/assess/index.mdx
@@ -20,7 +20,7 @@ pgd assess [OPTIONS]
 The assess command has no command specific options.
-See also [Global Options](/pgd/latest/cli/command_ref/#global-options).
+See also [Global Options](/pgd/5.7/cli/command_ref/#global-options).
 ## Example
diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/cluster/show.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/cluster/show.mdx
index 9bed9861c08..92642f13838 100644
--- a/product_docs/docs/pgd/5.7/cli/command_ref/cluster/show.mdx
+++ b/product_docs/docs/pgd/5.7/cli/command_ref/cluster/show.mdx
@@ -26,7 +26,7 @@ The following table lists the options available for the `pgd cluster show` comma
 Only one of the above options can be specified at a time.
-See also [Global Options](/pgd/latest/cli/command_ref/#global-options).
+See also [Global Options](/pgd/5.7/cli/command_ref/#global-options).
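As a usage sketch, with placeholder host and credentials, and relying on the global `--dsn` option covered in the CLI reference:

```bash
# Sketch: display cluster-wide state, passing the connection string
# explicitly instead of reading it from a configuration file.
pgd cluster show --dsn "host=node-a port=5432 dbname=bdrdb user=enterprisedb"
```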
## Clock Drift diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/create.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/create.mdx index 4b89bcc32d2..8250604ac7e 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/create.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/create.mdx @@ -16,13 +16,13 @@ pgd commit-scope create [OPTIONS] [GROUP_NAME] Where `` is the name of the commit scope to create. -The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/latest/commit-scopes) and [Commit Scope Rules](/pgd/latest/commit-scopes/commit-scope-rules/) for more information on the rule syntax. +The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/5.7/commit-scopes) and [Commit Scope Rules](/pgd/5.7/commit-scopes/commit-scope-rules/) for more information on the rule syntax. The optional `[GROUP_NAME]` is the name of the group to which the commit scope belongs. If omitted, it defaults to the top-level group. ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/drop.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/drop.mdx index 96af78d4045..83428079e61 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/drop.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/drop.mdx @@ -20,7 +20,7 @@ The optional `[GROUP_NAME]` is the name of the group to which the commit scope b ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/show.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/show.mdx index 685df3d934d..96d03c0231b 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/show.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/show.mdx @@ -18,7 +18,7 @@ Where `` is the name of the commit scope for which you want to dis ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Example diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/update.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/update.mdx index f7c3262c2ac..049c0dbc46b 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/update.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/commit-scope/update.mdx @@ -16,13 +16,13 @@ pgd commit-scope update [OPTIONS] [GROUP_NAME] Where `` is the name of the commit scope to update. -The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/latest/commit-scopes) and [Commit Scope Rules](/pgd/latest/commit-scopes/commit-scope-rules/) for more information on the rule syntax. +The `` is the rule that defines the commit scope. 
The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/5.7/commit-scopes) and [Commit Scope Rules](/pgd/5.7/commit-scopes/commit-scope-rules/) for more information on the rule syntax. The optional `[GROUP_NAME]` is the name of the group to which the commit scope belongs. If omitted, it defaults to the top-level group. ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/completion/index.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/completion/index.mdx index 4e31162616a..cd0263b46fd 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/completion/index.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/completion/index.mdx @@ -19,7 +19,7 @@ Possible values for shell are `bash`, `fish`, `zsh` and `powershell`. ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Example diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/events/show.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/events/show.mdx index 4792221be77..588ab822ec3 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/events/show.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/events/show.mdx @@ -24,7 +24,7 @@ The following table lists the options available for the `pgd events show` comman | | `--group ` | Only show events for the group with the specified name. | | `-n` |`--limit ` | Limit the number of events to show. Defaults to 20. | -See also [Global Options](/pgd/latest/cli/command_ref/#global-options). +See also [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Node States diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/group/get-option.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/group/get-option.mdx index 0ba52568cf6..33cecd717b9 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/group/get-option.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/group/get-option.mdx @@ -69,7 +69,7 @@ When a value is shown followed by `(inherited)`, this means the value is not spe ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/group/set-leader.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/group/set-leader.mdx index 3deee676a28..6e95b98fa93 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/group/set-leader.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/group/set-leader.mdx @@ -30,7 +30,7 @@ The following table lists the options available for the `pgd group set-leader` c Strict method is the default method. The strict method waits for the new leader to be in sync with the old leader before switching the leader. The fast method is immediate as it does not wait for the new leader to be in sync with the old leader before switching the leader, ignoring `route_write_max_lag`. -See also [Global Options](/pgd/latest/cli/command_ref/#global-options). +See also [Global Options](/pgd/5.7/cli/command_ref/#global-options). 
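For instance, here's a sketch of the fast method described above; the group and node names are placeholders, and the argument order and `--method` flag spelling are assumptions, since this diff elides the command synopsis:

```bash
# Sketch: hand write leadership to node-a immediately, without
# waiting for it to catch up to the outgoing leader.
pgd group set-leader group-one node-a --method fast
```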
## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/group/set-option.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/group/set-option.mdx index d09c7c41925..e543bdcd756 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/group/set-option.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/group/set-option.mdx @@ -69,7 +69,7 @@ The following options are available: ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/group/show.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/group/show.mdx index e590dd84d53..c4d473c2f21 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/group/show.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/group/show.mdx @@ -18,7 +18,7 @@ Where `<GROUP_NAME>` is the name of the group for which you want to display info ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/node/get-option.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/node/get-option.mdx index b20757a4007..5a064248e76 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/node/get-option.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/node/get-option.mdx @@ -32,7 +32,7 @@ The following options are available: ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/node/set-option.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/node/set-option.mdx index 41601d75e4b..5175f2fe4d8 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/node/set-option.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/node/set-option.mdx @@ -32,7 +32,7 @@ The following options are available: ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/node/show.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/node/show.mdx index b1418d59851..7028014f27c 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/node/show.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/node/show.mdx @@ -18,7 +18,7 @@ Where `<NODE_NAME>` is the name of the node for which you want to display inform ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/node/upgrade.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/node/upgrade.mdx index e301c9e4bd1..8b5d47d2826 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/node/upgrade.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/node/upgrade.mdx @@ -41,7 +41,7 @@ The following table lists the options available for the `pgd node upgrade` comma | -U | --username | | PGUSER | Cluster's install user name | | | --clone | | | Use efficient file cloning | -See also [Global Options](/pgd/latest/cli/command_ref/#global-options).
+See also [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/command_ref/raft/show.mdx b/product_docs/docs/pgd/5.7/cli/command_ref/raft/show.mdx index 546d1c75a92..621aaa270d8 100644 --- a/product_docs/docs/pgd/5.7/cli/command_ref/raft/show.mdx +++ b/product_docs/docs/pgd/5.7/cli/command_ref/raft/show.mdx @@ -16,7 +16,7 @@ pgd raft show [OPTIONS] ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/5.7/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.7/cli/installing/index.mdx b/product_docs/docs/pgd/5.7/cli/installing/index.mdx index af34fd1806b..dcca279832a 100644 --- a/product_docs/docs/pgd/5.7/cli/installing/index.mdx +++ b/product_docs/docs/pgd/5.7/cli/installing/index.mdx @@ -2,7 +2,7 @@ title: "Installing PGD CLI" navTitle: "Installing PGD CLI" redirects: - - /pgd/latest/cli/installing_cli + - /pgd/5.7/cli/installing_cli deepToC: true indexCards: simple description: Installing the PGD CLI on various systems. diff --git a/product_docs/docs/pgd/5.7/cli/using_cli.mdx b/product_docs/docs/pgd/5.7/cli/using_cli.mdx index a17d624a48b..05c62ad4ac0 100644 --- a/product_docs/docs/pgd/5.7/cli/using_cli.mdx +++ b/product_docs/docs/pgd/5.7/cli/using_cli.mdx @@ -30,7 +30,7 @@ Use the `--dsn` flag to pass a database connection string to the `pgd` command. pgd nodes list --dsn "host=bdr-a1 port=5432 dbname=bdrdb user=enterprisedb" ``` -See [PGD CLI Command reference](/pgd/latest/cli/command_ref/) for a description of the command options. +See [PGD CLI Command reference](/pgd/5.7/cli/command_ref/) for a description of the command options. ## Specifying a configuration file diff --git a/product_docs/docs/pgd/5.7/commit-scopes/camo.mdx b/product_docs/docs/pgd/5.7/commit-scopes/camo.mdx index 63db329783b..bafb14b8de4 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/camo.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/camo.mdx @@ -2,7 +2,7 @@ title: Commit At Most Once navTitle: Commit At Most Once redirects: - - /pgd/latest/bdr/camo/ + - /pgd/5.7/bdr/camo/ --- Commit scope kind: `CAMO` @@ -43,7 +43,7 @@ To use CAMO, an application must issue an explicit `COMMIT` message as a separat ## Configuration -See the[`CAMO`](/pgd/latest/reference/commit-scopes/#camo) commit scope reference for configuration parameters. +See the [`CAMO`](/pgd/5.7/reference/commit-scopes/#camo) commit scope reference for configuration parameters. ## Confirmation @@ -76,7 +76,7 @@ When the `DEGRADE ON ... TO ASYNC` clause is used in the commit scope, a node de This doesn't allow COMMIT status to be retrieved, but it does let you choose availability over consistency. This mode can tolerate a single-node failure. In case both nodes of a CAMO pair fail, they might choose incongruent commit decisions to maintain availability, leading to data inconsistencies. -For a CAMO partner to switch to ready, it needs to be connected, and the estimated catchup interval needs to drop below the `timeout` value of `TO ASYNC`. You can check the current readiness status of a CAMO partner with [`bdr.is_camo_partner_ready()`](/pgd/latest/reference/functions#bdris_camo_partner_ready), while [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) provides the current estimate of the catchup time.
+For a CAMO partner to switch to ready, it needs to be connected, and the estimated catchup interval needs to drop below the `timeout` value of `TO ASYNC`. You can check the current readiness status of a CAMO partner with [`bdr.is_camo_partner_ready()`](/pgd/5.7/reference/functions#bdris_camo_partner_ready), while [`bdr.node_replication_rates`](/pgd/5.7/reference/catalogs-visible#bdrnode_replication_rates) provides the current estimate of the catchup time. The switch from CAMO-protected to asynchronous mode is only ever triggered by an actual CAMO transaction. This happens either because the commit exceeds the `timeout` value of `TO ASYNC` or because the CAMO partner is already known to be disconnected at the time of commit. This switch is independent of the estimated catchup interval. If the CAMO pair is configured to require the current node to be the write lead of a group (as configured through the `enable_proxy_routing` node group option; see [Commit scopes](commit-scopes) for syntax), an isolated node can be prevented from switching to asynchronous mode, avoiding a split-brain situation. If `enable_proxy_routing` isn't set for the CAMO group, the origin node switches to asynchronous mode immediately. @@ -85,7 +85,7 @@ The switch from asynchronous mode to CAMO mode depends on the CAMO partner node, the CAMO partner further delays the switch back to CAMO protected mode. Unlike during normal CAMO operation, in asynchronous mode there's no added commit overhead. This can be problematic, as it allows the node to continuously process more transactions than the CAMO pair can normally process. Even if the CAMO partner eventually reconnects and applies transactions, its lag only ever increases -in such a situation, preventing reestablishing the CAMO protection. To artificially throttle transactional throughput, PGD provides the [`bdr.camo_local_mode_delay`](/pgd/latest/reference/pgd-settings#bdrcamo_local_mode_delay) setting, which allows you to delay a `COMMIT` in local mode by an arbitrary amount of time. We recommend measuring commit times in normal CAMO mode during expected workloads and configuring this delay accordingly. The default is 5 ms, which reflects a asynchronous network and a relatively quick CAMO partner response. +in such a situation, preventing reestablishing the CAMO protection. To artificially throttle transactional throughput, PGD provides the [`bdr.camo_local_mode_delay`](/pgd/5.7/reference/pgd-settings#bdrcamo_local_mode_delay) setting, which allows you to delay a `COMMIT` in local mode by an arbitrary amount of time. We recommend measuring commit times in normal CAMO mode during expected workloads and configuring this delay accordingly. The default is 5 ms, which reflects an asynchronous network and a relatively quick CAMO partner response. Consider the choice of whether to allow asynchronous mode in view of the architecture and the availability requirements. The following examples provide some detail. @@ -184,7 +184,7 @@ If it was a bad connection, then you can check on the CAMO partner node to see i If you can't connect to the partner node, there's not a lot you can do. In this case, panic, or take similar actions. -But if you can connect, you can use [`bdr.logical_transaction_status()`](/pgd/latest/reference/functions#bdrlogical_transaction_status) to find out how the transaction did. The code recorded the required values, node_id and xid (the transaction id), just before committing the transaction.
+But if you can connect, you can use [`bdr.logical_transaction_status()`](/pgd/5.7/reference/functions#bdrlogical_transaction_status) to find out how the transaction did. The code recorded the required values, node_id and xid (the transaction id), just before committing the transaction. ``` sql = "SELECT bdr.logical_transaction_status($node_id, $xid)"; @@ -224,24 +224,24 @@ must have at least the [bdr_application](../security/pgd-predefined-roles/#bdr_application) role assigned to them. !!! -The function [`bdr.is_camo_partner_connected()`](/pgd/latest/reference/functions#bdris_camo_partner_connected) allows checking the connection status of a CAMO partner node configured in pair mode. There currently is no equivalent for CAMO used with Eager Replication. +The function [`bdr.is_camo_partner_connected()`](/pgd/5.7/reference/functions#bdris_camo_partner_connected) allows checking the connection status of a CAMO partner node configured in pair mode. There currently is no equivalent for CAMO used with Eager Replication. -To check that the CAMO partner is ready, use the function [`bdr.is_camo_partner_ready`](/pgd/latest/reference/functions#bdris_camo_partner_ready). Underneath, this triggers the switch to and from local mode. +To check that the CAMO partner is ready, use the function [`bdr.is_camo_partner_ready`](/pgd/5.7/reference/functions#bdris_camo_partner_ready). Underneath, this triggers the switch to and from local mode. -To find out more about the configured CAMO partner, use [`bdr.get_configured_camo_partner()`](/pgd/latest/reference/functions#bdrget_configured_camo_partner). This function returns the local node's CAMO partner. +To find out more about the configured CAMO partner, use [`bdr.get_configured_camo_partner()`](/pgd/5.7/reference/functions#bdrget_configured_camo_partner). This function returns the local node's CAMO partner. You can wait on the CAMO partner to process the queue with the function -[`bdr.wait_for_camo_partner_queue()`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue). +[`bdr.wait_for_camo_partner_queue()`](/pgd/5.7/reference/functions#bdrwait_for_camo_partner_queue). This function is a wrapper of -[`bdr.wait_for_apply_queue`](/pgd/latest/reference/functions#bdrwait_for_apply_queue). +[`bdr.wait_for_apply_queue`](/pgd/5.7/reference/functions#bdrwait_for_apply_queue). The difference is that -[`bdr.wait_for_camo_partner_queue()`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue) +[`bdr.wait_for_camo_partner_queue()`](/pgd/5.7/reference/functions#bdrwait_for_camo_partner_queue) defaults to querying the CAMO partner node. It returns an error if the local node isn't part of a CAMO pair. To check the status of a transaction that was being committed when the node failed, the application must use the function -[`bdr.logical_transaction_status()`](/pgd/latest/reference/functions#bdrlogical_transaction_status). +[`bdr.logical_transaction_status()`](/pgd/5.7/reference/functions#bdrlogical_transaction_status). You pass this function the node_id and transaction_id of the transaction you want to check on. With CAMO used in pair mode, you can use this function only on a node that's part of a CAMO pair. Along with Eager Replication, you can use it on all nodes.
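The recovery functions above compose into a short diagnostic sequence. The following is a minimal sketch, assuming the application recorded the node_id and xid before issuing `COMMIT`; the literal values are placeholders, not real identifiers:

```sql
-- Identify the configured CAMO partner and check whether it's
-- connected and ready before resolving the in-doubt transaction.
SELECT bdr.get_configured_camo_partner();
SELECT bdr.is_camo_partner_connected();
SELECT bdr.is_camo_partner_ready();

-- 1234 and 5678 stand in for the node_id and xid the application
-- saved just before COMMIT.
SELECT bdr.logical_transaction_status(1234, 5678);
```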
diff --git a/product_docs/docs/pgd/5.7/commit-scopes/commit-scope-rules.mdx b/product_docs/docs/pgd/5.7/commit-scopes/commit-scope-rules.mdx index e636ae25fbd..01243514714 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/commit-scope-rules.mdx @@ -12,7 +12,7 @@ Each operation is made up of two or three parts: the commit scope group, an opti commit_scope_group [ confirmation_level ] commit_scope_kind ``` -A full formal syntax diagram is available in the [Commit scopes](/pgd/latest/reference/commit-scopes/#commit-scope-syntax) reference. +A full formal syntax diagram is available in the [Commit scopes](/pgd/5.7/reference/commit-scopes/#commit-scope-syntax) reference. A typical commit scope rule, such as `ANY 2 (group) GROUP COMMIT`, can be broken down into its components. `ANY 2 (group)` is the commit scope group specifying, for the rule, which nodes need to respond and confirm they processed the transaction. In this example, any two nodes from the named group must confirm. diff --git a/product_docs/docs/pgd/5.7/commit-scopes/degrading.mdx b/product_docs/docs/pgd/5.7/commit-scopes/degrading.mdx index 0c3980b09fc..67f8247b030 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/degrading.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/degrading.mdx @@ -22,7 +22,7 @@ Once during the commit, while the commit being processed is waiting for response This mechanism alone is insufficient for the intended behavior, as this alone would mean that every transaction—even those that were certain to degrade due to connectivity issues—must wait for the timeout to expire before degraded mode kicks in, which would severely affect performance in such degrading-cluster scenarios. -To avoid this, the PGD manager process also periodically (every 5s) checks the connectivity and apply rate (the one in [bdr.node_replication_rates](/pgd/latest/reference/catalogs-visible/#bdrnode_replication_rates)) and if there are commit scopes that would degrade at that point based on the current state of replication, they will be automatically degraded—such that any transaction using that commit scope when processing after that uses the degraded rule instead of waiting for timeout—until the manager process detects that replication is moving swiftly enough again. +To avoid this, the PGD manager process also checks connectivity and the apply rate (as reported in [bdr.node_replication_rates](/pgd/5.7/reference/catalogs-visible/#bdrnode_replication_rates)) every 5 seconds. If, given the current state of replication, any commit scopes would degrade at that point, they're degraded automatically, so any transaction that subsequently uses such a commit scope applies the degraded rule instead of waiting for the timeout. This lasts until the manager process detects that replication is moving swiftly enough again. ## SYNCHRONOUS COMMIT and GROUP COMMIT diff --git a/product_docs/docs/pgd/5.7/commit-scopes/group-commit.mdx b/product_docs/docs/pgd/5.7/commit-scopes/group-commit.mdx index 470ba63294e..f1ec4247898 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/group-commit.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/group-commit.mdx @@ -1,7 +1,7 @@ --- title: Group Commit redirects: - - /pgd/latest/bdr/group-commit/ + - /pgd/5.7/bdr/group-commit/ deepToC: true --- @@ -58,7 +58,7 @@ See the Group Commit section of [Limitations](limitations#group-commit).
## Configuration -`GROUP_COMMIT` supports optional `GROUP COMMIT` parameters, as well as `ABORT ON` and `DEGRADE ON` clauses. For a full description of configuration parameters, see the [GROUP_COMMIT](/pgd/latest/reference/commit-scopes/#group-commit) commit scope reference or for more regarding `DEGRADE ON` options in general, see the [Degrade options](degrading) section. +`GROUP_COMMIT` supports optional `GROUP COMMIT` parameters, as well as `ABORT ON` and `DEGRADE ON` clauses. For a full description of configuration parameters, see the [GROUP_COMMIT](/pgd/5.7/reference/commit-scopes/#group-commit) commit scope reference, or for more about `DEGRADE ON` options in general, see the [Degrade options](degrading) section. ## Confirmation diff --git a/product_docs/docs/pgd/5.7/commit-scopes/index.mdx b/product_docs/docs/pgd/5.7/commit-scopes/index.mdx index 7c7710869d0..98eef79610c 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/index.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/index.mdx @@ -20,9 +20,9 @@ navigation: - limitations description: Durability options, commit scopes, and lag control in PGD. redirects: - - /pgd/latest/bdr/durability/ - - /pgd/latest/choosing_durability/ - - /pgd/latest/durability/ + - /pgd/5.7/bdr/durability/ + - /pgd/5.7/choosing_durability/ + - /pgd/5.7/durability/ --- EDB Postgres Distributed (PGD) offers a range of synchronous modes to complement its diff --git a/product_docs/docs/pgd/5.7/commit-scopes/lag-control.mdx b/product_docs/docs/pgd/5.7/commit-scopes/lag-control.mdx index 8bfad64a166..7c72825ba2f 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/lag-control.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/lag-control.mdx @@ -1,7 +1,7 @@ --- title: Lag Control redirects: - - /pgd/latest/bdr/lag-control/ + - /pgd/5.7/bdr/lag-control/ --- Commit scope kind: `LAG CONTROL` diff --git a/product_docs/docs/pgd/5.7/commit-scopes/limitations.mdx b/product_docs/docs/pgd/5.7/commit-scopes/limitations.mdx index a67b71db097..01118f86c9f 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/limitations.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/limitations.mdx @@ -43,7 +43,7 @@ nodes in a group. If you use this feature, take the following limitations into a ## Eager -[Eager](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit. +[Eager](/pgd/5.7/commit-scopes/group-commit/#eager-conflict-resolution) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit. Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It also doesn't allow `LISTEN` or `UNLISTEN`. diff --git a/product_docs/docs/pgd/5.7/commit-scopes/synchronous_commit.mdx b/product_docs/docs/pgd/5.7/commit-scopes/synchronous_commit.mdx index 668c0a13808..26048a65994 100644 --- a/product_docs/docs/pgd/5.7/commit-scopes/synchronous_commit.mdx +++ b/product_docs/docs/pgd/5.7/commit-scopes/synchronous_commit.mdx @@ -26,7 +26,7 @@ SELECT bdr.create_commit_scope( ## Configuration -`SYNCHRONOUS COMMIT` supports the optional `DEGRADE ON` clause. See the [`SYNCHRONOUS COMMIT`](/pgd/latest/reference/commit-scopes/#synchronous-commit) commit scope reference for specific configuration parameters or see [this section](degrading) regarding Degrade on options.
+`SYNCHRONOUS COMMIT` supports the optional `DEGRADE ON` clause. See the [`SYNCHRONOUS COMMIT`](/pgd/5.7/reference/commit-scopes/#synchronous-commit) commit scope reference for specific configuration parameters, or see [this section](degrading) for more about `DEGRADE ON` options in general. ## Confirmation diff --git a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/01_overview_clcd.mdx b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/01_overview_clcd.mdx index 0c810550011..78139e2d944 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/01_overview_clcd.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/01_overview_clcd.mdx @@ -35,7 +35,7 @@ Applied to the previous example, the result is `(100,100)` on both nodes, despit When thinking about column-level conflict resolution, it can be useful to see tables as vertically partitioned, so that each update affects data in only one slice. This approach eliminates conflicts between changes to different subsets of columns. In fact, vertical partitioning can even be a practical alternative to column-level conflict resolution. -Column-level conflict resolution requires the table to have `REPLICA IDENTITY FULL`. The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/latest/reference/conflict_functions#bdralter_table_conflict_detection) function checks that and fails with an error if this setting is missing. +Column-level conflict resolution requires the table to have `REPLICA IDENTITY FULL`. The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/5.7/reference/conflict_functions#bdralter_table_conflict_detection) function checks that and fails with an error if this setting is missing. ## Special problems for column-level conflict resolution diff --git a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/02_enabling_disabling.mdx b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/02_enabling_disabling.mdx index a145d1d67a7..801a3fdd108 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/02_enabling_disabling.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/02_enabling_disabling.mdx @@ -8,11 +8,11 @@ Column-level conflict detection uses the `column_timestamps` type. This type requires any user needing to detect column-level conflicts to have at least the [bdr_application](../../security/pgd-predefined-roles/#bdr_application) role assigned. !!! -The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/latest/reference/conflict_functions/#bdralter_table_conflict_detection) function manages column-level conflict resolution. +The [bdr.alter_table_conflict_detection()](https://www.enterprisedb.com/docs/pgd/5.7/reference/conflict_functions/#bdralter_table_conflict_detection) function manages column-level conflict resolution. ## Using bdr.alter_table_conflict_detection to enable column-level conflict resolution -The [bdr.alter_table_conflict_detection](https://www.enterprisedb.com/docs/pgd/latest/reference/conflict_functions/#bdralter_table_conflict_detection) function takes a table name and column name as its arguments. The column is added to the table as a `column_modify_timestamp` column. The function also adds two triggers (BEFORE INSERT and BEFORE UPDATE) that are responsible for maintaining timestamps in the new column before each change.
+The [bdr.alter_table_conflict_detection](https://www.enterprisedb.com/docs/pgd/5.7/reference/conflict_functions/#bdralter_table_conflict_detection) function takes a table name and column name as its arguments. The column is added to the table as a `column_modify_timestamp` column. The function also adds two triggers (BEFORE INSERT and BEFORE UPDATE) that are responsible for maintaining timestamps in the new column before each change. ```sql db=# CREATE TABLE my_app.test_table (id SERIAL PRIMARY KEY, val INT); diff --git a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/03_timestamps.mdx b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/03_timestamps.mdx index 1e20d619aad..7cf1ff4e7a7 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/03_timestamps.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/03_timestamps.mdx @@ -21,7 +21,7 @@ This approach is simple and, for many cases, it's correct, for example, when the For example, if an `UPDATE` affects multiple rows, the clock continues ticking while the `UPDATE` runs. So each row gets a slightly different timestamp, even if they're being modified concurrently by the one `UPDATE`. This behavior, in turn, means that the effects of concurrent changes might get "mixed" in various ways, depending on how the changes performed on different nodes interleaves. -Another possible issue is clock skew. When the clocks on different nodes drift, the timestamps generated by those nodes also drift. This clock skew can induce unexpected behavior such as newer changes being discarded because the timestamps are apparently switched around. However, you can manage clock skew between nodes using the parameters [bdr.maximum_clock_skew](/pgd/latest/reference/pgd-settings/#bdrmaximum_clock_skew) and [bdr.maximum_clock_skew_action](/pgd/latest/reference/pgd-settings/#bdrmaximum_clock_skew_action). +Another possible issue is clock skew. When the clocks on different nodes drift, the timestamps generated by those nodes also drift. This clock skew can induce unexpected behavior such as newer changes being discarded because the timestamps are apparently switched around. However, you can manage clock skew between nodes using the parameters [bdr.maximum_clock_skew](/pgd/5.7/reference/pgd-settings/#bdrmaximum_clock_skew) and [bdr.maximum_clock_skew_action](/pgd/5.7/reference/pgd-settings/#bdrmaximum_clock_skew_action). As the current timestamp is unrelated to the commit timestamp, using it to resolve conflicts means that the result isn't equivalent to the commit order, which means it probably can't be serialized. diff --git a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/index.mdx b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/index.mdx index 5d379171bae..e33aa5e3b9a 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/index.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/column-level-conflicts/index.mdx @@ -2,7 +2,7 @@ navTitle: Column-level conflict resolution title: Column-level conflict detection redirects: - - /pgd/latest/bdr/column-level-conflicts/ + - /pgd/5.7/bdr/column-level-conflicts/ --- By default, conflicts are resolved at row level. When changes from two nodes conflict, either the local or remote tuple is selected and the other is discarded. For example, commit timestamps for the two conflicting changes might be compared and the newer one kept. 
This approach ensures that all nodes converge to the same result and establishes commit-order-like semantics on the whole cluster. diff --git a/product_docs/docs/pgd/5.7/conflict-management/conflicts/00_conflicts_overview.mdx b/product_docs/docs/pgd/5.7/conflict-management/conflicts/00_conflicts_overview.mdx index da19837bec6..c101ffa4e9c 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/conflicts/00_conflicts_overview.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/conflicts/00_conflicts_overview.mdx @@ -15,7 +15,7 @@ Conflict handling is configurable, as described in [Conflict resolution](04_conf Column-level conflict detection and resolution is available with PGD, as described in [CLCD](../column-level-conflicts). -By default, all conflicts are logged to [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible/#bdrconflict_history). If conflicts are possible, then table owners must monitor for them and analyze how to avoid them or make plans to handle them regularly as an application task. The [LiveCompare](/livecompare/latest) tool is also available to scan regularly for divergence. +By default, all conflicts are logged to [`bdr.conflict_history`](/pgd/5.7/reference/catalogs-visible/#bdrconflict_history). If conflicts are possible, then table owners must monitor for them and analyze how to avoid them or make plans to handle them regularly as an application task. The [LiveCompare](/livecompare/latest) tool is also available to scan regularly for divergence. Some clustering systems use distributed lock mechanisms to prevent concurrent access to data. These can perform reasonably when servers are very close to each other but can't support geographically distributed applications where very low latency is critical for acceptable performance. diff --git a/product_docs/docs/pgd/5.7/conflict-management/conflicts/02_types_of_conflict.mdx b/product_docs/docs/pgd/5.7/conflict-management/conflicts/02_types_of_conflict.mdx index d9f1cca3255..e0851f10c89 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/conflicts/02_types_of_conflict.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/conflicts/02_types_of_conflict.mdx @@ -50,7 +50,7 @@ The deletion tries to preserve the row with the correct `PRIMARY KEY` and delete In case of multiple rows conflicting this way, if the result of conflict resolution is to proceed with the insert operation, some of the data is always deleted. !!! -You can also define a different behavior using a [conflict trigger](/pgd/latest/striggers/#conflict-triggers). +You can also define a different behavior using a [conflict trigger](/pgd/5.7/striggers/#conflict-triggers). ### UPDATE/UPDATE conflicts diff --git a/product_docs/docs/pgd/5.7/conflict-management/conflicts/index.mdx b/product_docs/docs/pgd/5.7/conflict-management/conflicts/index.mdx index 85a6b5ec93e..8f9363119cb 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/conflicts/index.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/conflicts/index.mdx @@ -1,7 +1,7 @@ --- title: Conflicts redirects: - - /pgd/latest/bdr/conflicts/ + - /pgd/5.7/bdr/conflicts/ --- EDB Postgres Distributed is an active/active or multi-master DBMS. If used asynchronously, writes to the same or related rows from multiple different nodes can result in data conflicts when using standard data types. 
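Because conflicts land in `bdr.conflict_history`, as noted earlier, a periodic query of that catalog is a simple way to implement the recommended monitoring. A minimal sketch, assuming the `local_time`, `conflict_type`, and `conflict_resolution` columns; check the catalogs reference for the exact column set in your version:

```sql
-- Review the most recent conflicts PGD recorded and how each was resolved.
SELECT local_time, conflict_type, conflict_resolution
FROM bdr.conflict_history
ORDER BY local_time DESC
LIMIT 10;
```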
diff --git a/product_docs/docs/pgd/5.7/conflict-management/crdt/index.mdx b/product_docs/docs/pgd/5.7/conflict-management/crdt/index.mdx index 70292e7018c..65047bb00a2 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/crdt/index.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/crdt/index.mdx @@ -2,7 +2,7 @@ navTitle: CRDTs title: Conflict-free replicated data types redirects: - - /pgd/latest/bdr/crdt/ + - /pgd/5.7/bdr/crdt/ --- Conflict-free replicated data types (CRDTs) support merging values from concurrently modified rows instead of discarding one of the rows as the traditional resolution does. diff --git a/product_docs/docs/pgd/5.7/conflict-management/index.mdx b/product_docs/docs/pgd/5.7/conflict-management/index.mdx index 3d337bf87a2..25e37917a1d 100644 --- a/product_docs/docs/pgd/5.7/conflict-management/index.mdx +++ b/product_docs/docs/pgd/5.7/conflict-management/index.mdx @@ -18,4 +18,4 @@ By default, conflicts are resolved at the row level. When changes from two nodes Column-level conflict detection and resolution is available with PGD, described in [CLCD](column-level-conflicts). -If you want to avoid conflicts, you can use [Group Commit](/pgd/latest/commit-scopes/group-commit/) with [Eager conflict resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) or conflict-free data types (CRDTs), described in [CRDT](crdt). You can also use PGD Proxy and route all writes to one write-leader, eliminating the chance for inter-nodal conflicts. +If you want to avoid conflicts, you can use [Group Commit](/pgd/5.7/commit-scopes/group-commit/) with [Eager conflict resolution](/pgd/5.7/commit-scopes/group-commit/#eager-conflict-resolution) or conflict-free data types (CRDTs), described in [CRDT](crdt). You can also use PGD Proxy and route all writes to one write-leader, eliminating the chance for inter-nodal conflicts. diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-locking.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-locking.mdx index d44fc6b9985..080b7329267 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-locking.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-locking.mdx @@ -73,7 +73,7 @@ Witness and subscriber-only nodes aren't eligible to participate. If a DDL statement isn't replicated, no global locks are acquired. -Specify locking behavior with the [`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) parameter, as +Specify locking behavior with the [`bdr.ddl_locking`](/pgd/5.7/reference/pgd-settings#bdrddl_locking) parameter, as explained in [Executing DDL on PGD systems](ddl-overview#executing-ddl-on-pgd-systems): - `ddl_locking = all` takes global DDL lock and, if needed, takes relation DML lock. diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-managing-with-pgd-replication.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-managing-with-pgd-replication.mdx index c1fe779c19e..c618d3fed6b 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-managing-with-pgd-replication.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-managing-with-pgd-replication.mdx @@ -32,7 +32,7 @@ SELECT bdr.run_on_all_nodes($ddl$ $ddl$); ``` -We recommend using the [`bdr.run_on_all_nodes()`](/pgd/latest/reference/functions#bdrrun_on_all_nodes) technique with `CREATE +We recommend using the [`bdr.run_on_all_nodes()`](/pgd/5.7/reference/functions#bdrrun_on_all_nodes) technique with `CREATE INDEX CONCURRENTLY`, noting that DDL replication must be disabled for the whole session because `CREATE INDEX CONCURRENTLY` is a multi-transaction command. 
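A minimal sketch of the `CREATE INDEX CONCURRENTLY` technique recommended above; the table and index names are hypothetical:

```sql
-- Disable DDL replication for this session only, then build the index
-- on every node. CREATE INDEX CONCURRENTLY is a multi-transaction
-- command, so it can't go through the replicated-DDL path.
SET bdr.ddl_replication = off;
SELECT bdr.run_on_all_nodes($ddl$
    CREATE INDEX CONCURRENTLY index_a ON table_a (i);
$ddl$);
```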
Avoid `CREATE INDEX` on production systems @@ -60,10 +60,10 @@ cancel the DDL on the originating node with **Control-C** in psql or with `pg_ca You can't cancel a DDL lock from any other node. You can control how long the global lock takes with optional global locking -timeout settings. [`bdr.global_lock_timeout`](/pgd/latest/reference/pgd-settings#bdrglobal_lock_timeout) limits how long the wait for +timeout settings. [`bdr.global_lock_timeout`](/pgd/5.7/reference/pgd-settings#bdrglobal_lock_timeout) limits how long the wait for acquiring the global lock can take before it's canceled. -[`bdr.global_lock_statement_timeout`](/pgd/latest/reference/pgd-settings#bdrglobal_lock_statement_timeout) limits the runtime length of any statement -in transaction that holds global locks, and [`bdr.global_lock_idle_timeout`](/pgd/latest/reference/pgd-settings#bdrglobal_lock_idle_timeout) sets +[`bdr.global_lock_statement_timeout`](/pgd/5.7/reference/pgd-settings#bdrglobal_lock_statement_timeout) limits the runtime length of any statement +in a transaction that holds global locks, and [`bdr.global_lock_idle_timeout`](/pgd/5.7/reference/pgd-settings#bdrglobal_lock_idle_timeout) sets the maximum allowed idle time (time between statements) for a transaction holding any global locks. You can disable all of these timeouts by setting their values to zero. @@ -84,7 +84,7 @@ locks that it holds. If it stays down for a long time or indefinitely, remove the node from the PGD group to release the global locks. This is one reason for executing emergency DDL using the `SET` command as -the bdr_superuser to update the [`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) value. +the bdr_superuser to update the [`bdr.ddl_locking`](/pgd/5.7/reference/pgd-settings#bdrddl_locking) value. If one of the other nodes goes down after it confirmed the global lock but before the command acquiring it executed, the execution of @@ -102,7 +102,7 @@ command continues normally, and the lock is released. Not all commands can be replicated automatically. Such commands are generally disallowed, unless DDL replication is turned off -by turning [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) off. +by turning [`bdr.ddl_replication`](/pgd/5.7/reference/pgd-settings#bdrddl_replication) off. PGD prevents some DDL statements from running when it's active on a database. This protects the consistency of the system by disallowing diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-overview.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-overview.mdx index 9d1d05b8c7b..63d91d8992b 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-overview.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-overview.mdx @@ -71,7 +71,7 @@ it a useful option when creating a new and empty database schema. These options can be set only by the bdr_superuser, by the superuser, or in the `postgresql.conf` configuration file. -When using the [`bdr.replicate_ddl_command`](/pgd/latest/reference/functions#bdrreplicate_ddl_command), you can set this +When using the [`bdr.replicate_ddl_command`](/pgd/5.7/reference/functions#bdrreplicate_ddl_command), you can set this parameter directly with the third argument, using the specified -[`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) setting only for the DDL commands passed to that +[`bdr.ddl_locking`](/pgd/5.7/reference/pgd-settings#bdrddl_locking) setting only for the DDL commands passed to that function.
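As a sketch of that third-argument form, with a hypothetical table and the `default` replication set:

```sql
-- Replicate one DDL command to the 'default' replication set while
-- applying bdr.ddl_locking = 'all' for this command only.
SELECT bdr.replicate_ddl_command(
    'ALTER TABLE inventory.product ADD COLUMN description text;',
    ARRAY['default'],
    'all'
);
```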
diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-pgd-functions-like-ddl.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-pgd-functions-like-ddl.mdx index 0f9aa5d00e3..0b13a05e378 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-pgd-functions-like-ddl.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-pgd-functions-like-ddl.mdx @@ -10,13 +10,13 @@ information, see the documentation for the individual functions. Replication set management: -- [`bdr.create_replication_set`](/pgd/latest/reference/repsets-management#bdrcreate_replication_set) -- [`bdr.alter_replication_set`](/pgd/latest/reference/repsets-management#bdralter_replication_set) -- [`bdr.drop_replication_set`](/pgd/latest/reference/repsets-management#bdrdrop_replication_set) -- [`bdr.replication_set_add_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_add_table) -- [`bdr.replication_set_remove_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_remove_table) -- [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) -- [`bdr.replication_set_remove_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) +- [`bdr.create_replication_set`](/pgd/5.7/reference/repsets-management#bdrcreate_replication_set) +- [`bdr.alter_replication_set`](/pgd/5.7/reference/repsets-management#bdralter_replication_set) +- [`bdr.drop_replication_set`](/pgd/5.7/reference/repsets-management#bdrdrop_replication_set) +- [`bdr.replication_set_add_table`](/pgd/5.7/reference/repsets-membership#bdrreplication_set_add_table) +- [`bdr.replication_set_remove_table`](/pgd/5.7/reference/repsets-membership#bdrreplication_set_remove_table) +- [`bdr.replication_set_add_ddl_filter`](/pgd/5.7/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) +- [`bdr.replication_set_remove_ddl_filter`](/pgd/5.7/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) Conflict management: @@ -26,10 +26,10 @@ Conflict management: Sequence management: -- [`bdr.alter_sequence_set_kind`](/pgd/latest/reference/sequences#bdralter_sequence_set_kind) +- [`bdr.alter_sequence_set_kind`](/pgd/5.7/reference/sequences#bdralter_sequence_set_kind) Stream triggers: -- [`bdr.create_conflict_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) -- [`bdr.create_transform_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) -- [`bdr.drop_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrdrop_trigger) +- [`bdr.create_conflict_trigger`](/pgd/5.7/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) +- [`bdr.create_transform_trigger`](/pgd/5.7/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) +- [`bdr.drop_trigger`](/pgd/5.7/reference/streamtriggers/interfaces#bdrdrop_trigger) diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-replication-options.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-replication-options.mdx index cd37f8f04ef..185b0edadbc 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-replication-options.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-replication-options.mdx @@ -3,7 +3,7 @@ title: DDL replication options navTitle: Options --- -The [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) parameter specifies replication behavior. +The [`bdr.ddl_replication`](/pgd/5.7/reference/pgd-settings#bdrddl_replication) parameter specifies replication behavior. `bdr.ddl_replication = on` is the default. 
This setting replicates DDL to the default replication set, which by default means all nodes. Non-default replication sets don't replicate DDL unless they have a DDL filter defined for them. You can also replicate DDL to specific replication sets using the -function [`bdr.replicate_ddl_command()`](/pgd/latest/reference/functions#bdrreplicate_ddl_command). This function can be helpful if you +function [`bdr.replicate_ddl_command()`](/pgd/5.7/reference/functions#bdrreplicate_ddl_command). This function can be helpful if you want to run DDL commands when a node is down. It's also helpful if you want to have indexes or partitions that exist on a subset of nodes or rep sets, for example, all nodes at site1. @@ -26,7 +26,7 @@ SELECT bdr.replicate_ddl_command( ``` While we don't recommend it, you can skip automatic DDL replication and -execute it manually on each node using the [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) configuration +execute it manually on each node using the [`bdr.ddl_replication`](/pgd/5.7/reference/pgd-settings#bdrddl_replication) configuration parameter. ``` diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-role-manipulation.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-role-manipulation.mdx index 37cff6150aa..74d2d8fc0ba 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-role-manipulation.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-role-manipulation.mdx @@ -11,7 +11,7 @@ PGD requires that any roles that are referenced by any replicated DDL must exist on all nodes. The roles don't have to have the same grants, password, and so on, but they must exist. -PGD replicates role manipulation statements if [`bdr.role_replication`](/pgd/latest/reference/pgd-settings#bdrrole_replication) is +PGD replicates role manipulation statements if [`bdr.role_replication`](/pgd/5.7/reference/pgd-settings#bdrrole_replication) is enabled (default) and role manipulation statements are run in a PGD-enabled database. diff --git a/product_docs/docs/pgd/5.7/ddl/ddl-workarounds.mdx b/product_docs/docs/pgd/5.7/ddl/ddl-workarounds.mdx index 921110deb78..51da9433a4e 100644 --- a/product_docs/docs/pgd/5.7/ddl/ddl-workarounds.mdx +++ b/product_docs/docs/pgd/5.7/ddl/ddl-workarounds.mdx @@ -130,7 +130,7 @@ The `ALTER TYPE` statement is replicated, but affected tables aren't locked. When you use this DDL, ensure that the statement has successfully executed on all nodes before using the new type. You can achieve this using -the [`bdr.wait_slot_confirm_lsn()`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn) function. +the [`bdr.wait_slot_confirm_lsn()`](/pgd/5.7/reference/functions#bdrwait_slot_confirm_lsn) function. This example ensures that the DDL is written to all nodes before using the new value in DML statements: diff --git a/product_docs/docs/pgd/5.7/decoding_worker.mdx b/product_docs/docs/pgd/5.7/decoding_worker.mdx index 2064f4d1793..7d9b9343cd5 100644 --- a/product_docs/docs/pgd/5.7/decoding_worker.mdx +++ b/product_docs/docs/pgd/5.7/decoding_worker.mdx @@ -26,8 +26,8 @@ subscribing nodes received data. LCR files are stored under the size of the LCR files varies as replication lag increases, so this process also needs monitoring. The LCRs that aren't required by any of the PGD nodes are cleaned periodically. The interval between two consecutive cleanups is controlled by -[`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval), which defaults to 3 minutes.
The cleanup is -disabled when [`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval) is 0. +[`bdr.lcr_cleanup_interval`](/pgd/5.7/reference/pgd-settings#bdrlcr_cleanup_interval), which defaults to 3 minutes. The cleanup is +disabled when [`bdr.lcr_cleanup_interval`](/pgd/5.7/reference/pgd-settings#bdrlcr_cleanup_interval) is 0. ## Disabling @@ -39,11 +39,11 @@ GUCs control the production and use of LCR per node. By default these are `false`. For production and use of LCRs, enable the decoding worker for the PGD group and set these GUCs to `true` on each of the nodes in the PGD group. -- [`bdr.enable_wal_decoder`](/pgd/latest/reference/pgd-settings#bdrenable_wal_decoder) — When `false`, all WAL +- [`bdr.enable_wal_decoder`](/pgd/5.7/reference/pgd-settings#bdrenable_wal_decoder) — When `false`, all WAL senders using LCRs restart to use WAL directly. When `true` along with the PGD group config, a decoding worker process is started to produce LCR and WAL senders that use LCR. -- [`bdr.receive_lcr`](/pgd/latest/reference/pgd-settings#bdrreceive_lcr) — When `true` on the subscribing node, it requests WAL +- [`bdr.receive_lcr`](/pgd/5.7/reference/pgd-settings#bdrreceive_lcr) — When `true` on the subscribing node, it requests WAL sender on the publisher node to use LCRs if available. @@ -84,7 +84,7 @@ The WAL decoder always streams the transactions to LCRs but based on downstream To support this feature, the system creates additional streaming files. These files have names that begin with `STR_TXN_` and `CAS_TXN_`, and each streamed transaction creates its own pair. -To enable transaction streaming with the WAL decoder, set the PGD group's `bdr.streaming_mode` set to ‘default’ using [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option). +To enable transaction streaming with the WAL decoder, set the PGD group's `bdr.streaming_mode` option to ‘default’ using [`bdr.alter_node_group_option`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_node_group_option). diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-cloudservice/index.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-cloudservice/index.mdx index 9d17e513485..d4ab88c67ed 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-cloudservice/index.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-cloudservice/index.mdx @@ -2,9 +2,9 @@ title: Deploying and configuring PGD on EDB Postgres AI Cloud Service navTitle: On EDB Cloud Service redirects: - - /pgd/latest/deploy-config/deploy-biganimal/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/install-admin/admin-biganimal/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-biganimal/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/deploy-config/deploy-biganimal/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-biganimal/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-biganimal/ #generated for pgd deploy-config-planning reorg --- EDB Postgres AI Cloud Service is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account where it's operated by our Postgres experts. EDB Postgres AI Cloud Service makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single and multi-region Always-on clusters.
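Returning to the decoding worker discussion above, enabling transaction streaming for a group comes down to one call. A minimal sketch, with `pgd` as a hypothetical top-level group name:

```sql
-- Set the group's streaming_mode option so the WAL decoder streams
-- transactions to LCRs. 'pgd' is a placeholder group name.
SELECT bdr.alter_node_group_option('pgd', 'streaming_mode', 'default');
```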
diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-kubernetes/index.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-kubernetes/index.mdx index f0a8813151a..989bde4a8d6 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-kubernetes/index.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-kubernetes/index.mdx @@ -2,8 +2,8 @@ title: Deploying and configuring PGD on Kubernetes navTitle: With Kubernetes redirects: - - /pgd/latest/install-admin/admin-kubernetes/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-kubernetes/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-kubernetes/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-kubernetes/ #generated for pgd deploy-config-planning reorg --- EDB CloudNativePG Global Cluster is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift. diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx index 1c9b8d291ff..b4b284b7511 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx @@ -3,8 +3,8 @@ title: Step 1 - Provisioning hosts navTitle: Provisioning hosts deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg --- ## Provisioning hosts diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/02-install-postgres.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/02-install-postgres.mdx index dcc6a02b9c8..f2073653dbd 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/02-install-postgres.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/02-install-postgres.mdx @@ -3,8 +3,8 @@ title: Step 2 - Installing Postgres navTitle: Installing Postgres deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg --- ## Installing Postgres diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx 
b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx index 2f908694bab..e80ac02ea92 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx @@ -3,15 +3,15 @@ title: Step 3 - Configuring PGD repositories navTitle: Configuring PGD repositories deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg --- ## Configuring PGD repositories To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages. -Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](/pgd/latest/nodes/overview), such as witness or subscriber-only nodes. +Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](/pgd/5.7/nodes/overview), such as witness or subscriber-only nodes. * Use your EDB account. * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/04-installing-software.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/04-installing-software.mdx index 1384438cf71..2f0a6a7e02b 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/04-installing-software.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/04-installing-software.mdx @@ -3,8 +3,8 @@ title: Step 4 - Installing the PGD software navTitle: Installing PGD software deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg --- ## Installing the PGD software diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx index 009e2486ecf..8e31d446f50 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx @@ -3,8 +3,8 @@ title: Step 5 - Creating the PGD cluster navTitle: Creating the cluster deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/05-creating-cluster/ #generated for pgd 
deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg --- ## Creating the PGD cluster @@ -81,7 +81,7 @@ sudo -iu enterprisedb psql bdrdb ### Create the first node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); @@ -89,7 +89,7 @@ select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); #### Create the top-level group -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`. +Call the [`bdr.create_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`. ``` select bdr.create_node_group('pgd'); @@ -101,7 +101,7 @@ Using subgroups to organize your nodes is preferred, as it allows services like In a larger PGD installation, multiple subgroups can exist. These subgroups provide organizational grouping that enables geographical mapping of clusters and localized resilience. For that reason, this example creates a subgroup for the first nodes to enable simpler expansion and the use of PGD Proxy. -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group. +Call the [`bdr.create_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group. The subgroup name is the first parameter, and the parent group is the second parameter. This example creates a subgroup `dc1` as a child of `pgd`. @@ -121,7 +121,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the second node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444'); @@ -129,7 +129,7 @@ select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444'); #### Join the second node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. 
+Using [`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. ``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); @@ -146,7 +146,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the third node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); @@ -154,7 +154,7 @@ select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); #### Join the third node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. +Using [`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. ``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/06-check-cluster.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/06-check-cluster.mdx index fc7938ce85c..d31f373f2c3 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/06-check-cluster.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/06-check-cluster.mdx @@ -3,8 +3,8 @@ title: Step 6 - Checking the cluster navTitle: Checking the cluster deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg --- ## Checking the cluster diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx index 0bee3e06b9a..27475ce8fa3 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx @@ -3,8 +3,8 @@ title: Step 7 - Configure proxies navTitle: Configure proxies deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg + - 
/pgd/5.7/install-admin/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg --- ## Configure proxies @@ -21,9 +21,9 @@ It's best practice to configure PGD Proxy for clusters to enable this behavior. To set up a proxy, you need to first prepare the cluster and subgroup the proxies will be working with by: -* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters. -* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) does not create a proxy, but creates a space for a proxy to register itself with the cluster. The space contains configuration values which can be modified later. Initially it is configured with default proxy options such as setting the `listen_address` to `0.0.0.0`. -* Configure proxy routes to each node by setting route_dsn for each node in the subgroup. The route_dsn is the connection string that the proxy should use to connect to that node. Use [`bdr.alter_node_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option) to set the route_dsn for each node in the subgroup. +* Log in and set the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters. +* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/5.7/reference/routing#bdrcreate_proxy), passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/5.7/reference/routing#bdrcreate_proxy) function doesn't create a running proxy; it creates a space where a proxy can register itself with the cluster. The space contains configuration values, which you can modify later. Initially, it's configured with default proxy options, such as a `listen_address` of `0.0.0.0`. +* Configure proxy routes to each node by setting `route_dsn` for each node in the subgroup. The `route_dsn` is the connection string that the proxy uses to connect to that node. Use [`bdr.alter_node_option`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_node_option) to set the `route_dsn` for each node in the subgroup, as shown in the sketch after this list. * Create a pgdproxy user on the cluster with a password or other authentication.
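For the last two steps, a minimal SQL sketch, assuming the `dc1` subgroup and the `node-one`/`host-one` naming used earlier in this walkthrough (the password is a placeholder):

```sql
-- Set the DSN that PGD Proxy uses to reach each node, connecting as pgdproxy.
SELECT bdr.alter_node_option('node-one', 'route_dsn', 'host=host-one dbname=bdrdb port=5444 user=pgdproxy');
SELECT bdr.alter_node_option('node-two', 'route_dsn', 'host=host-two dbname=bdrdb port=5444 user=pgdproxy');
SELECT bdr.alter_node_option('node-three', 'route_dsn', 'host=host-three dbname=bdrdb port=5444 user=pgdproxy');

-- Create the user the proxies connect as (placeholder password).
CREATE USER pgdproxy PASSWORD 'proxysecret';
```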
### Configure each host as a proxy @@ -53,7 +53,7 @@ SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true'); SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true'); ``` -You can use the [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`: +You can use the [`bdr.node_group_summary`](/pgd/5.7/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`: ```sql SELECT node_group_name, enable_proxy_routing, enable_raft @@ -80,7 +80,7 @@ SELECT bdr.create_proxy('pgd-proxy-two','dc1'); SELECT bdr.create_proxy('pgd-proxy-three','dc1'); ``` -You can use the [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created: +You can use the [`bdr.proxy_config_summary`](/pgd/5.7/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created: ```sql SELECT proxy_name, node_group_name diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx index 11fe960552b..f1c259a0dae 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx @@ -3,8 +3,8 @@ title: Step 8 - Using PGD CLI navTitle: Using PGD CLI deepToC: true redirects: - - /pgd/latest/install-admin/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/08-using-pgd-cli/ #generated for pgd deploy-config-planning reorg --- ## Using PGD CLI diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/index.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/index.mdx index 8fff69ab309..d18de626acd 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/index.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-manual/deploying/index.mdx @@ -11,8 +11,8 @@ navigation: - 07-configure-proxies - 08-using-pgd-cli redirects: - - /pgd/latest/install-admin/admin-manual/installing/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-manual/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-manual/installing/ #generated for pgd deploy-config-planning reorg --- EDB offers automated PGD deployment using Trusted Postgres Architect (TPA) because it's generally more reliable than manual processes. 
diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/01-configuring.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/01-configuring.mdx index bedd73aec82..c97af2d642e 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/01-configuring.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/01-configuring.mdx @@ -2,8 +2,8 @@ title: Configuring a PGD cluster with TPA navTitle: Configuring redirects: - - /pgd/latest/install-admin/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-tpa/installing/01-configuring/ #generated for pgd deploy-config-planning reorg --- The `tpaexec configure` command generates a simple YAML configuration file to describe a cluster, based on the options you select. The configuration is ready for immediate use, and you can modify it to better suit your needs. Editing the configuration file is the usual way to make any configuration changes to your cluster both before and after it's created. diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/02-deploying.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/02-deploying.mdx index 51a0136d452..ba2cb47b940 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/02-deploying.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/02-deploying.mdx @@ -2,8 +2,8 @@ title: Provisioning, deploying, and testing navTitle: Deploying redirects: - - /pgd/latest/install-admin/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg --- ## Provision diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/index.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/index.mdx index 31738207df9..7f53745f05b 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/index.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/deploying/index.mdx @@ -4,15 +4,15 @@ navTitle: Deploying with TPA description: > Detailed reference and examples for using TPA to configure and deploy PGD redirects: - - /pgd/latest/tpa/ - - /pgd/latest/deployments/tpaexec/using_tpaexec/ - - /pgd/latest/tpa/using_tpa/ + - /pgd/5.7/tpa/ + - /pgd/5.7/deployments/tpaexec/using_tpaexec/ + - /pgd/5.7/tpa/using_tpa/ - ../deployments/tpaexec - ../deployments/tpaexec/installing_tpaexec - ../deployments/using_tpa/ - ../tpa - - /pgd/latest/install-admin/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg --- The standard way of automatically deploying EDB Postgres Distributed in a self-managed setting is to use EDB's deployment tool: [Trusted Postgres Architect](/tpa/latest/) (TPA). 
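For example, a sketch of generating a configuration for an Always-ON cluster; the cluster name, location name, and Postgres version here are illustrative, and depending on your target you may also need a platform option such as `--platform docker`:

```bash
# Write a config.yml for an example PGD Always-ON cluster into ./democluster
tpaexec configure democluster \
  --architecture PGD-Always-ON \
  --pgd-proxy-routing local \
  --postgresql 16 \
  --location-names dc1
```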
@@ -22,11 +22,11 @@ This applies to physical and virtual machines, both self-hosted and in the cloud !!! Note Get started with TPA and PGD quickly - If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker. + If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/5.7/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker. - If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account. + If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/5.7/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account. - If you want to run on your own Linux systems or VMs, you can also use TPA to [deploy EDB Postgres Distributed directly to your own Linux hosts](/pgd/latest/quickstart/quick_start_linux). + If you want to run on your own Linux systems or VMs, you can also use TPA to [deploy EDB Postgres Distributed directly to your own Linux hosts](/pgd/5.7/quickstart/quick_start_linux). ## Prerequisite: Install TPA diff --git a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/index.mdx b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/index.mdx index e69081153d9..4c06c56404e 100644 --- a/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/index.mdx +++ b/product_docs/docs/pgd/5.7/deploy-config/deploy-tpa/index.mdx @@ -2,8 +2,8 @@ title: Deployment and management with TPA navTitle: Using TPA redirects: - - /pgd/latest/install-admin/admin-tpa/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-tpa/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-tpa/ #generated for pgd deploy-config-planning reorg --- TPA (Trusted Postgres Architect) is a standard automated way of installing PGD and Postgres on physical and virtual machines, diff --git a/product_docs/docs/pgd/5.7/index.mdx b/product_docs/docs/pgd/5.7/index.mdx index 87b5f673f07..0e96dcfed39 100644 --- a/product_docs/docs/pgd/5.7/index.mdx +++ b/product_docs/docs/pgd/5.7/index.mdx @@ -5,7 +5,7 @@ description: EDB Postgres Distributed (PGD) provides multi-master replication an indexCards: simple redirects: - /pgd/5/compatibility_matrix - - /pgd/latest/bdr + - /pgd/5.7/bdr - /edb-postgres-ai/migration-etl/pgd/ navigation: - rel_notes @@ -55,7 +55,7 @@ categories: pdf: true directoryDefaults: version: "5.7.0" -displayBanner: 'Warning: You are not reading the most recent version of this documentation.
Documentation improvements are made only to the latest version.
As per semantic versioning, PGD minor releases remain backward compatible and may include important bug fixes and enhancements.
We recommend upgrading the latest minor release as soon as possible.
If you want up-to-date information, read the latest PGD documentation.' +displayBanner: 'Warning: You are not reading the most recent version of this documentation.
Documentation improvements are made only to the latest version.
As per semantic versioning, PGD minor releases remain backward compatible and may include important bug fixes and enhancements.
We recommend upgrading to the latest minor release as soon as possible.
If you want up-to-date information, read the latest PGD documentation.' --- @@ -80,7 +80,7 @@ Read about why PostgreSQL is better when it’s distributed with EDB Postgres Di By default, EDB Postgres Distributed uses asynchronous replication, applying changes on the peer nodes only after the local commit. You can configure additional levels of synchronicity between different nodes, groups of nodes, or all nodes by configuring -[Synchronous Commit](/pgd/latest/commit-scopes/synchronous_commit/), [Group Commit](commit-scopes/group-commit) (optionally with [Eager Conflict Resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution)), or [CAMO](commit-scopes/camo). +[Synchronous Commit](commit-scopes/synchronous_commit/), [Group Commit](commit-scopes/group-commit) (optionally with [Eager Conflict Resolution](commit-scopes/group-commit/#eager-conflict-resolution)), or [CAMO](commit-scopes/camo). ## Compatibility diff --git a/product_docs/docs/pgd/5.7/known_issues.mdx b/product_docs/docs/pgd/5.7/known_issues.mdx index 92f945eac79..4018ef7a2de 100644 --- a/product_docs/docs/pgd/5.7/known_issues.mdx +++ b/product_docs/docs/pgd/5.7/known_issues.mdx @@ -38,7 +38,7 @@ Adding or removing a pair doesn't require a restart of Postgres or even a reload - Transactions using Eager Replication can't yet execute DDL. The TRUNCATE command is allowed. -- Parallel Apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either (a) Setting `num_writers` to 1 for the node group using [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) or (b) using the GUC [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). See [Configuration of generic replication](/pgd/latest/reference/pgd-settings#generic-replication). +- Parallel Apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either (a) Setting `num_writers` to 1 for the node group using [`bdr.alter_node_group_option`](/pgd/5.7/reference/nodes-management-interfaces/#bdralter_node_group_option) or (b) using the GUC [`bdr.writers_per_subscription`](/pgd/5.7/reference/pgd-settings#bdrwriters_per_subscription). See [Configuration of generic replication](/pgd/5.7/reference/pgd-settings#generic-replication). - There currently is no protection against altering or removing a commit scope. Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. @@ -47,7 +47,7 @@ Make sure that any transactions using a specific commit scope have finished befo - The [PGD CLI](cli) can return stale data on the state of the cluster if it's still connecting to nodes that were previously parted from the cluster. Edit the [`pgd-cli-config.yml`](cli/configuring_cli/#using-a-configuration-file) file, or change your [`--dsn`](cli/configuring_cli/#using-database-connection-strings-in-the-command-line) settings to ensure only active nodes in the cluster are listed for connection. -To modify a commit scope safely, use [`bdr.alter_commit_scope`](/pgd/latest/reference/functions#bdralter_commit_scope). +To modify a commit scope safely, use [`bdr.alter_commit_scope`](/pgd/5.7/reference/functions#bdralter_commit_scope). 
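A minimal sketch of such a change, assuming a commit scope named `example_scope` already defined on a hypothetical `dc1` group:

```sql
-- Hypothetical names; this replaces the scope's rule in place rather than dropping it.
SELECT bdr.alter_commit_scope(
    commit_scope_name := 'example_scope',
    origin_node_group := 'dc1',
    rule              := 'MAJORITY (dc1) GROUP COMMIT'
);
```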
- DDL run in serializable transactions can face the error: `ERROR: could not serialize access due to read/write dependencies among transactions`. A workaround is to run the DDL outside serializable transactions. diff --git a/product_docs/docs/pgd/5.7/monitoring/sql.mdx b/product_docs/docs/pgd/5.7/monitoring/sql.mdx index 4f5d18f198f..5a4f619e87e 100644 --- a/product_docs/docs/pgd/5.7/monitoring/sql.mdx +++ b/product_docs/docs/pgd/5.7/monitoring/sql.mdx @@ -74,7 +74,7 @@ node_seq_id | 3 node_local_dbname | postgres ``` -Also, the table [`bdr.node_catchup_info`](/pgd/latest/reference/catalogs-visible/#bdrnode_catchup_info) gives information +Also, the table [`bdr.node_catchup_info`](/pgd/5.7/reference/catalogs-visible/#bdrnode_catchup_info) gives information on the catch-up state, which can be relevant to joining nodes or parting nodes. When a node is parted, some nodes in the cluster might not receive @@ -94,7 +94,7 @@ The `catchup_state` can be one of the following: The manager worker is responsible for many background tasks, including managing all the other workers. As such, it's important to know what it's doing, especially in cases where it might seem stuck. -Accordingly, the [`bdr.stat_worker`](/pgd/latest/reference/catalogs-visible/#bdrstat_worker) view provides per worker statistics for PGD workers, including manager workers. With respect to ensuring manager workers do not get stuck, the current task they are executing would be reported in their `query` field prefixed by "pgd manager:". +Accordingly, the [`bdr.stat_worker`](/pgd/5.7/reference/catalogs-visible/#bdrstat_worker) view provides per-worker statistics for PGD workers, including manager workers. To help you check that a manager worker isn't stuck, the current task it's executing is reported in its `query` field, prefixed by "pgd manager:". The `worker_backend_state` field for manager workers also reports whether the manager is idle or busy. @@ -104,15 +104,15 @@ Routing is a critical part of PGD for ensuring a seamless application experience. Monitoring all of these is important for noticing and debugging issues, as well as informing more optimal configurations. Accordingly, there are two main views for monitoring statistics to do with routing: -- [`bdr.stat_routing_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_routing_state) for monitoring the state of the connection routing with PGD Proxy uses to route the connections. -- [`bdr.stat_routing_candidate_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_routing_candidate_state) for information about routing candidate nodes from the point of view of the Raft leader (the view is empty on other nodes). +- [`bdr.stat_routing_state`](/pgd/5.7/reference/catalogs-visible/#bdrstat_routing_state) for monitoring the state of the connection routing that PGD Proxy uses to route connections. +- [`bdr.stat_routing_candidate_state`](/pgd/5.7/reference/catalogs-visible/#bdrstat_routing_candidate_state) for information about routing candidate nodes from the point of view of the Raft leader (the view is empty on other nodes).
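You can inspect both views directly; the candidate view returns rows only on the Raft leader:

```sql
-- Current proxy routing state on this node.
SELECT * FROM bdr.stat_routing_state;

-- Candidate nodes for routing, as seen by the Raft leader (empty elsewhere).
SELECT * FROM bdr.stat_routing_candidate_state;
```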
## Monitoring replication peers You use two main views for monitoring replication activity: -- [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible/#bdrnode_slots) for monitoring outgoing replication -- [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible/#bdrsubscription_summary) for monitoring incoming replication +- [`bdr.node_slots`](/pgd/5.7/reference/catalogs-visible/#bdrnode_slots) for monitoring outgoing replication +- [`bdr.subscription_summary`](/pgd/5.7/reference/catalogs-visible/#bdrsubscription_summary) for monitoring incoming replication You can also obtain most of the information provided by `bdr.node_slots` by querying the standard PostgreSQL replication monitoring views @@ -128,9 +128,9 @@ something is down or disconnected. See [Replication slots](../node_management/re You can use another view for monitoring outgoing replication activity: -- [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible/#bdrnode_replication_rates) for monitoring outgoing replication +- [`bdr.node_replication_rates`](/pgd/5.7/reference/catalogs-visible/#bdrnode_replication_rates) for monitoring outgoing replication -The [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible/#bdrnode_replication_rates) view gives an overall picture of the outgoing +The [`bdr.node_replication_rates`](/pgd/5.7/reference/catalogs-visible/#bdrnode_replication_rates) view gives an overall picture of the outgoing replication activity along with the catchup estimates for peer nodes, specifically. @@ -163,10 +163,10 @@ at which the peer is consuming data from the local node. When a node reconnects to the cluster, its `replay_lag` is immediately set to zero. This behavior will be fixed in a future release. As a workaround, we recommend using the `catchup_interval` column, which reports the time required for the peer node to catch up to the -local node data. The other fields are also available from the [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible/#bdrnode_slots) +local node data. The other fields are also available from the [`bdr.node_slots`](/pgd/5.7/reference/catalogs-visible/#bdrnode_slots) view. -Administrators can query [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible/#bdrnode_slots) for outgoing replication from the +Administrators can query [`bdr.node_slots`](/pgd/5.7/reference/catalogs-visible/#bdrnode_slots) for outgoing replication from the local node. It shows information about the replication status of all other nodes in the group that are known to the current node, as well as any additional replication slots created by PGD on the current node. @@ -283,13 +283,13 @@ sub_slot_name | bdr_postgres_bdrgroup_node1 subscription_status | replicating -You can further monitor subscriptions by monitoring subscription summary statistics through [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible/#bdrstat_subscription), and by monitoring the subscription replication receivers and subscription replication writers, using [`bdr.stat_receiver`](/pgd/latest/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/latest/reference/catalogs-visible/#bdrstat_writer), respectively.
+You can further monitor subscriptions by monitoring subscription summary statistics through [`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible/#bdrstat_subscription), and by monitoring the subscription replication receivers and subscription replication writers, using [`bdr.stat_receiver`](/pgd/5.7/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/5.7/reference/catalogs-visible/#bdrstat_writer), respectively. ### Monitoring WAL senders using LCR If the [decoding worker](../decoding_worker/) is enabled, you can monitor information about the current logical change record (LCR) file for each WAL sender -using the function [`bdr.wal_sender_stats()`](/pgd/latest/reference/functions/#bdrwal_sender_stats). For example: +using the function [`bdr.wal_sender_stats()`](/pgd/5.7/reference/functions/#bdrwal_sender_stats). For example: ``` postgres=# SELECT * FROM bdr.wal_sender_stats(); @@ -306,7 +306,7 @@ This is the case if the decoding worker isn't enabled or the WAL sender is serving a [logical standby](../nodes/logical_standby_nodes/). Also, you can monitor information about the decoding worker using the function -[`bdr.get_decoding_worker_stat()`](/pgd/latest/reference/functions/#bdrget_decoding_worker_stat). For example: +[`bdr.get_decoding_worker_stat()`](/pgd/5.7/reference/functions/#bdrget_decoding_worker_stat). For example: ``` postgres=# SELECT * FROM bdr.get_decoding_worker_stat(); @@ -365,9 +365,9 @@ Commit scopes are our durability and consistency configuration framework. As suc Accordingly, these two views show relevant statistics about commit scopes: -- [bdr.stat_commit_scope](/pgd/latest/reference/catalogs-visible/#bdrstat_commit_scope) for cumulative statistics for each commit scope. +- [bdr.stat_commit_scope](/pgd/5.7/reference/catalogs-visible/#bdrstat_commit_scope) for cumulative statistics for each commit scope. -- [bdr.stat_commit_scope_state](/pgd/latest/reference/catalogs-visible/#bdrstat_commit_scope_state) for information about the current use of commit scopes by backend processes. +- [bdr.stat_commit_scope_state](/pgd/5.7/reference/catalogs-visible/#bdrstat_commit_scope_state) for information about the current use of commit scopes by backend processes. ## Monitoring global locks @@ -384,7 +384,7 @@ There are currently two types of global locks: You can create either or both entry types for the same transaction, depending on the type of DDL operation and the value of the `bdr.ddl_locking` setting. -Global locks held on the local node are visible in the [`bdr.global_locks`](/pgd/latest/reference/catalogs-visible/#bdrglobal_locks) view. +Global locks held on the local node are visible in the [`bdr.global_locks`](/pgd/5.7/reference/catalogs-visible/#bdrglobal_locks) view. This view shows the type of the lock. For relation locks, it shows the relation that's being locked, the PID holding the lock (if local), and whether the lock was globally granted. In case @@ -406,7 +406,7 @@ relation | someschema.sometable pid | 15534 ``` -See [Catalogs](/pgd/latest/reference/catalogs-visible/) for details on all fields, including lock +See [Catalogs](/pgd/5.7/reference/catalogs-visible/) for details on all fields, including lock timing information. ## Monitoring conflicts @@ -421,7 +421,7 @@ row-level security to ensure they're visible only by owners of replicated tables. Owners should expect conflicts and analyze them to see which, if any, might be considered as problems to resolve. 
-For monitoring purposes, use [`bdr.conflict_history_summary`](/pgd/latest/reference/catalogs-visible#bdrconflict_history_summary), which doesn't +For monitoring purposes, use [`bdr.conflict_history_summary`](/pgd/5.7/reference/catalogs-visible#bdrconflict_history_summary), which doesn't contain user data. This example shows a query to count the number of conflicts seen in the current day using an efficient query plan: @@ -437,8 +437,8 @@ WHERE local_time > date_trunc('day', current_timestamp) PGD collects statistics about replication apply, both for each subscription and for each table. -Two monitoring views exist: [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) for subscription statistics -and [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) for relation statistics. These views both provide: +Two monitoring views exist: [`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible#bdrstat_subscription) for subscription statistics +and [`bdr.stat_relation`](/pgd/5.7/reference/catalogs-visible#bdrstat_relation) for relation statistics. These views both provide: - Number of INSERTs/UPDATEs/DELETEs/TRUNCATEs replicated - Block accesses and cache hit ratio @@ -447,18 +447,18 @@ and [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relatio - Number of in-progress transactions streamed to writers - Number of in-progress streamed transactions committed/aborted -For relations only, [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) also includes: +For relations only, [`bdr.stat_relation`](/pgd/5.7/reference/catalogs-visible#bdrstat_relation) also includes: - Total time spent processing replication for the relation - Total lock wait time to acquire lock (if any) for the relation (only) -For subscriptions only, [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) includes: +For subscriptions only, [`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible#bdrstat_subscription) includes: - Number of COMMITs/DDL replicated for the subscription - Number of times this subscription has connected upstream Tracking of these statistics is controlled by the PGD GUCs -[`bdr.track_subscription_apply`](/pgd/latest/reference/pgd-settings#bdrtrack_subscription_apply) and [`bdr.track_relation_apply`](/pgd/latest/reference/pgd-settings#bdrtrack_relation_apply), +[`bdr.track_subscription_apply`](/pgd/5.7/reference/pgd-settings#bdrtrack_subscription_apply) and [`bdr.track_relation_apply`](/pgd/5.7/reference/pgd-settings#bdrtrack_relation_apply), respectively. The following shows the example output from these: @@ -480,9 +480,9 @@ nddl | 2 In this case, the subscription connected three times to the upstream, inserted 10 rows, and performed two DDL commands inside five transactions. -You can reset the stats counters for these views to zero using the functions [`bdr.reset_subscription_stats`](/pgd/latest/reference/functions-internal#bdrreset_subscription_stats) and [`bdr.reset_relation_stats`](/pgd/latest/reference/functions-internal#bdrreset_relation_stats). +You can reset the stats counters for these views to zero using the functions [`bdr.reset_subscription_stats`](/pgd/5.7/reference/functions-internal#bdrreset_subscription_stats) and [`bdr.reset_relation_stats`](/pgd/5.7/reference/functions-internal#bdrreset_relation_stats). 
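A minimal sketch of resetting both sets of counters (neither function takes arguments):

```sql
-- Zero the per-subscription and per-relation apply statistics.
SELECT bdr.reset_subscription_stats();
SELECT bdr.reset_relation_stats();
```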
-PGD also monitors statistics regarding subscription replication receivers and subscription replication writers for each subscription, using [`bdr.stat_receiver`](/pgd/latest/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/latest/reference/catalogs-visible/#bdrstat_writer), respectively. +PGD also monitors statistics regarding subscription replication receivers and subscription replication writers for each subscription, using [`bdr.stat_receiver`](/pgd/5.7/reference/catalogs-visible/#bdrstat_receiver) and [`bdr.stat_writer`](/pgd/5.7/reference/catalogs-visible/#bdrstat_writer), respectively. ## Standard PostgreSQL statistics views @@ -524,8 +524,8 @@ PGD allows running different Postgres versions as well as different BDR extension versions across the nodes in the same cluster. This capability is useful for upgrading. -The view [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) uses the function -[`bdr.run_on_all_nodes()`](/pgd/latest/reference/functions#bdrrun_on_all_nodes) to retrieve Postgres and BDR extension versions from all +The view [`bdr.group_versions_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_versions_details) uses the function +[`bdr.run_on_all_nodes()`](/pgd/5.7/reference/functions#bdrrun_on_all_nodes) to retrieve Postgres and BDR extension versions from all nodes at the same time. For example: ```sql @@ -550,7 +550,7 @@ For monitoring purposes, we recommend the following alert levels: when compared to other nodes The described behavior is implemented in the function -[`bdr.monitor_group_versions()`](/pgd/latest/reference/functions#bdrmonitor_group_versions), which uses PGD version +[`bdr.monitor_group_versions()`](/pgd/5.7/reference/functions#bdrmonitor_group_versions), which uses PGD version information returned from the view `bdr.group_version_details` to provide a cluster-wide version check. For example: @@ -577,8 +577,8 @@ follows: - PGD group replication slot doesn't advance LSN and thus keeps WAL files on disk. -The view [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) uses the functions -[`bdr.run_on_all_nodes()`](/pgd/latest/reference/functions#bdrrun_on_all_nodes) and [`bdr.get_raft_status()`](/pgd/latest/reference/functions#bdrget_raft_status) to retrieve Raft +The view [`bdr.group_raft_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_raft_details) uses the functions +[`bdr.run_on_all_nodes()`](/pgd/5.7/reference/functions#bdrrun_on_all_nodes) and [`bdr.get_raft_status()`](/pgd/5.7/reference/functions#bdrget_raft_status) to retrieve Raft consensus status from all nodes at the same time. For example: ```sql @@ -645,8 +645,8 @@ monitoring alert levels are defined as follows: than the node set as RAFT_LEADER The described behavior is implemented in the function -[`bdr.monitor_group_raft()`](/pgd/latest/reference/functions#bdrmonitor_group_raft), which uses Raft consensus status -information returned from the view [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) +[`bdr.monitor_group_raft()`](/pgd/5.7/reference/functions#bdrmonitor_group_raft), which uses Raft consensus status +information returned from the view [`bdr.group_raft_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_raft_details) to provide a cluster-wide Raft check. 
For example: ```sql @@ -656,7 +656,7 @@ node_group_name | status | message mygroup | OK | Raft Consensus is working correctly ``` -Two further views that can give a finer-grained look at the state of Raft consensus are [`bdr.stat_raft_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_raft_state), which provides the state of the Raft consensus on the local node, and [`bdr.stat_raft_followers_state`](/pgd/latest/reference/catalogs-visible/#bdrstat_raft_followers_state), which provides a view when on the Raft leader (it is empty on other nodes) regarding the state of the followers of that Raft leader. +Two further views that can give a finer-grained look at the state of Raft consensus are [`bdr.stat_raft_state`](/pgd/5.7/reference/catalogs-visible/#bdrstat_raft_state), which provides the state of the Raft consensus on the local node, and [`bdr.stat_raft_followers_state`](/pgd/5.7/reference/catalogs-visible/#bdrstat_raft_followers_state), which, on the Raft leader, provides a view of the state of that leader's followers (the view is empty on other nodes). ## Monitoring replication slots @@ -681,7 +681,7 @@ FROM pg_replication_slots ORDER BY slot_name; Peer slot names follow the convention `bdr_<database>_<group>_<node>`, while the PGD group slot name follows the convention `bdr_<database>_<group>`. You can access the group slot using the function -[`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name). +[`bdr.local_group_slot_name()`](/pgd/5.7/reference/functions#bdrlocal_group_slot_name). Peer replication slots must be active on all nodes at all times. If a peer replication slot isn't active, then it might mean either: @@ -698,7 +698,7 @@ maintains this slot and advances its LSN when all other peers already consumed the corresponding transactions. Consequently, it's not necessary to monitor the status of the group slot. -The function [`bdr.monitor_local_replslots()`](/pgd/latest/reference/functions#bdrmonitor_local_replslots) provides a summary of whether all +The function [`bdr.monitor_local_replslots()`](/pgd/5.7/reference/functions#bdrmonitor_local_replslots) provides a summary of whether all PGD node replication slots are working as expected. This summary is also available on subscriber-only nodes that are operating as subscriber-only group leaders in a PGD cluster when [optimized topology](../nodes/subscriber_only/optimizing-so) is enabled. For example: ```sql @@ -724,6 +724,6 @@ One of the following status summaries is returned: By default, PGD transactions are committed only to the local node. In that case, a transaction's `COMMIT` is processed quickly. PGD's [Commit Scopes](../commit-scopes/commit-scopes) feature offers a range of synchronous transaction commit scopes that allow you to balance durability, consistency, and performance for your particular queries. -You can monitor these transactions by examining the [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) catalog. The processes report different `wait_event` states as a transaction is committed. This monitoring only covers transactions in progress and doesn't provide historical timing information. +You can monitor these transactions by examining the [`bdr.stat_activity`](/pgd/5.7/reference/catalogs-visible#bdrstat_activity) catalog. The processes report different `wait_event` states as a transaction is committed. This monitoring only covers transactions in progress and doesn't provide historical timing information.
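For instance, a sketch of spotting sessions currently waiting during a synchronous commit, assuming `bdr.stat_activity` mirrors the standard `pg_stat_activity` columns:

```sql
-- Sessions reporting a wait event while their COMMIT is being replicated.
SELECT pid, application_name, state, wait_event_type, wait_event
FROM bdr.stat_activity
WHERE wait_event IS NOT NULL;
```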
diff --git a/product_docs/docs/pgd/5.7/node_management/creating_and_joining.mdx b/product_docs/docs/pgd/5.7/node_management/creating_and_joining.mdx index 91d36338d36..b561bdaa43e 100644 --- a/product_docs/docs/pgd/5.7/node_management/creating_and_joining.mdx +++ b/product_docs/docs/pgd/5.7/node_management/creating_and_joining.mdx @@ -18,7 +18,7 @@ format, like `host=myhost port=5432 dbname=mydb`, or URI format, like `postgresql://myhost:5432/mydb`. The SQL function -[`bdr.create_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) +[`bdr.create_node_group()`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node_group) creates the PGD group from the local node. Doing so activates PGD on that node and allows other nodes to join the PGD group, which consists of only one node at that point. At the time of creation, you must specify the connection string for @@ -26,11 +26,11 @@ other nodes to use to connect to this node. Once the node group is created, every further node can join the PGD group using the -[`bdr.join_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +[`bdr.join_node_group()`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group) function. Alternatively, use the command line utility -[bdr_init_physical](/pgd/latest/reference/nodes/#bdr_init_physical) to create a +[bdr_init_physical](/pgd/5.7/reference/nodes/#bdr_init_physical) to create a new node, using `pg_basebackup`. If using `pg_basebackup`, the bdr_init_physical utility can optionally specify the base backup of only the target database. The earlier behavior was to back up the entire database cluster. With this utility, @@ -62,7 +62,7 @@ more details, see [Connections and roles](../security/role-management#connection Optionally, you can skip the schema synchronization using the `synchronize_structure` parameter of the -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +[`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group) function. In this case, the schema must already exist on the newly joining node. We recommend that you select the source node that has the best connection (logically close, ideally with low latency and high bandwidth) @@ -73,7 +73,7 @@ Coordinate the join procedure using the Raft consensus algorithm, which requires most existing nodes to be online and reachable. The logical join procedure (which uses the -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +[`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group) function) performs data sync doing `COPY` operations and uses multiple writers (parallel apply) if those are enabled. @@ -99,6 +99,6 @@ If this is necessary, run LiveCompare on the newly joined node to correct any data divergence once all nodes are available and caught up. `pg_dump` can fail when there's concurrent DDL activity on the source node -because of cache-lookup failures. Since [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) uses pg_dump +because of cache-lookup failures. Since [`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group) uses pg_dump internally, it might fail if there's concurrent DDL activity on the source node. Retrying the join works in that case. 
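As a sketch, a logical join that skips schema synchronization might look like the following, assuming the named parameters shown here and reusing the example DSN and group from the deployment walkthrough; the schema must already exist on the joining node:

```sql
-- Assumed parameter names; 'none' skips schema sync during the join.
SELECT bdr.join_node_group(
    join_target_dsn       := 'host=host-one dbname=bdrdb port=5444',
    node_group_name       := 'dc1',
    synchronize_structure := 'none'
);
```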
diff --git a/product_docs/docs/pgd/5.7/node_management/creating_nodes.mdx b/product_docs/docs/pgd/5.7/node_management/creating_nodes.mdx index e2fbbee1bc3..6a0dc262b65 100644 --- a/product_docs/docs/pgd/5.7/node_management/creating_nodes.mdx +++ b/product_docs/docs/pgd/5.7/node_management/creating_nodes.mdx @@ -13,7 +13,7 @@ That means, in the most general terms, you can create a PGD node by installing P ## Which Postgres version? -PGD is built on top of Postgres, so the distribution and version of Postgres you use for your PGD nodes is important. The version of Postgres you use must be compatible with the version of PGD you are using. You can find the compatibility matrix in the [release notes](/pgd/latest/rel_notes). Features and functionality in PGD may depend on the distribution of Postgres you are using. The [EDB Postgres Advanced Server](/epas/latest/) is the recommended distribution for PGD. PGD also supports [EDB Postgres Extended Server](/pge/latest/) and [Community Postgres](https://www.postgresql.org/). You can find out what features are available in each distribution in the Planning section's [Choosing a server](../planning/choosing_server) page. +PGD is built on top of Postgres, so the distribution and version of Postgres you use for your PGD nodes is important. The version of Postgres you use must be compatible with the version of PGD you are using. You can find the compatibility matrix in the [release notes](/pgd/5.7/rel_notes). Features and functionality in PGD may depend on the distribution of Postgres you are using. The [EDB Postgres Advanced Server](/epas/latest/) is the recommended distribution for PGD. PGD also supports [EDB Postgres Extended Server](/pge/latest/) and [Community Postgres](https://www.postgresql.org/). You can find out what features are available in each distribution in the Planning section's [Choosing a server](../planning/choosing_server) page. ## Installing Postgres @@ -35,7 +35,7 @@ This process is specific to PGD and involves configuring the Postgres instance t * Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

!!! Note The `max_worker_processes` value The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. - To calculate the needed value, see [Postgres configuration/settings](/pgd/latest/postgres-configuration/#postgres-settings). + To calculate the needed value, see [Postgres configuration/settings](/pgd/5.7/postgres-configuration/#postgres-settings). The value of 16 was calculated for the size of the cluster being deployed in this example. It must be increased for larger clusters. !!! * Set a password on the EnterpriseDB/Postgres user. diff --git a/product_docs/docs/pgd/5.7/node_management/heterogeneous_clusters.mdx b/product_docs/docs/pgd/5.7/node_management/heterogeneous_clusters.mdx index 62a3feed9c2..174922a41d9 100644 --- a/product_docs/docs/pgd/5.7/node_management/heterogeneous_clusters.mdx +++ b/product_docs/docs/pgd/5.7/node_management/heterogeneous_clusters.mdx @@ -22,7 +22,7 @@ join the cluster. Don't run any DDLs that might not be available on the older versions and vice versa. A node joining with a different major PostgreSQL release can't use -physical backup taken with [`bdr_init_physical`](/pgd/latest/reference/nodes#bdr_init_physical), and the node must join +a physical backup taken with [`bdr_init_physical`](/pgd/5.7/reference/nodes#bdr_init_physical), and the node must join using the logical join method. Using this method is necessary because the major PostgreSQL releases aren't on-disk compatible with each other. diff --git a/product_docs/docs/pgd/5.7/node_management/maintainance_with_proxies.mdx b/product_docs/docs/pgd/5.7/node_management/maintainance_with_proxies.mdx index d8b7143f672..77be3533e7a 100644 --- a/product_docs/docs/pgd/5.7/node_management/maintainance_with_proxies.mdx +++ b/product_docs/docs/pgd/5.7/node_management/maintainance_with_proxies.mdx @@ -39,7 +39,7 @@ select node_name from bdr.node; ``` !!! Tip -For more details, see the [`bdr.node`](/pgd/latest/reference/catalogs-visible#bdrnode) table. +For more details, see the [`bdr.node`](/pgd/5.7/reference/catalogs-visible#bdrnode) table. !!! This command lists just the node names. If you need to know the group they are a member of, use: @@ -49,7 +49,7 @@ select node_name, node_group_name from bdr.node_summary; ``` !!! Tip -For more details, see the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary) table. +For more details, see the [`bdr.node_summary`](/pgd/5.7/reference/catalogs-visible#bdrnode_summary) table. !!! ## Finding the write leader diff --git a/product_docs/docs/pgd/5.7/node_management/node_recovery.mdx b/product_docs/docs/pgd/5.7/node_management/node_recovery.mdx index b05ac8daaea..38b54665ee6 100644 --- a/product_docs/docs/pgd/5.7/node_management/node_recovery.mdx +++ b/product_docs/docs/pgd/5.7/node_management/node_recovery.mdx @@ -7,7 +7,7 @@ PGD is designed to recover from node restart or node disconnection. The disconnected node rejoins the group by reconnecting to each peer node and then replicating any missing data from that node. -When a node starts up, each connection begins showing up in [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) with +When a node starts up, each connection begins showing up in [`bdr.node_slots`](/pgd/5.7/reference/catalogs-visible#bdrnode_slots) with `bdr.node_slots.state = catchup` and begins replicating missing data.
Catching up continues for a period of time that depends on the amount of missing data from each peer node and will likely increase diff --git a/product_docs/docs/pgd/5.7/node_management/removing_nodes_and_groups.mdx b/product_docs/docs/pgd/5.7/node_management/removing_nodes_and_groups.mdx index 1ba218e14c8..20d78ca3699 100644 --- a/product_docs/docs/pgd/5.7/node_management/removing_nodes_and_groups.mdx +++ b/product_docs/docs/pgd/5.7/node_management/removing_nodes_and_groups.mdx @@ -10,9 +10,9 @@ permanently. If you permanently shut down a node and don't tell the other nodes, then performance suffers and eventually the whole system stops working. -Node removal, also called *parting*, is done using the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) +Node removal, also called *parting*, is done using the [`bdr.part_node()`](/pgd/5.7/reference/nodes-management-interfaces#bdrpart_node) function. You must specify the node name (as passed during node creation) -to remove a node. You can call the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) function from any active +to remove a node. You can call the [`bdr.part_node()`](/pgd/5.7/reference/nodes-management-interfaces#bdrpart_node) function from any active node in the PGD group, including the node that you're removing. Just like the join procedure, parting is done using Raft consensus and requires a @@ -26,7 +26,7 @@ most recent node to allow them to catch up any missing data. A parted node still is known to PGD but doesn't consume resources. A node might be added again under the same name as a parted node. In rare cases, you might want to clear all metadata of a parted -node by using the function [`bdr.drop_node()`](/pgd/latest/reference/functions-internal#bdrdrop_node). +node by using the function [`bdr.drop_node()`](/pgd/5.7/reference/functions-internal#bdrdrop_node). ## Removing a whole PGD group diff --git a/product_docs/docs/pgd/5.7/node_management/replication_slots.mdx b/product_docs/docs/pgd/5.7/node_management/replication_slots.mdx index 8fbe149f7ff..23ec70bbeec 100644 --- a/product_docs/docs/pgd/5.7/node_management/replication_slots.mdx +++ b/product_docs/docs/pgd/5.7/node_management/replication_slots.mdx @@ -42,7 +42,7 @@ The group slot is an internal slot used by PGD primarily to track the oldest safe position that any node in the PGD group (including all logical standbys) has caught up to, for any outbound replication from this node. -The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name). +The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/5.7/reference/functions#bdrlocal_group_slot_name). The group slot can: diff --git a/product_docs/docs/pgd/5.7/node_management/viewing_topology.mdx b/product_docs/docs/pgd/5.7/node_management/viewing_topology.mdx index 254fc8fcd78..f887ba3f551 100644 --- a/product_docs/docs/pgd/5.7/node_management/viewing_topology.mdx +++ b/product_docs/docs/pgd/5.7/node_management/viewing_topology.mdx @@ -26,7 +26,7 @@ pgd groups list The following simple query lists all the PGD node groups of which the current node is a member. It currently returns only one row from -[`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary). +[`bdr.local_node_summary`](/pgd/5.7/reference/catalogs-visible#bdrlocal_node_summary). 
```sql SELECT node_group_name @@ -85,7 +85,7 @@ pgd nodes list | grep group_b ### Using SQL You can extract the list of all nodes in a given node group (such as `mygroup`) -from the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary)` view. For example: +from the [`bdr.node_summary`](/pgd/5.7/reference/catalogs-visible#bdrnode_summary) view. For example: ```sql SELECT node_name AS name diff --git a/product_docs/docs/pgd/5.7/nodes/logical_standby_nodes.mdx b/product_docs/docs/pgd/5.7/nodes/logical_standby_nodes.mdx index a18d28fe430..b745034b617 100644 --- a/product_docs/docs/pgd/5.7/nodes/logical_standby_nodes.mdx +++ b/product_docs/docs/pgd/5.7/nodes/logical_standby_nodes.mdx @@ -14,17 +14,17 @@ A master node can have zero, one, or more logical standby nodes. location is always preferred. Logical standby nodes are nodes that are held in a state of continual recovery, -constantly updating until they're required. This behavior is similar to how Postgres physical standbys operate while using logical replication for better performance. [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) has the `pause_in_standby` +constantly updating until they're required. This behavior is similar to how Postgres physical standbys operate while using logical replication for better performance. [`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group) has the `pause_in_standby` option to make the node stay halfway-joined, as a logical standby node. Logical standby nodes receive changes but don't send changes made locally to other nodes. Later, if you want, use -[`bdr.promote_node`](/pgd/latest/reference/nodes-management-interfaces#bdrpromote_node) +[`bdr.promote_node`](/pgd/5.7/reference/nodes-management-interfaces#bdrpromote_node) to move the logical standby into a full, normal send/receive node. A logical standby is sent data by one source node, defined by the DSN in -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group). +[`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group). Changes from all other nodes are received from this one source node, minimizing bandwidth between multiple sites. diff --git a/product_docs/docs/pgd/5.7/nodes/subscriber_only/creating-so.mdx b/product_docs/docs/pgd/5.7/nodes/subscriber_only/creating-so.mdx index b95a5b31985..64dc53fb90c 100644 --- a/product_docs/docs/pgd/5.7/nodes/subscriber_only/creating-so.mdx +++ b/product_docs/docs/pgd/5.7/nodes/subscriber_only/creating-so.mdx @@ -28,7 +28,7 @@ This creates a Subscriber-only group named `sogroup` which is a child of the `to ## Adding a node to a new Subscriber-only group manually -You can now initialize a new data node and then add it to the Subscriber-only group. Create a data node and configure the bdr extension on it as you would for any other data node. If you deployed manually, see the [manual install guide](/pgd/latest/deploy-config/deploy-manual/deploying/04-installing-software/) for instructions on how to install and deploy a data node. +You can now initialize a new data node and then add it to the Subscriber-only group. Create a data node and configure the bdr extension on it as you would for any other data node. If you deployed manually, see the [manual install guide](/pgd/5.7/deploy-config/deploy-manual/deploying/04-installing-software/) for instructions on how to install and deploy a data node.
You now have to create this new node as a `subscriber-only` node. To do this, log into the new node and run the following SQL command: diff --git a/product_docs/docs/pgd/5.7/nodes/subscriber_only/optimizing-so.mdx b/product_docs/docs/pgd/5.7/nodes/subscriber_only/optimizing-so.mdx index fbe4bd5416f..b01074bea99 100644 --- a/product_docs/docs/pgd/5.7/nodes/subscriber_only/optimizing-so.mdx +++ b/product_docs/docs/pgd/5.7/nodes/subscriber_only/optimizing-so.mdx @@ -56,7 +56,7 @@ The subscriber-only node and group form the building block for PGD tree topologi By default, PGD 5.6 forces the full mesh topology. This means the optimization described here is off. To enable the optimized topology, you must have your data nodes in subgroups, with proxy routing enabled on the subgroups. -You can then set the GUC [`bdr.force_full_mesh`](/pgd/latest/reference/pgd-settings#bdrforce_full_mesh) to `off` to allow the optimization to be activated. +You can then set the GUC [`bdr.force_full_mesh`](/pgd/5.7/reference/pgd-settings#bdrforce_full_mesh) to `off` to allow the optimization to be activated. !!! Note This GUC needs to be set in the `postgresql.conf` file on each data node and each node restarted for the change to take effect. diff --git a/product_docs/docs/pgd/5.7/overview/basic-architecture.mdx b/product_docs/docs/pgd/5.7/overview/basic-architecture.mdx index f6566653d96..c4eb6b2e91c 100644 --- a/product_docs/docs/pgd/5.7/overview/basic-architecture.mdx +++ b/product_docs/docs/pgd/5.7/overview/basic-architecture.mdx @@ -25,7 +25,7 @@ BDR is a Postgres extension that enables a multi-master replication mesh between Changes are replicated directly, row-by-row between all nodes. [Logical replication](../terminology/#logical-replication) in PGD is asynchronous by default, so only eventual consistency is guaranteed (within seconds usually). -However, [commit scope](../commit-scopes/commit-scopes) options offer immediate consistency and durability guarantees via [CAMO](/pgd/latest/commit-scopes/camo/), [group](../commit-scopes/group-commit) and [synchronous](../commit-scopes/synchronous_commit) commits. +However, [commit scope](../commit-scopes/commit-scopes) options offer immediate consistency and durability guarantees via [CAMO](/pgd/5.7/commit-scopes/camo/), [group](../commit-scopes/group-commit) and [synchronous](../commit-scopes/synchronous_commit) commits. The Raft algorithm provides a mechanism for [electing](../routing/raft/04_raft_elections_in_depth/) leaders (both Raft leader and write leader), deciding which nodes to add or subtract from the cluster. It generally ensures that the distributed system remains consistent and fault tolerant, even in the face of node failures. @@ -40,9 +40,9 @@ PGD comprises several key architectural elements that work together to provide i - **Replication mechanisms**: PGD's replication mechanisms include BDR for efficient replication across nodes, enabling multi-master replication. BDR supports asynchronous replication by default but can be configured for varying levels of synchronicity, such as [Group Commit](../commit-scopes/group-commit) or [Synchronous Commit](../commit-scopes/synchronous_commit), to enhance data durability. - **Monitoring tools**: To monitor performance, health, and usage with PGD, you can use its [built-in command-line interface](../cli) (CLI), which offers several useful commands. 
For example: - - The [`pgd nodes list`](/pgd/latest/cli/command_ref/nodes/list/) command provides a summary of all nodes in the cluster, including their state and status. - - The [`pgd cluster show --health`](/pgd/latest/cli/command_ref/cluster/show/#options) command checks the health of the cluster, reporting on node accessibility, replication slot health, and other critical metrics. - - The [`pgd events show`](/pgd/latest/cli/command_ref/events/show/) command lists significant events like background worker errors and node membership changes, which helps in tracking the operational status and issues within the cluster. + - The [`pgd nodes list`](/pgd/5.7/cli/command_ref/nodes/list/) command provides a summary of all nodes in the cluster, including their state and status. + - The [`pgd cluster show --health`](/pgd/5.7/cli/command_ref/cluster/show/#options) command checks the health of the cluster, reporting on node accessibility, replication slot health, and other critical metrics. + - The [`pgd events show`](/pgd/5.7/cli/command_ref/events/show/) command lists significant events like background worker errors and node membership changes, which helps in tracking the operational status and issues within the cluster. Furthermore, the BDR extension allows for monitoring your cluster using SQL using the [`bdr.monitor`](../security/pgd-predefined-roles/#bdr_monitor) role. diff --git a/product_docs/docs/pgd/5.7/parallelapply.mdx b/product_docs/docs/pgd/5.7/parallelapply.mdx index 726f0b79f61..8be946f4730 100644 --- a/product_docs/docs/pgd/5.7/parallelapply.mdx +++ b/product_docs/docs/pgd/5.7/parallelapply.mdx @@ -13,9 +13,9 @@ subscription and improves replication performance. ### Configuring Parallel Apply Two variables control Parallel Apply in PGD 5: -[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) +[`bdr.max_writers_per_subscription`](/pgd/5.7/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and -[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) +[`bdr.writers_per_subscription`](/pgd/5.7/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain @@ -26,18 +26,18 @@ bdr.writers_per_subscription = 2 This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to eight writers for a subscription. -Changing [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) +Changing [`bdr.max_writers_per_subscription`](/pgd/5.7/reference/pgd-settings#bdrmax_writers_per_subscription) requires a server restart to take effect. You can change -[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) +[`bdr.writers_per_subscription`](/pgd/5.7/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription without a restart by: 1. Halting the subscription using - [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable). + [`bdr.alter_subscription_disable`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_subscription_disable). 1. Setting the new value. 1. Resuming the subscription using - [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). + [`bdr.alter_subscription_enable`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_subscription_enable). 
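Taken together, the sequence might look like this sketch. The subscription name here is a placeholder (the next step shows how to find yours), and it assumes the per-subscription writer count is exposed as `num_writers` in the `bdr.subscription` catalog:

```sql
-- Sketch only: the subscription name and the num_writers column are
-- assumptions; check your own catalog before running this.
-- 1. Halt the subscription.
SELECT bdr.alter_subscription_disable('bdr_bdrdb_group_node2_node1');

-- 2. Set the new writer count for this subscription.
UPDATE bdr.subscription
   SET num_writers = 4
 WHERE sub_name = 'bdr_bdrdb_group_node2_node1';

-- 3. Resume the subscription.
SELECT bdr.alter_subscription_enable('bdr_bdrdb_group_node2_node1');
```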
First though, establish the name of the subscription using `select * from
@@ -61,7 +61,7 @@ Parallel Apply is always on by default and, for most operations, we recommend le
### Monitoring Parallel Apply

To support Parallel Apply's deadlock mitigation, PGD 5.2 adds columns to
-[`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription).
+[`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible#bdrstat_subscription).
The new columns are `nprovisional_waits`, `ntuple_waits`, and `ncommit_waits`.
These are metrics that indicate how well Parallel Apply is managing what
previously would have been deadlocks. They don't reflect overall system
@@ -77,7 +77,7 @@ are counted in `ncommit_waits`.
### Disabling Parallel Apply

-To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`.
+To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/5.7/reference/pgd-settings#bdrwriters_per_subscription) to `1`.

### Deadlock mitigation

diff --git a/product_docs/docs/pgd/5.7/planning/architectures.mdx index 45171806f3f..54d9dd4a543 100644
--- a/product_docs/docs/pgd/5.7/planning/architectures.mdx
+++ b/product_docs/docs/pgd/5.7/planning/architectures.mdx
@@ -1,11 +1,11 @@
---
title: "Choosing your architecture"
redirects:
-  - /pgd/latest/architectures/bronze/
-  - /pgd/latest/architectures/gold/
-  - /pgd/latest/architectures/platinum/
-  - /pgd/latest/architectures/silver/
-  - /pgd/latest/architectures/
+  - /pgd/5.7/architectures/bronze/
+  - /pgd/5.7/architectures/gold/
+  - /pgd/5.7/architectures/platinum/
+  - /pgd/5.7/architectures/silver/
+  - /pgd/5.7/architectures/
---

Always-on architectures reflect EDB’s Trusted Postgres architectures. They
diff --git a/product_docs/docs/pgd/5.7/planning/choosing_server.mdx index 77fd46b098f..df1320150ac 100644
--- a/product_docs/docs/pgd/5.7/planning/choosing_server.mdx
+++ b/product_docs/docs/pgd/5.7/planning/choosing_server.mdx
@@ -1,7 +1,7 @@
---
title: "Choosing a Postgres distribution"
redirects:
-  - /pgd/latest/choosing_server/
+  - /pgd/5.7/choosing_server/
---

EDB Postgres Distributed can be deployed with three different Postgres distributions: PostgreSQL, EDB Postgres Extended Server, or EDB Postgres Advanced Server. The availability of particular EDB Postgres Distributed features depends on the Postgres distribution being used. Therefore, it's essential to adopt the Postgres distribution best suited to your business needs. For example, if having the Commit At Most Once (CAMO) feature is mission critical to your use case, don't adopt open source PostgreSQL, which doesn't have the core capabilities required to handle CAMO.
@@ -10,28 +10,28 @@ The following table lists features of EDB Postgres Distributed that are dependen | Feature | PostgreSQL | EDB Postgres Extended | EDB Postgres Advanced | | ----------------------------------------------------------------------------------------------------------------------- | ---------- | --------------------- | --------------------- | -| [Rolling application and database upgrades](/pgd/latest/upgrades/) | Y | Y | Y | -| [Row-level last-update wins conflict resolution](/pgd/latest/conflict-management/conflicts/) | Y | Y | Y | -| [DDL replication](/pgd/latest/ddl/) | Y | Y | Y | -| [Granular DDL Locking](/pgd/latest/ddl/ddl-locking/) | Y | Y | Y | -| [Streaming of large transactions](/pgd/latest/transaction-streaming/) | v14+ | v13+ | v14+ | -| [Distributed sequences](/pgd/latest/sequences/#pgd-global-sequences) | Y | Y | Y | -| [Subscriber-only nodes](/pgd/latest/nodes/subscriber_only/) | Y | Y | Y | -| [Monitoring](/pgd/latest/monitoring/) | Y | Y | Y | -| [OpenTelemetry support](/pgd/latest/monitoring/otel/) | Y | Y | Y | -| [Parallel apply](/pgd/latest/parallelapply) | Y | Y | Y | -| [Conflict-free replicated data types (CRDTs)](/pgd/latest/conflict-management/crdt/) | Y | Y | Y | -| [Column-level conflict resolution](/pgd/latest/conflict-management/column-level-conflicts/) | Y | Y | Y | -| [Transform triggers](/pgd/latest/striggers/#transform-triggers) | Y | Y | Y | -| [Conflict triggers](/pgd/latest/striggers/#conflict-triggers) | Y | Y | Y | -| [Asynchronous replication](/pgd/latest/commit-scopes/) | Y | Y | Y | -| [Legacy synchronous replication](/pgd/latest/commit-scopes/legacy-sync/) | Y | Y | Y | -| [Group Commit](/pgd/latest/commit-scopes/group-commit/) | N | Y | 14+ | -| [Commit At Most Once (CAMO)](/pgd/latest/commit-scopes/camo/) | N | Y | 14+ | -| [Eager Conflict Resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) | N | Y | 14+ | -| [Lag Control](/pgd/latest/commit-scopes/lag-control/) | N | Y | 14+ | -| [Decoding Worker](/pgd/latest/decoding_worker) | N | 13+ | 14+ | -| [Lag tracker](/pgd/latest/monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | +| [Rolling application and database upgrades](/pgd/5.7/upgrades/) | Y | Y | Y | +| [Row-level last-update wins conflict resolution](/pgd/5.7/conflict-management/conflicts/) | Y | Y | Y | +| [DDL replication](/pgd/5.7/ddl/) | Y | Y | Y | +| [Granular DDL Locking](/pgd/5.7/ddl/ddl-locking/) | Y | Y | Y | +| [Streaming of large transactions](/pgd/5.7/transaction-streaming/) | v14+ | v13+ | v14+ | +| [Distributed sequences](/pgd/5.7/sequences/#pgd-global-sequences) | Y | Y | Y | +| [Subscriber-only nodes](/pgd/5.7/nodes/subscriber_only/) | Y | Y | Y | +| [Monitoring](/pgd/5.7/monitoring/) | Y | Y | Y | +| [OpenTelemetry support](/pgd/5.7/monitoring/otel/) | Y | Y | Y | +| [Parallel apply](/pgd/5.7/parallelapply) | Y | Y | Y | +| [Conflict-free replicated data types (CRDTs)](/pgd/5.7/conflict-management/crdt/) | Y | Y | Y | +| [Column-level conflict resolution](/pgd/5.7/conflict-management/column-level-conflicts/) | Y | Y | Y | +| [Transform triggers](/pgd/5.7/striggers/#transform-triggers) | Y | Y | Y | +| [Conflict triggers](/pgd/5.7/striggers/#conflict-triggers) | Y | Y | Y | +| [Asynchronous replication](/pgd/5.7/commit-scopes/) | Y | Y | Y | +| [Legacy synchronous replication](/pgd/5.7/commit-scopes/legacy-sync/) | Y | Y | Y | +| [Group Commit](/pgd/5.7/commit-scopes/group-commit/) | N | Y | 14+ | +| [Commit At Most Once (CAMO)](/pgd/5.7/commit-scopes/camo/) | N | Y | 
14+ | +| [Eager Conflict Resolution](/pgd/5.7/commit-scopes/group-commit/#eager-conflict-resolution) | N | Y | 14+ | +| [Lag Control](/pgd/5.7/commit-scopes/lag-control/) | N | Y | 14+ | +| [Decoding Worker](/pgd/5.7/decoding_worker) | N | 13+ | 14+ | +| [Lag tracker](/pgd/5.7/monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | | [Missing partition conflict](../reference/conflicts/#target_table_note) | N | Y | 14+ | | [No need for UPDATE Trigger on tables with TOAST](../conflict-management/conflicts/02_types_of_conflict/#toast-support-details) | N | Y | 14+ | | [Automatically hold back FREEZE](../conflict-management/conflicts/03_conflict_detection/#origin-conflict-detection) | N | Y | 14+ | diff --git a/product_docs/docs/pgd/5.7/planning/deployments.mdx b/product_docs/docs/pgd/5.7/planning/deployments.mdx index d5116ed8e8a..35566ff6ffe 100644 --- a/product_docs/docs/pgd/5.7/planning/deployments.mdx +++ b/product_docs/docs/pgd/5.7/planning/deployments.mdx @@ -2,7 +2,7 @@ title: "Choosing your deployment method" indexCards: simple redirects: -- /pgd/latest/deployments +- /pgd/5.7/deployments --- You can deploy and install EDB Postgres Distributed products using the following methods: diff --git a/product_docs/docs/pgd/5.7/planning/limitations.mdx b/product_docs/docs/pgd/5.7/planning/limitations.mdx index ada853b0777..78baf10f623 100644 --- a/product_docs/docs/pgd/5.7/planning/limitations.mdx +++ b/product_docs/docs/pgd/5.7/planning/limitations.mdx @@ -1,7 +1,7 @@ --- title: "Limitations" redirects: -- /pgd/latest/limitations +- /pgd/5.7/limitations --- Take these EDB Postgres Distributed (PGD) design limitations @@ -71,12 +71,12 @@ Also, there are limitations on interoperability with legacy synchronous replicat interoperability with explicit two-phase commit, and unsupported combinations within commit scope rules. -See [Durability limitations](/pgd/latest/commit-scopes/limitations/) for a full +See [Durability limitations](/pgd/5.7/commit-scopes/limitations/) for a full and current listing. ## Mixed PGD versions -While PGD was developed to [enable rolling upgrades of PGD](/pgd/latest/upgrades) by allowing mixed versions of PGD to operate during the upgrade process, we expect users to run mixed versions only during upgrades and for users to complete their upgrades as quickly as possible. +While PGD was developed to [enable rolling upgrades of PGD](/pgd/5.7/upgrades) by allowing mixed versions of PGD to operate during the upgrade process, we expect users to run mixed versions only during upgrades and for users to complete their upgrades as quickly as possible. We also recommend that you test any rolling upgrade process in a non-production environment before attempting it in production. When a node is upgraded, it returns to the cluster and communicates with the other nodes in the cluster using the lowest version of the inter-node protocol that is supported by all the other nodes in the cluster. @@ -90,7 +90,7 @@ Therefore, once an PGD cluster upgrade has begun, you should complete the whole We don't support running mixed versions of PGD except during an upgrade, and we don't support clusters running mixed versions even while being upgraded, for extended periods. -For more information on rolling upgrades and mixed versions, see [Rolling upgrade considerations](/pgd/latest/upgrades/manual_overview#rolling-upgrade-considerations). 
+For more information on rolling upgrades and mixed versions, see [Rolling upgrade considerations](/pgd/5.7/upgrades/manual_overview#rolling-upgrade-considerations). ## Other limitations diff --git a/product_docs/docs/pgd/5.7/planning/other_considerations.mdx b/product_docs/docs/pgd/5.7/planning/other_considerations.mdx index 7c1025cab20..d7ee4d33617 100644 --- a/product_docs/docs/pgd/5.7/planning/other_considerations.mdx +++ b/product_docs/docs/pgd/5.7/planning/other_considerations.mdx @@ -1,14 +1,14 @@ --- title: "Other considerations" redirects: -- /pgd/latest/other_considerations +- /pgd/5.7/other_considerations --- Review these other considerations when planning your deployment. ## Data consistency -Read about [Conflicts](/pgd/latest/conflict-management/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency. +Read about [Conflicts](/pgd/5.7/conflict-management/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency. ## Deployment @@ -32,4 +32,4 @@ EDB Postgres Distributed is designed to operate with nodes in multiple timezones Synchronize server clocks using NTP or other solutions. -Clock synchronization isn't critical to performance, as it is with some other solutions. Clock skew can affect origin conflict detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides row-version conflict detection, as described in [Conflict detection](/pgd/latest/conflict-management/conflicts/). +Clock synchronization isn't critical to performance, as it is with some other solutions. Clock skew can affect origin conflict detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides row-version conflict detection, as described in [Conflict detection](/pgd/5.7/conflict-management/conflicts/). 
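For a concrete sense of the row-version option, here's a minimal sketch. The table name is a placeholder, and it assumes the `bdr.alter_table_conflict_detection` function and the `row_version` method as described in the conflict-management reference:

```sql
-- Sketch: switch one table from origin-based to row-version conflict
-- detection. 'public.mytable' is a placeholder name.
SELECT bdr.alter_table_conflict_detection(
    relation := 'public.mytable'::regclass,
    method   := 'row_version'
);
```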
diff --git a/product_docs/docs/pgd/5.7/quickstart/quick_start_aws.mdx b/product_docs/docs/pgd/5.7/quickstart/quick_start_aws.mdx index 2afe7714d78..8df3d7fee6d 100644 --- a/product_docs/docs/pgd/5.7/quickstart/quick_start_aws.mdx +++ b/product_docs/docs/pgd/5.7/quickstart/quick_start_aws.mdx @@ -4,9 +4,9 @@ navTitle: "Deploying on AWS" description: > A quick demonstration of deploying a PGD architecture using TPA on Amazon EC2 redirects: - - /pgd/latest/deployments/tpaexec/quick_start/ - - /pgd/latest/tpa/quick_start/ - - /pgd/latest/quick_start_aws/ + - /pgd/5.7/deployments/tpaexec/quick_start/ + - /pgd/5.7/tpa/quick_start/ + - /pgd/5.7/quick_start_aws/ --- diff --git a/product_docs/docs/pgd/5.7/quickstart/quick_start_cloud.mdx b/product_docs/docs/pgd/5.7/quickstart/quick_start_cloud.mdx index 2e01eee8bf3..8b5bfccb760 100644 --- a/product_docs/docs/pgd/5.7/quickstart/quick_start_cloud.mdx +++ b/product_docs/docs/pgd/5.7/quickstart/quick_start_cloud.mdx @@ -4,7 +4,7 @@ navTitle: "Deploying on Azure and Google" description: > A quick guide to deploying a PGD architecture using TPA on Azure and Google clouds redirects: - - /pgd/latest/quick_start_cloud/ + - /pgd/5.7/quick_start_cloud/ hideToC: True --- diff --git a/product_docs/docs/pgd/5.7/quickstart/quick_start_docker.mdx b/product_docs/docs/pgd/5.7/quickstart/quick_start_docker.mdx index 81940ae1b9c..6263d1d865a 100644 --- a/product_docs/docs/pgd/5.7/quickstart/quick_start_docker.mdx +++ b/product_docs/docs/pgd/5.7/quickstart/quick_start_docker.mdx @@ -4,7 +4,7 @@ navTitle: "Deploying on Docker" description: > A quick demonstration of deploying a PGD architecture using TPA on Docker redirects: - - /pgd/latest/quick_start_docker/ + - /pgd/5.7/quick_start_docker/ --- diff --git a/product_docs/docs/pgd/5.7/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5.7/quickstart/quick_start_linux.mdx index 2147435b16a..1dd08842682 100644 --- a/product_docs/docs/pgd/5.7/quickstart/quick_start_linux.mdx +++ b/product_docs/docs/pgd/5.7/quickstart/quick_start_linux.mdx @@ -4,7 +4,7 @@ navTitle: "Deploying on Linux hosts" description: > A quick demonstration of deploying a PGD architecture using TPA on Linux hosts redirects: - - /pgd/latest/quick_start_bare/ + - /pgd/5.7/quick_start_bare/ --- ## Introducing TPA and PGD diff --git a/product_docs/docs/pgd/5.7/reference/catalogs-internal.mdx b/product_docs/docs/pgd/5.7/reference/catalogs-internal.mdx index 40c1aa38dc1..33847381853 100644 --- a/product_docs/docs/pgd/5.7/reference/catalogs-internal.mdx +++ b/product_docs/docs/pgd/5.7/reference/catalogs-internal.mdx @@ -69,7 +69,7 @@ node. Specifically, it tracks: * Node joins (to the cluster) * Raft state changes (that is, whenever the node changes its role in the consensus protocol - leader, follower, or candidate to leader); see [Monitoring Raft consensus](../monitoring/sql#monitoring-raft-consensus) -* Whenever a worker has errored out (see [bdr.workers](/pgd/latest/reference/catalogs-visible/#bdrworkers) +* Whenever a worker has errored out (see [bdr.workers](/pgd/5.7/reference/catalogs-visible/#bdrworkers) and [Monitoring PGD replication workers](../monitoring/sql#monitoring-pgd-replication-workers)) #### `bdr.event_history` columns @@ -92,7 +92,7 @@ as textual representations rather than integers. ### `bdr.local_leader_change` -This is a local cache of the recent portion of leader change history. 
It has the same fields as [`bdr.leader`](/pgd/latest/reference/catalogs-visible#bdrleader), except that it is an ordered set of (node_group_id, leader_kind, generation) instead of a map tracking merely the current version. +This is a local cache of the recent portion of leader change history. It has the same fields as [`bdr.leader`](/pgd/5.7/reference/catalogs-visible#bdrleader), except that it is an ordered set of (node_group_id, leader_kind, generation) instead of a map tracking merely the current version. diff --git a/product_docs/docs/pgd/5.7/reference/catalogs-visible.mdx b/product_docs/docs/pgd/5.7/reference/catalogs-visible.mdx index 86a001035d2..d861e0a955b 100644 --- a/product_docs/docs/pgd/5.7/reference/catalogs-visible.mdx +++ b/product_docs/docs/pgd/5.7/reference/catalogs-visible.mdx @@ -143,7 +143,7 @@ This table tracks internal object dependencies inside PGD catalogs. ### `bdr.failover_replication_slots` -This table tracks the status of logical replication slots that are being used with failover support. For more information on failover replication slots, see [CDC Failover support](/pgd/latest/cdc-failover). +This table tracks the status of logical replication slots that are being used with failover support. For more information on failover replication slots, see [CDC Failover support](/pgd/5.7/cdc-failover). #### `bdr.failover_replication_slots` columns @@ -988,7 +988,7 @@ A view containing all the necessary info about the replication subscription rece | sub_slot_name | name | Replication slot name used by the receiver | source_name | name | Source node for this receiver (the one it connects to), this is normally the same as the origin node, but is different for forward mode subscriptions | origin_name | name | The origin node for this receiver (the one it receives forwarded changes from), this is normally the same as the source node, but is different for forward mode subscriptions -| subscription_mode | char | Mode of the subscription, see [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible/#bdrsubscription_summary) for more details +| subscription_mode | char | Mode of the subscription, see [`bdr.subscription_summary`](/pgd/5.7/reference/catalogs-visible/#bdrsubscription_summary) for more details | sub_replication_sets| text[] | Replication sets this receiver is subscribed to | sub_apply_delay | interval | Apply delay interval | receive_lsn | pg_lsn | LSN of the last change received so far diff --git a/product_docs/docs/pgd/5.7/reference/commit-scopes.mdx b/product_docs/docs/pgd/5.7/reference/commit-scopes.mdx index 1dfcedd54a9..683bf02f0ca 100644 --- a/product_docs/docs/pgd/5.7/reference/commit-scopes.mdx +++ b/product_docs/docs/pgd/5.7/reference/commit-scopes.mdx @@ -7,13 +7,13 @@ rootisheading: false deepToC: true --- -Commit scopes are rules that determine how transaction commits and conflicts are handled within a PGD system. You can read more about them in [Commit Scopes](/pgd/latest/commit-scopes/). +Commit scopes are rules that determine how transaction commits and conflicts are handled within a PGD system. You can read more about them in [Commit Scopes](/pgd/5.7/commit-scopes/). 
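As a quick taste of the syntax, here's a sketch using the `bdr.create_commit_scope` function listed next. The scope name, group name, and rule text are illustrative only, not prescriptive:

```sql
-- Sketch: require synchronous confirmation from a majority of 'mygroup',
-- degrading to asynchronous commit after a 5-second timeout.
-- 'majority_scope' and 'mygroup' are placeholder names.
SELECT bdr.create_commit_scope(
    commit_scope_name := 'majority_scope',
    origin_node_group := 'mygroup',
    rule := 'MAJORITY (mygroup) SYNCHRONOUS COMMIT DEGRADE ON (timeout = 5s) TO ASYNC'
);
```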
You can manipulate commit scopes using the following functions: -- [`bdr.create_commit_scope`](/pgd/latest/reference/functions#bdrcreate_commit_scope) -- [`bdr.alter_commit_scope`](/pgd/latest/reference/functions#bdralter_commit_scope) -- [`bdr.drop_commit_scope`](/pgd/latest/reference/functions#bdrdrop_commit_scope) +- [`bdr.create_commit_scope`](/pgd/5.7/reference/functions#bdrcreate_commit_scope) +- [`bdr.alter_commit_scope`](/pgd/5.7/reference/functions#bdralter_commit_scope) +- [`bdr.drop_commit_scope`](/pgd/5.7/reference/functions#bdrdrop_commit_scope) ## Commit scope syntax @@ -55,7 +55,7 @@ Where `node_group` is the name of a PGD data node group. The `commit_scope_degrade_operation` is either the same commit scope kind with a less restrictive commit scope group as the overall rule being defined, or is asynchronous (`ASYNC`). -For instance, [you can degrade](/pgd/latest/commit-scopes/degrading/) from an `ALL SYNCHRONOUS COMMIT` to a `MAJORITY SYNCHRONOUS COMMIT` or a `MAJORITY SYNCHRONOUS COMMIT` to an `ANY 3 SYNCHRONOUS COMMIT` or even an `ANY 3 SYNCHRONOUS COMMIT` to an `ANY 2 SYNCHRONOUS COMMIT`. You can also degrade from `SYNCHRONOUS COMMIT` to `ASYNC`. However, you cannot degrade from `SYNCHRONOUS COMMIT` to `GROUP COMMIT` or the other way around, regardless of the commit scope groups involved. +For instance, [you can degrade](/pgd/5.7/commit-scopes/degrading/) from an `ALL SYNCHRONOUS COMMIT` to a `MAJORITY SYNCHRONOUS COMMIT` or a `MAJORITY SYNCHRONOUS COMMIT` to an `ANY 3 SYNCHRONOUS COMMIT` or even an `ANY 3 SYNCHRONOUS COMMIT` to an `ANY 2 SYNCHRONOUS COMMIT`. You can also degrade from `SYNCHRONOUS COMMIT` to `ASYNC`. However, you cannot degrade from `SYNCHRONOUS COMMIT` to `GROUP COMMIT` or the other way around, regardless of the commit scope groups involved. It is also possible to combine rules using `AND`, each with their own degradation clause: diff --git a/product_docs/docs/pgd/5.7/reference/functions-internal.mdx b/product_docs/docs/pgd/5.7/reference/functions-internal.mdx index a37c1f4cd7e..d6df307da02 100644 --- a/product_docs/docs/pgd/5.7/reference/functions-internal.mdx +++ b/product_docs/docs/pgd/5.7/reference/functions-internal.mdx @@ -186,7 +186,7 @@ Use of this internal function is limited to: * When you're instructed to by EDB Technical Support. * Where you're specifically instructed to in the documentation. -Use [`bdr.part_node`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) to remove a node from a PGD group. That function sets the node to `PARTED` state and enables reuse of the node name. +Use [`bdr.part_node`](/pgd/5.7/reference/nodes-management-interfaces#bdrpart_node) to remove a node from a PGD group. That function sets the node to `PARTED` state and enables reuse of the node name. !!! @@ -519,40 +519,40 @@ Internal function intended for use by PGD-CLI. ### `bdr.stat_get_activity` -Internal function underlying view `bdr.stat_activity`. Do not use directly. Use the [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) view instead. +Internal function underlying view `bdr.stat_activity`. Do not use directly. Use the [`bdr.stat_activity`](/pgd/5.7/reference/catalogs-visible#bdrstat_activity) view instead. ### `bdr.worker_role_id_name` -Internal helper function used when generating view `bdr.worker_tasks`. Do not use directly. Use the [`bdr.worker_tasks`](/pgd/latest/reference/catalogs-visible#bdrworker_tasks) view instead. +Internal helper function used when generating view `bdr.worker_tasks`. 
Do not use directly. Use the [`bdr.worker_tasks`](/pgd/5.7/reference/catalogs-visible#bdrworker_tasks) view instead. ### `bdr.lag_history` -Internal function used when generating view `bdr.node_replication_rates`. Do not use directly. Use the [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) view instead. +Internal function used when generating view `bdr.node_replication_rates`. Do not use directly. Use the [`bdr.node_replication_rates`](/pgd/5.7/reference/catalogs-visible#bdrnode_replication_rates) view instead. ### `bdr.get_raft_instance_by_nodegroup` -Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) view instead. +Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_raft_details) view instead. ### `bdr.monitor_camo_on_all_nodes` -Internal function used when generating view `bdr.group_camo_details`. Do not use directly. Use the [`bdr.group_camo_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_camo_details) view instead. +Internal function used when generating view `bdr.group_camo_details`. Do not use directly. Use the [`bdr.group_camo_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_camo_details) view instead. ### `bdr.monitor_raft_details_on_all_nodes` -Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) view instead. +Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_raft_details) view instead. ### `bdr.monitor_replslots_details_on_all_nodes` -Internal function used when generating view `bdr.group_replslots_details`. Do not use directly. Use the [`bdr.group_replslots_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_replslots_details) view instead. +Internal function used when generating view `bdr.group_replslots_details`. Do not use directly. Use the [`bdr.group_replslots_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_replslots_details) view instead. ### `bdr.monitor_subscription_details_on_all_nodes` -Internal function used when generating view `bdr.group_subscription_summary`. Do not use directly. Use the [`bdr.group_subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrgroup_subscription_summary) view instead. +Internal function used when generating view `bdr.group_subscription_summary`. Do not use directly. Use the [`bdr.group_subscription_summary`](/pgd/5.7/reference/catalogs-visible#bdrgroup_subscription_summary) view instead. ### `bdr.monitor_version_details_on_all_nodes` -Internal function used when generating view `bdr.group_versions_details`. Do not use directly. Use the [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) view instead. +Internal function used when generating view `bdr.group_versions_details`. Do not use directly. Use the [`bdr.group_versions_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_versions_details) view instead. ### `bdr.node_group_member_info` -Internal function used when generating view `bdr.group_raft_details`. Do not use directly. 
Use the [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) view instead.
\ No newline at end of file
+Internal function used when generating view `bdr.group_raft_details`. Do not use directly. Use the [`bdr.group_raft_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_raft_details) view instead.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5.7/reference/functions.mdx index 20308f1d296..b26f7f43947 100644
--- a/product_docs/docs/pgd/5.7/reference/functions.mdx
+++ b/product_docs/docs/pgd/5.7/reference/functions.mdx
@@ -281,7 +281,7 @@ If a slot is dropped concurrently, the wait ends for that slot.
If a node is currently down and isn't updating its slot, then the wait continues. You might want to set `statement_timeout` to complete earlier in that case.

-If you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so), we recommend using [`bdr.wait_node_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_node_confirm_lsn) instead.
+If you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so), we recommend using [`bdr.wait_node_confirm_lsn`](/pgd/5.7/reference/functions#bdrwait_node_confirm_lsn) instead.

#### Synopsis
@@ -312,7 +312,7 @@ If no LSN is supplied, the current wal_flush_lsn (using the `pg_current_wal_flus
Supplying a node name parameter tells the function to wait for that node to pass the LSN. If no node name is supplied (by passing NULL), the function waits until all the nodes pass the LSN.

-We recommend using this function if you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so) instead of [`bdr.wait_slot_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn).
+We recommend using this function if you are using [Optimized Topology](../nodes/subscriber_only/optimizing-so) instead of [`bdr.wait_slot_confirm_lsn`](/pgd/5.7/reference/functions#bdrwait_slot_confirm_lsn).

This is because in an Optimized Topology, not all nodes have replication slots, so the function `bdr.wait_slot_confirm_lsn` might not work as expected. `bdr.wait_node_confirm_lsn` is designed to work with nodes that don't have replication slots, using alternative strategies to determine the progress of a node.
@@ -433,7 +433,7 @@ bdr.replicate_ddl_command(ddl_cmd text,
| --------- | ----------- |
| `ddl_cmd` | DDL command to execute. |
| `replication_sets` | An array of replication set names to apply the `ddl_cmd` to. If NULL (or the function is passed only the `ddl_cmd`), this parameter is set to the active PGD group's default replication set. |
-| `ddl_locking` | A string that sets the [`bdr.ddl_locking`](/pgd/latest/reference/pgd-settings#bdrddl_locking) value while replicating. Defaults to the GUC value for `bdr.ddl_locking` on the local system that's running `replicate_ddl_command`. |
+| `ddl_locking` | A string that sets the [`bdr.ddl_locking`](/pgd/5.7/reference/pgd-settings#bdrddl_locking) value while replicating. Defaults to the GUC value for `bdr.ddl_locking` on the local system that's running `replicate_ddl_command`. |
| `execute_locally` | A Boolean that determines whether the DDL command executes locally. Defaults to true.
| #### Notes @@ -1054,7 +1054,7 @@ bdr.lag_control() | Column name | Description | |----------------------------|---------------------------------------------------------------------------------------------------------------------------| -| `commit_scope_id` | OID of the commit scope (see [`bdr.commit_scopes`](/pgd/latest/reference/catalogs-visible#bdrcommit_scopes)). | +| `commit_scope_id` | OID of the commit scope (see [`bdr.commit_scopes`](/pgd/5.7/reference/catalogs-visible#bdrcommit_scopes)). | | `sessions` | Number of sessions referencing the lag control entry. | | `current_commit_delay` | Current runtime commit delay, in fractional milliseconds. | | `maximum_commit_delay` | Configured maximum commit delay, in fractional milliseconds. | @@ -1174,7 +1174,7 @@ The client must be prepared to retry the function call on error. ### `bdr.add_commit_scope` -**Deprecated**. Use [`bdr.create_commit_scope`](/pgd/latest/reference/functions#bdrcreate_commit_scope) instead. Previously, this function was used to add a commit scope to a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. +**Deprecated**. Use [`bdr.create_commit_scope`](/pgd/5.7/reference/functions#bdrcreate_commit_scope) instead. Previously, this function was used to add a commit scope to a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. ### `bdr.create_commit_scope` @@ -1194,7 +1194,7 @@ bdr.create_commit_scope( #### Note -`bdr.create_commit_scope` replaces the deprecated [`bdr.add_commit_scope`](/pgd/latest/reference/functions#bdradd_commit_scope) function. Unlike `add_commit_scope`, it does not silently overwrite existing commit scopes when the same name is used. Instead, an error is reported. +`bdr.create_commit_scope` replaces the deprecated [`bdr.add_commit_scope`](/pgd/5.7/reference/functions#bdradd_commit_scope) function. Unlike `add_commit_scope`, it does not silently overwrite existing commit scopes when the same name is used. Instead, an error is reported. ### `bdr.alter_commit_scope` @@ -1226,4 +1226,4 @@ bdr.drop_commit_scope( ### `bdr.remove_commit_scope` -**Deprecated**. Use [`bdr.drop_commit_scope`](/pgd/latest/reference/functions#bdrdrop_commit_scope) instead. Previously, this function was used to remove a commit scope from a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. +**Deprecated**. Use [`bdr.drop_commit_scope`](/pgd/5.7/reference/functions#bdrdrop_commit_scope) instead. Previously, this function was used to remove a commit scope from a node group. It's now deprecated and will emit a warning until it is removed in a future release, at which point it will raise an error. diff --git a/product_docs/docs/pgd/5.7/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5.7/reference/nodes-management-interfaces.mdx index bb0d052fe90..18adfe4cdf7 100644 --- a/product_docs/docs/pgd/5.7/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5.7/reference/nodes-management-interfaces.mdx @@ -44,7 +44,7 @@ The table shows the group options that can be changed using this function. | `route_writer_max_lag` | `integer` | Maximum lag in bytes of the new write candidate to be selected as write leader. If no candidate passes this, no writer is selected. Default is `-1`. 
| | `route_writer_wait_flush` | `boolean` | Whether to switch if PGD needs to wait for the flush. Currently reserved for future use. | | `streaming_mode` | `text` | Enables/disables streaming of large transactions. When set to `off`, streaming is disabled. When set to any other value, large transactions are decoded while they're still in progress, and the changes are sent to the downstream. If the value is set to `file`, then the incoming changes of streaming transactions are stored in a file and applied only after the transaction is committed on upstream. If the value is set to `writer`, then the incoming changes are directly sent to one of the writers, if available.
If [parallel apply](../parallelapply) is disabled or no writer is free to handle streaming transactions, then the changes are written to a file and applied after the transaction is committed. If the value is set to `auto`, PGD tries to intelligently pick between `file` and `writer`, depending on the transaction property and available resources. You can't enable `streaming_mode` if the WAL decoder is already enabled. Default is `auto`.

For more details, see [Transaction streaming](../transaction-streaming). | -| `failover_slot_scope` | `text` | PGD 5.7 and later only. Sets the scope for Logical Slot Failover support. Valid values are `global` or `local`. Default is `local`. For more information, see [CDC Failover support](/pgd/latest/cdc-failover). | +| `failover_slot_scope` | `text` | PGD 5.7 and later only. Sets the scope for Logical Slot Failover support. Valid values are `global` or `local`. Default is `local`. For more information, see [CDC Failover support](/pgd/5.7/cdc-failover). | ### Return value @@ -317,7 +317,7 @@ bdr.join_node_group ( If `wait_for_completion` is specified as `false`, the function call returns as soon as the joining procedure starts. You can see the progress of the join in -the log files and the [`bdr.event_summary`](/pgd/latest/reference/catalogs-internal#bdrevent_summary) +the log files and the [`bdr.event_summary`](/pgd/5.7/reference/catalogs-internal#bdrevent_summary) information view. You can call the function [`bdr.wait_for_join_completion()`](#bdrwait_for_join_completion) after `bdr.join_node_group()` to wait for the join operation to complete. It can emit progress information if called with `verbose_progress` set to `true`. diff --git a/product_docs/docs/pgd/5.7/reference/pgd-settings.mdx b/product_docs/docs/pgd/5.7/reference/pgd-settings.mdx index 4c2bf954370..8442f1ef368 100644 --- a/product_docs/docs/pgd/5.7/reference/pgd-settings.mdx +++ b/product_docs/docs/pgd/5.7/reference/pgd-settings.mdx @@ -489,15 +489,15 @@ archival, and rotation to prevent disk space exhaustion. ### `bdr.track_subscription_apply` -Tracks apply statistics for each subscription with [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) (default is `on`). +Tracks apply statistics for each subscription with [`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible#bdrstat_subscription) (default is `on`). ### `bdr.track_relation_apply` -Tracks apply statistics for each relation with [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) (default is `off`). +Tracks apply statistics for each relation with [`bdr.stat_relation`](/pgd/5.7/reference/catalogs-visible#bdrstat_relation) (default is `off`). ### `bdr.track_apply_lock_timing` -Tracks lock timing when tracking statistics for relations with [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) (default is `off`). +Tracks lock timing when tracking statistics for relations with [`bdr.stat_relation`](/pgd/5.7/reference/catalogs-visible#bdrstat_relation) (default is `off`). ## Decoding worker diff --git a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.4.0_rel_notes.mdx b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.4.0_rel_notes.mdx index 2ba18fc19cf..ac7975a86c2 100644 --- a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.4.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.4.0_rel_notes.mdx @@ -17,7 +17,7 @@ We recommend that all users of PGD 5 upgrade to PGD 5.4. See [PGD/TPA upgrades]( Highlights of this 5.4.0 release include improvements to: * Group Commit, aiming to optimize performance by minimizing the effect of a node's downtime and simplifying overall operating of PGD clusters. -* `apply_delay`, enabling the creation of a delayed read-only [replica](/pgd/latest/nodes/subscriber_only/overview/) for additional options for disaster recovery and to mitigate the impact of human error, such as accidental DROP table statements. 
+* `apply_delay`, enabling the creation of a delayed read-only [replica](/pgd/5.7/nodes/subscriber_only/overview/) for additional options for disaster recovery and to mitigate the impact of human error, such as accidental DROP table statements. ## Compatibility diff --git a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.0_rel_notes.mdx b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.0_rel_notes.mdx index 7b47fd026e0..180ecb69e5b 100644 --- a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.0_rel_notes.mdx @@ -16,7 +16,7 @@ We recommend that all users of PGD 5 upgrade to PGD 5.5. See [PGD/TPA upgrades]( Highlights of this 5.5.0 release include: -* Read scalability enhancements in PGD Proxy which allow [read-only queries to be routed](/pgd/latest/routing/readonly/) to nodes that are members of a read-only pool. This feature can improve the overall performance of the PGD cluster. +* Read scalability enhancements in PGD Proxy which allow [read-only queries to be routed](/pgd/5.7/routing/readonly/) to nodes that are members of a read-only pool. This feature can improve the overall performance of the PGD cluster. ## Compatibility @@ -54,7 +54,7 @@ Postgres Distributed. | BDR | 5.5.0 | Granted additional object permissions to role `bdr_read_all_stats`. | | | BDR | 5.5.0 | Improved stability of manager worker and Raft consensus by not throwing error on non-fatal dynamic shared memory read failures. | | | BDR | 5.5.0 | Improved stability of Raft consensus and workers by handling dynamic shared memory errors in the right place. | | -| BDR | 5.5.0 | The number of changes processed by writer in a large transaction is now exposed in [`bdr.writers`](/pgd/latest/reference/catalogs-visible#bdrwriters). | | +| BDR | 5.5.0 | The number of changes processed by writer in a large transaction is now exposed in [`bdr.writers`](/pgd/5.7/reference/catalogs-visible#bdrwriters). | | | BDR | 5.5.0 | `bdr_init_physical` now stops the initial replication connection and starts it only when needed. | RT102828/35305 | | BDR | 5.5.0 | `bdr_superuser` is now granted use of `pg_file_settings` and `pg_show_all_file_settings()`. | | | CLI | 5.5.0 | Added new read scalability related options to JSON output of `show-proxies ` and `show-groups` commands. | | diff --git a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.1_rel_notes.mdx b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.1_rel_notes.mdx index 4379675a399..e8783d1c174 100644 --- a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.1_rel_notes.mdx +++ b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.5.1_rel_notes.mdx @@ -17,4 +17,4 @@ We recommend that all users of PGD 5 upgrade to PGD 5.5.1. See [PGD/TPA upgrades | Component | Version | Description | Ticket | |-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------| | BDR | 5.5.1 |
Fixed potential data inconsistency issue with mixed-version usage during a rolling upgrade.<br/>Backward-incompatible change in PGD 5.5.0 may lead to inconsistencies when replicating from a newer PGD 5.5.0 node to an older version of the PGD node, specifically during the mixed-mode rolling upgrade.<br/>This release addresses a backward-compatibility issue in mixed-version operation, enabling seamless rolling upgrades. | |
-| BDR | 5.5.1 | Disabled auto-triggering of node sync by default.<br/>Automatically triggered synchronization of data from a down node caused issues by failing to resume once it came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/pgd-settings#bdrenable_auto_sync_reconcile)). | 11510 |
+| BDR | 5.5.1 | Disabled auto-triggering of node sync by default.<br/>Automatically triggered synchronization of data from a down node caused issues by failing to resume once it came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/5.7/reference/pgd-settings#bdrenable_auto_sync_reconcile)). | 11510 |
diff --git a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.6.0_rel_notes.mdx index 74588e9b40b..1c42105fc72 100644
--- a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.6.0_rel_notes.mdx
+++ b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.6.0_rel_notes.mdx
@@ -36,15 +36,15 @@ In addition to the normal LCRs segment files, we create streaming files with the
| BDR | 5.6.0 |
Introduce several new monitoring views

There are several views providing new information as well as making some existing information easier to discover:

-  • bdr.stat_commit_scope : Cumulative statistics for commit scopes.
-  • bdr.stat_commit_scope_state : Information about current use of commit scopes by backends.
-  • bdr.stat_receiver : Per subscription receiver statistics.
-  • bdr.stat_writer : Per writer statistics. There can be multiple writers for each subscription. This also includes additional information about the currently applied transaction.
-  • bdr.stat_raft_state : The state of the Raft consensus on the local node.
-  • bdr.stat_raft_followers_state : The state of the followers on the Raft leader node (empty on other nodes), also includes approximate clock drift between nodes.
-  • bdr.stat_worker : Detailed information about PGD workers, including what the operation manager worker is currently doing.
-  • bdr.stat_routing_state : The state of the connection routing which PGD Proxy uses to route the connections.
-  • bdr.stat_routing_candidate_state : Information about routing candidate nodes on the Raft leader node (empty on other nodes).
+  • bdr.stat_commit_scope : Cumulative statistics for commit scopes.
+  • bdr.stat_commit_scope_state : Information about current use of commit scopes by backends.
+  • bdr.stat_receiver : Per subscription receiver statistics.
+  • bdr.stat_writer : Per writer statistics. There can be multiple writers for each subscription. This also includes additional information about the currently applied transaction.
+  • bdr.stat_raft_state : The state of the Raft consensus on the local node.
+  • bdr.stat_raft_followers_state : The state of the followers on the Raft leader node (empty on other nodes), also includes approximate clock drift between nodes.
+  • bdr.stat_worker : Detailed information about PGD workers, including what the operation manager worker is currently doing.
+  • bdr.stat_routing_state : The state of the connection routing which PGD Proxy uses to route the connections.
+  • bdr.stat_routing_candidate_state : Information about routing candidate nodes on the Raft leader node (empty on other nodes).
| BDR | 5.6.0 |
Support conflict detection for exclusion constraints

This allows defining EXCLUDE constraint on table replicated by PGD either with diff --git a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.7.0_rel_notes.mdx b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.7.0_rel_notes.mdx index 00a955868d7..a14b00b1192 100644 --- a/product_docs/docs/pgd/5.7/rel_notes/pgd_5.7.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5.7/rel_notes/pgd_5.7.0_rel_notes.mdx @@ -13,9 +13,9 @@ EDB Postgres Distributed 5.7.0 includes a number of enhancements and bug fixes. ## Highlights -- **Improved 3rd Party CDC Tool Integration**: PGD 5.7.0 now supports [failover of logical slots used by CDC tools](/pgd/latest/cdc-failover) with standard plugins (such as test_decoding, pgoutput, and wal2json) within a PGD cluster. This enhancement eliminates the need for 3rd party subscribers to reseed their tables during a lead Primary change. -- **PGD Compatibility Assessment**: Ensure a seamless migration to PGD with the new [Assess](/pgd/latest/cli/command_ref/assess/) command in the PGD CLI. This tool proactively reports any PostgreSQL incompatibilities—especially those affecting logical replication—so you can address them before upgrading to PGD. -- **Upgrade PGD and Postgres with a Single Command**: Leverage the new [`pgd node upgrade`](/pgd/latest/cli/command_ref/node/upgrade +- **Improved 3rd Party CDC Tool Integration**: PGD 5.7.0 now supports [failover of logical slots used by CDC tools](/pgd/5.7/cdc-failover) with standard plugins (such as test_decoding, pgoutput, and wal2json) within a PGD cluster. This enhancement eliminates the need for 3rd party subscribers to reseed their tables during a lead Primary change. +- **PGD Compatibility Assessment**: Ensure a seamless migration to PGD with the new [Assess](/pgd/5.7/cli/command_ref/assess/) command in the PGD CLI. This tool proactively reports any PostgreSQL incompatibilities—especially those affecting logical replication—so you can address them before upgrading to PGD. +- **Upgrade PGD and Postgres with a Single Command**: Leverage the new [`pgd node upgrade`](/pgd/5.7/cli/command_ref/node/upgrade ) command in the PGD CLI to upgrade a node to the latest versions of PGD and Postgres. - **Ubuntu 24.04 supported**: PGD 5.7.0 now supports Ubuntu 24.04. (23 March 2025) @@ -26,11 +26,11 @@ EDB Postgres Distributed 5.7.0 includes a number of enhancements and bug fixes. guarantees that every transaction is decoded and sent at least once.

| BDR | 5.7.0 | Ensured that the `remote_commit_time` and `remote_commit_lsn` are properly reported in the conflict reports. | 42273 |
-| PGD CLI | 5.7.0 | Added new CLI command structure for easier access.<br/><br/>The new CLI command structure is more intuitive and easier to use. The new structure is a "noun-verb" format, where the noun is the object you want to work with and the verb is the action you want to perform. Full details are available in the CLI command reference. | |
+| PGD CLI | 5.7.0 | Added new CLI command structure for easier access.<br/><br/>The new CLI command structure is more intuitive and easier to use. The new structure is a "noun-verb" format, where the noun is the object you want to work with and the verb is the action you want to perform. Full details are available in the CLI command reference. | |
-| PGD CLI | 5.7.0 | Added a new local assesment feature for local non-PGD nodes to the CLI<br/><br/>The new feature allows you to assess the local node for compatibility with PGD. The feature is available as pgd assess. Full details are available in the CLI command reference. | |
+| PGD CLI | 5.7.0 | Added a new local assessment feature for local non-PGD nodes to the CLI.<br/><br/>The new feature allows you to assess the local node for compatibility with PGD. The feature is available as pgd assess. Full details are available in the CLI command reference. | |
-| PGD CLI | 5.7.0 | Added pgd node upgrade functionality to the PGD CLI.<br/><br/>The new command allows you to upgrade a node to the latest version of PGD and Postgres. It integrates the operation of bdr_pg_upgrade into the CLI and is run locally. See pgd node upgrade and inplace upgrades for more information. | |
+| PGD CLI | 5.7.0 | Added pgd node upgrade functionality to the PGD CLI.<br/><br/>The new command allows you to upgrade a node to the latest version of PGD and Postgres. It integrates the operation of bdr_pg_upgrade into the CLI and is run locally. See pgd node upgrade and inplace upgrades for more information. | |
| BDR | 5.7.0 | Fixed an issue whereby concurrent joins of subscriber-only nodes occasionally stopped responding.<br/><br/>A node could end up waiting for the local state of another concurrently joined node to advance, which caused the system to stop responding. | 42964 |
@@ -49,11 +49,11 @@
| BDR | 5.7.0 | Improved bdr_init_physical to be able to run without superuser.<br/><br/>Now only the bdr_superuser is required. | |
-| PGD CLI | 5.7.0 | Added new CLI commands for adding removing and updating commit scopes.<br/><br/>The new commands are pgd commit-scope show, pgd commit-scope create, pgd commit-scope update and pgd commit-scope drop. Full details are available in the CLI command reference. | |
+| PGD CLI | 5.7.0 | Added new CLI commands for adding, removing, and updating commit scopes.<br/><br/>The new commands are pgd commit-scope show, pgd commit-scope create, pgd commit-scope update, and pgd commit-scope drop. Full details are available in the CLI command reference. | |
| PGD CLI | 5.7.0 | Added support for legacy CLI command structure in the updated PGD CLI.<br/><br/>The legacy CLI command structure is still supported in the updated PGD CLI. The legacy command support is available for a limited time and will be removed in a future release. It is implemented as a wrapper around the new commands. | |
-| PGD CLI | 5.7.0 | Added new subcommands to PGD CLI node and group for getting options.<br/><br/>The new subcommands are pgd node get-options and pgd group get-options. Full details are available in the CLI command reference. | |
+| PGD CLI | 5.7.0 | Added new subcommands to PGD CLI node and group for getting options.<br/><br/>The new subcommands are pgd node get-options and pgd group get-options. Full details are available in the CLI command reference. | |
| PGD CLI | 5.7.0 | Added new output formatting options psql and markdown to the PGD CLI.<br/><br/>The new options allow you to format the output of the CLI commands in a psql-like or markdown format. Format options are now json, psql, modern, markdown, and simple, defaulting to simple. | |

diff --git a/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.6.0.yml b/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.6.0.yml index f51a4e4d786..109c7677f2d 100644 --- a/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.6.0.yml +++ b/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.6.0.yml @@ -40,15 +40,15 @@ relnotes: details: | There are several view providing new information as well as making some existing information easier to discover: - - [`bdr.stat_commit_scope`](/pgd/latest/reference/catalogs-visible#bdrstat_commit_scope) : Cumulative statistics for commit scopes. - - [`bdr.stat_commit_scope_state`](/pgd/latest/reference/catalogs-visible#bdrstat_commit_scope_state) : Information about current use of commit scopes by backends. - - [`bdr.stat_receiver`](/pgd/latest/reference/catalogs-visible#bdrstat_receiver) : Per subscription receiver statistics. - - [`bdr.stat_writer`](/pgd/latest/reference/catalogs-visible#bdrstat_writer) : Per writer statistics. There can be multiple writers for each subscription. This also includes additional information about the currently applied transaction. - - [`bdr.stat_raft_state`](/pgd/latest/reference/catalogs-visible#bdrstat_raft_state) : The state of the Raft consensus on the local node. - - [`bdr.stat_raft_followers_state`](/pgd/latest/reference/catalogs-visible#bdrstat_raft_followers_state) : The state of the followers on the Raft leader node (empty on other nodes), also includes approximate clock drift between nodes. - - [`bdr.stat_worker`](/pgd/latest/reference/catalogs-visible#bdrstat_worker) : Detailed information about PGD workers, including what the operation manager worker is currently doing. - - [`bdr.stat_routing_state`](/pgd/latest/reference/catalogs-visible#bdrstat_routing_state) : The state of the connection routing which PGD Proxy uses to route the connections. - - [`bdr.stat_routing_candidate_state`](/pgd/latest/reference/catalogs-visible#bdrstat_routing_candidate_state) : Information about routing candidate nodes on the Raft leader node (empty on other nodes). + - [`bdr.stat_commit_scope`](/pgd/5.7/reference/catalogs-visible#bdrstat_commit_scope) : Cumulative statistics for commit scopes. + - [`bdr.stat_commit_scope_state`](/pgd/5.7/reference/catalogs-visible#bdrstat_commit_scope_state) : Information about current use of commit scopes by backends. + - [`bdr.stat_receiver`](/pgd/5.7/reference/catalogs-visible#bdrstat_receiver) : Per subscription receiver statistics. + - [`bdr.stat_writer`](/pgd/5.7/reference/catalogs-visible#bdrstat_writer) : Per writer statistics. There can be multiple writers for each subscription. This also includes additional information about the currently applied transaction. + - [`bdr.stat_raft_state`](/pgd/5.7/reference/catalogs-visible#bdrstat_raft_state) : The state of the Raft consensus on the local node. + - [`bdr.stat_raft_followers_state`](/pgd/5.7/reference/catalogs-visible#bdrstat_raft_followers_state) : The state of the followers on the Raft leader node (empty on other nodes), also includes approximate clock drift between nodes. + - [`bdr.stat_worker`](/pgd/5.7/reference/catalogs-visible#bdrstat_worker) : Detailed information about PGD workers, including what the operation manager worker is currently doing. + - [`bdr.stat_routing_state`](/pgd/5.7/reference/catalogs-visible#bdrstat_routing_state) : The state of the connection routing which PGD Proxy uses to route the connections. 
+ - [`bdr.stat_routing_candidate_state`](/pgd/5.7/reference/catalogs-visible#bdrstat_routing_candidate_state) : Information about routing candidate nodes on the Raft leader node (empty on other nodes). jira: BDR-5316 type: Enhancement impact: High diff --git a/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.7.0.yml b/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.7.0.yml index cf310d475e5..9c55239da52 100644 --- a/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.7.0.yml +++ b/product_docs/docs/pgd/5.7/rel_notes/src/relnote_5.7.0.yml @@ -11,9 +11,9 @@ components: intro: | EDB Postgres Distributed 5.7.0 includes a number of enhancements and bug fixes. highlights: | - - **Improved 3rd Party CDC Tool Integration**: PGD 5.7.0 now supports [failover of logical slots used by CDC tools](/pgd/latest/cdc-failover) with standard plugins (such as test_decoding, pgoutput, and wal2json) within a PGD cluster. This enhancement eliminates the need for 3rd party subscribers to reseed their tables during a lead Primary change. - - **PGD Compatibility Assessment**: Ensure a seamless migration to PGD with the new [Assess](/pgd/latest/cli/command_ref/assess/) command in the PGD CLI. This tool proactively reports any PostgreSQL incompatibilities—especially those affecting logical replication—so you can address them before upgrading to PGD. - - **Upgrade PGD and Postgres with a Single Command**: Leverage the new [`pgd node upgrade`](/pgd/latest/cli/command_ref/node/upgrade + - **Improved 3rd Party CDC Tool Integration**: PGD 5.7.0 now supports [failover of logical slots used by CDC tools](/pgd/5.7/cdc-failover) with standard plugins (such as test_decoding, pgoutput, and wal2json) within a PGD cluster. This enhancement eliminates the need for 3rd party subscribers to reseed their tables during a lead Primary change. + - **PGD Compatibility Assessment**: Ensure a seamless migration to PGD with the new [Assess](/pgd/5.7/cli/command_ref/assess/) command in the PGD CLI. This tool proactively reports any PostgreSQL incompatibilities—especially those affecting logical replication—so you can address them before upgrading to PGD. + - **Upgrade PGD and Postgres with a Single Command**: Leverage the new [`pgd node upgrade`](/pgd/5.7/cli/command_ref/node/upgrade ) command in the PGD CLI to upgrade a node to the latest versions of PGD and Postgres. - **Ubuntu 24.04 supported**: PGD 5.7.0 now supports Ubuntu 24.04. (23 March 2025) relnotes: @@ -210,7 +210,7 @@ relnotes: - relnote: Added new CLI command structure for easier access. component: PGD CLI details: | - The new CLI command structure is more intuitive and easier to use. The new structure is a "noun-verb" format, where the noun is the object you want to work with and the verb is the action you want to perform. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). + The new CLI command structure is more intuitive and easier to use. The new structure is a "noun-verb" format, where the noun is the object you want to work with and the verb is the action you want to perform. Full details are available in [the CLI command reference](/pgd/5.7/cli/command_ref). jira: "" addresses: "" type: Feature @@ -218,7 +218,7 @@ relnotes: - relnote: Added new CLI commands for adding, removing, and updating commit scopes. component: PGD CLI details: | - The new commands are `pgd commit-scope show`, `pgd commit-scope create`, `pgd commit-scope update` and `pgd commit-scope drop`.
Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). + The new commands are `pgd commit-scope show`, `pgd commit-scope create`, `pgd commit-scope update` and `pgd commit-scope drop`. Full details are available in [the CLI command reference](/pgd/5.7/cli/command_ref). jira: "" addresses: "" type: Enhancement @@ -234,7 +234,7 @@ relnotes: - relnote: Added a new local assesment feature for local non-PGD nodes to the CLI component: PGD CLI details: | - The new feature allows you to assess the local node for compatibility with PGD. The feature is available as `pgd assess`. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). + The new feature allows you to assess the local node for compatibility with PGD. The feature is available as `pgd assess`. Full details are available in [the CLI command reference](/pgd/5.7/cli/command_ref). jira: "" addresses: "" type: Feature @@ -242,7 +242,7 @@ relnotes: - relnote: Added `pgd node upgrade` functionality to the PGD CLI. component: PGD CLI details: | - The new command allows you to upgrade a node to the latest version of PGD and Postgres. It integrates the operation of `bdr_pg_upgrade` into the CLI and is run locally. See [pgd node upgrade](/pgd/latest/cli/command_ref/node/upgrade) and [inplace upgrades](/pgd/latest/upgrades/inplace_upgrade) for more information. + The new command allows you to upgrade a node to the latest version of PGD and Postgres. It integrates the operation of `bdr_pg_upgrade` into the CLI and is run locally. See [pgd node upgrade](/pgd/5.7/cli/command_ref/node/upgrade) and [inplace upgrades](/pgd/5.7/upgrades/inplace_upgrade) for more information. jira: "" addresses: "" type: Feature @@ -250,7 +250,7 @@ relnotes: - relnote: Added new subcommands to PGD CLI `node` and `group` for getting options. component: PGD CLI details: | - The new subcommands are `pgd node get-options` and `pgd group get-options`. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). + The new subcommands are `pgd node get-options` and `pgd group get-options`. Full details are available in [the CLI command reference](/pgd/5.7/cli/command_ref). jira: "" addresses: "" type: Enhancement diff --git a/product_docs/docs/pgd/5.7/repsets.mdx b/product_docs/docs/pgd/5.7/repsets.mdx index afcf3cf7bab..a64810cb304 100644 --- a/product_docs/docs/pgd/5.7/repsets.mdx +++ b/product_docs/docs/pgd/5.7/repsets.mdx @@ -17,7 +17,7 @@ In other words, by default, all user tables are replicated between all nodes. ## Using replication sets -You can create replication sets using [`bdr.create_replication_set`](/pgd/latest/reference/repsets-management#bdrcreate_replication_set), +You can create replication sets using [`bdr.create_replication_set`](/pgd/5.7/reference/repsets-management#bdrcreate_replication_set), specifying whether to include insert, update, delete, or truncate actions. One option lets you add existing tables to the set, and a second option defines whether to add tables when they're @@ -33,12 +33,12 @@ Once the node is joined, you can still remove tables from the replication set, but you must add new tables using a resync operation. By default, a newly defined replication set doesn't replicate DDL or PGD -administration function calls. Use [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) +administration function calls. 
Use [`bdr.replication_set_add_ddl_filter`](/pgd/5.7/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) to define the commands to replicate. PGD creates replication set definitions on all nodes. Each node can then be defined to publish or subscribe to each replication set using -[`bdr.alter_node_replication_sets`](/pgd/latest/reference/repsets-management#bdralter_node_replication_sets). +[`bdr.alter_node_replication_sets`](/pgd/5.7/reference/repsets-management#bdralter_node_replication_sets). You can use functions to alter these definitions later or to drop the replication set. @@ -146,7 +146,7 @@ of replication set A that replicates only INSERT actions and replication set B t replicates only UPDATE actions. Both INSERT and UPDATE actions are replicated if the target node is also subscribed to both replication set A and B. -You can control membership using [`bdr.replication_set_add_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_add_table) and [`bdr.replication_set_remove_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_remove_table). +You can control membership using [`bdr.replication_set_add_table`](/pgd/5.7/reference/repsets-membership#bdrreplication_set_add_table) and [`bdr.replication_set_remove_table`](/pgd/5.7/reference/repsets-membership#bdrreplication_set_remove_table). ## Listing replication sets @@ -245,7 +245,7 @@ filter, the regular expression applied to the command tag and to the role name: SELECT * FROM bdr.ddl_replication; ``` -You can use [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) and [`bdr.replication_set_remove_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) to manipulate DDL filters. +You can use [`bdr.replication_set_add_ddl_filter`](/pgd/5.7/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter) and [`bdr.replication_set_remove_ddl_filter`](/pgd/5.7/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter) to manipulate DDL filters. They're considered to be `DDL` and are therefore subject to DDL replication and global locking. diff --git a/product_docs/docs/pgd/5.7/routing/administering.mdx b/product_docs/docs/pgd/5.7/routing/administering.mdx index ca8631f0b24..34f9ed09c60 100644 --- a/product_docs/docs/pgd/5.7/routing/administering.mdx +++ b/product_docs/docs/pgd/5.7/routing/administering.mdx @@ -20,7 +20,7 @@ The set-leader operation is not a guaranteed operation. If, due to a timeout or You can perform a switchover operation that explicitly changes the node that's the write leader to another node. -Use the [`bdr.routing_leadership_transfer()`](/pgd/latest/reference/routing#bdrrouting_leadership_transfer) function. +Use the [`bdr.routing_leadership_transfer()`](/pgd/5.7/reference/routing#bdrrouting_leadership_transfer) function. For example, to switch the write leader to node `node1` in group `group1`, use the following SQL command: @@ -36,7 +36,7 @@ SELECT bdr.routing_leadership_transfer('group1', 'node1'); ### Using PGD CLI -You can use the [`group set-leader`](/pgd/latest/cli/command_ref/group/set-leader/) command to perform a switchover operation. +You can use the [`group set-leader`](/pgd/5.7/cli/command_ref/group/set-leader/) command to perform a switchover operation. 
For example, to switch the write leader from node `node1` to node `node2` in group `group1`, use the following command: diff --git a/product_docs/docs/pgd/5.7/routing/configuration.mdx b/product_docs/docs/pgd/5.7/routing/configuration.mdx index 3e11ee0964c..1c4205b170e 100644 --- a/product_docs/docs/pgd/5.7/routing/configuration.mdx +++ b/product_docs/docs/pgd/5.7/routing/configuration.mdx @@ -8,7 +8,7 @@ navTitle: "Configuration" Configuring the routing is done either through SQL interfaces or through PGD CLI. -You can enable routing decisions by calling the [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option) function. +You can enable routing decisions by calling the [`bdr.alter_node_group_option()`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_node_group_option) function. For example: ```text @@ -27,7 +27,7 @@ Additional group-level options affect the routing decisions: ## Node-level configuration -Set per-node configuration of routing using [`bdr.alter_node_option()`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option). The +Set per-node configuration of routing using [`bdr.alter_node_option()`](/pgd/5.7/reference/nodes-management-interfaces#bdralter_node_option). The available options that affect routing are: - `route_dsn` — The dsn used by proxy to connect to this node. @@ -45,7 +45,7 @@ You can configure the proxies using SQL interfaces. ### Creating and dropping proxy configurations -You can add a proxy configuration using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy). +You can add a proxy configuration using [`bdr.create_proxy`](/pgd/5.7/reference/routing#bdrcreate_proxy). For example, `SELECT bdr.create_proxy('region1-proxy1', 'region1-group');` creates the default configuration for a proxy named `region1-proxy1` in the PGD group `region1-group`. @@ -56,7 +56,7 @@ Dropping a proxy deactivates it. ### Altering proxy configurations -You can configure options for each proxy using the [`bdr.alter_proxy_option()`](/pgd/latest/reference/routing#bdralter_proxy_option) function. +You can configure options for each proxy using the [`bdr.alter_proxy_option()`](/pgd/5.7/reference/routing#bdralter_proxy_option) function. The available options are: diff --git a/product_docs/docs/pgd/5.7/routing/index.mdx b/product_docs/docs/pgd/5.7/routing/index.mdx index 27308c310ab..aba39348206 100644 --- a/product_docs/docs/pgd/5.7/routing/index.mdx +++ b/product_docs/docs/pgd/5.7/routing/index.mdx @@ -15,16 +15,16 @@ navigation: Managing application connections is an important part of high availability. PGD Proxy offers a way to manage connections to the EDB Postgres Distributed cluster. It acts as a proxy layer between the client application and the Postgres database. -* [PGD Proxy overview](/pgd/latest/routing/proxy) provides an overview of the PGD Proxy, its processes, and how it interacts with the EDB Postgres Distributed cluster. +* [PGD Proxy overview](/pgd/5.7/routing/proxy) provides an overview of the PGD Proxy, its processes, and how it interacts with the EDB Postgres Distributed cluster. -* [Installing the PGD Proxy service](/pgd/latest/routing/installing_proxy) covers installation of the PGD Proxy service on a host. +* [Installing the PGD Proxy service](/pgd/5.7/routing/installing_proxy) covers installation of the PGD Proxy service on a host. 
-* [Configuring PGD Proxy](/pgd/latest/routing/configuration) details the three levels (group, node, and proxy) of configuration on a cluster that control how the PGD Proxy service behaves. +* [Configuring PGD Proxy](/pgd/5.7/routing/configuration) details the three levels (group, node, and proxy) of configuration on a cluster that control how the PGD Proxy service behaves. -* [Administering PGD Proxy](/pgd/latest/routing/administering) shows how to switch the write leader and manage the PGD Proxy. +* [Administering PGD Proxy](/pgd/5.7/routing/administering) shows how to switch the write leader and manage the PGD Proxy. -* [Monitoring PGD Proxy](/pgd/latest/routing/monitoring) looks at how to monitor PGD Proxy through the cluster and at a service level. +* [Monitoring PGD Proxy](/pgd/5.7/routing/monitoring) looks at how to monitor PGD Proxy through the cluster and at a service level. -* [Read-only routing](/pgd/latest/routing/readonly) explains how the read-only routing feature in PGD Proxy enables read scalability. +* [Read-only routing](/pgd/5.7/routing/readonly) explains how the read-only routing feature in PGD Proxy enables read scalability. -* [Raft](/pgd/latest/routing/raft) provides an overview of the Raft consensus mechanism used to coordinate PGD Proxy. +* [Raft](/pgd/5.7/routing/raft) provides an overview of the Raft consensus mechanism used to coordinate PGD Proxy. diff --git a/product_docs/docs/pgd/5.7/routing/monitoring.mdx b/product_docs/docs/pgd/5.7/routing/monitoring.mdx index d8383e276be..75f5e508e1d 100644 --- a/product_docs/docs/pgd/5.7/routing/monitoring.mdx +++ b/product_docs/docs/pgd/5.7/routing/monitoring.mdx @@ -9,11 +9,11 @@ You can monitor proxies at the cluster and group level or at the process level. ### Using SQL -The current configuration of every group is visible in the [`bdr.node_group_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_group_routing_config_summary) view. +The current configuration of every group is visible in the [`bdr.node_group_routing_config_summary`](/pgd/5.7/reference/catalogs-internal#bdrnode_group_routing_config_summary) view. -The [`bdr.node_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_routing_config_summary) view shows current per-node routing configuration. +The [`bdr.node_routing_config_summary`](/pgd/5.7/reference/catalogs-internal#bdrnode_routing_config_summary) view shows current per-node routing configuration. -[`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) shows per-proxy configuration. +[`bdr.proxy_config_summary`](/pgd/5.7/reference/catalogs-internal#bdrproxy_config_summary) shows per-proxy configuration. ## Monitoring at the process level diff --git a/product_docs/docs/pgd/5.7/routing/proxy.mdx b/product_docs/docs/pgd/5.7/routing/proxy.mdx index ec697d56914..d107ede3153 100644 --- a/product_docs/docs/pgd/5.7/routing/proxy.mdx +++ b/product_docs/docs/pgd/5.7/routing/proxy.mdx @@ -68,7 +68,7 @@ Upon starting, PGD Proxy connects to one of the endpoints given in the local config file. It fetches: - Proxy options like listen address, listen port. - Routing details including the current write leader in default mode, read nodes in read-only mode, or both in any mode. -The endpoints given in the config file are used only at startup. After that, actual endpoints are taken from the PGD catalog's `route_dsn` field in [`bdr.node_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_routing_config_summary).
+The endpoints given in the config file are used only at startup. After that, actual endpoints are taken from the PGD catalog's `route_dsn` field in [`bdr.node_routing_config_summary`](/pgd/5.7/reference/catalogs-internal#bdrnode_routing_config_summary). PGD manages write leader election. PGD Proxy interacts with PGD to get write leader change events notifications on Postgres notify/listen channels and routes client traffic to the current write leader. PGD Proxy disconnects all existing client connections on write leader change or when write leader is unavailable. Write leader election is a Raft-backed activity and is subject to Raft leader availability. PGD Proxy closes the new client connections if the write leader is unavailable. @@ -76,7 +76,7 @@ PGD Proxy responds to write leader change events that can be categorized into tw Automatic transfer of write leadership from the current write leader node to a new node in the event of Postgres or operating system crash is called *failover*. PGD elects a new write leader when the current write leader goes down or becomes unresponsive. Once the new write leader is elected by PGD, PGD Proxy closes existing client connections to the old write leader and redirects new client connections to the newly elected write leader. -User-controlled, manual transfer of write leadership from the current write leader to a new target leader is called *switchover*. Switchover is triggered through the [PGD CLI group set-leader](/pgd/latest/cli/command_ref/group/set-leader/) command. The command is submitted to PGD, which attempts to elect the given target node as the new write leader. Similar to failover, PGD Proxy closes existing client connections and redirects new client connections to the newly elected write leader. This is useful during server maintenance, for example, if the current write leader node needs to be stopped for maintenance like a server update or OS patch update. +User-controlled, manual transfer of write leadership from the current write leader to a new target leader is called *switchover*. Switchover is triggered through the [PGD CLI group set-leader](/pgd/5.7/cli/command_ref/group/set-leader/) command. The command is submitted to PGD, which attempts to elect the given target node as the new write leader. Similar to failover, PGD Proxy closes existing client connections and redirects new client connections to the newly elected write leader. This is useful during server maintenance, for example, if the current write leader node needs to be stopped for maintenance like a server update or OS patch update. If the proxy is configured to support read-only routing, it can route read-only queries to a pool of nodes that aren't the write leader. The pool of nodes is maintained by the PGD cluster and proxies listen for changes to the pool. When the pool changes, the proxy updates its routing configuration and starts routing read-only queries to the new pool of nodes and disconnecting existing client connections to nodes that have left the pool. diff --git a/product_docs/docs/pgd/5.7/scaling.mdx b/product_docs/docs/pgd/5.7/scaling.mdx index 49bb625b580..8e64c41bda9 100644 --- a/product_docs/docs/pgd/5.7/scaling.mdx +++ b/product_docs/docs/pgd/5.7/scaling.mdx @@ -19,7 +19,7 @@ your search_path, you need to schema qualify the name of each function. 
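To make the schema-qualification point above concrete, here is a minimal sketch. It assumes only that the BDR extension is installed; `bdr.bdr_version()` is one of the reference functions this diff links to elsewhere:

```sql
-- If the bdr schema isn't in your search_path, an unqualified call fails to resolve:
SELECT bdr_version();      -- ERROR: function bdr_version() does not exist
-- Schema-qualifying the name always works:
SELECT bdr.bdr_version();
```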
## Auto creation of partitions -PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) +PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/5.7/reference/autopartition#bdrautopartition) function to create or alter the definition of automatic range partitioning for a table. If no definition exists, it's created. Otherwise, later executions will alter the definition. @@ -42,7 +42,7 @@ case, all partitions are managed locally on each node. Managing partitions locally is useful when the partitioned table isn't a replicated table. In that case, you might not need or want to have all partitions on all nodes. For example, the built-in -[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) +[`bdr.conflict_history`](/pgd/5.7/reference/catalogs-visible#bdrconflict_history) table isn't a replicated table. It's managed by AutoPartition locally. Each node creates partitions for this table locally and drops them once they're old enough. @@ -145,7 +145,7 @@ upper bound. ## Stopping automatic creation of partitions Use -[`bdr.drop_autopartition()`](/pgd/latest/reference/autopartition#bdrdrop_autopartition) +[`bdr.drop_autopartition()`](/pgd/5.7/reference/autopartition#bdrdrop_autopartition) to drop the autopartitioning rule for the given relation. All pending work items for the relation are deleted, and no new work items are created. @@ -155,7 +155,7 @@ Partition creation is an asynchronous process. AutoPartition provides a set of functions to wait for the partition to be created, locally or on all nodes. Use -[`bdr.autopartition_wait_for_partitions()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions) +[`bdr.autopartition_wait_for_partitions()`](/pgd/5.7/reference/autopartition#bdrautopartition_wait_for_partitions) to wait for the creation of partitions on the local node. The function takes the partitioned table name and a partition key column value and waits until the partition that holds that value is created. @@ -164,14 +164,14 @@ The function waits only for the partitions to be created locally. It doesn't guarantee that the partitions also exist on the remote nodes. To wait for the partition to be created on all PGD nodes, use the -[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes) +[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/5.7/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes) function. This function internally checks local as well as all remote nodes and waits until the partition is created everywhere. ## Finding a partition Use the -[`bdr.autopartition_find_partition()`](/pgd/latest/reference/autopartition#bdrautopartition_find_partition) +[`bdr.autopartition_find_partition()`](/pgd/5.7/reference/autopartition#bdrautopartition_find_partition) function to find the partition for the given partition key value. If a partition to hold that value doesn't exist, then the function returns NULL. Otherwise it returns the Oid of the partition. @@ -179,10 +179,10 @@ of the partition. ## Enabling or disabling autopartitioning Use -[`bdr.autopartition_enable()`](/pgd/latest/reference/autopartition#bdrautopartition_enable) +[`bdr.autopartition_enable()`](/pgd/5.7/reference/autopartition#bdrautopartition_enable) to enable autopartitioning on the given table. If autopartitioning is already enabled, then no action occurs. 
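As a sketch of the enable call just described (the table name `measurement` is hypothetical; the single `regclass` argument follows the reference page linked above):

```sql
-- Resume automatic partition management for a table previously
-- configured with bdr.autopartition():
SELECT bdr.autopartition_enable('measurement'::regclass);
```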
Similarly, use -[`bdr.autopartition_disable()`](/pgd/latest/reference/autopartition#bdrautopartition_disable) +[`bdr.autopartition_disable()`](/pgd/5.7/reference/autopartition#bdrautopartition_disable) to disable autopartitioning on the given table. ## Restrictions on EDB Postgres Advanced Server-native automatic partitioning diff --git a/product_docs/docs/pgd/5.7/security/pgd-predefined-roles.mdx b/product_docs/docs/pgd/5.7/security/pgd-predefined-roles.mdx index 97e74fae2a0..75709693b14 100644 --- a/product_docs/docs/pgd/5.7/security/pgd-predefined-roles.mdx +++ b/product_docs/docs/pgd/5.7/security/pgd-predefined-roles.mdx @@ -25,71 +25,71 @@ This role provides read access to most of the tables, views, and functions that `SELECT` privilege on: -- [`bdr.autopartition_partitions`](/pgd/latest/reference/catalogs-internal#bdrautopartition_partitions) -- [`bdr.autopartition_rules`](/pgd/latest/reference/catalogs-internal#bdrautopartition_rules) -- [`bdr.ddl_epoch`](/pgd/latest/reference/catalogs-internal#bdrddl_epoch) -- [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) -- [`bdr.global_consensus_journal_details`](/pgd/latest/reference/catalogs-visible#bdrglobal_consensus_journal_details) -- [`bdr.global_lock`](/pgd/latest/reference/catalogs-visible#bdrglobal_lock) -- [`bdr.global_locks`](/pgd/latest/reference/catalogs-visible#bdrglobal_locks) -- [`bdr.group_camo_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_camo_details) -- [`bdr.local_consensus_state`](/pgd/latest/reference/catalogs-visible#bdrlocal_consensus_state) -- [`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary) -- [`bdr.node`](/pgd/latest/reference/catalogs-visible#bdrnode) -- [`bdr.node_catchup_info`](/pgd/latest/reference/catalogs-visible#bdrnode_catchup_info) -- [`bdr.node_catchup_info_details`](/pgd/latest/reference/catalogs-visible#bdrnode_catchup_info_details) -- [`bdr.node_conflict_resolvers`](/pgd/latest/reference/catalogs-visible#bdrnode_conflict_resolvers) -- [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group) -- [`bdr.node_local_info`](/pgd/latest/reference/catalogs-visible#bdrnode_local_info) -- [`bdr.node_peer_progress`](/pgd/latest/reference/catalogs-visible#bdrnode_peer_progress) -- [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) -- [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) -- [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary) -- [`bdr.replication_sets`](/pgd/latest/reference/catalogs-visible#bdrreplication_sets) +- [`bdr.autopartition_partitions`](/pgd/5.7/reference/catalogs-internal#bdrautopartition_partitions) +- [`bdr.autopartition_rules`](/pgd/5.7/reference/catalogs-internal#bdrautopartition_rules) +- [`bdr.ddl_epoch`](/pgd/5.7/reference/catalogs-internal#bdrddl_epoch) +- [`bdr.ddl_replication`](/pgd/5.7/reference/pgd-settings#bdrddl_replication) +- [`bdr.global_consensus_journal_details`](/pgd/5.7/reference/catalogs-visible#bdrglobal_consensus_journal_details) +- [`bdr.global_lock`](/pgd/5.7/reference/catalogs-visible#bdrglobal_lock) +- [`bdr.global_locks`](/pgd/5.7/reference/catalogs-visible#bdrglobal_locks) +- [`bdr.group_camo_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_camo_details) +- [`bdr.local_consensus_state`](/pgd/5.7/reference/catalogs-visible#bdrlocal_consensus_state) +- [`bdr.local_node_summary`](/pgd/5.7/reference/catalogs-visible#bdrlocal_node_summary) +- 
[`bdr.node`](/pgd/5.7/reference/catalogs-visible#bdrnode) +- [`bdr.node_catchup_info`](/pgd/5.7/reference/catalogs-visible#bdrnode_catchup_info) +- [`bdr.node_catchup_info_details`](/pgd/5.7/reference/catalogs-visible#bdrnode_catchup_info_details) +- [`bdr.node_conflict_resolvers`](/pgd/5.7/reference/catalogs-visible#bdrnode_conflict_resolvers) +- [`bdr.node_group`](/pgd/5.7/reference/catalogs-visible#bdrnode_group) +- [`bdr.node_local_info`](/pgd/5.7/reference/catalogs-visible#bdrnode_local_info) +- [`bdr.node_peer_progress`](/pgd/5.7/reference/catalogs-visible#bdrnode_peer_progress) +- [`bdr.node_replication_rates`](/pgd/5.7/reference/catalogs-visible#bdrnode_replication_rates) +- [`bdr.node_slots`](/pgd/5.7/reference/catalogs-visible#bdrnode_slots) +- [`bdr.node_summary`](/pgd/5.7/reference/catalogs-visible#bdrnode_summary) +- [`bdr.replication_sets`](/pgd/5.7/reference/catalogs-visible#bdrreplication_sets) - `bdr.replication_status` -- [`bdr.sequences`](/pgd/latest/reference/catalogs-visible#bdrsequences) -- [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) -- [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) -- [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) _deprecated_ -- [`bdr.state_journal_details`](/pgd/latest/reference/catalogs-visible#) -- [`bdr.subscription`](/pgd/latest/reference/catalogs-visible#bdrsubscription) -- [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrsubscription_summary) -- [`bdr.tables`](/pgd/latest/reference/catalogs-visible#bdrtables) -- [`bdr.taskmgr_local_work_queue`](/pgd/latest/reference/catalogs-visible#bdrtaskmgr_local_work_queue) -- [`bdr.taskmgr_work_queue`](/pgd/latest/reference/catalogs-visible#bdrtaskmgr_work_queue) -- [`bdr.worker_errors`](/pgd/latest/reference/catalogs-visible#) _deprecated_ -- [`bdr.workers`](/pgd/latest/reference/catalogs-visible#bdrworkers) -- [`bdr.writers`](/pgd/latest/reference/catalogs-visible#bdrwriters) +- [`bdr.sequences`](/pgd/5.7/reference/catalogs-visible#bdrsequences) +- [`bdr.stat_activity`](/pgd/5.7/reference/catalogs-visible#bdrstat_activity) +- [`bdr.stat_relation`](/pgd/5.7/reference/catalogs-visible#bdrstat_relation) +- [`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible#bdrstat_subscription) _deprecated_ +- [`bdr.state_journal_details`](/pgd/5.7/reference/catalogs-visible#) +- [`bdr.subscription`](/pgd/5.7/reference/catalogs-visible#bdrsubscription) +- [`bdr.subscription_summary`](/pgd/5.7/reference/catalogs-visible#bdrsubscription_summary) +- [`bdr.tables`](/pgd/5.7/reference/catalogs-visible#bdrtables) +- [`bdr.taskmgr_local_work_queue`](/pgd/5.7/reference/catalogs-visible#bdrtaskmgr_local_work_queue) +- [`bdr.taskmgr_work_queue`](/pgd/5.7/reference/catalogs-visible#bdrtaskmgr_work_queue) +- [`bdr.worker_errors`](/pgd/5.7/reference/catalogs-visible#) _deprecated_ +- [`bdr.workers`](/pgd/5.7/reference/catalogs-visible#bdrworkers) +- [`bdr.writers`](/pgd/5.7/reference/catalogs-visible#bdrwriters) - `bdr.xid_peer_progress` EXECUTE privilege on: - `bdr.bdr_edition` _deprecated_ -- [`bdr.bdr_version`](/pgd/latest/reference/functions#bdrbdr_version) -- [`bdr.bdr_version_num`](/pgd/latest/reference/functions#bdrbdr_version_num) -- [`bdr.decode_message_payload`](/pgd/latest/reference/functions-internal#bdrdecode_message_payload) -- [`bdr.get_consensus_status`](/pgd/latest/reference/functions#bdrget_consensus_status) -- 
[`bdr.get_decoding_worker_stat`](/pgd/latest/reference/functions#bdrget_decoding_worker_stat) -- [`bdr.get_global_locks`](/pgd/latest/reference/functions-internal#bdrget_global_locks) -- [`bdr.get_min_required_replication_slots`](/pgd/latest/reference/functions-internal#bdrget_min_required_replication_slots) -- [`bdr.get_min_required_worker_processes`](/pgd/latest/reference/functions-internal#bdrget_min_required_worker_processes) -- [`bdr.get_raft_status`](/pgd/latest/reference/functions#bdrget_raft_status) -- [`bdr.get_relation_stats`](/pgd/latest/reference/functions#bdrget_relation_stats) -- [`bdr.get_slot_flush_timestamp`](/pgd/latest/reference/functions-internal#bdrget_slot_flush_timestamp) +- [`bdr.bdr_version`](/pgd/5.7/reference/functions#bdrbdr_version) +- [`bdr.bdr_version_num`](/pgd/5.7/reference/functions#bdrbdr_version_num) +- [`bdr.decode_message_payload`](/pgd/5.7/reference/functions-internal#bdrdecode_message_payload) +- [`bdr.get_consensus_status`](/pgd/5.7/reference/functions#bdrget_consensus_status) +- [`bdr.get_decoding_worker_stat`](/pgd/5.7/reference/functions#bdrget_decoding_worker_stat) +- [`bdr.get_global_locks`](/pgd/5.7/reference/functions-internal#bdrget_global_locks) +- [`bdr.get_min_required_replication_slots`](/pgd/5.7/reference/functions-internal#bdrget_min_required_replication_slots) +- [`bdr.get_min_required_worker_processes`](/pgd/5.7/reference/functions-internal#bdrget_min_required_worker_processes) +- [`bdr.get_raft_status`](/pgd/5.7/reference/functions#bdrget_raft_status) +- [`bdr.get_relation_stats`](/pgd/5.7/reference/functions#bdrget_relation_stats) +- [`bdr.get_slot_flush_timestamp`](/pgd/5.7/reference/functions-internal#bdrget_slot_flush_timestamp) - `bdr.get_sub_progress_timestamp` -- [`bdr.get_subscription_stats`](/pgd/latest/reference/functions#bdrget_subscription_stats) -- [`bdr.lag_control`](/pgd/latest/reference/functions#bdrlag_control) -- [`bdr.lag_history`](/pgd/latest/reference/functions-internal#bdrlag_history) -- [`bdr.node_catchup_state_name`](/pgd/latest/reference/functions-internal#bdrnode_catchup_state_name) -- [`bdr.node_kind_name`](/pgd/latest/reference/functions-internal#bdrnode_kind_name) -- [`bdr.peer_state_name`](/pgd/latest/reference/functions-internal#bdrpeer_state_name) -- [`bdr.pglogical_proto_version_ranges`](/pgd/latest/reference/functions-internal#bdrpglogical_proto_version_ranges) -- [`bdr.show_subscription_status`](/pgd/latest/reference/functions-internal#bdrshow_subscription_status) -- [`bdr.show_workers`](/pgd/latest/reference/functions-internal#bdrshow_workers) -- [`bdr.show_writers`](/pgd/latest/reference/functions-internal#bdrshow_writers) -- [`bdr.stat_get_activity`](/pgd/latest/reference/functions-internal#bdrstat_get_activity) -- [`bdr.wal_sender_stats`](/pgd/latest/reference/functions#bdrwal_sender_stats) -- [`bdr.worker_role_id_name`](/pgd/latest/reference/functions-internal#bdrworker_role_id_name) +- [`bdr.get_subscription_stats`](/pgd/5.7/reference/functions#bdrget_subscription_stats) +- [`bdr.lag_control`](/pgd/5.7/reference/functions#bdrlag_control) +- [`bdr.lag_history`](/pgd/5.7/reference/functions-internal#bdrlag_history) +- [`bdr.node_catchup_state_name`](/pgd/5.7/reference/functions-internal#bdrnode_catchup_state_name) +- [`bdr.node_kind_name`](/pgd/5.7/reference/functions-internal#bdrnode_kind_name) +- [`bdr.peer_state_name`](/pgd/5.7/reference/functions-internal#bdrpeer_state_name) +- [`bdr.pglogical_proto_version_ranges`](/pgd/5.7/reference/functions-internal#bdrpglogical_proto_version_ranges) +- 
[`bdr.show_subscription_status`](/pgd/5.7/reference/functions-internal#bdrshow_subscription_status) +- [`bdr.show_workers`](/pgd/5.7/reference/functions-internal#bdrshow_workers) +- [`bdr.show_writers`](/pgd/5.7/reference/functions-internal#bdrshow_writers) +- [`bdr.stat_get_activity`](/pgd/5.7/reference/functions-internal#bdrstat_get_activity) +- [`bdr.wal_sender_stats`](/pgd/5.7/reference/functions#bdrwal_sender_stats) +- [`bdr.worker_role_id_name`](/pgd/5.7/reference/functions-internal#bdrworker_role_id_name) ### bdr_monitor @@ -101,24 +101,24 @@ All privileges from [`bdr_read_all_stats`](#bdr_read_all_stats) plus the followi `SELECT` privilege on: -- [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) -- [`bdr.group_replslots_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_replslots_details) -- [`bdr.group_subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrgroup_subscription_summary) -- [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) +- [`bdr.group_raft_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_raft_details) +- [`bdr.group_replslots_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_replslots_details) +- [`bdr.group_subscription_summary`](/pgd/5.7/reference/catalogs-visible#bdrgroup_subscription_summary) +- [`bdr.group_versions_details`](/pgd/5.7/reference/catalogs-visible#bdrgroup_versions_details) - `bdr.raft_instances` `EXECUTE` privilege on: -- [`bdr.get_raft_instance_by_nodegroup`](/pgd/latest/reference/functions-internal#bdrget_raft_instance_by_nodegroup) -- [`bdr.monitor_camo_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_camo_on_all_nodes) -- [`bdr.monitor_group_raft`](/pgd/latest/reference/functions#bdrmonitor_group_raft) -- [`bdr.monitor_group_versions`](/pgd/latest/reference/functions#bdrmonitor_group_versions) -- [`bdr.monitor_local_replslots`](/pgd/latest/reference/functions#bdrmonitor_local_replslots) -- [`bdr.monitor_raft_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_raft_details_on_all_nodes) -- [`bdr.monitor_replslots_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_replslots_details_on_all_nodes) -- [`bdr.monitor_subscription_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_subscription_details_on_all_nodes) -- [`bdr.monitor_version_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_version_details_on_all_nodes) -- [`bdr.node_group_member_info`](/pgd/latest/reference/functions-internal#bdrnode_group_member_info) +- [`bdr.get_raft_instance_by_nodegroup`](/pgd/5.7/reference/functions-internal#bdrget_raft_instance_by_nodegroup) +- [`bdr.monitor_camo_on_all_nodes`](/pgd/5.7/reference/functions-internal#bdrmonitor_camo_on_all_nodes) +- [`bdr.monitor_group_raft`](/pgd/5.7/reference/functions#bdrmonitor_group_raft) +- [`bdr.monitor_group_versions`](/pgd/5.7/reference/functions#bdrmonitor_group_versions) +- [`bdr.monitor_local_replslots`](/pgd/5.7/reference/functions#bdrmonitor_local_replslots) +- [`bdr.monitor_raft_details_on_all_nodes`](/pgd/5.7/reference/functions-internal#bdrmonitor_raft_details_on_all_nodes) +- [`bdr.monitor_replslots_details_on_all_nodes`](/pgd/5.7/reference/functions-internal#bdrmonitor_replslots_details_on_all_nodes) +- [`bdr.monitor_subscription_details_on_all_nodes`](/pgd/5.7/reference/functions-internal#bdrmonitor_subscription_details_on_all_nodes) +- 
[`bdr.monitor_version_details_on_all_nodes`](/pgd/5.7/reference/functions-internal#bdrmonitor_version_details_on_all_nodes) +- [`bdr.node_group_member_info`](/pgd/5.7/reference/functions-internal#bdrnode_group_member_info) ### bdr_application @@ -130,28 +130,28 @@ This role is designed for applications that require access to PGD features, obje - All functions for column_timestamps datatypes - All functions for CRDT datatypes -- [`bdr.alter_sequence_set_kind`](/pgd/latest/reference/sequences#bdralter_sequence_set_kind) -- [`bdr.create_conflict_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) -- [`bdr.create_transform_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) -- [`bdr.drop_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrdrop_trigger) -- [`bdr.get_configured_camo_partner`](/pgd/latest/reference/functions#bdrget_configured_camo_partner) -- [`bdr.global_lock_table`](/pgd/latest/reference/functions#bdrglobal_lock_table) -- [`bdr.is_camo_partner_connected`](/pgd/latest/reference/functions#bdris_camo_partner_connected) -- [`bdr.is_camo_partner_ready`](/pgd/latest/reference/functions#bdris_camo_partner_ready) -- [`bdr.logical_transaction_status`](/pgd/latest/reference/functions#bdrlogical_transaction_status) +- [`bdr.alter_sequence_set_kind`](/pgd/5.7/reference/sequences#bdralter_sequence_set_kind) +- [`bdr.create_conflict_trigger`](/pgd/5.7/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) +- [`bdr.create_transform_trigger`](/pgd/5.7/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) +- [`bdr.drop_trigger`](/pgd/5.7/reference/streamtriggers/interfaces#bdrdrop_trigger) +- [`bdr.get_configured_camo_partner`](/pgd/5.7/reference/functions#bdrget_configured_camo_partner) +- [`bdr.global_lock_table`](/pgd/5.7/reference/functions#bdrglobal_lock_table) +- [`bdr.is_camo_partner_connected`](/pgd/5.7/reference/functions#bdris_camo_partner_connected) +- [`bdr.is_camo_partner_ready`](/pgd/5.7/reference/functions#bdris_camo_partner_ready) +- [`bdr.logical_transaction_status`](/pgd/5.7/reference/functions#bdrlogical_transaction_status) - `bdr.ri_fkey_trigger` -- [`bdr.seq_nextval`](/pgd/latest/reference/functions-internal#bdrseq_nextval) -- [`bdr.seq_currval`](/pgd/latest/reference/functions-internal#bdrseq_currval) -- [`bdr.seq_lastval`](/pgd/latest/reference/functions-internal#bdrseq_lastval) -- [`bdr.trigger_get_committs`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_committs) -- [`bdr.trigger_get_conflict_type`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_conflict_type) -- [`bdr.trigger_get_origin_node_id`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_origin_node_id) -- [`bdr.trigger_get_row`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_row) -- [`bdr.trigger_get_type`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_type) -- [`bdr.trigger_get_xid`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_xid) -- [`bdr.wait_for_camo_partner_queue`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue) -- [`bdr.wait_slot_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn) -- [`bdr.wait_node_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_node_confirm_lsn) +- [`bdr.seq_nextval`](/pgd/5.7/reference/functions-internal#bdrseq_nextval) +- [`bdr.seq_currval`](/pgd/5.7/reference/functions-internal#bdrseq_currval) +- 
[`bdr.seq_lastval`](/pgd/5.7/reference/functions-internal#bdrseq_lastval) +- [`bdr.trigger_get_committs`](/pgd/5.7/reference/streamtriggers/rowfunctions#bdrtrigger_get_committs) +- [`bdr.trigger_get_conflict_type`](/pgd/5.7/reference/streamtriggers/rowfunctions#bdrtrigger_get_conflict_type) +- [`bdr.trigger_get_origin_node_id`](/pgd/5.7/reference/streamtriggers/rowfunctions#bdrtrigger_get_origin_node_id) +- [`bdr.trigger_get_row`](/pgd/5.7/reference/streamtriggers/rowfunctions#bdrtrigger_get_row) +- [`bdr.trigger_get_type`](/pgd/5.7/reference/streamtriggers/rowfunctions#bdrtrigger_get_type) +- [`bdr.trigger_get_xid`](/pgd/5.7/reference/streamtriggers/rowfunctions#bdrtrigger_get_xid) +- [`bdr.wait_for_camo_partner_queue`](/pgd/5.7/reference/functions#bdrwait_for_camo_partner_queue) +- [`bdr.wait_slot_confirm_lsn`](/pgd/5.7/reference/functions#bdrwait_slot_confirm_lsn) +- [`bdr.wait_node_confirm_lsn`](/pgd/5.7/reference/functions#bdrwait_node_confirm_lsn) Many of these functions require additional privileges before you can use them. For example, you must be the table owner to successfully execute @@ -161,7 +161,7 @@ specific function. ### bdr_read_all_conflicts PGD logs conflicts into the -[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) +[`bdr.conflict_history`](/pgd/5.7/reference/catalogs-visible#bdrconflict_history) table. Conflicts are visible only to table owners, so no extra privileges are required for the owners to read the conflict history. @@ -170,4 +170,4 @@ you can optionally grant the role `bdr_read_all_conflicts` to that user. #### Privileges -An explicit policy is set on [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) that allows this role to read the `bdr.conflict_history` table. +An explicit policy is set on [`bdr.conflict_history`](/pgd/5.7/reference/catalogs-visible#bdrconflict_history) that allows this role to read the `bdr.conflict_history` table. diff --git a/product_docs/docs/pgd/5.7/security/role-management.mdx b/product_docs/docs/pgd/5.7/security/role-management.mdx index cc32a697155..d21ce6f7fec 100644 --- a/product_docs/docs/pgd/5.7/security/role-management.mdx +++ b/product_docs/docs/pgd/5.7/security/role-management.mdx @@ -12,12 +12,12 @@ Remember that a user in Postgres terms is simply a role with login privileges. If you do create a role or user in a non-PGD, unreplicated database, it's especially important that you do not make an object in the PGD-replicated database rely on that role. It will break the replication process, as PGD cannot replicate a role that is not in the PGD-replicated database. -You can disable this automatic replication behavior by turning off the [`bdr.role_replication`](https://www.enterprisedb.com/docs/pgd/latest/reference/pgd-settings/#bdrrole_replication) setting, but we don't recommend that. +You can disable this automatic replication behavior by turning off the [`bdr.role_replication`](https://www.enterprisedb.com/docs/pgd/5.7/reference/pgd-settings/#bdrrole_replication) setting, but we don't recommend that. ## Roles for new nodes -New PGD nodes that are added using [`bdr_init_physical`](https://www.enterprisedb.com/docs/pgd/latest/reference/nodes/#bdr_init_physical) will automatically replicate the roles from other nodes of the PGD cluster. +New PGD nodes that are added using [`bdr_init_physical`](https://www.enterprisedb.com/docs/pgd/5.7/reference/nodes/#bdr_init_physical) will automatically replicate the roles from other nodes of the PGD cluster. 
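Tying the predefined-roles and role-replication material above together, a minimal sketch of provisioning a monitoring user. The role name and password are hypothetical; `bdr_monitor` and `bdr_read_all_conflicts` are the predefined roles described earlier:

```sql
-- Run this in the PGD-replicated database so the role itself replicates
-- (see the bdr.role_replication setting above):
CREATE ROLE metrics_reader LOGIN PASSWORD 'change-me';
GRANT bdr_monitor TO metrics_reader;
-- Optionally let it read all rows in bdr.conflict_history:
GRANT bdr_read_all_conflicts TO metrics_reader;
```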
If a PGD node is joined to a PGD group manually, without using `bdr_init_physical`, existing roles aren't copied to the newly joined node. This is intentional behavior to ensure that access isn't accidentally granted. @@ -37,7 +37,7 @@ When joining a new node, the “No unreplicated roles” rule also applies. If a ## Connections and roles When allocating a new PGD node, the user supplied in the DSN for the `local_dsn` -argument of [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) and the `join_target_dsn` of [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) +argument of [`bdr.create_node`](/pgd/5.7/reference/nodes-management-interfaces#bdrcreate_node) and the `join_target_dsn` of [`bdr.join_node_group`](/pgd/5.7/reference/nodes-management-interfaces#bdrjoin_node_group) are used frequently to refer to, create, and manage database objects. PGD is carefully written to prevent privilege escalation attacks even when using diff --git a/product_docs/docs/pgd/5.7/security/roles.mdx b/product_docs/docs/pgd/5.7/security/roles.mdx index f058d2567bb..9360e4d3b7a 100644 --- a/product_docs/docs/pgd/5.7/security/roles.mdx +++ b/product_docs/docs/pgd/5.7/security/roles.mdx @@ -12,7 +12,7 @@ PGD are split across the following predefined roles. | [**bdr_read_all_stats**](pgd-predefined-roles/#bdr_read_all_stats) | The role having read-only access to the tables, views, and functions, sufficient to understand the state of PGD. | | [**bdr_monitor**](pgd-predefined-roles/#bdr_monitor) | Includes the privileges of bdr_read_all_stats, with some extra privileges for monitoring. | | [**bdr_application**](pgd-predefined-roles/#bdr_application) | The minimal privileges required by applications running PGD. | - | [**bdr_read_all_conflicts**](pgd-predefined-roles/#bdr_read_all_conflicts) | Can view all conflicts in [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history). | + | [**bdr_read_all_conflicts**](pgd-predefined-roles/#bdr_read_all_conflicts) | Can view all conflicts in [`bdr.conflict_history`](/pgd/5.7/reference/catalogs-visible#bdrconflict_history). | These roles are named to be analogous to PostgreSQL's `pg_` [predefined @@ -25,9 +25,9 @@ role has. Managing PGD doesn't require that administrators have access to user data. Arrangements for securing information about conflicts are discussed in -[Logging conflicts to a table](/pgd/latest/reference/conflict_functions#logging-conflicts-to-a-table). +[Logging conflicts to a table](/pgd/5.7/reference/conflict_functions#logging-conflicts-to-a-table). -You can monitor conflicts using the [`bdr.conflict_history_summary`](/pgd/latest/reference/catalogs-visible#bdrconflict_history_summary) view. +You can monitor conflicts using the [`bdr.conflict_history_summary`](/pgd/5.7/reference/catalogs-visible#bdrconflict_history_summary) view. !!! Note The BDR extension and superuser access The one exception to the rule of not needing superuser access is in the diff --git a/product_docs/docs/pgd/5.7/sequences.mdx b/product_docs/docs/pgd/5.7/sequences.mdx index 09798bde8ee..1df5a316b3a 100644 --- a/product_docs/docs/pgd/5.7/sequences.mdx +++ b/product_docs/docs/pgd/5.7/sequences.mdx @@ -66,7 +66,7 @@ function. This function takes a standard PostgreSQL sequence and marks it as a PGD global sequence. It can also convert the sequence back to the standard PostgreSQL sequence. 
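As a sketch of the conversion just described, assuming the function in the elided hunk context is `bdr.alter_sequence_set_kind()` from the sequences chapter and that `public.orders_id_seq` is a hypothetical existing sequence:

```sql
-- Mark an existing sequence as a globally allocated PGD sequence:
SELECT bdr.alter_sequence_set_kind('public.orders_id_seq'::regclass, 'galloc');
-- Convert it back to a standard, node-local PostgreSQL sequence:
SELECT bdr.alter_sequence_set_kind('public.orders_id_seq'::regclass, 'local');
```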
-PGD also provides the configuration variable [`bdr.default_sequence_kind`](/pgd/latest/reference/pgd-settings/#bdrdefault_sequence_kind). This variable +PGD also provides the configuration variable [`bdr.default_sequence_kind`](/pgd/5.7/reference/pgd-settings/#bdrdefault_sequence_kind). This variable determines the kind of sequence to create when the `CREATE SEQUENCE` command is executed or when a `serial`, `bigserial`, or `GENERATED BY DEFAULT AS IDENTITY` column is created. Valid settings are: @@ -84,7 +84,7 @@ command is executed or when a `serial`, `bigserial`, or sequences (that is, `bigserial`) and `galloc` sequence for `int4` (that is, `serial`) and `int2` sequences. -The [`bdr.sequences`](/pgd/latest/reference/catalogs-visible/#bdrsequences) view shows information about individual sequence kinds. +The [`bdr.sequences`](/pgd/5.7/reference/catalogs-visible/#bdrsequences) view shows information about individual sequence kinds. `currval()` and `lastval()` work correctly for all types of global sequence. @@ -220,7 +220,7 @@ to or more than the above ranges assigned for each sequence datatype. `setval()` doesn't reset the global state for `galloc` sequences. Don't use it. A few limitations apply to `galloc` sequences. PGD tracks `galloc` sequences in a -special PGD catalog [bdr.sequence_alloc](/pgd/latest/reference/catalogs-visible/#bdrsequence_alloc). This +special PGD catalog [bdr.sequence_alloc](/pgd/5.7/reference/catalogs-visible/#bdrsequence_alloc). This catalog is required to track the currently allocated chunks for the `galloc` sequences. The sequence name and namespace is stored in this catalog. The sequence chunk allocation is managed by Raft, whereas any changes to the diff --git a/product_docs/docs/pgd/5.7/testingandtuning.mdx b/product_docs/docs/pgd/5.7/testingandtuning.mdx index e1965318d0d..e25b6896f1d 100644 --- a/product_docs/docs/pgd/5.7/testingandtuning.mdx +++ b/product_docs/docs/pgd/5.7/testingandtuning.mdx @@ -33,7 +33,7 @@ The Postgres benchmarking application [`pgbench`](https://www.postgresql.org/docs/current/pgbench.html) was extended in PGD 5.0 in the form of a new application: pgd_bench. -[pgd_bench](/pgd/latest/reference/testingandtuning#pgd_bench) is a regular command-line utility that's added to the PostgreSQL bin +[pgd_bench](/pgd/5.7/reference/testingandtuning#pgd_bench) is a regular command-line utility that's added to the PostgreSQL bin directory. The utility is based on the PostgreSQL pgbench tool but supports benchmarking CAMO transactions and PGD-specific workloads. diff --git a/product_docs/docs/pgd/5.7/transaction-streaming.mdx b/product_docs/docs/pgd/5.7/transaction-streaming.mdx index 8e6e53288fa..380db22b191 100644 --- a/product_docs/docs/pgd/5.7/transaction-streaming.mdx +++ b/product_docs/docs/pgd/5.7/transaction-streaming.mdx @@ -56,8 +56,8 @@ processes on each subscriber. 
This capability is leveraged to provide the follow Configure transaction streaming in two locations: -- At node level, using the GUC [`bdr.default_streaming_mode`](/pgd/latest/reference/pgd-settings/#transaction-streaming) -- At group level, using the function [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) +- At node level, using the GUC [`bdr.default_streaming_mode`](/pgd/5.7/reference/pgd-settings/#transaction-streaming) +- At group level, using the function [`bdr.alter_node_group_option()`](/pgd/5.7/reference/nodes-management-interfaces/#bdralter_node_group_option) ### Node configuration using bdr.default_streaming_mode @@ -81,7 +81,7 @@ provided can also depend on the group configuration setting. See ### Group configuration using bdr.alter_node_group_option() -You can use the parameter `streaming_mode` in the function [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) +You can use the parameter `streaming_mode` in the function [`bdr.alter_node_group_option()`](/pgd/5.7/reference/nodes-management-interfaces/#bdralter_node_group_option) to set the group transaction streaming configuration. Permitted values are: @@ -95,7 +95,7 @@ Permitted values are: The default value is `default`. The value of the current setting is contained in the column `node_group_streaming_mode` -from the view [`bdr.node_group`](/pgd/latest/reference/catalogs-visible/#bdrnode_group). The value returned is +from the view [`bdr.node_group`](/pgd/5.7/reference/catalogs-visible/#bdrnode_group). The value returned is a single char type, and the possible values are `D` (`default`), `W` (`writer`), `F` (`file`), `A` (`auto`), and `O` (`off`). @@ -151,7 +151,7 @@ and can be safely handled by the writer. ## Monitoring -You can monitor the use of transaction streaming using the [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible/#bdrstat_subscription) +You can monitor the use of transaction streaming using the [`bdr.stat_subscription`](/pgd/5.7/reference/catalogs-visible/#bdrstat_subscription) function on the subscriber node. - `nstream_writer` — Number of transactions streamed to a writer. diff --git a/product_docs/docs/pgd/5.7/upgrades/compatibility.mdx b/product_docs/docs/pgd/5.7/upgrades/compatibility.mdx index ac30086652a..41a487040e6 100644 --- a/product_docs/docs/pgd/5.7/upgrades/compatibility.mdx +++ b/product_docs/docs/pgd/5.7/upgrades/compatibility.mdx @@ -66,6 +66,6 @@ Similarly to CAMO and Eager, Lag Control configuration was also moved to - `bdr.network_monitoring` view was removed along with underlying tables and functions. - Many catalogs were added and some have new columns, as described in - [Catalogs](/pgd/latest/reference/catalogs-visible/). These + [Catalogs](/pgd/5.7/reference/catalogs-visible/). These aren't breaking changes strictly speaking, but we recommend reviewing them when upgrading. 
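Looking back at the transaction-streaming monitoring hunk above, a hedged sketch of the check you'd run on a subscriber node. This assumes the view's `sub_name` column and the documented streaming counters `nstream_writer` and `nstream_file`:

```sql
-- How much apply traffic is being streamed, per subscription:
SELECT sub_name, nstream_writer, nstream_file
FROM bdr.stat_subscription;
```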
diff --git a/product_docs/docs/pgd/5.7/upgrades/upgrading_major_rolling.mdx b/product_docs/docs/pgd/5.7/upgrades/upgrading_major_rolling.mdx index b3ad2a4a976..0585b5cbef4 100644 --- a/product_docs/docs/pgd/5.7/upgrades/upgrading_major_rolling.mdx +++ b/product_docs/docs/pgd/5.7/upgrades/upgrading_major_rolling.mdx @@ -3,8 +3,8 @@ title: Performing a Postgres major version rolling upgrade on a PGD cluster navTitle: Rolling Postgres major version upgrade deepToC: true redirects: - - /pgd/latest/install-admin/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/install-admin/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg + - /pgd/5.7/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg --- ## Upgrading Postgres major versions @@ -167,7 +167,7 @@ The worked example that follows shows upgrading the Postgres major version from ## Worked example -This worked example starts with a TPA-managed PGD cluster deployed using the [AWS quick start](/pgd/latest/quickstart/quick_start_aws/), which create Debian OS nodes. The cluster has three nodes: kaboom, kaolin, and kaftan, all running Postgres 16. +This worked example starts with a TPA-managed PGD cluster deployed using the [AWS quick start](/pgd/5.7/quickstart/quick_start_aws/), which creates Debian OS nodes. The cluster has three nodes: kaboom, kaolin, and kaftan, all running Postgres 16. This example starts with the node named `kaboom`. diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/node/upgrade.mdx b/product_docs/docs/pgd/5.8/cli/command_ref/node/upgrade.mdx deleted file mode 100644 index a55f01586e0..00000000000 --- a/product_docs/docs/pgd/5.8/cli/command_ref/node/upgrade.mdx +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: pgd node upgrade -navTitle: Upgrade -deepToC: true ---- - -## Synopsis - -The `pgd node upgrade` command is used to upgrade the PostgreSQL version on a node in the EDB Postgres Distributed cluster. - -## Syntax - -```plaintext -pgd node upgrade [OPTIONS] --old-bindir --new-bindir --old-datadir --new-datadir --database --username -``` - -Where `` is the name of the node which you want to upgrade and ``, ``, ``, ``, ``, and `` are the old and new Postgres instance bin directories, old and new Postgres instance data directories, database name, and cluster's install user name respectively.
- -## Options - -The following table lists the options available for the `pgd node upgrade` command: - -| Short | Long | Default | Env | Description | -|-------|---------------|---------------------|-------------|---------------------------------------------------------------------------| -| -b | --old-bindir | | PGBINOLD | Old Postgres instance bin directory | -| -B | --new-bindir | | PGBINNEW | New Postgres instance bin directory | -| -d | --old-datadir | | PGDATAOLD | Old Postgres instance data directory | -| -D | --new-datadir | | PGDATANEW | New Postgres instance data directory | -| | --database | | PGDATABASE | PGD database name | -| -p | --old-port | 5432 | PGPORTOLD | Old Postgres instance port | -| | --socketdir | /var/run/postgresql | PGSOCKETDIR | Directory to use for postmaster sockets during upgrade | -| | --new-socketdir | /var/run/postgresql | PGSOCKETDIRNEW | Directory to use for postmaster sockets in the new cluster | -| | --check | | | Specify to only perform checks and not modify clusters | -| -j | --jobs | 1 | | Number of simultaneous processes or threads to use | -| -k | --link | | | Use hard links instead of copying files to the new cluster | -| | --old-options | | | Option to pass to old postgres command, multiple invocations are appended | -| | --new-options | | | Option to pass to new postgres command, multiple invocations are appended | -| -N | --no-sync | | | Don't wait for all files in the upgraded cluster to be written to disk | -| -P | --new-port | 5432 | PGPORTNEW | New Postgres instance port number | -| -r | --retain | | | Retain SQL and log files even after successful completion | -| -U | --username | | PGUSER | Cluster's install user name | -| | --clone | | | Use efficient file cloning | - -See also [Global Options](/pgd/latest/cli/command_ref/#global-options). - -## Examples - -In the following examples, "kaolin" is the name of the node to upgrade, from the Quickstart democluster. 
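-
-### Check an upgrade without modifying the cluster
-
-The `--check` option listed in the options table performs the upgrade checks only, without modifying the cluster. A sketch of a dry run, reusing the illustrative directories from the examples below:
-
-```shell
-pgd node kaolin upgrade --old-bindir /usr/pgsql-16/bin --new-bindir /usr/pgsql-17/bin --old-datadir /var/lib/pgsql/16/data --new-datadir /var/lib/pgsql/17/data --database bdrdb --username enterprisedb --check
-```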
- -### Upgrade the PostgreSQL version on a node - -```shell -pgd node kaolin upgrade --old-bindir /usr/pgsql-16/bin --new-bindir /usr/pgsql-17/bin --old-datadir /var/lib/pgsql/16/data --new-datadir /var/lib/pgsql/17/data --database bdrdb --username enterprisedb -``` - -### Upgrade the PostgreSQL version on a node with hard links - -```shell -pgd node kaolin upgrade --old-bindir /usr/pgsql-16/bin --new-bindir /usr/pgsql-17/bin --old-datadir /var/lib/pgsql/16/data --new-datadir /var/lib/pgsql/17/data --database bdrdb --username enterprisedb --link -``` - -### Upgrade the PostgreSQL version on a node with efficient file cloning - -```shell -pgd node kaolin upgrade --old-bindir /usr/pgsql-16/bin --new-bindir /usr/pgsql-17/bin --old-datadir /var/lib/pgsql/16/data --new-datadir /var/lib/pgsql/17/data --database bdrdb --username enterprisedb --clone -``` - -### Upgrade the PostgreSQL version on a node with a different port number - -```shell -pgd node kaolin upgrade --old-bindir /usr/pgsql-16/bin --new-bindir /usr/pgsql-17/bin --old-datadir /var/lib/pgsql/16/data --new-datadir /var/lib/pgsql/17/data --database bdrdb --username enterprisedb --old-port 5433 --new-port 5434 -``` diff --git a/product_docs/docs/pgd/5.8/cli/discover_connections.mdx b/product_docs/docs/pgd/5.8/cli/discover_connections.mdx deleted file mode 100644 index e004a9d10df..00000000000 --- a/product_docs/docs/pgd/5.8/cli/discover_connections.mdx +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: "Discovering connection strings" -navTitle: "Discovering connection strings" -indexdepth: 2 -deepToC: true -description: "How to obtain the correct connection strings for your PGD-powered deployment." ---- - -You can install PGD CLI on any system that can connect to the PGD cluster. To use PGD CLI, you need a user with PGD superuser privileges or equivalent. The PGD user with superuser privileges is the [bdr_superuser role](../security). An example of an equivalent user is edb_admin on an EDB Cloud Service distributed high-availability cluster. - -## PGD CLI and database connection strings - -You might not need a database connection string. For example, when Trusted Postgres Architect installs the PGD CLI on a system, it also configures the connection to the PGD cluster. This means that PGD CLI can connect to the cluster when run. - -## Getting your database connection string - -Because of the range of different configurations that PGD supports, every deployment method has a different way of deriving a connection string for it. Generally, you can obtain the required information from the configuration of your deployment. You can then assemble that information into connection strings. - -### For a TPA-deployed PGD cluster - -Because TPA is so flexible, you have to derive your connection string from your cluster configuration file (`config.yml`). - -- You need the name or IP address of a host with the role pgd-proxy listed for it. This host has a proxy you can connect to. Usually the proxy listens on port 6432. (Check the setting for `default_pgd_proxy_options` and `listen_port` in the config to confirm.) -- The default database name is `bdrdb`. (Check the setting `bdr_database` in the config to confirm.) -- The default PGD superuser is enterprisedb for EDB Postgres Advanced Server and postgres for PostgreSQL and EDB Postgres Extended Server. 
-
-You can then assemble a connection string based on that information:
-
-```
-"host=<host> port=<port> dbname=<dbname> user=<user> sslmode=require"
-```
-
-To illustrate this, here are some excerpts of a `config.yml` file for a cluster:
-
-```yaml
-...
-cluster_vars:
-  ...
-  bdr_database: bdrdb
-  ...
-  default_pgd_proxy_options:
-    listen_port: 6432
-  ...
-
-instances:
-- Name: kaboom
-  backup: kapok
-  location: dc1
-  node: 1
-  role:
-  - bdr
-  - pgd-proxy
-  networks:
-  - ipv4_address: 192.168.100.2
-    name: tpanet
-...
-```
-
-The connection string for this cluster is:
-
-```
-"host=192.168.100.2 port=6432 dbname=bdrdb user=enterprisedb sslmode=require"
-```
-
-!!! Note Host name versus IP address
-The example uses the IP address because the configuration is from a Docker TPA install with no name resolution available. Generally, you can use the host name as configured.
-!!!
-
-### For an EDB Cloud Service distributed high-availability cluster
-
-1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view.
-1. In the filter, set **Cluster Type** to **Distributed High Availability** to show only clusters that work with PGD CLI.
-1. Select your cluster.
-1. In the view of your cluster, select the **Connect** tab.
-1. Copy the read/write URI from the connection info. This is your connection string.
-
-### For a cluster deployed with EDB PGD for Kubernetes
-
-As with TPA, EDB PGD for Kubernetes is very flexible, and there are multiple ways to obtain a connection string. It depends, in large part, on the configuration of the deployment's [services](/postgres_distributed_for_kubernetes/latest/connectivity/#services):
-
-- If you use the Node Service Template, direct connectivity to each node and proxy service is available.
-- If you use the Group Service Template, there's a gateway service to each group.
-- If you use the Proxy Service Template, a single proxy provides an entry point to the cluster for all applications.
-
-Consult your configuration file to determine this information.
-
-Establish a host name or IP address, port, database name, and username. The default database name is `bdrdb`. The default username is enterprisedb for EDB Postgres Advanced Server and postgres for PostgreSQL and EDB Postgres Extended Server.
-
-You can then assemble a connection string based on that information:
-
-```
-"host=<host> port=<port> dbname=<dbname> user=<user>"
-```
-
-If the deployment's configuration requires it, add `sslmode=<sslmode>`.
diff --git a/product_docs/docs/pgd/5.8/cli/installing/tpa.mdx b/product_docs/docs/pgd/5.8/cli/installing/tpa.mdx
deleted file mode 100644
index bcf865e2423..00000000000
--- a/product_docs/docs/pgd/5.8/cli/installing/tpa.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: "Installing PGD CLI with TPA"
-navTitle: "TPA"
-description: "Installing PGD CLI with Trusted Postgres Architect"
----
-
-By default, Trusted Postgres Architect installs and configures PGD CLI on each PGD node.
-
-If you want to install PGD CLI on any non-PGD instance in the cluster, attach the pgdcli role to that instance in Trusted Postgres Architect's configuration file before deploying, as sketched below.
-
-See [Trusted Postgres Architect](/tpa/latest/) for more information.
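-
-A minimal sketch of what that might look like in `config.yml` (the instance name and node number here are hypothetical):
-
-```yaml
-instances:
-- Name: monitor
-  node: 4
-  role:
-  - pgdcli
-```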
diff --git a/product_docs/docs/pgd/5.8/commit-scopes/limitations.mdx b/product_docs/docs/pgd/5.8/commit-scopes/limitations.mdx
deleted file mode 100644
index a67b71db097..00000000000
--- a/product_docs/docs/pgd/5.8/commit-scopes/limitations.mdx
+++ /dev/null
@@ -1,106 +0,0 @@
----
-title: Limitations
----
-
-The following limitations apply to the use of commit scopes and the various durability options they enable.
-
-## General limitations
-
-- [Legacy synchronous replication](legacy-sync) uses a mechanism for transaction confirmation
-  different from the one used by CAMO, Eager, and Group Commit. The two aren't
-  compatible, so don't use them together. Whenever you use Group Commit, CAMO,
-  or Eager, make sure none of the PGD nodes are configured in
-  `synchronous_standby_names`.
-
-- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE
-  TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html))
-  can't be used with CAMO, Group Commit, or Eager because those
-  features use two-phase commit underneath.
-
-## Group Commit
-
-[Group Commit](group-commit) enables configurable synchronous commits over
-nodes in a group. If you use this feature, take the following limitations into account:
-
-- Not all DDL can run when you use Group Commit. If you use unsupported DDL, a warning is logged, and the transaction's commit scope is set to local. The only supported DDL operations are:
-   - Nonconcurrent `CREATE INDEX`
-   - Nonconcurrent `DROP INDEX`
-   - Nonconcurrent `REINDEX` of an individual table or index
-   - `CLUSTER` (of a single relation or index only)
-   - `ANALYZE`
-   - `TRUNCATE`
-
-
-- Explicit two-phase commit isn't supported by Group Commit as it already uses two-phase commit.
-
-- Combining different commit decision options in the same transaction or
-  combining different conflict resolution options in the same transaction isn't
-  supported.
-
-- Currently, Raft commit decisions are extremely slow, producing very low TPS.
-  We recommend using them only with the `eager` conflict resolution setting
-  to get the Eager All-Node Replication behavior of PGD 4 and older.
-
-## Eager
-
-[Eager](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit.
-
-Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It
-also doesn't allow `LISTEN` or `UNLISTEN`.
-
-## CAMO
-
-[Commit At Most Once](camo) (CAMO) is a feature that aims to prevent
-applications from committing more than once. If you use this feature, take
-these limitations into account when planning:
-
-- CAMO is designed to query the results of a recently failed COMMIT on the
-origin node. In case of disconnection, the application must request the
-transaction status from the CAMO partner. Ensure that you have as little delay
-as possible after the failure before requesting the status. Applications must
-not rely on CAMO decisions being stored for longer than 15 minutes.
-
-- If the application forgets the global identifier assigned, for example,
-as a result of a restart, there's no easy way to recover
-it. Therefore, we recommend that applications wait for outstanding
-transactions to end before shutting down.
-
-- For the client to apply proper checks, a transaction protected by CAMO
-can't be a single statement with implicit transaction control. You also can't
-use CAMO with a transaction-controlling procedure or
-in a `DO` block that tries to start or end transactions.
-
-- CAMO resolves commit status but doesn't resolve pending
-notifications on commit. CAMO doesn't
-allow the `NOTIFY` SQL command or the `pg_notify()` function.
-It also doesn't allow `LISTEN` or `UNLISTEN`.
-
-- When replaying changes, CAMO transactions might detect conflicts just
-the same as other transactions. If timestamp-conflict detection is used,
-the CAMO transaction uses the timestamp of the prepare-on-the-origin
-node, which is before the transaction becomes visible on the origin
-node itself.
-
-- CAMO isn't currently compatible with transaction streaming.
-Be sure to disable transaction streaming when planning to use
-CAMO. You can configure this option globally or in the PGD node group. See
-[Transaction streaming configuration](../transaction-streaming#configuration).
-
-- CAMO isn't currently compatible with the decoding worker.
-Be sure not to enable the decoding worker when planning to use
-CAMO. You can configure this option in the PGD node group. See
-[Decoding worker disabling](../decoding_worker#enabling).
-
-- Not all DDL can run when you use CAMO. If you use unsupported DDL, a warning is logged and the transaction's commit scope is set to local only. The only supported DDL operations are:
-   - Nonconcurrent `CREATE INDEX`
-   - Nonconcurrent `DROP INDEX`
-   - Nonconcurrent `REINDEX` of an individual table or index
-   - `CLUSTER` (of a single relation or index only)
-   - `ANALYZE`
-   - `TRUNCATE`
-
-
-- Explicit two-phase commit isn't supported by CAMO as it already uses two-phase commit.
-
-- You can combine only CAMO transactions with the `DEGRADE TO` clause for
-switching to asynchronous operation in case of lowered availability.
diff --git a/product_docs/docs/pgd/5.8/compatibility.mdx b/product_docs/docs/pgd/5.8/compatibility.mdx
deleted file mode 100644
index d8e9893d57f..00000000000
--- a/product_docs/docs/pgd/5.8/compatibility.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: PGD compatibility
-navTitle: Compatibility
-description: Compatibility of EDB Postgres Distributed with different versions of PostgreSQL
-deepToC: true
----
-
-## PGD compatibility with PostgreSQL versions
-
-The following table shows the major versions of PostgreSQL and each version of EDB Postgres Distributed (PGD) they are compatible with.
-
-| Postgres Version | PGD 5              | PGD 4        |
-|------------------|--------------------|--------------|
-| 17.3             | [5.7+](/pgd/5.7)   |              |
-| 17               | [5.6.1+](/pgd/5.6) |              |
-| 16               | [5.3+](/pgd/5.6/)  |              |
-| 15               | [5](/pgd/5.6/)     |              |
-| 14               | [5](/pgd/5.6/)     | [4](/pgd/4/) |
-| 13               | [5](/pgd/5.6/)     | [4](/pgd/4/) |
-| 12               | [5](/pgd/5.6/)     | [4](/pgd/4/) |
-
-EDB recommends that you use the latest minor version of any Postgres major version with a supported PGD.
-
-## PGD compatibility with operating systems and architectures
-
-The following tables show the versions of EDB Postgres Distributed and their compatibility with various operating systems and architectures.
-
-### Linux x86_64 (amd64)
-
-| Operating System                   | PGD 5 | PGD 4 |
-|------------------------------------|-------|-------|
-| RHEL 8/9                           | Yes   | Yes   |
-| Oracle Linux 8/9                   | Yes   | Yes   |
-| Rocky Linux/AlmaLinux              | Yes   | Yes   |
-| SUSE Linux Enterprise Server 15SP5 | Yes   | Yes   |
-| Ubuntu 20.04/22.04                 | Yes   | Yes   |
-| Ubuntu 24.04                       | Yes¹  | No    |
-| Debian 11/12                       | Yes   | Yes   |
-
-¹ From PGD 5.7 onwards
-
-### Linux ppc64le
-
-| Operating System | PGD 5 | PGD 4 |
-|------------------|-------|-------|
-| RHEL 8/9         | Yes   | No    |
-
-
-### Linux arm64/aarch64
-
-| Operating System | PGD 5¹ | PGD 4 |
-|------------------|--------|-------|
-| Debian 12        | Yes    | No    |
-| RHEL 9²          | Yes    | No    |
-
-¹ From PGD 5.6.1 onwards
-
-² Postgres 12 is not supported on RHEL 9 on arm64/aarch64
diff --git a/product_docs/docs/pgd/5.8/data_migration/edbloader.mdx b/product_docs/docs/pgd/5.8/data_migration/edbloader.mdx
deleted file mode 100644
index 9b6b5d6cf96..00000000000
--- a/product_docs/docs/pgd/5.8/data_migration/edbloader.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: EDB*Loader and PGD
-navTitle: EDB*Loader
-description: EDB*Loader is a high-speed data loading utility for EDB Postgres Advanced Server.
-deepToC: true
----
-
-[EDB\*Loader](/epas/latest/database_administration/02_edb_loader/) is a high-speed data loading utility for EDB Postgres Advanced Server. It provides an interface compatible with Oracle databases and is designed to load large volumes of data into EDB Postgres Advanced Server quickly and efficiently.
-
-The EDB\*Loader command line utility loads data from an input source into one or more tables using a subset of the parameters offered by Oracle SQL\*Loader. The source can be a flat file, a pipe, or another program.
-
-## Use with PGD
-
-As EDB\*Loader is a utility for EDB Postgres Advanced Server, it's available for EDB Postgres Distributed when EDB Postgres Advanced Server is the database in use for PGD data nodes. PGD deployments can use EDB\*Loader in the same way as it's used on EDB Postgres Advanced Server. See the [EDB\*Loader documentation](/epas/latest/database_administration/02_edb_loader/) for more information on how to use EDB\*Loader with EDB Postgres Advanced Server.
-
-### Replication and EDB\*Loader
-
-As with EDB Postgres Advanced Server, EDB\*Loader works with PGD in a replication environment. You cannot use the [direct path load method](/epas/latest/database_administration/02_edb_loader/invoking_edb_loader/direct_path_load/) because it skips use of the WAL, upon which all replication relies. That means that only the node connected to by EDB\*Loader gets the data that EDB\*Loader is loading, and no data replicates to the other nodes.
-
-With PGD, you can make use of EDB\*Loader's direct path load method by running it independently on each node. You can perform this either on one node at a time or in parallel on all nodes, depending on the use case. When using the direct path load method on multiple nodes, it's important to ensure there are no other writes happening to the table concurrently, as this can result in inconsistencies.
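-
-A sketch of a per-node direct path load (the control file, database, user, and password here are illustrative, not prescriptive):
-
-```shell
-# Run the same load on each PGD data node, one node at a time or in parallel,
-# because direct path loads bypass the WAL and therefore don't replicate.
-edbldr -d bdrdb userid=enterprisedb/secret control=emp.ctl direct=true
-```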
-
-
-
diff --git a/product_docs/docs/pgd/5.8/data_migration/index.mdx b/product_docs/docs/pgd/5.8/data_migration/index.mdx
deleted file mode 100644
index 4ab24b40105..00000000000
--- a/product_docs/docs/pgd/5.8/data_migration/index.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: Data migration to EDB Postgres Distributed
-navTitle: Data migration
-description: Data migration to EDB Postgres Distributed
-indexCards: simple
----
-
-Moving data from one data source to another is a common task in the world of data management. This section provides information on how to migrate data to EDB Postgres Distributed from various data sources.
diff --git a/product_docs/docs/pgd/5.8/ddl/ddl-pgd-functions-like-ddl.mdx b/product_docs/docs/pgd/5.8/ddl/ddl-pgd-functions-like-ddl.mdx
deleted file mode 100644
index 0f9aa5d00e3..00000000000
--- a/product_docs/docs/pgd/5.8/ddl/ddl-pgd-functions-like-ddl.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: PGD functions that behave like DDL
-navTitle: DDL-like PGD functions
----
-
-The following PGD management functions act like DDL. This means that, if DDL
-replication is active and DDL filter settings allow it, they
-attempt to take global locks, and their actions are replicated. For detailed
-information, see the documentation for the individual functions.
-
-Replication set management:
-
-- [`bdr.create_replication_set`](/pgd/latest/reference/repsets-management#bdrcreate_replication_set)
-- [`bdr.alter_replication_set`](/pgd/latest/reference/repsets-management#bdralter_replication_set)
-- [`bdr.drop_replication_set`](/pgd/latest/reference/repsets-management#bdrdrop_replication_set)
-- [`bdr.replication_set_add_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_add_table)
-- [`bdr.replication_set_remove_table`](/pgd/latest/reference/repsets-membership#bdrreplication_set_remove_table)
-- [`bdr.replication_set_add_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter)
-- [`bdr.replication_set_remove_ddl_filter`](/pgd/latest/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter)
-
-Conflict management:
-
-- [`bdr.alter_table_conflict_detection`](../reference/conflict_functions/#bdralter_table_conflict_detection)
-- `bdr.column_timestamps_enable` (deprecated; use `bdr.alter_table_conflict_detection()`)
-- `bdr.column_timestamps_disable` (deprecated; use `bdr.alter_table_conflict_detection()`)
-
-Sequence management:
-
-- [`bdr.alter_sequence_set_kind`](/pgd/latest/reference/sequences#bdralter_sequence_set_kind)
-
-Stream triggers:
-
-- [`bdr.create_conflict_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger)
-- [`bdr.create_transform_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_transform_trigger)
-- [`bdr.drop_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrdrop_trigger)
diff --git a/product_docs/docs/pgd/5.8/ddl/ddl-workarounds.mdx b/product_docs/docs/pgd/5.8/ddl/ddl-workarounds.mdx
deleted file mode 100644
index 921110deb78..00000000000
--- a/product_docs/docs/pgd/5.8/ddl/ddl-workarounds.mdx
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title: Workarounds for DDL restrictions
-navTitle: Workarounds
----
-
-You can work around some of the limitations of PGD DDL operation handling.
-Often you can split an operation into smaller changes to achieve a result that
-either isn't allowed as a single statement or would require excessive locking.
- -## Adding a CONSTRAINT - -You can add `CHECK` and `FOREIGN KEY` constraints without requiring a DML lock. -This involves a two-step process: - -- `ALTER TABLE ... ADD CONSTRAINT ... NOT VALID` -- `ALTER TABLE ... VALIDATE CONSTRAINT` - -Execute these steps in two different transactions. Both of these -steps take DDL lock only on the table and hence can be run even when one -or more nodes are down. But to validate a constraint, PGD must -ensure that: -- All nodes in the cluster see the `ADD CONSTRAINT` command. -- The node validating the constraint applied replication changes from all other nodes prior to - creating the NOT VALID constraint on those nodes. - -So even though the new mechanism doesn't need all nodes -to be up while validating the constraint, it still requires that all -nodes applied the `ALTER TABLE .. ADD CONSTRAINT ... NOT VALID` -command and made enough progress. PGD waits for a consistent -state to be reached before validating the constraint. - -The new facility requires the cluster to run with Raft protocol -version 24 and later. If the Raft protocol isn't yet upgraded, the old -mechanism is used, resulting in a DML lock request. - -## Adding a column - -To add a column with a volatile default, run these commands in -separate transactions: - -```sql - ALTER TABLE mytable ADD COLUMN newcolumn coltype; -- Note the lack of DEFAULT or NOT NULL - - ALTER TABLE mytable ALTER COLUMN newcolumn DEFAULT volatile-expression; - - BEGIN; - SELECT bdr.global_lock_table('mytable'); - UPDATE mytable SET newcolumn = default-expression; - COMMIT; -``` - -This approach splits schema changes and row changes into separate transactions that -PGD can execute and results in consistent data across all nodes in a -PGD group. - -For best results, batch the update into chunks so that you don't update more than -a few tens or hundreds of thousands of rows at once. You can do this using -a `PROCEDURE` with embedded transactions. - -The last batch of changes must run in a transaction that -takes a global DML lock on the table. Otherwise you can miss rows -that are inserted concurrently into the table on other nodes. - -If required, you can run `ALTER TABLE mytable ALTER COLUMN newcolumn NOT NULL;` after the `UPDATE` has finished. - -## Changing a column's type - -Changing a column's type can cause PostgreSQL to rewrite a table. In some cases, though, you can avoid this rewriting. -For example: - -```sql -CREATE TABLE foo (id BIGINT PRIMARY KEY, description VARCHAR(128)); -ALTER TABLE foo ALTER COLUMN description TYPE VARCHAR(20); -``` - -You can rewrite this statement to avoid a table rewrite by making the -restriction a table constraint rather than a datatype change. The constraint can -then be validated in a subsequent command to avoid long locks, if you want. - -```sql -CREATE TABLE foo (id BIGINT PRIMARY KEY, description VARCHAR(128)); -ALTER TABLE foo - ALTER COLUMN description TYPE varchar, - ADD CONSTRAINT description_length_limit CHECK (length(description) <= 20) NOT VALID; -ALTER TABLE foo VALIDATE CONSTRAINT description_length_limit; -``` - -If the validation fails, then you can `UPDATE` just the failing rows. -You can use this technique for `TEXT` and `VARCHAR` using `length()` or with -`NUMERIC` datatype using `scale()`. - -In the general case for changing column type, first add a column of the desired type: - -``` -ALTER TABLE mytable ADD COLUMN newcolumn newtype; -``` - -Create a trigger defined as `BEFORE INSERT OR UPDATE ON mytable FOR EACH ROW ..`. 
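-
-A minimal sketch of such a trigger, assuming `oldcolumn` is being converted to a new column `newcolumn` of type `newtype` (all names here are illustrative):
-
-```sql
-CREATE FUNCTION mytable_sync_newcolumn() RETURNS trigger
-LANGUAGE plpgsql AS $$
-BEGIN
-    -- Copy writes that still target the old column into the new one
-    NEW.newcolumn := NEW.oldcolumn::newtype;
-    RETURN NEW;
-END;
-$$;
-
-CREATE TRIGGER mytable_sync_newcolumn_trigger
-    BEFORE INSERT OR UPDATE ON mytable
-    FOR EACH ROW EXECUTE FUNCTION mytable_sync_newcolumn();
-```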
-Creating this trigger assigns the value of `NEW.oldcolumn` to `NEW.newcolumn` so that new writes to the
-table update the new column.
-
-`UPDATE` the table in batches to copy the value of `oldcolumn` to
-`newcolumn` using a `PROCEDURE` with embedded transactions. Batching the work
-helps reduce replication lag if it's a big table. Updating by range of
-IDs or whatever method you prefer is fine. Alternatively, you can update the whole table in one pass for
-smaller tables.
-
-`CREATE INDEX ...` any required indexes on the new column. It's safe to
-use `CREATE INDEX ... CONCURRENTLY` individually without DDL replication
-on each node to reduce lock durations.
-
-`ALTER` the column to add `NOT NULL` and `CHECK` constraints, if required.
-
-1. `BEGIN` a transaction.
-1. `DROP` the trigger you added.
-1. `ALTER TABLE` to add any `DEFAULT` required on the column.
-1. `DROP` the old column.
-1. `ALTER TABLE mytable RENAME COLUMN newcolumn TO oldcolumn`.
-1. `COMMIT`.
-
-!!! Note
-    Because you're dropping a column, you might have to re-create views, procedures,
-    and so on that depend on the table. Be careful if you `CASCADE` drop the column,
-    as you must be sure to re-create everything that referred to it.
-
-### Changing other types
-
-The `ALTER TYPE` statement is replicated, but affected tables aren't locked.
-
-When you use this DDL, ensure that the statement has successfully
-executed on all nodes before using the new type. You can achieve this using
-the [`bdr.wait_slot_confirm_lsn()`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn) function.
-
-This example ensures that the DDL is written to all nodes before using the new value
-in DML statements:
-
-```
-ALTER TYPE contact_method ADD VALUE 'email';
-SELECT bdr.wait_slot_confirm_lsn(NULL, NULL);
-```
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-cloudservice/index.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-cloudservice/index.mdx
deleted file mode 100644
index 9d17e513485..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-cloudservice/index.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Deploying and configuring PGD on EDB Postgres AI Cloud Service
-navTitle: On EDB Cloud Service
-redirects:
-  - /pgd/latest/deploy-config/deploy-biganimal/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/install-admin/admin-biganimal/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-biganimal/ #generated for pgd deploy-config-planning reorg
----
-
-EDB Postgres AI Cloud Service is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account, where it's operated by our Postgres experts. EDB Postgres AI Cloud Service makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single-region and multi-region Always-on clusters.
-
-This section covers how to work with EDB Postgres Distributed when deployed on EDB Postgres AI Cloud Service:
-
-- [Creating a distributed high-availability cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_dha_cluster/) in the Cloud Service documentation works through the steps needed to:
-   - Prepare your cloud environment for a distributed high-availability cluster.
-   - Sign in to Cloud Service.
-   - Create a distributed high-availability cluster, including:
-      - Creating and configuring a data group.
-      - Optionally creating and configuring a second data group in a different region.
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-kubernetes/index.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-kubernetes/index.mdx deleted file mode 100644 index f0a8813151a..00000000000 --- a/product_docs/docs/pgd/5.8/deploy-config/deploy-kubernetes/index.mdx +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Deploying and configuring PGD on Kubernetes -navTitle: With Kubernetes -redirects: - - /pgd/latest/install-admin/admin-kubernetes/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-kubernetes/ #generated for pgd deploy-config-planning reorg ---- - -EDB CloudNativePG Global Cluster is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift. - -This section covers how to deploy and configure EDB Postgres Distributed using the EDB CloudNativePG Global Cluster operator. - -* [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart) in the EDB CloudNativePG Global Cluster documentation works through the steps needed to: - * Create a Kind/Minikube cluster. - * Install Helm and the Helm chart for EDB CloudNativePG Global Cluster. - * Create a simple configuration file for a PGD cluster. - * Deploy a PGD cluster from that simple configuration file. - -* [Installation and upgrade](/postgres_distributed_for_kubernetes/latest/installation_upgrade) provides detailed instructions for installing and upgrading EDB CloudNativePG Global Cluster. diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/02-install-postgres.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/02-install-postgres.mdx deleted file mode 100644 index dcc6a02b9c8..00000000000 --- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/02-install-postgres.mdx +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: Step 2 - Installing Postgres -navTitle: Installing Postgres -deepToC: true -redirects: - - /pgd/latest/install-admin/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/02-install-postgres/ #generated for pgd deploy-config-planning reorg ---- - -## Installing Postgres - -You need to install Postgres on all the hosts. - -An EDB account is required to use the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can get installation instructions. -Select your platform and Postgres edition. -You're presented with 2 steps of instructions. The first step covers how to configure the required package repository. The second step covers how to install the packages from that repository. - -Run both steps. - -## Worked example - -This example installs EDB Postgres Advanced Server 16 on Red Hat Enterprise Linux 9 (RHEL 9). - -### EDB account - -You need an EDB account to install both Postgres and PGD. - -Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform. 
Then scroll down the list to select the Postgres version you want to install: - -* EDB Postgres Advanced Server -* EDB Postgres Extended -* PostgreSQL - -When you select the version of the Postgres server you want, two steps are displayed. - - -### 1: Configuring repositories - -For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. Your EDB repository token is inserted into these scripts by the EDB Repos 2.0 site. -In the examples, it's shown as `XXXXXXXXXXXXXXXX`. - -On each provisioned host, you either run the automatic repository installation script or use the manual installation steps. The automatic script looks like this: - -```shell -curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/setup.rpm.sh' | sudo -E bash -``` - -The manual installation steps look like this: - -```shell -dnf install yum-utils -rpm --import 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/gpg.E71EB0829F1EF813.key' -curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/config.rpm.txt?distro=el&codename=9' > /tmp/enterprise.repo -dnf config-manager --add-repo '/tmp/enterprise.repo' -dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-enterprise' -``` - -### 2: Install Postgres - -For step 2, run the command to install the packages: - -``` -sudo dnf -y install edb-as16-server -``` diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx deleted file mode 100644 index 2f908694bab..00000000000 --- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: Step 3 - Configuring PGD repositories -navTitle: Configuring PGD repositories -deepToC: true -redirects: - - /pgd/latest/install-admin/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg - - /pgd/latest/admin-manual/installing/03-configuring-repositories/ #generated for pgd deploy-config-planning reorg ---- - -## Configuring PGD repositories - -To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages. - -Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](/pgd/latest/nodes/overview), such as witness or subscriber-only nodes. - -* Use your EDB account. - * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. - -* Set environment variables. - * Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token: - - ``` - export EDB_SUBSCRIPTION_TOKEN= - ``` - -* Configure the repository. - * Run the automated installer to install the repositories: - - !!! Note Red Hat - ``` - curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash - ``` - !!! - - !!! Note Ubuntu/Debian - ``` - curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash - ``` - !!! - -## Worked example - -### Use your EDB account - -You need an EDB account to install Postgres Distributed. 
- -Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token. - -On your first visit to this page, select **Request Access** to generate your repo token. - -![EDB Repos 2.0](images/edbrepos2.0.png) - -Select **Copy Token** to copy the token to your clipboard, and store the token safely. - - -### Set environment variables - -Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the value of your EDB repo token, obtained in the [EDB account](#use-your-edb-account) step. - -``` -export EDB_SUBSCRIPTION_TOKEN= -``` - -You can add this to your `.bashrc` script or similar shell profile to ensure it's always set. - -!!! Note -Your preferred platform may support storing this variable as a secret, which can appear as an environment variable. If this is the case, add it to your platform's secret manager, and don't add the setting to `.bashrc`. -!!! - -### Configure the repository - -All the software you need is available from the EDB Postgres Distributed package repository. -You have the option to download and run a script to configure the EDB Postgres Distributed repository. -You can also download, inspect, and then run that same script. - -The following instructions also include the essential steps that the scripts take for any user wanting to manually run the installation process or to automate it. - -#### RHEL/Other RHEL-based - -You can autoinstall with automated OS detection: - -``` -curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash -``` - -If you want to inspect the script that's generated for you, run: - -``` -curl -1sLfO "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" -``` - -Then inspect the resulting `setup.rpm.sh` file. When you're ready to proceed, run: - -``` -sudo -E bash setup.rpm.sh -``` - -If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide. - -You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you're using. This example sets it for RHEL-compatible Linux version 9: - -``` -export DISTRO="el" -export CODENAME="9" -``` - -Now install the yum-utils package: - -``` -sudo dnf install -y yum-utils -``` - -The next step imports a GPG key for the repositories: - -``` -sudo rpm --import "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/gpg.B09F406230DA0084.key" -``` - -Now you can import the repository details, add them to the local configuration, and enable the repository. 
-
-```
-curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/config.rpm.txt?distro=$DISTRO&codename=$CODENAME" > /tmp/enterprise.repo
-sudo dnf config-manager --add-repo '/tmp/enterprise.repo'
-sudo dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-postgres_distributed'
-```
-
-
-
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/04-installing-software.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/04-installing-software.mdx
deleted file mode 100644
index 1384438cf71..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/04-installing-software.mdx
+++ /dev/null
@@ -1,372 +0,0 @@
----
-title: Step 4 - Installing the PGD software
-navTitle: Installing PGD software
-deepToC: true
-redirects:
-  - /pgd/latest/install-admin/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-manual/installing/04-installing-software/ #generated for pgd deploy-config-planning reorg
----
-
-## Installing the PGD software
-
-With the repositories configured, you can now install the Postgres Distributed software.
-You must perform these steps on each host before proceeding to the next step.
-
-* **Install the packages.**
-   * Install the PGD packages, which include a server-specific BDR package (for example, `edb-bdr5-epas16`) and the generic PGD Proxy and CLI packages (`edb-pgd5-proxy` and `edb-pgd5-cli`).
-
-
-* **Ensure the Postgres database server has been initialized and started.**
-   * Use `systemctl status` to check that the service is running.
-   * If the service isn't running, initialize the database and start the service.
-
-
-* **Configure the BDR extension.**
-   * Add the BDR extension (`$libdir/bdr`) at the start of the `shared_preload_libraries` setting in `postgresql.conf`.
-   * Set the `wal_level` GUC variable to `logical` in `postgresql.conf`.
-   * Turn on commit timestamp tracking by setting `track_commit_timestamp` to `'on'` in `postgresql.conf`.
-   * Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

-   !!! Note The `max_worker_processes` value
-   The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
-   To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings).
-   The value of 16 was calculated for the size of cluster being deployed in this example. It must be increased for larger clusters.
-   !!!
-   * Set a password on the EnterpriseDB/Postgres user.
-   * Add rules to `pg_hba.conf` to allow nodes to connect to each other.
-     * Ensure that these lines are present in `pg_hba.conf`:
-       ```
-       host all all all md5
-       host replication all all md5
-       ```
-   * Add a `.pgpass` file to allow nodes to authenticate each other.
-     * Configure a user with sufficient privileges to log in to the other nodes.
-     * See [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html) in the Postgres documentation for more on the `.pgpass` file.
-
-
-* **Restart the server.**
-   * Verify the restarted server is running with the modified settings and the BDR extension is available.
-
-
-* **Create the replicated database.**
-   * Log in to the server's default database (`edb` for EDB Postgres Advanced Server, `postgres` for PGE and community Postgres).
-   * Use `CREATE DATABASE bdrdb` to create the default PGD replicated database.
-   * Log out and then log back in to `bdrdb`.
-   * Use `CREATE EXTENSION bdr` to enable the BDR extension and PGD to run on that database.
-
-
-The worked example that follows shows the steps for EDB Postgres Advanced Server in detail.
-
-If you're installing PGD with EDB Postgres Extended Server or community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for PostgreSQL](#installing-pgd-for-postgresql).
-
-## Worked example
-
-### Install the packages
-
-The first step is to install the packages. Each Postgres package has a matching `edb-bdr5-` package to go with it.
-For example, if you're installing EDB Postgres Advanced Server (epas) version 16, you'd install `edb-bdr5-epas16`.
-
-There are two other packages to also install:
-
-- `edb-pgd5-proxy` for PGD Proxy
-- `edb-pgd5-cli` for the PGD command line tool
-
-To install all of these packages on a RHEL or RHEL-compatible Linux, run:
-
-```
-sudo dnf -y install edb-bdr5-epas16 edb-pgd5-proxy edb-pgd5-cli
-```
-
-### Ensure the database is initialized and started
-
-If the server wasn't initialized and started by the database's package initialization (or you're repeating the process), you need to initialize and start the server.
-
-To see if the server is running, you can check the service. The service name for EDB Postgres Advanced Server is `edb-as-16`, so run:
-
-```
-sudo systemctl status edb-as-16
-```
-
-If the server isn't running, the response is:
-
-```
-○ edb-as-16.service - EDB Postgres Advanced Server 16
-     Loaded: loaded (/usr/lib/systemd/system/edb-as-16.service; disabled; preset: disabled)
-     Active: inactive (dead)
-```
-
-`Active: inactive (dead)` tells you that you need to initialize and start the server.
-
-You need to know the path to the setup script for your particular Postgres flavor.
-
-For EDB Postgres Advanced Server, you can find this script in `/usr/edb/as16/bin` as `edb-as-16-setup`.
-Run this command with the `initdb` parameter and pass an option to set the database to use UTF-8: - -``` -sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as16/bin/edb-as-16-setup initdb -``` - -Once the database is initialized, start it so that you can continue configuring the BDR extension: - -``` -sudo systemctl start edb-as-16 -``` - -### Configure the BDR extension - -Installing EDB Postgres Advanced Server creates a system user enterprisedb with admin capabilities when connected to the database. You'll use this user to configure the BDR extension. - -#### Preload the BDR library - -You need to preload the BDR library with other libraries. -EDB Postgres Advanced Server has a number of libraries already preloaded, so you have to prefix the existing list with the BDR library. - -``` -echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null -``` - -!!!tip -This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the `postgresql.conf` file, which is owned by another user. -!!! - -#### Set the `wal_level` - -The BDR extension needs to set the server to perform logical replication. Do this by setting `wal_level` to `logical`: - -``` -echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null - -``` - -#### Enable commit timestamp tracking - -The BDR extension also needs the commit timestamp tracking enabled: - -``` -echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null - -``` - -#### Increase `max_worker_processes` - -To communicate between multiple nodes, Postgres Distributed nodes run more worker processes than usual. -The default limit (8) is too low even for a small cluster. - -The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. -To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings). - -This example, with a 3-node cluster, uses the value of 16. - -Increase the maximum number of worker processes to 16: - -``` -echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null - -``` - - -This value must be increased for larger clusters. - -#### Add a password to the Postgres enterprisedb user - -To allow connections between nodes, a password needs to be set on the Postgres enterprisedb user. -This example uses the password `secret`. -Select a different password for your deployments. -You will need this password to [Enable authentication between nodes](#enable-authentication-between-nodes). - -``` -sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'" - -``` - -#### Enable inter-node authentication in pg_hba.conf - -Out of the box, Postgres allows local authentication and connections with the database but not external network connections. -To enable this, edit `pg_hba.conf` and add appropriate rules, including rules for the replication users. -To simplify the process, use this command: - -``` -echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/lib/edb/as16/data/pg_hba.conf - -``` - -The command appends the following to `pg_hba.conf`: - -``` -host all all all md5 -host replication all all md5 - -``` - -These commands enable the nodes to replicate. 
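-
-If you want to confirm the rules landed at the end of the file, a quick check (assuming the data directory used throughout this example):
-
-```shell
-sudo tail -n 2 /var/lib/edb/as16/data/pg_hba.conf
-```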
- -#### Enable authentication between nodes - -As part of the process of connecting nodes for replication, PGD logs into other nodes. -It performs that login as the user that Postgres is running under. -For EDB Postgres Advanced server, this is the enterprisedb user. -That user needs credentials to log into the other nodes. -Supply these credentials using the `.pgpass` file, which needs to reside in the user's home directory. -The home directory for `enterprisedb` is `/var/lib/edb`. - -Run this command to create the file: - -``` -echo -e "*:*:*:enterprisedb:secret" | sudo -u enterprisedb tee /var/lib/edb/.pgpass; sudo chmod 0600 /var/lib/edb/.pgpass - -``` - -You can read more about the `.pgpass` file in [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html) in the PostgreSQL documentation. - -### Restart the server - -After all these configuration changes, we recommend that you restart the server with: - -``` -sudo systemctl restart edb-as-16 - -``` - -#### Check the extension has been installed - -At this point, it's worth checking whether the extension is actually available and the configuration was correctly loaded. You can query the `pg_available_extensions` table for the BDR extension like this: - -``` -sudo -u enterprisedb psql edb -c "select * from pg_available_extensions where name like 'bdr'" - -``` - -This command returns an entry for the extension and its version: - -``` - name | default_version | installed_version | comment -------+-----------------+-------------------+------------------------------------------- - bdr | 5.3.0 | | Bi-Directional Replication for PostgreSQL - ``` - -You can also confirm the other server settings using this command: - -``` -sudo -u enterprisedb psql edb -c "show all" | grep -e wal_level -e track_commit_timestamp -e max_worker_processes - -``` - -### Create the replicated database - -The server is now prepared for PGD. -You need to next create a database named `bdrdb` and install the BDR extension when logged into it: - -``` -sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb" -sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr" - -``` - -Finally, test the connection by logging in to the server. - -``` -sudo -u enterprisedb psql bdrdb -``` - -You're connected to the server. -Execute the command "\\dx" to list extensions installed: - -``` -bdrdb=# \dx - List of installed extensions - Name | Version | Schema | Description -------------------+---------+------------+-------------------------------------------------- - bdr | 5.3.0 | pg_catalog | Bi-Directional Replication for PostgreSQL - edb_dblink_libpq | 1.0 | pg_catalog | EnterpriseDB Foreign Data Wrapper for PostgreSQL - edb_dblink_oci | 1.0 | pg_catalog | EnterpriseDB Foreign Data Wrapper for Oracle - edbspl | 1.0 | pg_catalog | EDB-SPL procedural language - plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language -(5 rows) -``` - -Notice that the BDR extension is listed in the table, showing that it's installed. - -## Summaries - -### Installing PGD for EDB Postgres Advanced Server - -For your convenience, here's a summary of the commands used in this example. 
-
-```
-sudo dnf -y install edb-bdr5-epas16 edb-pgd5-proxy edb-pgd5-cli
-sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as16/bin/edb-as-16-setup initdb
-sudo systemctl start edb-as-16
-echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null
-echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null
-echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null
-echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null
-sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'"
-echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/lib/edb/as16/data/pg_hba.conf >/dev/null
-echo -e "*:*:*:enterprisedb:secret" | sudo -u enterprisedb tee /var/lib/edb/.pgpass >/dev/null; sudo chmod 0600 /var/lib/edb/.pgpass
-sudo systemctl restart edb-as-16
-sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb"
-sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr"
-sudo -u enterprisedb psql bdrdb
-
-```
-
-### Installing PGD for EDB Postgres Extended Server
-
-Installing PGD with EDB Postgres Extended Server has a number of differences from the EDB Postgres Advanced Server installation:
-
-* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number).
-* Call a different setup utility: `/usr/edb/pgeNN/bin/edb-pge-NN-setup`.
-* The service name is `edb-pge-NN`.
-* The system user is postgres (not enterprisedb).
-* The home directory for the postgres user is `/var/lib/pgsql`.
-* There are no preexisting libraries to add to `shared_preload_libraries`.
-
-#### Summary: Installing PGD for EDB Postgres Extended Server 16
-
-```
-sudo dnf -y install edb-bdr5-pgextended16 edb-pgd5-proxy edb-pgd5-cli
-sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/pge16/bin/edb-pge-16-setup initdb
-sudo systemctl start edb-pge-16
-echo -e "shared_preload_libraries = '\$libdir/bdr'" | sudo -u postgres tee -a /var/lib/edb-pge/16/data/postgresql.conf >/dev/null
-echo -e "wal_level = 'logical'" | sudo -u postgres tee -a /var/lib/edb-pge/16/data/postgresql.conf >/dev/null
-echo -e "track_commit_timestamp = 'on'" | sudo -u postgres tee -a /var/lib/edb-pge/16/data/postgresql.conf >/dev/null
-echo -e "max_worker_processes = '16'" | sudo -u postgres tee -a /var/lib/edb-pge/16/data/postgresql.conf >/dev/null
-sudo -u postgres psql postgres -c "ALTER USER postgres WITH PASSWORD 'secret'"
-echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/lib/edb-pge/16/data/pg_hba.conf >/dev/null
-echo -e "*:*:*:postgres:secret" | sudo -u postgres tee /var/lib/pgsql/.pgpass >/dev/null; sudo chmod 0600 /var/lib/pgsql/.pgpass
-sudo systemctl restart edb-pge-16
-sudo -u postgres psql postgres -c "CREATE DATABASE bdrdb"
-sudo -u postgres psql bdrdb -c "CREATE EXTENSION bdr"
-sudo -u postgres psql bdrdb
-
-```
-
-### Installing PGD for PostgreSQL
-
-Installing PGD with PostgreSQL has a number of differences from the EDB Postgres Advanced Server installation:
-
-* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number).
-* Call a different setup utility: `/usr/pgsql-NN/bin/postgresql-NN-setup`.
-* The service name is `postgresql-NN`.
-* The system user is postgres (not enterprisedb).
-* The home directory for the postgres user is `/var/lib/pgsql`.
-* There are no preexisting libraries to add to `shared_preload_libraries`.
-
-#### Summary: Installing PGD for PostgreSQL 16
-
-```
-sudo dnf -y install edb-bdr5-pg16 edb-pgd5-proxy edb-pgd5-cli
-sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/pgsql-16/bin/postgresql-16-setup initdb
-sudo systemctl start postgresql-16
-echo -e "shared_preload_libraries = '\$libdir/bdr'" | sudo -u postgres tee -a /var/lib/pgsql/16/data/postgresql.conf >/dev/null
-echo -e "wal_level = 'logical'" | sudo -u postgres tee -a /var/lib/pgsql/16/data/postgresql.conf >/dev/null
-echo -e "track_commit_timestamp = 'on'" | sudo -u postgres tee -a /var/lib/pgsql/16/data/postgresql.conf >/dev/null
-echo -e "max_worker_processes = '16'" | sudo -u postgres tee -a /var/lib/pgsql/16/data/postgresql.conf >/dev/null
-sudo -u postgres psql postgres -c "ALTER USER postgres WITH PASSWORD 'secret'"
-echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/lib/pgsql/16/data/pg_hba.conf >/dev/null
-echo -e "*:*:*:postgres:secret" | sudo -u postgres tee /var/lib/pgsql/.pgpass >/dev/null; sudo chmod 0600 /var/lib/pgsql/.pgpass
-sudo systemctl restart postgresql-16
-sudo -u postgres psql postgres -c "CREATE DATABASE bdrdb"
-sudo -u postgres psql bdrdb -c "CREATE EXTENSION bdr"
-sudo -u postgres psql bdrdb
-
-```
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx
deleted file mode 100644
index 009e2486ecf..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/05-creating-cluster.mdx
+++ /dev/null
@@ -1,163 +0,0 @@
----
-title: Step 5 - Creating the PGD cluster
-navTitle: Creating the cluster
-deepToC: true
-redirects:
-  - /pgd/latest/install-admin/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-manual/installing/05-creating-cluster/ #generated for pgd deploy-config-planning reorg
----
-
-## Creating the PGD cluster
-
-* **Create connection strings for each node**.
-For each node, create a connection string that will allow PGD to perform replication.
-
-   The connection string is a key/value string that starts with `host=` and the IP address of the host. (If you have resolvable named hosts, the name of the host is used instead of the IP address.)
-
-   That's followed by the name of the database. In this case, use `dbname=bdrdb`, as a `bdrdb` database was created when [installing the software](04-installing-software).
-
-   We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and community PostgreSQL.
-
-
-* **Prepare the first node.**
-To create the cluster, select and log in to the `bdrdb` database on any host's Postgres server.
-
-* **Create the first node.**
-   Run `bdr.create_node` and give the node a name and its connection string where *other* nodes can connect to it.
-   * Create the top-level group.
-   Create a top-level group for the cluster with `bdr.create_node_group`, giving it a single parameter: the name of the top-level group.
-   * Create a subgroup.
-   Create a subgroup as a child of the top-level group with `bdr.create_node_group`, giving it two parameters: the name of the subgroup and the name of the parent (and top-level) group.
-  This process initializes the first node.
-
-
-* **Add the second node.**
-  * Create the second node.
-  Log in to another initialized node's `bdrdb` database.
-  Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it.
-  * Join the second node to the cluster.
-  Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join.
-
-
-* **Add the third node.**
-  * Create the third node.
-  Log in to another initialized node's `bdrdb` database.
-  Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it.
-  * Join the third node to the cluster.
-  Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join.
-
-
-## Worked example
-
-So far, this example has:
-
-* Created three hosts.
-* Installed a Postgres server on each host.
-* Installed Postgres Distributed on each host.
-* Configured the Postgres server to work with PGD on each host.
-
-To create the cluster, you tell host-one's Postgres instance that it's a PGD node—node-one—and create PGD groups on that node.
-Then you tell host-two and host-three's Postgres instances that they are PGD nodes—node-two and node-three—and that they must join a group on node-one.
-
-### Create connection strings for each node
-
-Calculate the connection strings for each of the nodes in advance.
-Following are the connection strings for this 3-node example.
-
-| Name       | Node name  | Private IP      | Connection string                      |
-| ---------- | ---------- | --------------- | -------------------------------------- |
-| host-one   | node-one   | 192.168.254.166 | host=host-one dbname=bdrdb port=5444   |
-| host-two   | node-two   | 192.168.254.247 | host=host-two dbname=bdrdb port=5444   |
-| host-three | node-three | 192.168.254.135 | host=host-three dbname=bdrdb port=5444 |
-
-### Preparing the first node
-
-Log in to host-one's Postgres server.
-
-```
-ssh admin@host-one
-sudo -iu enterprisedb psql bdrdb
-```
-
-### Create the first node
-
-Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it.
-
-```
-select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444');
-```
-
-#### Create the top-level group
-
-Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`.
-
-```
-select bdr.create_node_group('pgd');
-```
-
-#### Create a subgroup
-
-Using subgroups to organize your nodes is preferred, as it allows services like PGD Proxy, which you'll configure later, to coordinate their operations.
-In a larger PGD installation, multiple subgroups can exist. These subgroups provide organizational grouping that enables geographical mapping of clusters and localized resilience.
-For that reason, this example creates a subgroup for the first nodes to enable simpler expansion and the use of PGD Proxy.
-
-Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group.
-The subgroup name is the first parameter, and the parent group is the second parameter.
-This example creates a subgroup `dc1` as a child of `pgd`.
-
-
-```
-select bdr.create_node_group('dc1','pgd');
-```
-
-### Add the second node
-
-Log in to host-two's Postgres server.
-
-```
-ssh admin@host-two
-sudo -iu enterprisedb psql bdrdb
-```
-
-#### Create the second node
-
-Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it.
-
-```
-select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
-```
-
-#### Join the second node to the cluster
-
-Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes the connection string of a node already in the group as its first parameter and the name of the group to join as its second parameter.
-
-```
-select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
-```
-
-### Add the third node
-
-Log in to host-three's Postgres server.
-
-```
-ssh admin@host-three
-sudo -iu enterprisedb psql bdrdb
-```
-
-#### Create the third node
-
-Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it.
-
-```
-select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444');
-```
-
-#### Join the third node to the cluster
-
-Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes the connection string of a node already in the group as its first parameter and the name of the group to join as its second parameter.
-
-```
-select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
-```
-
-A PGD cluster is now created.
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx
deleted file mode 100644
index 0bee3e06b9a..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/07-configure-proxies.mdx
+++ /dev/null
@@ -1,282 +0,0 @@
----
-title: Step 7 - Configure proxies
-navTitle: Configure proxies
-deepToC: true
-redirects:
-  - /pgd/latest/install-admin/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-manual/installing/07-configure-proxies/ #generated for pgd deploy-config-planning reorg
----
-
-## Configure proxies
-
-PGD can use proxies to direct traffic to one of the cluster's nodes, selected automatically by the cluster.
-There are performance and availability reasons for using a proxy:
-
-* Performance: By directing all traffic (in particular, write traffic) to one node, the node can resolve write conflicts locally and more efficiently.
-* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can direct new traffic to a new write leader that it selects.
-
-It's best practice to configure PGD Proxy for clusters to enable this behavior.
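-
-For example, once proxies are set up as described below, clients connect to a proxy rather than to a specific node. As a sketch, assuming a proxy running on host-one and listening on the typical proxy port 6432:
-
-```
-psql "host=host-one port=6432 dbname=bdrdb"
-```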
-
-### Configure the cluster for proxies
-
-To set up a proxy, you need to first prepare the cluster and the subgroup the proxies will be working with by:
-
-* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters.
-* Creating as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) function doesn't start a proxy; it creates a space for a proxy to register itself with the cluster. The space contains configuration values, which can be modified later. Initially, it's configured with default proxy options, such as setting `listen_address` to `0.0.0.0`.
-* Configuring proxy routes to each node by setting `route_dsn` for each node in the subgroup. The `route_dsn` is the connection string that the proxy should use to connect to that node. Use [`bdr.alter_node_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option) to set the `route_dsn` for each node in the subgroup.
-* Creating a pgdproxy user on the cluster with a password or other authentication.
-
-### Configure each host as a proxy
-
-Once the cluster is ready, you need to configure each host to run pgd-proxy:
-
-* Create a pgdproxy local user.
-* Create a `.pgpass` file for that user that allows the user to log in to the cluster as pgdproxy.
-* Modify the systemd service file for pgdproxy to use the pgdproxy user.
-* Create a proxy config file for the host that lists the connection strings for all the nodes in the subgroup and specifies the name for the proxy to use when fetching proxy options like `listen_address` and `listen_port`.
-* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`.
-* Restart the systemd service and check its status.
-* Log in to the proxy and verify its operation.
-
-Further detail on all these steps is included in the worked example.
-
-## Worked example
-
-## Preparing for proxies
-
-For proxies to function, the `dc1` subgroup must enable Raft and routing.
-
-Log in to any node in the cluster, using psql to connect to the `bdrdb` database as the enterprisedb user. Execute:
-
-```sql
-SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true');
-SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true');
-```
-
-You can use the [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`:
-
-```sql
-SELECT node_group_name, enable_proxy_routing, enable_raft
-  FROM bdr.node_group_summary
-  WHERE parent_group_name IS NOT NULL;
-__OUTPUT__
- node_group_name | enable_proxy_routing | enable_raft
------------------+----------------------+-------------
- dc1             | t                    | t
-(1 row)
-
-bdrdb=#
-```
-
-
-Next, create a PGD proxy within the cluster using the `bdr.create_proxy` function.
-This function takes two parameters: the proxy's unique name and the group you want it to be a proxy for.
-
-In this example, you want a proxy on each host in the `dc1` subgroup:
-
-```
-SELECT bdr.create_proxy('pgd-proxy-one','dc1');
-SELECT bdr.create_proxy('pgd-proxy-two','dc1');
-SELECT bdr.create_proxy('pgd-proxy-three','dc1');
-```
-
-You can use the [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created:
-
-```sql
-SELECT proxy_name, node_group_name
-  FROM bdr.proxy_config_summary;
-__OUTPUT__
-   proxy_name    | node_group_name
------------------+-----------------
- pgd-proxy-one   | dc1
- pgd-proxy-two   | dc1
- pgd-proxy-three | dc1
-
-bdrdb=#
-```
-
-## Create a pgdproxy user on the database
-
-Create a user named pgdproxy and give it a password. This example uses `proxysecret`.
-
-On any node, log in to the `bdrdb` database as enterprisedb/postgres.
-
-```
-CREATE USER pgdproxy PASSWORD 'proxysecret';
-GRANT bdr_superuser TO pgdproxy;
-```
-
-## Configure proxy routes to each node
-
-Once a proxy has connected, it gets its DSN values (connection strings) from the cluster. The cluster needs to know the connection details that a proxy should use for each node in the subgroup. This is done by setting the `route_dsn` option for each node to a connection string that the proxy can use to connect to that node.
-
-Note that when a proxy starts, it gets its initial DSNs from the proxy's config file. The `route_dsn` values set in this step and the DSNs in the config file should match.
-
-On any node, log in to the `bdrdb` database as enterprisedb/postgres.
-
-```sql
-SELECT bdr.alter_node_option('node-one', 'route_dsn', 'host=host-one dbname=bdrdb port=5444 user=pgdproxy');
-SELECT bdr.alter_node_option('node-two', 'route_dsn', 'host=host-two dbname=bdrdb port=5444 user=pgdproxy');
-SELECT bdr.alter_node_option('node-three', 'route_dsn', 'host=host-three dbname=bdrdb port=5444 user=pgdproxy');
-```
-
-Note that the endpoints in this example specify `port=5444`.
-This is necessary for EDB Postgres Advanced Server instances.
-For EDB Postgres Extended and community PostgreSQL, you can omit this.
-
-## Create a pgdproxy user on each host
-
-```shell
-sudo adduser pgdproxy
-```
-
-This user needs credentials to connect to the server.
-Create a `.pgpass` file with the `proxysecret` password in it.
-Then lock down the `.pgpass` file so it's accessible only by its owner.
-
-```shell
-echo -e "*:*:*:pgdproxy:proxysecret" | sudo tee /home/pgdproxy/.pgpass
-sudo chown pgdproxy /home/pgdproxy/.pgpass
-sudo chmod 0600 /home/pgdproxy/.pgpass
-```
-
-## Configure the systemd service on each host
-
-Switch the service file from using root to using the pgdproxy user.
-
-```shell
-sudo sed -i s/root/pgdproxy/ /usr/lib/systemd/system/pgd-proxy.service
-```
-
-Reload the systemd daemon.
-
-```shell
-sudo systemctl daemon-reload
-```
-
-## Create a proxy config file for each host
-
-The proxy configuration file is slightly different for each host.
-It's a YAML file that contains a cluster object.
The cluster object has three
-properties:
-
-* The name of the PGD cluster's top-level group (as `name`)
-* An array of endpoints of databases (as `endpoints`)
-* The proxy definition object with a name and endpoint (as `proxy`)
-
-The first two properties are the same for all hosts:
-
-```
-cluster:
-  name: pgd
-  endpoints:
-    - "host=host-one dbname=bdrdb port=5444 user=pgdproxy"
-    - "host=host-two dbname=bdrdb port=5444 user=pgdproxy"
-    - "host=host-three dbname=bdrdb port=5444 user=pgdproxy"
-```
-
-Remember that host-one, host-two, and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running.
-You use the name of the host, not the node, for the endpoint connection.
-
-Again, note that the endpoints in this example specify `port=5444`.
-This is necessary for EDB Postgres Advanced Server instances.
-For EDB Postgres Extended and community PostgreSQL, you can set this to `port=5432`.
-
-The third property, `proxy`, has a `name` property.
-The `name` property is a name created with `bdr.create_proxy` earlier, and it's different on each host.
-A proxy can't be on the same port as the Postgres server and, ideally, should be on a commonly used port different from direct connections, even when no Postgres server is running on the host.
-Typically, you use port 6432 for PGD proxies.
-
-```
-  proxy:
-    name: pgd-proxy-one
-```
-
-If you give the proxy object an endpoint, using `localhost` in that endpoint specifies that the proxy listens on the host where the proxy is running.
-
-## Install a PGD proxy configuration on each host
-
-For each host, create the `/etc/edb/pgd-proxy` directory:
-
-```
-sudo mkdir -p /etc/edb/pgd-proxy
-```
-
-Then, on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory.
-
-For this example, you can run this on host-one to create the file:
-
-```
-cat <<EOF | sudo tee /etc/edb/pgd-proxy/pgd-proxy-config.yml
-cluster:
-  name: pgd
-  endpoints:
-    - "host=host-one dbname=bdrdb port=5444 user=pgdproxy"
-    - "host=host-two dbname=bdrdb port=5444 user=pgdproxy"
-    - "host=host-three dbname=bdrdb port=5444 user=pgdproxy"
-  proxy:
-    name: pgd-proxy-one
-EOF
-```
-
-On host-two and host-three, write the same file, changing the proxy `name` to `pgd-proxy-two` and `pgd-proxy-three`, respectively.
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/01-configuring.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/01-configuring.mdx
deleted file mode 100644
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/01-configuring.mdx
+++ /dev/null
-The syntax of the `tpaexec configure` command is:
-
-```
-tpaexec configure <cluster-dir> --architecture <architecture-name> [options]
-```
-
-The available configuration options include:
-
-| Flags | Description |
-| ------------------ | ----------- |
-| `--architecture` | Required. Set to `PGD-Always-ON` for EDB Postgres Distributed deployments. |
-| `--postgresql <version>`<br/>or<br/>`--edb-postgres-advanced <version>`<br/>or<br/>`--edb-postgres-extended <version>` | Required. Specifies the distribution and version of Postgres to use. For more details, see [Cluster configuration: Postgres flavour and version](/tpa/latest/tpaexec-configure/#postgres-flavour-and-version). |
-| `--redwood` or `--no-redwood` | Required when `--edb-postgres-advanced` flag is present. Specifies whether Oracle database compatibility features are desired. |
-| `--location-names l1 l2 l3` | Required. Specifies the names of the locations to deploy PGD to. |
-| `--data-nodes-per-location N` | Specifies the number of data nodes per location. Default is 3. |
-| `--add-witness-node-per-location` | For an even number of data nodes per location, adds witness nodes to allow for local consensus. Enabled by default for 2-data-node locations. |
-| `--add-proxy-nodes-per-location` | Specifies whether to separate PGD proxies from data nodes and how many to configure. By default one proxy is configured and cohosted for each data node. |
-| `--pgd-proxy-routing global\|local` | Specifies whether PGD Proxy routing is handled on a global or local (per-location) basis. |
-| `--add-witness-only-location loc` | Designates one of the cluster locations as witness-only (no data nodes are present in that location). |
-| `--enable-camo` | Sets up a CAMO pair in each location. Works only with 2 data nodes per location. |
-
-More configuration options are listed in the TPA documentation for [PGD-Always-ON](/tpa/latest/architecture-PGD-Always-ON/).
-
-For example:
-
-```
-[tpa]$ tpaexec configure ~/clusters/speedy \
-    --architecture PGD-Always-ON \
-    --platform aws \
-    --edb-postgres-advanced 16 \
-    --redwood \
-    --location-names eu-west-1 eu-north-1 eu-central-1 \
-    --data-nodes-per-location 3 \
-    --pgd-proxy-routing global
-```
-
-The first argument must be the cluster directory, for example, `speedy` or `~/clusters/speedy`. (The cluster is named `speedy` in both cases.) We recommend that you keep all your clusters in a common directory, for example, `~/clusters`. The next argument must be `--architecture` to select an architecture, followed by options.
-
-The command creates a directory named `~/clusters/speedy` and generates a configuration file named `config.yml` that follows the layout of the PGD-Always-ON architecture. You can use the `tpaexec configure --architecture PGD-Always-ON --help` command to see the values that are supported for the configuration options in this architecture.
-
-In the example, the options select:
-
-- An AWS deployment (`--platform aws`)
-- EDB Postgres Advanced Server, version 16 and Oracle compatibility (`--edb-postgres-advanced 16` and `--redwood`)
-- Three locations (`--location-names eu-west-1 eu-north-1 eu-central-1`)
-- Three data nodes at each location (`--data-nodes-per-location 3`)
-- Proxy routing policy of global (`--pgd-proxy-routing global`)
-
-### Common configuration options
-
-Other configuration options include the following.
-
-#### Owner
-Every cluster must be directly traceable to a person responsible for the provisioned resources.
-
-By default, a cluster is tagged as being owned by the login name of the user running `tpaexec provision`. If this name doesn't identify a person (for example, `postgres`, `ec2-user`), you must specify `--owner SomeId` to set an identifiable owner.
-
-You can use your initials, "Firstname Lastname", or any text that identifies you uniquely.
-
-#### Platform options
-The default value for `--platform` is `aws`, which is the platform supported by the PGD-Always-ON architecture.
-
-Specify `--region` to use any existing AWS region that you have access to and that allows you to create the required number of instances. The default region is eu-west-1.
-
-Specify `--instance-type` with any valid instance type for AWS. The default is t3.micro.
-
-#### Subnet selection
-By default, each cluster is assigned a random /28 subnet under 10.33/16. However, depending on the architecture, there can be one or more subnets, and each subnet can be anywhere between a /24 and a /29.
-
-Specify `--subnet` to use a particular subnet, for example, `--subnet 192.0.2.128/27`.
-
-#### Disk space
-Specify `--root-volume-size` to set the size of the root volume in GB, for example, `--root-volume-size 64`. The default is 16GB. Depending on the image used to create instances, there might be a minimum size for the root volume.
-
-For architectures that support separate Postgres and Barman volumes:
-
-- Specify `--postgres-volume-size` to set the size of the Postgres volume in GB. The default is 16GB.
-
-- Specify `--barman-volume-size` to set the size of the Barman volume in GB. The default is 32GB.
-
-#### Distribution
-Specify `--os` or `--distribution` to specify the OS to use on the cluster's instances. The value is case sensitive.
-
-The selected platform determines the distributions that are available and the one that's used by default. For more details, see `tpaexec info platforms/<platformname>`.
-
-In general, you can use `Debian`, `RedHat`, and `Ubuntu` to select TPA images that have Postgres and other software preinstalled (to reduce deployment times). To use stock distribution images instead, append `-minimal` to the value, for example, `--distribution Debian-minimal`.
-
-#### Repositories
-When using TPA to deploy PGD 5 and later, TPA selects repositories from EDB Repos 2.0. All software is sourced from these repositories.
-
-To use [EDB Repos 2.0](https://www.enterprisedb.com/repos/), you must use
-`export EDB_SUBSCRIPTION_TOKEN=xxx` before you run tpaexec. You can get
-your subscription token from [the web
-interface](https://www.enterprisedb.com/repos-downloads).
-
-Optionally, use `--edb-repositories repository …` to specify EDB repositories in addition to the default repository to install on each instance.
-
-
-#### Software versions
-By default, TPA uses the latest major version of Postgres. Specify `--postgres-version` to install an earlier supported major version, or specify both version and distribution using one of the flags described under [Configure](#configurationoptions).
-
-By default, TPA installs the latest version of every package, which is usually the desired behavior. However, in some testing scenarios, you might need to select specific package versions. For example:
-
-```
---postgres-package-version 10.4-2.pgdg90+1
---repmgr-package-version 4.0.5-1.pgdg90+1
---barman-package-version 2.4-1.pgdg90+1
---pglogical-package-version '2.2.0*'
---bdr-package-version '3.0.2*'
---pgbouncer-package-version '1.8*'
-```
-
-Specify `--extra-packages` or `--extra-postgres-packages` to install more packages. The former lists packages to install along with system packages. The latter lists packages to install later along with Postgres packages. (If you mention packages that depend on Postgres in the former list, the installation fails because Postgres isn't yet installed.) The arguments are passed on to the package manager for installation without any modifications.
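-
-For instance, here's a sketch of how these flags might be added at configure time. The package names are hypothetical placeholders, and the other required options from the earlier example are omitted for brevity:
-
-```
-tpaexec configure ~/clusters/speedy \
-    --architecture PGD-Always-ON \
-    --extra-packages sysstat tcpdump \
-    --extra-postgres-packages postgis
-```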
-
-The `--extra-optional-packages` option behaves like `--extra-packages`, but it's not an error if the named packages can't be installed.
-
-#### Hostnames
-By default, `tpaexec configure` randomly selects as many hostnames as it needs from a preapproved list of several dozen names, which is enough for most clusters.
-
-Specify `--hostnames-from` to select names from a different list, for example, if you need more names than are available in the supplied list. The file must contain one hostname per line.
-
-Specify `--hostnames-pattern` to restrict hostnames to those matching the egrep-syntax pattern. If you choose to do this, you must ensure that the pattern matches only valid hostnames ([a-zA-Z0-9-]) and finds enough of them.
-
-#### Locations
-By default, `tpaexec configure` uses the names first, second, and so on for any locations used by the selected architecture.
-
-Specify `--location-names` to provide more meaningful names for each location.
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/02-deploying.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/02-deploying.mdx
deleted file mode 100644
index 51a0136d452..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/02-deploying.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Provisioning, deploying, and testing
-navTitle: Deploying
-redirects:
-  - /pgd/latest/install-admin/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-tpa/installing/02-deploying/ #generated for pgd deploy-config-planning reorg
----
-
-## Provision
-
-!!! note
-TPA now runs the `provision` command as part of the `deploy` command. The `provision` command is still available for use, but you don't need to run it separately.
-!!!
-
-The `tpaexec provision` command creates instances and other resources required by the cluster. The details of the process depend on the architecture (for example, PGD-Always-ON) and platform (for example, AWS) that you selected while configuring the cluster.
-
-For example, given AWS access with the necessary privileges, TPA provisions EC2 instances, VPCs, subnets, routing tables, internet gateways, security groups, EBS volumes, elastic IPs, and so on.
-
-You can also provision existing servers by selecting the `bare` platform and providing connection details. Whether these are bare metal servers or those provisioned separately on a cloud platform, you can use them as if they were created by TPA.
-
-You aren't restricted to a single platform. You can spread your cluster out across some AWS instances in multiple regions and some on-premises servers or servers in other data centers, as needed.
-
-At the end of the provisioning stage, you will have the required number of instances with the basic operating system installed, which TPA can access using SSH (with sudo to root).
-
-## Deploy
-The `tpaexec deploy` command installs and configures Postgres and other software on the provisioned servers. This includes setting up replication, backups, and so on. TPA can create the servers, but it doesn't matter who created them so long as SSH and sudo access are available.
-
-
-At the end of the deployment stage, EDB Postgres Distributed is up and running.
-
-## Test
-The `tpaexec test` command executes various architecture and platform-specific tests against the deployed cluster to ensure that it's working as expected.
-
-At the end of the testing stage, you have a fully functioning cluster.
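-
-For example, assuming the `~/clusters/speedy` configuration created earlier, a minimal end-to-end run is a sketch like this:
-
-```
-tpaexec deploy ~/clusters/speedy   # runs provision first, then installs and configures the software
-tpaexec test ~/clusters/speedy     # runs the architecture- and platform-specific tests
-```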
-
-For more information, see [Trusted Postgres Architect](/tpa/latest/).
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/index.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/index.mdx
deleted file mode 100644
index 31738207df9..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/deploying/index.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: Deploying PGD using TPA
-navTitle: Deploying with TPA
-description: >
-  Detailed reference and examples for using TPA to configure and deploy PGD
-redirects:
-  - /pgd/latest/tpa/
-  - /pgd/latest/deployments/tpaexec/using_tpaexec/
-  - /pgd/latest/tpa/using_tpa/
-  - ../deployments/tpaexec
-  - ../deployments/tpaexec/installing_tpaexec
-  - ../deployments/using_tpa/
-  - ../tpa
-  - /pgd/latest/install-admin/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-tpa/installing/ #generated for pgd deploy-config-planning reorg
----
-
-The standard way of automatically deploying EDB Postgres Distributed in a self-managed setting is to use EDB's deployment tool: [Trusted Postgres Architect](/tpa/latest/) (TPA).
-This applies to physical and virtual machines, both self-hosted and in the cloud (EC2).
-
-
-
-!!! Note Get started with TPA and PGD quickly
-
- If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker.
-
- If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account.
-
- If you want to run on your own Linux systems or VMs, you can also use TPA to [deploy EDB Postgres Distributed directly to your own Linux hosts](/pgd/latest/quickstart/quick_start_linux).
-
-## Prerequisite: Install TPA
-
-Before you can use TPA to deploy PGD, you must install TPA. Follow the [installation instructions in the Trusted Postgres Architect documentation](/tpa/latest/INSTALL/) before continuing.
-
-
-At the highest level, using TPA to deploy PGD involves the following steps:
-
-1. [Use TPA to create a configuration](01-configuring) for your PGD cluster.
-
-2. [Provision, deploy, and test](02-deploying) your PGD cluster.
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/index.mdx b/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/index.mdx
deleted file mode 100644
index e69081153d9..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-tpa/index.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Deployment and management with TPA
-navTitle: Using TPA
-redirects:
-  - /pgd/latest/install-admin/admin-tpa/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-tpa/ #generated for pgd deploy-config-planning reorg
----
-
-TPA (Trusted Postgres Architect) is a standard automated way of installing PGD and Postgres on physical and virtual machines,
-both self-hosted and in the cloud (with AWS EC2).
-
-!!! Note Get started with TPA and PGD quickly
-
- If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](../../quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker.
-
- If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](../../quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account.
-
- If you want to run on your own Linux systems or VMs, you can also use TPA to [deploy EDB Postgres Distributed directly to your own Linux hosts](../../quickstart/quick_start_linux).
-
-This section covers how to use TPA to deploy and administer EDB Postgres Distributed.
-
-* [Deploying with TPA](deploying) works through the steps needed to:
-  * Install TPA.
-  * Use TPA to create a configuration.
-  * Deploy the configuration with TPA.
-
-The deploying section provides an example cluster that's used in later examples.
-
-You can also [perform a rolling major version upgrade](../../upgrades/upgrading_major_rolling) with PGD administered by TPA.
diff --git a/product_docs/docs/pgd/5.8/deploy-config/index.mdx b/product_docs/docs/pgd/5.8/deploy-config/index.mdx
deleted file mode 100644
index 3486ddbe42a..00000000000
--- a/product_docs/docs/pgd/5.8/deploy-config/index.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Deploying and configuring EDB Postgres Distributed
-navTitle: Deploying and configuring
-description: How to deploy EDB Postgres Distributed with a range of deployment options.
-navigation:
-- deploy-manual
-- deploy-tpa
-- deploy-kubernetes
-- deploy-biganimal
----
-
-This section covers how to deploy EDB Postgres Distributed and how to configure it.
-
-There are four main ways to deploy PGD:
-
-* [Manual deployment and administration](deploy-manual) describes how to manually deploy and configure EDB Postgres Distributed on a set of servers.
-
-
-* [Trusted Postgres Architect (TPA)](deploy-tpa) describes how to use TPA to deploy and configure EDB Postgres Distributed to a Docker environment, Linux hosts, or AWS.
-
-
-* [EDB Postgres Distributed for Kubernetes](deploy-kubernetes) describes how to deploy and configure EDB Postgres Distributed to a Kubernetes environment.
-
-
-* [EDB Postgres AI Cloud Service](deploy-cloudservice) describes how to deploy and configure EDB Postgres Distributed on the EDB Postgres AI Cloud Service.
diff --git a/product_docs/docs/pgd/5.8/index.mdx b/product_docs/docs/pgd/5.8/index.mdx
deleted file mode 100644
index 8507b232839..00000000000
--- a/product_docs/docs/pgd/5.8/index.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: "EDB Postgres Distributed (PGD)"
-navTitle: EDB Postgres Distributed
-description: EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication.
-indexCards: simple -redirects: - - /pgd/5/compatibility_matrix - - /pgd/latest/bdr - - /edb-postgres-ai/migration-etl/pgd/ -navigation: - - rel_notes - - known_issues - - compatibility - - "#Concepts" - - terminology - - overview - - "#Get Started" - - quickstart - - planning - - deploy-config - - "#Using" - - appusage - - ddl - - sequences - - "#Administration" - - node_management - - postgres-configuration - - routing - - backup - - security - - monitoring - - testingandtuning - - upgrades - - data_migration - - "#Tools" - - cli - - "#PGD Features" - - nodes - - commit-scopes - - conflict-management - - parallelapply - - repsets - - striggers - - scaling - - twophase - - decoding_worker - - transaction-streaming - - tssnapshots - - cdc-failover - - "#Reference" - - reference -navRootedTo: /edb-postgres-ai/databases -categories: - - /edb-postgres-ai/platforms-and-tools/high-availability/ -pdf: true -directoryDefaults: - version: "5.8.0" ---- - - -EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and [throughput up to 5X faster than native logical replication](https://www.enterprisedb.com/blog/performance-improvements-edb-postgres-distributed). It enables distributed Postgres clusters with high availability up to five 9s. - - -
-
-Read about why PostgreSQL is better when it's distributed with EDB Postgres Distributed in *Distributed PostgreSQL: The Key to Always On Database Availability*.
-
-
-By default, EDB Postgres Distributed uses asynchronous replication, applying changes on
-the peer nodes only after the local commit. You can configure additional levels of synchronicity between different nodes, groups of nodes, or all nodes by configuring
-[Synchronous Commit](/pgd/latest/commit-scopes/synchronous_commit/), [Group Commit](commit-scopes/group-commit) (optionally with [Eager Conflict Resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution)), or [CAMO](commit-scopes/camo).
-
-## Compatibility
-
-EDB Postgres Distributed 5 is compatible with PostgreSQL, EDB Postgres Extended, and EDB Postgres Advanced versions 13-17. See [Compatibility](compatibility) for more details, including information about compatibility with different operating systems and architectures.
-For feature compatibility with compatible servers, see [Choosing a Postgres distribution](planning/choosing_server).
-
----
diff --git a/product_docs/docs/pgd/5.8/known_issues.mdx b/product_docs/docs/pgd/5.8/known_issues.mdx
deleted file mode 100644
index 92f945eac79..00000000000
--- a/product_docs/docs/pgd/5.8/known_issues.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: 'Known issues'
-description: 'Known issues in EDB Postgres Distributed 5'
----
-
-These are currently known issues in EDB Postgres Distributed 5.
-These known issues are tracked in PGD's
-ticketing system and are expected to be resolved in a future
-release.
-
-- If the resolver for the `update_origin_change` conflict
-  is set to `skip`, `synchronous_commit=remote_apply` is used, and
-  concurrent updates of the same row are repeatedly applied on two
-  different nodes, then one of the update statements might hang due
-  to a deadlock with the PGD writer. As mentioned in
-  [Conflicts](conflict-management/conflicts/), `skip` isn't the default
-  resolver for the `update_origin_change` conflict, and this
-  combination isn't intended to be used in production. It discards
-  one of the two conflicting updates based on the order of arrival
-  on that node, which is likely to cause a divergent cluster.
-  In the rare situation that you do choose to use the `skip`
-  conflict resolver, note the issue with the use of the
-  `remote_apply` mode.
-
-- The Decoding Worker feature doesn't work with CAMO/Eager/Group Commit.
-Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` disabled.
-
-- Lag Control doesn't adjust commit delay in any way on a fully isolated node, that is, when all other nodes are unreachable or not operational.
-As soon as at least one node connects, Lag Control picks up its work and adjusts the PGD commit delay again.
-
-- For time-based Lag Control, PGD currently uses the lag time, measured by commit timestamps, rather than the estimated catch up time that's based on historic apply rates.
-
-- Changing the CAMO partners in a CAMO pair isn't currently possible.
-It's possible only to add or remove a pair.
-Adding or removing a pair doesn't require a restart of Postgres or even a reload of the configuration.
-
-- Group Commit can't be combined with [CAMO](commit-scopes/camo/).
-
-- Transactions using Eager Replication can't yet execute DDL. The TRUNCATE command is allowed.
-
-- Parallel Apply isn't currently supported in combination with Group Commit.
Make sure to disable it when using Group Commit by either (a) setting `num_writers` to 1 for the node group using [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces/#bdralter_node_group_option) or (b) using the GUC [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). See [Configuration of generic replication](/pgd/latest/reference/pgd-settings#generic-replication).
-
-- There currently is no protection against altering or removing a commit scope.
-Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction.
-Make sure that any transactions using a specific commit scope have finished before altering or removing it.
-To modify a commit scope safely, use [`bdr.alter_commit_scope`](/pgd/latest/reference/functions#bdralter_commit_scope).
-
-- The [PGD CLI](cli) can return stale data on the state of the cluster if it's still connecting to nodes that were previously parted from the cluster.
-Edit the [`pgd-cli-config.yml`](cli/configuring_cli/#using-a-configuration-file) file, or change your [`--dsn`](cli/configuring_cli/#using-database-connection-strings-in-the-command-line) settings to ensure only active nodes in the cluster are listed for connection.
-
-- DDL run in serializable transactions can face the error: `ERROR: could not serialize access due to read/write dependencies among transactions`. A workaround is to run the DDL outside serializable transactions.
-
-- The EDB Postgres Advanced Server 17 data type [`BFILE`](/epas/latest/reference/sql_reference/02_data_types/03a_bfiles/) is not currently supported. This is because a `BFILE` is a reference, stored in the database, to a file kept outside the database, and that file isn't replicated.
-
-- EDB Postgres Advanced Server's native autopartitioning is not supported in PGD. See [Restrictions on EDB Postgres Advanced Server-native automatic partitioning](scaling#restrictions-on-edb-postgres-advanced-server-native-automatic-partitioning) for more information.
-
-Details of other design or implementation [limitations](planning/limitations) are also available.
diff --git a/product_docs/docs/pgd/5.8/monitoring/index.mdx b/product_docs/docs/pgd/5.8/monitoring/index.mdx
deleted file mode 100644
index b9240d7e0ff..00000000000
--- a/product_docs/docs/pgd/5.8/monitoring/index.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Monitoring
-originalFilePath: monitoring.md
-description: Monitoring EDB Postgres Distributed through Postgres Enterprise Manager, SQL, and OpenTelemetry
----
-
-Monitoring replication setups is important to ensure that your system:
-
-- Performs optimally
-- Doesn't run out of disk space
-- Doesn't encounter other faults that might halt operations
-
-It's important to have automated monitoring in place to ensure that the administrator is alerted and can
-take proactive action when issues occur. For example, the administrator can be alerted if
-replication slots start falling badly behind.
-
-EDB provides Postgres Enterprise Manager (PEM), which supports PGD starting with version 8.1. See [Monitoring EDB Postgres Distributed](/pem/latest/monitoring_BDR_nodes/) for more information.
-
-Alternatively, tools or users can make their own calls into information views
-and functions provided by the BDR extension. See [Monitoring through SQL](sql) for
-details.
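-
-For instance, here's a quick sketch of the kind of ad hoc query this enables, using the `bdr.node_summary` view that also appears elsewhere in these docs:
-
-```sql
--- List each node in the cluster and the group it's a member of
-SELECT node_name, node_group_name
-FROM bdr.node_summary;
-```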
-
-EDB Postgres Distributed also integrates with OpenTelemetry, allowing you to
-use an existing reporting setup to follow the state of the EDB Postgres Distributed
-cluster. See [OpenTelemetry integration](otel) for details.
diff --git a/product_docs/docs/pgd/5.8/monitoring/otel.mdx b/product_docs/docs/pgd/5.8/monitoring/otel.mdx
deleted file mode 100644
index 172090dff80..00000000000
--- a/product_docs/docs/pgd/5.8/monitoring/otel.mdx
+++ /dev/null
@@ -1,106 +0,0 @@
----
-title: OpenTelemetry integration
----
-
-You can configure EDB Postgres Distributed to report monitoring information
-as well as traces to the [OpenTelemetry](https://opentelemetry.io/) collector.
-
-The EDB Postgres Distributed OTEL collector fills several resource attributes.
-These are attached to all metrics and traces:
-
- - The `service.name` is configurable with the `bdr.otel_service_name` configuration setting.
- - The `service.namespace` is always set to `edb_postgres_distributed`.
- - The `service.instance.id` is always set to the system identifier of the Postgres instance.
- - The `service.version` is set to the current version of the BDR extension loaded in the PostgreSQL instance.
-
-## OTEL and OTLP compatibility
-
-For OTEL connections, the integration supports OTLP/HTTP version 1.0.0 only,
-over HTTP or HTTPS. It doesn't support OTLP/gRPC.
-
-## Metrics collection
-
-Setting the configuration option
-`bdr.metrics_otel_http_url` to a non-empty URL enables metric collection.
-
-Different kinds of metrics are collected, as shown in the tables that follow.
-
-### Generic metrics
-
-| Metric name | Type | Labels | Description
-| ----------- | ---- | ------ | -----------
-| pg_backends_by_state | gauge | conn_state - idle, active, idle in transaction, fastpath function call, idle in transaction (aborted), disabled, undefined | Number of backends in a given state
-| pg_oldest_xact_start | gauge | | Oldest transaction start time
-| pg_oldest_activity_start | gauge | | Oldest query start time
-| pg_waiting_backends | gauge | wait_type - LWLock, Lock, BufferPin, Activity, Client, Extension, IPC, Timeout, IO, ??? (for unknown) | Number of currently waiting backends by wait type
-| pg_start_time | gauge | | Timestamp at which the server started
-| pg_reload_time | gauge | | Timestamp at which the server last reloaded configuration
-
-
-### Replication metrics
-
-| Metric name | Type | Labels | Description
-| ----------- | ---- | ------ | -----------
-| bdr_slot_sent_lag | gauge | slot_name - name of a slot | Current sent lag in bytes for each replication slot
-| bdr_slot_write_lag | gauge | slot_name - name of a slot | Current write lag in bytes for each replication slot
-| bdr_slot_flush_lag | gauge | slot_name - name of a slot | Current flush lag in bytes for each replication slot
-| bdr_slot_apply_lag | gauge | slot_name - name of a slot | Current apply lag in bytes for each replication slot
-| bdr_subscription_receive_lsn | gauge | sub_name - name of subscription | Current received LSN for each subscription
-| bdr_subscription_flush_lsn | gauge | sub_name - name of subscription | Current flushed LSN for each subscription
-| bdr_subscription_apply_lsn | gauge | sub_name - name of subscription | Current applied LSN for each subscription
-| bdr_subscription_receiver | gauge | sub_name - name of subscription | Whether the subscription receiver is currently running (1) or not (0)
-
-### Consensus metrics
-
-See also [Monitoring Raft consensus](sql/#monitoring-raft-consensus).
-
-| Metric name | Type | Labels | Description
-| ----------- | ---- | ------ | -----------
-| bdr_raft_state | gauge | state_str - RAFT_FOLLOWER, RAFT_CANDIDATE, RAFT_LEADER, RAFT_STOPPED | Raft state of the consensus on this node
-| bdr_raft_protocol_version | gauge | | Consensus protocol version used by this node
-| bdr_raft_leader_node | gauge | | ID of the node that this node considers to be the current leader
-| bdr_raft_nodes | gauge | | Total number of nodes that participate in consensus (includes learner/non-voting nodes)
-| bdr_raft_voting_nodes | gauge | | Number of actual voting nodes in consensus
-| bdr_raft_term | gauge | | Current Raft term this node is on
-| bdr_raft_commit_index | gauge | | Raft commit index committed by this node
-| bdr_raft_apply_index | gauge | | Raft commit index applied by this node
-
-## Tracing
-
-Tracing collection to OpenTelemetry requires configuring `bdr.trace_otel_http_url`
-and enabling tracing using `bdr.trace_enable`.
-
-Tracing is currently limited to some subsystems, primarily the
-cluster management functionality. The following spans can be seen in traces.
-
-| Span name | Description |
-| --------- | ----------- |
-| create_node_group | Group creation
-| alter_node_group_config | Change of group config options
-| alter_node_config | Change of node config option
-| join_node_group | Node joining a group
-| join_send_remote_request | Join source sending the join request on behalf of the joining node
-| add_camo_pair | Add CAMO pair
-| alter_camo_pair | Change CAMO pair
-| remove_camo_pair | Delete CAMO pair
-| alter_commit_scope | Change commit scope definition (either create new or update existing)
-| alter_proxy_config | Change config for PGD-Proxy instance (either create new or update existing)
-| walmsg_global_lock_send | Send global locking WAL message
-| walmsg_global_lock_recv | Received global locking WAL message
-| ddl_epoch_apply | Global locking epoch apply (ensure cluster is synchronized enough for new epoch to start)
-| walmsg_catchup | Catchup during node removal WAL message
-| raft_send_appendentries | Internal Raft bookkeeping message
-| raft_recv_appendentries | Internal Raft bookkeeping message
-| raft_request | Raft request execution
-| raft_query | Raft query execution
-| msgb_send | Consensus messaging layer message
-| msgb_recv_receive | Consensus messaging layer message
-| msgb_recv_deliver | Consensus messaging layer message delivery
-| preprocess_ddl | DDL command preprocessing
-
-## TLS support
-
-The metrics and tracing endpoints can be HTTP or HTTPS. You can
-configure paths to the CA bundle, client key, and client certificate using
-`bdr.otel_https_ca_path`, `bdr.otel_https_key_path`, and `bdr.otel_https_cert_path`
-configuration options.
diff --git a/product_docs/docs/pgd/5.8/node_management/heterogeneous_clusters.mdx b/product_docs/docs/pgd/5.8/node_management/heterogeneous_clusters.mdx
deleted file mode 100644
index 62a3feed9c2..00000000000
--- a/product_docs/docs/pgd/5.8/node_management/heterogeneous_clusters.mdx
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Joining a heterogeneous cluster
----
-
-
-A PGD 4.0 node can join an EDB Postgres Distributed cluster running 3.7.x at a specific
-minimum maintenance release (such as 3.7.6) or a mix of 3.7 and 4.0 nodes.
-This procedure is useful when you want to upgrade not just the PGD
-major version but also the underlying PostgreSQL major
-version. You can achieve this by joining a 3.7 node running on
-PostgreSQL 12 or 13 to an EDB Postgres Distributed cluster running 3.6.x on
-PostgreSQL 11. The new node can also
-run on the same PostgreSQL major release as all of the nodes in the
-existing cluster.
-
-PGD ensures that the replication works correctly in all directions
-even when some nodes are running 3.6 on one PostgreSQL major release and
-other nodes are running 3.7 on another PostgreSQL major release. However,
-we recommend that you quickly bring the cluster into a
-homogeneous state by parting the older nodes once enough new nodes
-join the cluster. Don't run any DDLs that might
-not be available on the older versions and vice versa.
-
-A node joining with a different major PostgreSQL release can't use
-physical backup taken with [`bdr_init_physical`](/pgd/latest/reference/nodes#bdr_init_physical), and the node must join
-using the logical join method. Using this method is necessary because the major
-PostgreSQL releases aren't on-disk compatible with each other.
-
-When a 3.7 node joins the cluster using a 3.6 node as a
-source, certain configurations, such as conflict resolution,
-aren't copied from the source node.
The node must be configured
-after it joins the cluster.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5.8/node_management/maintainance_with_proxies.mdx b/product_docs/docs/pgd/5.8/node_management/maintainance_with_proxies.mdx
deleted file mode 100644
index d8b7143f672..00000000000
--- a/product_docs/docs/pgd/5.8/node_management/maintainance_with_proxies.mdx
+++ /dev/null
@@ -1,168 +0,0 @@
----
-title: Maintenance commands through proxies
----
-
-## Maintenance and performance
-
-As a general rule, you should never perform maintenance operations on a cluster's write leader.
-Maintenance operations such as `VACUUM` can be quite disruptive to the smooth running of a busy server and often detrimental to workload performance.
-Therefore, it's best to run maintenance commands on any node in a group that isn't the write leader.
-Generally, this requires you to connect directly and issue the maintenance commands on the non-write-leader nodes.
-But in some situations, this isn't possible.
-
-## Maintenance and proxies
-
-Proxies, by design, always connect to and send commands to the current write leader.
-This usually means that you must not connect by way of a proxy to perform maintenance.
-PGD cluster nodes can present a direct connection for psql and PGD CLI clients that you can use to issue maintenance commands to the server on those nodes.
-But there are environments in which the PGD cluster is deployed where a proxy is the only way to access the cluster.
-
-For example, in EDB Cloud Service, PGD clusters are locked down such that the only access to the database is through an instance of PGD Proxy.
-This configuration reduces the footprint of the cluster and makes it more secure. However, it requires that you use a different way of sending maintenance requests to the cluster's nodes.
-
-The technique outlined here is generally useful for dispatching commands to specific nodes without being directly connected to those nodes' servers.
-
-## Maintenance commands
-
-The term *maintenance commands* refers to:
-
-* `VACUUM`
-* Non-replicated DDL commands (which you might want to manually replicate)
-
-
-## A note on node names
-
-The servers in the cluster are referred to by their PGD cluster node names. To get a list of node names in your cluster, use:
-
-```SQL
-select node_name from bdr.node;
-```
-
-!!! Tip
-For more details, see the [`bdr.node`](/pgd/latest/reference/catalogs-visible#bdrnode) table.
-!!!
-
-This command lists just the node names. If you need to know the group they are a member of, use:
-
-```
-select node_name, node_group_name from bdr.node_summary;
-```
-
-!!! Tip
-For more details, see the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary) table.
-!!!
-
-## Finding the write leader
-
-If you're connected through the proxy, then you're connected to the write leader.
-Run `select node_name from bdr.local_node_summary` to see the name of the node:
-
-```
-select node_name from bdr.local_node_summary;
-__OUTPUT__
-    node_name
-------------------
-node-two
-(1 row)
-```
-
-This is the node you do **not** want to run your maintenance tasks on.
-
-To see which nodes you can run them on, query the `bdr.node_group_routing_summary` view:
-
-```
-select * from bdr.node_group_routing_summary;
-__OUTPUT__
- node_group_name |    write_lead    | previous_write_lead |       read_nodes
------------------+------------------+---------------------+-------------------------
- dc1             | node-two         | node-one            | {node-one,node-three}
-```
-
-Here the `write_lead` is the node identified earlier (node-two), and you can also see the two `read_nodes` (node-one and node-three).
-It's on these nodes that you can safely perform maintenance.
-
-
-!!! Tip
-You can perform that operation with a single query:
-```SQL
-select read_nodes from bdr.node_group_routing_summary where write_lead = (select node_name from bdr.local_node_summary);
-```
-!!!
-
-## Using `bdr.run_on_nodes()`
-PGD has the ability to run specific commands on specific nodes using the `bdr.run_on_nodes()` function. This function takes two parameters: an array of node names and the command you want to run on those nodes. For example:
-
-```SQL
-SELECT bdr.run_on_nodes(ARRAY['node-one','node-three'],'vacuum full foo');
-__OUTPUT__
-
-                                                                                                  run_on_nodes
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- [{"dsn": "host=host-one port=5444 dbname=bdrdb", "node_id": "807899305", "response": {"command_status": "VACUUM"}, "node_name": "node-one", "query_send_time": "2024-01-16 16:24:35.418323+00"}, {"dsn": "host=host-three port=5432 dbname=bdrdb", "node_id": "199017004", "response": {"command_status": "VACUUM"}, "node_name": "node-three", "query_send_time": "2024-01-16 16:24:35.4542+00"}]
-```
-
-This command runs the `vacuum full foo` command on the node-one and node-three nodes.
-The node names are passed to the function in an array.
-
-The `bdr.run_on_nodes` function reports its results as JSONB.
-The results include the name of the node and the response (or error message) resulting from running the command.
-Other fields included in the response might not be relevant.
-
-The results are also returned as a single string that's hard to read. Applying some formatting to this string makes it more readable.
-
-## Formatting `bdr.run_on_nodes()` output
-
-Using Postgres's JSON expressions, you can reduce the output to just the columns you're interested in. The following command is functionally equivalent to the previous example but lists only the node and response as its results:
-
-```SQL
-select q->>'node_name' as node, q->>'response' as response FROM jsonb_array_elements(bdr.run_on_nodes(ARRAY['node-one','node-three'], 'VACUUM FULL foo')) q;
-__OUTPUT__
-       node       |           response
-------------------+------------------------------
- node-one         | {"command_status": "VACUUM"}
- node-three       | {"command_status": "VACUUM"}
-```
-
-If an error occurs, the `command_status` field is set to `ERROR`, and an additional `error_message` value is included in the response.
For example:
-
-```SQL
-select q->>'node_name' as node, q->>'response' as response FROM jsonb_array_elements(bdr.run_on_nodes(ARRAY['node-one','node-three'], 'VACUUM FULL fool')) q;
-__OUTPUT__
-       node       |                                           response
-------------------+----------------------------------------------------------------------------------------------
- node-one         | {"error_message": "ERROR: relation \"fool\" does not exist\n", "command_status": "ERROR"}
- node-three       | {"error_message": "ERROR: relation \"fool\" does not exist\n", "command_status": "ERROR"}
-(2 rows)
-```
-
-## Defining a function for maintenance
-
-If you find yourself regularly issuing maintenance commands to one node at a time, you can define a function to simplify things:
-
-```SQL
-create or replace function runmaint(nodename varchar, command varchar) returns TABLE(node text,response jsonb) as $$
-begin
-return query
-select (q->>'node_name')::text, (q->'response') from jsonb_array_elements(bdr.run_on_nodes(ARRAY [nodename], command)) as q;
-end;
-$$ language 'plpgsql';
-```
-
-This function takes a node name and a command and runs the command on that node, returning the results as shown in this interaction:
-
-```SQL
-select runmaint('node-one','VACUUM FULL foo');
-__OUTPUT__
-                       runmaint
--------------------------------------------------------
- (node-one,"{""command_status"": ""VACUUM""}")
-```
-
-You can break up the response by using `select * from`:
-
-```SQL
-select * from runmaint('node-one','VACUUM FULL foo');
-__OUTPUT__
-       node       |           response
-------------------+------------------------------
- node-one         | {"command_status": "VACUUM"}
-(1 row)
-```
diff --git a/product_docs/docs/pgd/5.8/node_management/replication_slots.mdx b/product_docs/docs/pgd/5.8/node_management/replication_slots.mdx
deleted file mode 100644
index 8fbe149f7ff..00000000000
--- a/product_docs/docs/pgd/5.8/node_management/replication_slots.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: Replication slots created by PGD
----
-
-On a PGD master node, the following replication slots are
-created by PGD:
-
-- One *group slot*, named `bdr_<database name>_<group name>`
-- N-1 *node slots*, named `bdr_<database name>_<group name>_<node name>`, where N is the total number of PGD nodes in the cluster,
-  including direct logical standbys, if any
-
-!!! Warning
-    Don't drop those slots. PGD creates and manages them and drops them when or if necessary.
-
-On the other hand, you can create or drop replication slots required by software like Barman
-or logical replication using the appropriate commands
-for the software without any effect on PGD.
-Don't start slot names used by other software with the
-prefix `bdr_`.
-
-For example, in a cluster composed of the three nodes `alpha`, `beta`, and
-`gamma`, where PGD is used to replicate the `mydb` database and the
-PGD group is called `mygroup`:
-
-- Node `alpha` has three slots:
-  - One group slot named `bdr_mydb_mygroup`
-  - Two node slots named `bdr_mydb_mygroup_beta` and
-    `bdr_mydb_mygroup_gamma`
-- Node `beta` has three slots:
-  - One group slot named `bdr_mydb_mygroup`
-  - Two node slots named `bdr_mydb_mygroup_alpha` and
-    `bdr_mydb_mygroup_gamma`
-- Node `gamma` has three slots:
-  - One group slot named `bdr_mydb_mygroup`
-  - Two node slots named `bdr_mydb_mygroup_alpha` and
-    `bdr_mydb_mygroup_beta`
-
-## Group replication slot
-
-The group slot is an internal slot used by PGD primarily to track the
-oldest safe position that any node in the PGD group (including all logical
-standbys) has caught up to, for any outbound replication from this node.
-
-The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name).
-
-The group slot can:
-
-- Join new nodes to the PGD group without having all existing nodes
-  up and running (although the majority of nodes should be up). This process doesn't
-  incur data loss in case the node that was down during join starts
-  replicating again.
-- Part nodes from the cluster consistently, even if some nodes haven't
-  caught up fully with the parted node.
-- Hold back the freeze point to avoid missing some conflicts.
-- Keep the historical snapshot for timestamp-based snapshots.
-
-The group slot is usually inactive and is fast forwarded only periodically
-in response to Raft progress messages from other nodes.
-
-!!! Warning
-    Don't drop the group slot. Although usually inactive, it's
-    still vital to the proper operation of the EDB Postgres Distributed cluster. If you drop it,
-    then some or all of the features can stop working or have
-    incorrect outcomes.
-
-## Hashing long identifiers
-
-The name of a replication slot, like any other PostgreSQL
-identifier, can't be longer than 63 bytes. PGD handles this by
-shortening the database name, the PGD group name, and the name of the
-node in case the resulting slot name is too long for that limit.
-Shortening an identifier is carried out by replacing the final section
-of the string with a hash of the string itself.
-
-For example, consider a cluster that replicates a database
-named `db20xxxxxxxxxxxxxxxx` (20 bytes long) using a PGD group named
-`group20xxxxxxxxxxxxx` (20 bytes long). The logical replication slot
-associated to node `a30xxxxxxxxxxxxxxxxxxxxxxxxxxx` (30 bytes long)
-is called as follows, since `3597186`, `be9cbd0`, and `7f304a2` are, respectively, the hashes
-of `db20xxxxxxxxxxxxxxxx`, `group20xxxxxxxxxxxxx`, and
-`a30xxxxxxxxxxxxxxxxxxxxxxxxxxx`:
-
-```
-bdr_db20xxxx3597186_group20xbe9cbd0_a30xxxxxxxxxxxxx7f304a2
-```
diff --git a/product_docs/docs/pgd/5.8/planning/choosing_server.mdx b/product_docs/docs/pgd/5.8/planning/choosing_server.mdx
deleted file mode 100644
index 77fd46b098f..00000000000
--- a/product_docs/docs/pgd/5.8/planning/choosing_server.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: "Choosing a Postgres distribution"
-redirects:
-  - /pgd/latest/choosing_server/
----
-
-EDB Postgres Distributed can be deployed with three different Postgres distributions: PostgreSQL, EDB Postgres Extended Server, or EDB Postgres Advanced Server. The availability of particular EDB Postgres Distributed features depends on the Postgres distribution being used. Therefore, it's essential to adopt the Postgres distribution best suited to your business needs. For example, if having the Commit At Most Once (CAMO) feature is mission critical to your use case, don't adopt open source PostgreSQL, which doesn't have the core capabilities required to handle CAMO.
-
-The following table lists features of EDB Postgres Distributed that are dependent on the Postgres distribution and version.
- -| Feature | PostgreSQL | EDB Postgres Extended | EDB Postgres Advanced | -| ----------------------------------------------------------------------------------------------------------------------- | ---------- | --------------------- | --------------------- | -| [Rolling application and database upgrades](/pgd/latest/upgrades/) | Y | Y | Y | -| [Row-level last-update wins conflict resolution](/pgd/latest/conflict-management/conflicts/) | Y | Y | Y | -| [DDL replication](/pgd/latest/ddl/) | Y | Y | Y | -| [Granular DDL Locking](/pgd/latest/ddl/ddl-locking/) | Y | Y | Y | -| [Streaming of large transactions](/pgd/latest/transaction-streaming/) | v14+ | v13+ | v14+ | -| [Distributed sequences](/pgd/latest/sequences/#pgd-global-sequences) | Y | Y | Y | -| [Subscriber-only nodes](/pgd/latest/nodes/subscriber_only/) | Y | Y | Y | -| [Monitoring](/pgd/latest/monitoring/) | Y | Y | Y | -| [OpenTelemetry support](/pgd/latest/monitoring/otel/) | Y | Y | Y | -| [Parallel apply](/pgd/latest/parallelapply) | Y | Y | Y | -| [Conflict-free replicated data types (CRDTs)](/pgd/latest/conflict-management/crdt/) | Y | Y | Y | -| [Column-level conflict resolution](/pgd/latest/conflict-management/column-level-conflicts/) | Y | Y | Y | -| [Transform triggers](/pgd/latest/striggers/#transform-triggers) | Y | Y | Y | -| [Conflict triggers](/pgd/latest/striggers/#conflict-triggers) | Y | Y | Y | -| [Asynchronous replication](/pgd/latest/commit-scopes/) | Y | Y | Y | -| [Legacy synchronous replication](/pgd/latest/commit-scopes/legacy-sync/) | Y | Y | Y | -| [Group Commit](/pgd/latest/commit-scopes/group-commit/) | N | Y | 14+ | -| [Commit At Most Once (CAMO)](/pgd/latest/commit-scopes/camo/) | N | Y | 14+ | -| [Eager Conflict Resolution](/pgd/latest/commit-scopes/group-commit/#eager-conflict-resolution) | N | Y | 14+ | -| [Lag Control](/pgd/latest/commit-scopes/lag-control/) | N | Y | 14+ | -| [Decoding Worker](/pgd/latest/decoding_worker) | N | 13+ | 14+ | -| [Lag tracker](/pgd/latest/monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | -| [Missing partition conflict](../reference/conflicts/#target_table_note) | N | Y | 14+ | -| [No need for UPDATE Trigger on tables with TOAST](../conflict-management/conflicts/02_types_of_conflict/#toast-support-details) | N | Y | 14+ | -| [Automatically hold back FREEZE](../conflict-management/conflicts/03_conflict_detection/#origin-conflict-detection) | N | Y | 14+ | -| [Transparent Data Encryption](/tde/latest/) | N | 15+ | 15+ | diff --git a/product_docs/docs/pgd/5.8/planning/deployments.mdx b/product_docs/docs/pgd/5.8/planning/deployments.mdx deleted file mode 100644 index d5116ed8e8a..00000000000 --- a/product_docs/docs/pgd/5.8/planning/deployments.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: "Choosing your deployment method" -indexCards: simple -redirects: -- /pgd/latest/deployments ---- - -You can deploy and install EDB Postgres Distributed products using the following methods: - -- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances, or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information. 
-
-- EDB Postgres AI Cloud Service is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or Cloud Service's cloud account where it's operated by EDB's Postgres experts. EDB Postgres AI Cloud Service makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always-on clusters. See [Distributed high availability](/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability/) in the [Cloud Service documentation](/edb-postgres-ai/cloud-service/) for more information.
-
-- [EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/) is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift.
-
-|                                            | TPA | EDB Postgres AI<br/>Cloud Service | Kubernetes |
-| ------------------------------------------ | :----------------------------------: | :------------------------------------------------: | :----------------------------------: |
-| Single region                              | ✓ | ✓ | ✓ |
-| Active-Active support                      | 2+ regions | 2 regions | 2 regions |
-| Write/Read routing                         | Local or global | Local | Local |
-| Automated failover                         | AZ or Region | AZ | AZ |
-| Major version upgrades                     | ✓ | - | - |
-| Subscriber-only nodes<br/>(read replicas)  | ✓ | - | - |
-| Logical standby nodes                      | ✓ | - | - |
-| PgBouncer                                  | ✓ | - | - |
-| Selective data replication                 | ✓ | ✓ | ✓ |
-| Maintenance windows per region             | ✓ | ✓ | ✓ |
-| Target availability                        | 99.999% SLO | 99.99% SLA (single)<br/>99.995% SLA (multi) | 99.999% SLO |
diff --git a/product_docs/docs/pgd/5.8/planning/images/always-on-2x3-aa-updated.png b/product_docs/docs/pgd/5.8/planning/images/always-on-2x3-aa-updated.png
deleted file mode 100644
index 7eeee2faa6d..00000000000
--- a/product_docs/docs/pgd/5.8/planning/images/always-on-2x3-aa-updated.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fcac4fccf322961fe6d3e861158937b8adb166d6c16684fa37308a6bc5c2703f
-size 892126
diff --git a/product_docs/docs/pgd/5.8/planning/images/always_on_1x3_updated.png b/product_docs/docs/pgd/5.8/planning/images/always_on_1x3_updated.png
deleted file mode 100644
index eab3130d07d..00000000000
--- a/product_docs/docs/pgd/5.8/planning/images/always_on_1x3_updated.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2f90da8c9a8b418060c1f6251b358723e7afef1c9a5ca9c3e22847dc90a49079
-size 595795
diff --git a/product_docs/docs/pgd/5.8/planning/images/always_on_2x3_aa_updated.png b/product_docs/docs/pgd/5.8/planning/images/always_on_2x3_aa_updated.png
deleted file mode 100644
index 7eeee2faa6d..00000000000
--- a/product_docs/docs/pgd/5.8/planning/images/always_on_2x3_aa_updated.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fcac4fccf322961fe6d3e861158937b8adb166d6c16684fa37308a6bc5c2703f
-size 892126
diff --git a/product_docs/docs/pgd/5.8/planning/index.mdx b/product_docs/docs/pgd/5.8/planning/index.mdx
deleted file mode 100644
index 3415e3e20b6..00000000000
--- a/product_docs/docs/pgd/5.8/planning/index.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Planning your PGD deployment
-navTitle: Planning
-description: Understand the requirements of your application and the capabilities of PGD to plan your deployment.
-navigation:
- - architectures
- - choosing_server
- - deployments
- - other_considerations
- - limitations
----
-
-Planning your PGD deployment involves understanding the requirements of your application and the capabilities of PGD. This section provides an overview of the key considerations for planning your PGD deployment.
-
-* [Choosing your architecture](architectures): Understand the different architectures that PGD supports and choose the one that best fits your requirements.
-
-
-* [Choosing a Postgres distribution](choosing_server): Choose the Postgres distribution to deploy with PGD.
-
-
-* [Choosing your deployment method](deployments): Pick the deployment method that suits your needs.
-
-
-* [Other considerations](other_considerations): Consider other factors that may affect your deployment.
-
-
-* [Limitations](limitations): Know the limitations of PGD and their effect on your plans.
-
-
diff --git a/product_docs/docs/pgd/5.8/planning/limitations.mdx b/product_docs/docs/pgd/5.8/planning/limitations.mdx
deleted file mode 100644
index ada853b0777..00000000000
--- a/product_docs/docs/pgd/5.8/planning/limitations.mdx
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title: "Limitations"
-redirects:
-- /pgd/latest/limitations
----
-
-Take these EDB Postgres Distributed (PGD) design limitations
-into account when planning your deployment.
-
-## Nodes
-
-- PGD can run hundreds of nodes, assuming adequate hardware and network. However,
-  for mesh-based deployments, we generally don’t recommend running more than 48
-  nodes in one cluster. If you need extra read scalability beyond the 48-node
-  limit, you can add subscriber-only nodes without adding connections to the
-  mesh network.
-
-- The minimum recommended number of nodes in a group is three to provide fault
-  tolerance for PGD's consensus mechanism. With just two nodes, consensus would
-  fail if one of the nodes were unresponsive. Consensus is required for some PGD
-  operations, such as distributed sequence generation. For more information about
-  the consensus mechanism used by EDB Postgres Distributed, see [Architectural
-  details](architectures/#architecture-details).
-
-## Multiple databases on single instances
-
-Support for using PGD for multiple databases on the same Postgres instance is
-**deprecated** beginning with PGD 5 and will no longer be supported with PGD 6. As
-we extend the capabilities of the product, the added complexity introduced
-operationally and functionally is no longer viable in a multi-database design.
-
-As a best practice, we recommend that you configure only one database per PGD instance.
-
-The deployment automation with TPA and the tooling such as the CLI
-and PGD Proxy already codify that recommendation.
-
-While it's still possible to host up to 10 databases in a single instance,
-doing so incurs many immediate risks and current limitations:
-
-- If PGD configuration changes are needed, you must execute administrative commands
-  for each database. Doing so increases the risk for potential
-  inconsistencies and errors.
-
-- You must monitor each database separately, adding overhead.
-
-- TPAexec assumes one database. Customers or the EDB Professional Services team
-  must add custom code in a post-deploy hook to set up replication for more databases.
-
-- PGD Proxy works at the Postgres instance level, not at the database level,
-  meaning the leader node is the same for all databases.
-
-- Each additional database increases the resource requirements on the server.
-  Each one needs its own set of worker processes maintaining replication, for example,
-  logical workers, WAL senders, and WAL receivers. Each one also needs its own
-  set of connections to other instances in the replication cluster. These needs might
-  severely impact the performance of all databases.
-
-- Synchronous replication methods, for example, CAMO and Group Commit, won’t work as
-  expected. Since the Postgres WAL is shared between the databases, a
-  synchronous commit confirmation can come from any database, not necessarily in
-  the right order of commits.
-
-- CLI and OTEL integration (new with v5) assumes one database.
-
-## Durability options (Group Commit/CAMO)
-
-There are various limits on how the PGD durability options work.
-These limitations are a product of the interactions between Group Commit and CAMO, and how they interact with PGD features such as the [WAL decoder](../decoding_worker/) and [transaction streaming](../transaction-streaming/).
-
-Also, there are limitations on interoperability with legacy synchronous replication,
-interoperability with explicit two-phase commit, and unsupported combinations
-within commit scope rules.
-
-See [Durability limitations](/pgd/latest/commit-scopes/limitations/) for a full
-and current listing.
-
-## Mixed PGD versions
-
-While PGD was developed to [enable rolling upgrades of PGD](/pgd/latest/upgrades) by allowing mixed versions of PGD to operate during the upgrade process, we expect users to run mixed versions only during upgrades and to complete their upgrades as quickly as possible.
-We also recommend that you test any rolling upgrade process in a non-production environment before attempting it in production.
-
-When a node is upgraded, it returns to the cluster and communicates with the other nodes in the cluster using the lowest version of the inter-node protocol that's supported by all the other nodes in the cluster.
-This means that the upgraded node can communicate with all other nodes in the cluster, but it can't take advantage of any new features or improvements that were introduced in the newer version of PGD.
-
-That stays the case until all nodes in the cluster have been upgraded to the same newer version.
-The longer you run mixed versions, the longer you're without the benefits of the new version, and the longer you're exposed to any potential interoperability issues that might arise from running mixed versions.
-Mixed-version clusters aren't supported for extended periods of time.
-
-Therefore, once a PGD cluster upgrade has begun, you should complete the whole cluster upgrade as quickly as possible.
-
-We don't support running mixed versions of PGD except during an upgrade, and even then not for extended periods.
-
-For more information on rolling upgrades and mixed versions, see [Rolling upgrade considerations](/pgd/latest/upgrades/manual_overview#rolling-upgrade-considerations).
-
-## Other limitations
-
-This noncomprehensive list includes other limitations that are expected and
-are by design. We don't expect to resolve them in the future.
-Consider these limitations when planning your deployment:
-
-- A `galloc` sequence might skip some chunks if you create the sequence in a
-  rolled back transaction and then create it again with the same name. Skipping chunks can
-  also occur if you create and drop the sequence when DDL replication isn't active
-  and then you create it again when DDL replication is active. The impact of
-  the problem is mild because the sequence guarantees aren't violated. The
-  sequence skips only some initial chunks. Also, as a workaround, you can
-  specify the starting value for the sequence as an argument to the
-  `bdr.alter_sequence_set_kind()` function.
diff --git a/product_docs/docs/pgd/5.8/planning/other_considerations.mdx b/product_docs/docs/pgd/5.8/planning/other_considerations.mdx
deleted file mode 100644
index 7c1025cab20..00000000000
--- a/product_docs/docs/pgd/5.8/planning/other_considerations.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: "Other considerations"
-redirects:
-- /pgd/latest/other_considerations
----
-
-Review these other considerations when planning your deployment.
-
-## Data consistency
-
-Read about [Conflicts](/pgd/latest/conflict-management/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency.
-
-## Deployment
-
-PGD is intended to be deployed in one of a small number of known-good configurations, using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach and deployment architecture approved by Technical Support.
-
-Log messages and documentation are currently available only in English.
-
-## Sizing considerations
-
-For production deployments, EDB recommends a minimum of 4 cores for each Postgres data node. Witness nodes don't participate in the data replication operation and don't have to meet this requirement. One core is enough without subgroup Raft. Two cores are enough when using subgroup Raft. Always size logical standbys exactly like the data nodes to avoid performance degradations in case of a node promotion.
In production deployments, PGD Proxy nodes require a minimum of 1 core, and the number of cores should grow with the number of database cores in approximately a 1:10 ratio. We recommend detailed benchmarking of your specific performance requirements to determine appropriate sizing based on your workload. The EDB Professional Services team is available to assist if needed.
-
-For development purposes, don't assign Postgres data nodes fewer than two cores. The sizing of Barman nodes depends on the database size and the data change rate.
-
-You can deploy Postgres data nodes, Barman nodes, and PGD Proxy nodes on virtual machines or in a bare metal deployment mode. However, don't deploy multiple data nodes on VMs that are on the same physical hardware, as that reduces resiliency. Also don't deploy multiple PGD Proxy nodes on VMs on the same physical hardware, as that, too, reduces resiliency.
-
-Single PGD Proxy nodes can be colocated with single PGD data nodes.
-
-## Clocks and timezones
-
-EDB Postgres Distributed is designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers don't need to be configured with matching timezones, though we do recommend using `log_timezone = UTC` to ensure the human-readable server log is more accessible and comparable.
-
-Synchronize server clocks using NTP or other solutions.
-
-Unlike with some other solutions, clock synchronization isn't critical to performance. Clock skew can affect origin conflict detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides row-version conflict detection, as described in [Conflict detection](/pgd/latest/conflict-management/conflicts/).
diff --git a/product_docs/docs/pgd/5.8/quickstart/connecting_applications.mdx b/product_docs/docs/pgd/5.8/quickstart/connecting_applications.mdx
deleted file mode 100644
index 89a6f59b26d..00000000000
--- a/product_docs/docs/pgd/5.8/quickstart/connecting_applications.mdx
+++ /dev/null
@@ -1,207 +0,0 @@
----
-title: "Connecting to your database"
-navTitle: "Connecting to your database"
-description: >
-  Connect to your quick started PGD cluster with psql and client applications
----
-
-Connecting your application or remotely connecting to your new EDB Postgres Distributed cluster involves:
-
-* Getting credentials and optionally creating a `.pgpass` file
-* Establishing the IP address of any PGD Proxy hosts you want to connect to
-* Ensuring that you can connect to that IP address
-* Getting an appropriate Postgres client
-* Connecting the client to the cluster
-
-## Getting credentials
-
-The default user, enterprisedb, was created in the cluster by tpaexec. It also generated passwords for that user as part of the provisioning. To get the password, run:
-
-```shell
-tpaexec show-password democluster enterprisedb
-```
-
-This command returns a string that's the password for the enterprisedb user. If you want, you can use that string when prompted for a password.
-
-## Creating a .pgpass file
-
-You can avoid entering passwords for psql and other Postgres clients by creating [a `.pgpass` file](https://www.postgresql.org/docs/current/libpq-pgpass.html) in your home directory. It contains password details that applications can look up when connecting. After getting the password (see [Getting credentials](#getting-credentials)), you can open the `.pgpass` file using your preferred editor.
-
-In the file, enter:
-
-```plain
-*:*:bdrdb:enterprisedb:<your password>
-```
-
-Save the file and exit the editor. To secure the file, run the following command. This command gives read and write access only to you.
-
-```shell
-chmod 0600 ~/.pgpass
-```
-
-## Establishing the IP address
-
-### Docker
-
-Your Docker quick start cluster is by default accessible on the IP addresses 10.33.111.18 (kaboom), 10.33.111.19 (kaftan), 10.33.111.20 (kaolin), and 10.33.111.21 (kapok). TPA generates these addresses.
-
-### AWS
-
-You can refer to the IP addresses in `democluster/ssh_config`. Alternatively, run:
-
-```shell
-aws ec2 --region eu-west-1 describe-instances --query 'Reservations[*].Instances[*].{PublicIpAddress:PublicIpAddress,Name:Tags[?Key==`Name`]|[0].Value}'
-__OUTPUT__
-[
-    [
-        {
-            "PublicIpAddress": "54.217.130.13",
-            "Name": "kapok"
-        }
-    ],
-    [
-        {
-            "PublicIpAddress": "54.170.119.101",
-            "Name": "kaolin"
-        }
-    ],
-    [
-        {
-            "PublicIpAddress": "3.250.235.130",
-            "Name": "kaftan"
-        }
-    ],
-    [
-        {
-            "PublicIpAddress": "34.247.188.211",
-            "Name": "kaboom"
-        }
-    ]
-]
-
-```
-
-This command shows you EC2's list of public IP addresses for the cluster instances.
-
-
-### Linux hosts
-
-You set IP addresses for your Linux servers when you configured the cluster in the quick start. Use those addresses.
-
-## Ensure you can connect to your IP addresses
-
-### Linux hosts and Docker
-
-You don't need to perform any configuration to connect to these hosts.
-
-### AWS
-
-AWS is configured to allow outside access only to its SSH endpoints. To allow Postgres clients to connect from outside the AWS cloud, you need to enable the transit of traffic on port 6432.
-
-Get your own external IP address or the external IP address of the system that you want to connect to the cluster from. One way to do this is to run:
-
-```shell
-curl https://checkip.amazonaws.com
-__OUTPUT__
-89.97.100.108
-```
-
-You also need the security group ID for your cluster. Run:
-
-```shell
-aws ec2 --region eu-west-1 describe-security-groups --filter Name=group-name,Values="*democluster*" | grep GroupId
-__OUTPUT__
-            "GroupId": "sg-072f996360ba20d5c",
-```
-
-Enter the correct region for your cluster, which you set when you configured it.
-
-```
-aws ec2 authorize-security-group-ingress --group-id <your security group ID> --protocol tcp --port 6432 --cidr <your IP address>/32 --region eu-west-1
-```
-
-Again, make sure you put in the correct region for your cluster.
-
-You can read more about this command in [Add rules to your security group](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-sg.html#configuring-a-security-group) in the AWS CLI guide.
-
-
-## Getting an appropriate Postgres client
-
-Unless you installed Postgres on your local system, you probably need to install a client application, such as psql, to connect to your database cluster.
-
-On Ubuntu, for example, you can run:
-
-```shell
-sudo apt install postgresql-client
-```
-
-This command installs psql, along with some other tools but without installing the Postgres database locally.
-
-## Connecting the client to the cluster
-
-After you install psql or a similar client, you can connect to the cluster. Run:
-
-```shell
-psql -h <IP address of a proxy host> -p 6432 -U enterprisedb bdrdb
-__OUTPUT__
-psql (16.2, server 16.2.0)
-SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
-Type "help" for help.

-bdrdb=#
-```
-
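-Once you're connected, you can optionally confirm which data node the proxy routed you to by querying the local node summary:
-
-```sql
--- Show the name of the node this session is connected to.
-select node_name from bdr.local_node_summary;
-```
-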
- -## Using proxy failover to connect the client to the cluster - -By listing all the addresses of proxies as the host, you can ensure that the client will always failover and connect to the first available proxy in the event of a proxy failing. - - -```shell -psql -h ,, -U enterprisedb -p 6432 bdrdb -__OUTPUT__ -psql (16.2, server 16.2.0) -SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) -Type "help" for help. - -bdrdb=# -``` - -## Creating a connection URL - -Many applications use a [connection URL](https://www.postgresql.org/docs/current/libpq-connect.html#id-1.7.3.8.3.6) to connect to the database. To create a connection URL, you need to assemble a string in the format: - -``` -postgresql://@:6432,:6432,:6432/bdrdb -``` - -This format of the string can be used with the `psql` command, so if your database nodes are on IP addresses 192.168.9.10, 192.168.10.10, and 192.168.10.11, you can use: - -``` -psql postgresql://enterprisedb@192.168.9.10:6432,192.168.10.10:6432,192.168.11.10:6432/bdrdb -``` - -You can also embed the password in the created URL. If you're using the enterprisedb user, and the password for the enterprisedb user is `notasecret`, then the URL -looks like: - -``` -psql postgresql://enterprisedb:notasecret@192.168.9.10:6432,192.168.10.10:6432,192.168.11.10:6432/bdrdb -``` - -Actual passwords are more complex than that and can contain special characters. You need to urlencode the password to ensure that it doesn't trip up the shell, the command, or the driver you're using. - -!!! Warning Passwords should not be embedded - While we have shown you how to embed a password, we recommend that you do not do this. The password is easily extracted from the URL and can easily be saved in insecure locations. Consider other ways of passing the password. - -### Making a Java connection URL - -Finally, the URL you created is fine for many Postgres applications and clients, but those based on Java require one change to allow Java to invoke the appropriate driver. Precede the URL with `jdbc:` to make a Java compatible URL: - -``` -jdbc:postgresql://enterprisedb@192.168.9.10:6432,192.168.10.10:6432,192.168.11.10:6432/bdrdb -``` - -## Moving on - -You're now equipped to connect your applications to your cluster, with all the connection credentials and URLs you need. diff --git a/product_docs/docs/pgd/5.8/quickstart/further_explore_conflicts.mdx b/product_docs/docs/pgd/5.8/quickstart/further_explore_conflicts.mdx deleted file mode 100644 index a144c4b6942..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/further_explore_conflicts.mdx +++ /dev/null @@ -1,172 +0,0 @@ ---- -title: "Exploring conflict handling with PGD" -navTitle: "Exploring conflicts" -description: > - An exploration of how PGD handles conflicts between data nodes ---- - -In a multi-master architecture like PGD, conflicts happen. PGD is built to handle them. - -A conflict can occur when one database node has an update from an application to a row and another node has a different update to the same row. This type of conflict is called a *row-level conflict*. Conflicts aren't errors. Resolving them effectively is core to how Postgres Distributed maintains consistency. - -The best way to handle conflicts is to prevent them! Use PGD's Always-on architecture with proxies to ensure that your applications write to the same server in the cluster. 
-
-When conflicts occur, though, it's useful to know how PGD resolves them, how you can control that resolution, and how you can find out that they're happening. Row insertion and row updates are two actions that can cause conflicts.
-
-To see how it works, you need to open a command line view of all the servers.
-
-## Your quick start configuration
-
-This exploration assumes that you created your PGD cluster using the [quick start for Docker](quick_start_docker), the [quick start for AWS](quick_start_aws), or the [quick start for Linux hosts](quick_start_linux).
-
-At the end of each quick start, you'll have a cluster with four nodes and these roles:
-
-| Host name | Host role                           |
-| --------- | ----------------------------------- |
-| kaboom    | PGD data node and pgd-proxy co-host |
-| kaftan    | PGD data node and pgd-proxy co-host |
-| kaolin    | PGD data node and pgd-proxy co-host |
-| kapok     | Barman backup node                  |
-
-You'll use these hostnames throughout this exercise.
-
-## Installing xpanes
-
-!!! Note Xpanes optional
-We recommend the `xpanes` utility for this exercise. It allows you to easily switch between multiple terminal sessions. If you prefer to use multiple terminals, tmux, or another terminal multiplexer, you can do so. Just make sure you can easily switch between multiple terminal sessions.
-!!!
-
-You'll use `xpanes`, a utility that allows you to quickly create multiple terminal sessions that you can easily switch between. It isn't installed by default, so you'll have to install it. Start by connecting to the kaboom node with ssh:
-
-```shell
-cd democluster && ssh -F ssh_config kaboom
-```
-
-If you're running the quick start on Docker, you'll be using Rocky Linux, a Red Hat derivative. To perform the xpanes install, run:
-
-```shell
-dnf -y install xpanes
-```
-
-If you're running the quick start on AWS, you'll be using Debian Linux. To perform the xpanes install, run:
-
-```shell
-wget https://github.com/greymd/tmux-xpanes/releases/download/v4.1.4/tmux-xpanes_v4.1.4.deb
-sudo apt -y install ./tmux-xpanes*.deb
-rm tmux-xpanes*.deb
-```
-
-## Connecting to four servers
-
-You need to be logged in as the enterprisedb user to allow authentication to work:
-
-```shell
-sudo -iu enterprisedb
-```
-
-Then, run the following command to connect to three database servers and a proxy server:
-
-```shell
-xpanes -d -c "psql postgresql://enterprisedb@{}/bdrdb?sslmode=require" "kaboom:5444" "kaftan:5444" "kaolin:5444" "kaboom:6432"
-```
-
-xpanes takes the command after `-c` and uses the values in the arguments that follow to create a command to run. That means that, after you run it, there will be four panes. Three panes will be connected to the database nodes kaboom, kaftan, and kaolin on port 5444. One will be connected to the pgd-proxy running on kaboom on port 6432. Each one will be logged into the database as enterprisedb.
-
-Press **Control-b** followed by **q** to briefly display the numeric values for each pane.
-
-![4 Sessions showing numbers](images/4sessions.png)
-
-To switch the focus between the panes, you can use **Control-b** and the cursor keys to navigate between them.
-Or you can use **Control-b** followed by **q** and the number of the pane you want to focus on. We'll show both ways.
-
-Move to the bottom-left pane using **Control-b ↓ Control-b →** or **Control-b q 3**.
-
-## Preparing for conflicts
-
-To make a conflict, you first need a simple table.
In the pane that currently has focus, enter:
-
-```
-drop table if exists test_conflict;
-create table test_conflict(
-    id integer primary key,
-    value_1 text);
-```
-
-## Monitoring conflicts
-
-In the pane that currently has focus, enter:
-
-```sql
-select * from bdr.conflict_history_summary
-\watch 1
-```
-
-The `select` command displays the conflict history for the cluster. The `\watch 1` command is a psql command that reruns the preceding command every second.
-
-You are now ready to generate a conflict.
-
-## Creating a conflict
-
-The most basic form of conflict is when an insert happens to a table on two different nodes and both have the same primary key. You can now create that scenario and observe it.
-
-Move to the top-left pane using **Control-b ↑ Control-b ←** or **Control-b q 0**. This pane is the kaboom node. Start a transaction there, and insert a row:
-
-```
-start transaction;
-insert into test_conflict values (1, 'from kaboom');
-```
-
-Next, move to the top-right pane using **Control-b →** or **Control-b q 1**. This pane is the kaftan node. Here, you'll also start a transaction and insert into the same row with different data:
-
-```
-start transaction;
-insert into test_conflict values (1, 'from kaftan');
-```
-
-You now have two transactions open on different servers, with an insert operation already performed successfully. You need to commit both transactions at this point:
-
-* Use **Control-b ←** or **Control-b q 0**, and then enter `commit;`.
-* Use **Control-b →** or **Control-b q 1**, and then enter `commit;`.
-
-You'll see that both commits work. However, in the bottom-right pane, you can see the conflict being detected.
-
-![4 Sessions showing conflict detected](images/4sessionsinsertconflict.png)
-
-A row in the conflict history now notes an `insert_exists` conflict in the table. It also notes that the resolution for this conflict is that the newer record, based on the timing of the commit, is retained. This conflict is called an INSERT/INSERT conflict. You can read more about this type of conflict in [INSERT/INSERT conflicts](../conflict-management/conflicts/02_types_of_conflict/#insertinsert-conflicts).
-
-## Creating an update conflict
-
-When different updates to the same records take place on different nodes, a conflict occurs. You can create that scenario with the current configuration, too. Leave `\watch 1` running in the bottom-right pane.
-
-Move to the top-left pane using **Control-b ←** or **Control-b q 0**. This pane is the kaboom node. Here, start a transaction and update a row:
-
-```
-start transaction;
-update test_conflict set value_1 = 'update from kaboom' where id = 1;
-```
-
-Next, move to the top-right pane using **Control-b →** or **Control-b q 1**. This pane is the kaftan node. Here, also start a transaction, and update the same row with different data:
-
-```
-start transaction;
-update test_conflict set value_1 = 'update from kaftan' where id = 1;
-```
-
-You now have two transactions open on different servers, with an update operation already performed successfully. You need to commit both transactions at this point:
-
-* Use **Control-b ←** or **Control-b q 0**, and then enter `commit;`.
-* Use **Control-b →** or **Control-b q 1**, and then enter `commit;`.
-
-Again, you'll see both commits work. And, again, in the bottom-right pane, you can see the update conflict being detected.
-
-![4 Sessions showing update conflict detected](images/4sessionsupdateconflict.png)
-
-An additional row in the conflict history shows that an `update_origin_change` conflict occurred and that the resolution was `apply_remote`. This resolution means that the remote change was applied, updating the record. This conflict is called an UPDATE/UPDATE conflict and is explained in more detail in [UPDATE/UPDATE conflicts](../conflict-management/conflicts/02_types_of_conflict/#updateupdate-conflicts).
-
-!!!Tip Exiting tmux
-You can quickly exit tmux and all the associated sessions. First terminate any running processes, as they will otherwise continue running after the session is killed. Press **Control-b** and then enter `:kill-session`. This approach is simpler than quitting each pane's session one at a time using **Control-D** or `exit`.
-!!!
-
-## Other conflicts
-
-You're now equipped to explore all the possible conflict scenarios and resolutions that can occur. For full details of how conflicts are managed, see [Conflict management](../conflict-management/). While ideally you should avoid conflicts, it's important to know that, when they do happen, they're recorded and managed by Postgres Distributed's integrated and configurable conflict resolver.
diff --git a/product_docs/docs/pgd/5.8/quickstart/further_explore_failover.mdx b/product_docs/docs/pgd/5.8/quickstart/further_explore_failover.mdx
deleted file mode 100644
index fe76e153c70..00000000000
--- a/product_docs/docs/pgd/5.8/quickstart/further_explore_failover.mdx
+++ /dev/null
@@ -1,430 +0,0 @@
----
-title: "Exploring failover handling with PGD"
-navTitle: "Exploring failover"
-description: >
-  An exploration of how PGD handles failover between data nodes
----
-
-With a high-availability cluster, the ability to fail over is crucial to the overall resilience of the cluster. When the lead data node stops working for whatever reason, applications need to be able to continue working with the database with little or no interruption. For PGD, that means directing applications to the new lead data node, which takes over automatically. This is where PGD Proxy is useful. It works with the cluster and directs traffic to the lead data node automatically.
-
-In this exercise, you'll create an application that sends data to the database regularly. Then you'll softly switch the lead data node by requesting a change through the PGD CLI. Finally, you'll forcibly shut down a database instance and see how PGD handles that.
-
-## Your quick start configuration
-
-This exploration assumes that you created your PGD cluster using the [quick start for Docker](quick_start_docker), the [quick start for AWS](quick_start_aws), or the [quick start for Linux hosts](quick_start_linux).
-
-At the end of each quick start, you'll have a cluster with four nodes and these roles:
-
-| Host name | Host role                           |
-| --------- | ----------------------------------- |
-| kaboom    | PGD data node and pgd-proxy co-host |
-| kaftan    | PGD data node and pgd-proxy co-host |
-| kaolin    | PGD data node and pgd-proxy co-host |
-| kapok     | Barman backup node                  |
-
-You'll use these hostnames throughout this exercise.
-
-!!! Note A best practice recommendation
-This example is based on the quick start configuration. For speed
-and simplicity, it uses the Barman backup server in place of creating a bastion
-server. It also uses the Barman login to the Postgres cluster.
-
-In a production environment, we recommend that you create a separate
-bastion server to run the failover experiment from and that you create an appropriate
-Postgres user to log in to the cluster.
-!!!
-
-## Installing xpanes
-
-!!! Note Xpanes optional
-We recommend the xpanes utility for this exercise. It allows you to easily switch between multiple terminal sessions. If you prefer to use multiple terminals, tmux, or another terminal multiplexer, you can do so. Just make sure you can easily switch between multiple terminal sessions.
-!!!
-
-You'll use xpanes, a utility that allows you to quickly create multiple terminal sessions that you can easily switch between. It isn't installed by default, so you have to install it. For this exercise, you launch xpanes from the system where you ran tpaexec to configure your quick-start cluster.
-
-If the system is running Ubuntu, run:
-
-```
-sudo apt install software-properties-common
-sudo add-apt-repository ppa:greymd/tmux-xpanes
-sudo apt update
-sudo apt install tmux-xpanes
-```
-
-These are the installation instructions from [the xpanes repository](https://github.com/greymd/tmux-xpanes). If you aren't on Ubuntu, the repository also contains installation instructions for other systems.
-
-## Connecting to the four servers
-
-With xpanes installed, you can create an SSH session with all four servers by running:
-
-```
-cd democluster
-xpanes -d -c "ssh -F ssh_config {}" "kaboom" "kaolin" "kaftan" "kapok"
-```
-
-After running these commands, there are four panes. The four panes are connected to kaboom, kaolin, kaftan, and kapok, and you're logged in as the root user on each. You need this privilege so you can easily stop and start services later in the exercise.
-
-Press **Control-b** followed by **q** to briefly display the numeric values for each pane.
-
-![4 SSH Sessions showing numbers](images/4sshsessions.png)
-
-To switch the focus between the panes, you can use **Control-b** and the cursor keys to navigate between them.
-Or you can use **Control-b** followed by **q** and the number of the pane you want to focus on. We'll show both ways.
-
-Use **Control-b ↓ Control-b →** or **Control-b q 3** to move the focus to the bottom-right pane, which is the kapok host. This server is responsible for performing backups. You'll use this as the base of operations for your demo application. You can use Barman credentials to connect to the database servers and proxies:
-
-```
-sudo -iu barman
-psql -h kaboom -p 6432 bdrdb
-```
-
-This code connects to the proxy on the kaboom host, which also runs a Postgres instance as part of the cluster.
-
-The next step is to create the table for your application to write to:
-
-```
-drop table if exists ping cascade;
-CREATE TABLE ping (id SERIAL PRIMARY KEY, node TEXT, timestamp TEXT);
-```
-
-This code first drops the `ping` table. Then it re-creates the `ping` table with an id primary key and two text fields for a node and timestamp. The table should now be ready. To verify that it is, use **Control-b ← Control-b ↑** or **Control-b q 0** to move to the top-left pane, which puts you on the kaboom server. In this pane, become the enterprisedb user so you can easily connect to the database:
-
-```shell
-sudo -iu enterprisedb
-```
-
-You can now connect to the local database by running:
-
-```shell
-psql bdrdb
-```
-
-This command connects you directly to the local database instance on kaboom.
Use `\dt` to view the available tables:
-
-```console
-bdrdb=# \dt
-        List of relations
- Schema | Name | Type  | Owner
---------+------+-------+--------
- public | ping | table | barman
-(1 row)
-```
-
-
-Running `\d ping` shows that the DDL to create ping is on the kaboom server:
-
-```console
-bdrdb=# \d ping
-                                Table "public.ping"
-  Column   |  Type   | Collation | Nullable |             Default
------------+---------+-----------+----------+----------------------------------
- id        | integer |           | not null | nextval('ping_id_seq'::regclass)
- node      | text    |           |          |
- timestamp | text    |           |          |
-Indexes:
-    "ping_pkey" PRIMARY KEY, btree (id)
-```
-
-If you want to be sure that this table is replicated, you can connect to another node in the cluster and look. The `\c` command in psql lets you connect to another server. To connect to the kaftan node, run:
-
-```shell
-\c - - kaftan
-```
-
-You'll see a login message similar to this:
-
-```console
-psql.bin (16.2.0, server 16.2.0)
-SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
-You are now connected to database "bdrdb" as user "enterprisedb" on host "kaftan" (address "10.33.25.233") at port "5444".
-bdrdb=#
-```
-
-Run `\dt` and `\d ping`, and you'll see the same results on the kaftan node.
-
-To reconnect to the kaboom node, run:
-
-```shell
-\c - - kaboom
-```
-
-## Setting up a monitor
-
-Next, you want to monitor the activity of the ping table. Enter this SQL to display the 10 most recent entries:
-
-```sql
-select * from ping order by timestamp desc limit 10;
-```
-
-To run this command more than once, use the `\watch` command in the shell, which executes the last query at regular intervals. To update every second, enter:
-
-```shell
-\watch 1
-```
-
-So far, there's nothing to see. You'll add activity soon.
-
-## Creating pings
-
-Return to the Barman host kapok by using **Control-b ↓ Control-b →** or **Control-b q 3**.
-
-This pane is still logged in to the psql session. Since you next want to run a shell script, you need to exit psql. Press **Control-d**.
-
-The shell prompt now reads:
-
-`barman@kapok:~$`
-
-If it says `admin@kapok` or `root@kapok`, run `sudo -iu barman` to become the Barman user again.
-
-The application you'll create is simple. It gets the node to write to and a timestamp for the ping. Then, as quickly as it can, it writes a new ping to the ping table.
-
-In the shell, enter:
-
-```shell
-while true; do psql -h kaftan,kaolin,kaboom -p 6432 bdrdb -c "INSERT INTO ping(node, timestamp) select node_name, current_timestamp from bdr.local_node_summary;"; done
-```
-
-In a more readable form, that is:
-
-```shell
-while true;
-  do psql -h kaftan,kaolin,kaboom -p 6432 bdrdb -c \
-  "INSERT INTO ping(node, timestamp) select node_name, current_timestamp from bdr.local_node_summary;"
-  done
-```
-
-In a constant loop, you call the `psql` command, telling it to connect to any of the three proxies as hosts, giving the proxy port and selecting the bdrdb database. You also pass a command that inserts two values into the ping table. One of the values comes from `bdr.local_node_summary`, which contains the name of the node you're actually connected to. The other value is the current time.
-
-Once the loop is running, new entries appear in the table. You'll see them in the top-left pane where you set up the monitor.
-
-You can now start testing failover.
-
-## Changing the write leader
-
-For this part of the process, switch to the host kaftan, which is in the lower-left corner.
Use **Control-b ←** or **Control-b q 2** to switch focus to it.
-
-To gain the privileges needed to run pgd, the PGD command-line interface, run:
-
-```shell
-sudo -iu enterprisedb
-```
-
-To see the state of the cluster, run:
-
-```shell
-pgd groups list -v
-```
-
-You'll see output like this:
-
-```console
-Group Name   Parent Group Name Group Type Nodes
------------- ----------------- ---------- -----
-democluster                    global     0
-dc1_subgroup democluster       data       3
-```
-
-The global group `democluster` includes all the subgroups. The `dc1_subgroup` is the data cluster you're working with. That group name value is derived from the location given in the quick start when you configured this cluster. Each location gets its own subgroup so you can manage it independently of other locations, or clusters.
-
-Send a `group set-leader` command to the cluster group to change leader. Run this command:
-
-```shell
-pgd group dc1_subgroup set-leader kaolin
-```
-The node name is the host name for another data node in the dc1_subgroup group.
-
-You'll see one of two responses. If kaolin was already the write leader, you'll see:
-
-```console
-Status Message
------- ---------------------------------------
-OK     Node kaolin is already the write leader
-```
-
-This means that kaolin was already elected write leader, so switching has no effect. For this exercise, retry the switchover to another host, substituting kaboom or kaftan as the node name.
-
-When you select a host that wasn't the current write leader, you'll see the other response:
-
-```console
-Status Message
------- -----------------------------
-OK     Command executed successfully
-```
-
-If you look in the top-left pane, you'll see the inserts from the script switching and being written to the node you just switched to.
-
-!!!Info Observe the id number
-Notice that the id number being generated is from a completely different range of values, too. That's because the system transparently made the sequence generating the ID a global sequence. For more about global sequences and how they work, see [Sequences](../sequences/).
-!!!
-
-You might also notice an error in the lower-right pane, as an in-flight update is canceled by the switch in write leader. The script then continues writing.
-
-## Losing a node
-
-Being able to switch leader is useful for planned maintenance; you tell the cluster to change configuration. What if unexpected changes happen? You'll create that scenario now.
-
-In the lower-left pane, set the leader to kaolin:
-
-```shell
-pgd group dc1_subgroup set-leader kaolin
-```
-
-Then change focus to the top-right pane using **Control-b ↑ Control-b →** or **Control-b q 1**, which is the session on the kaolin host.
-
-Turn off the Postgres server by running:
-
-```shell
-sudo systemctl stop postgres.service
-```
-
-In the top-left pane, you'll see the monitored table switch from kaolin to another node as the cluster subgroup picks a new leader. The script in the lower-right pane might show some errors as updates are canceled. However, as soon as a new leader is elected, it starts routing traffic to that leader.
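-
-If you want to confirm which node was elected, you can also ask the routing layer directly. As a sketch, assuming you're still in a psql session on one of the surviving nodes, query the routing summary view:
-
-```sql
--- From any running node, show the current and previous write leader.
-select node_group_name, write_lead, previous_write_lead
-from bdr.node_group_routing_summary;
-```
-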
- -## Showing node states - -Switch to the lower-left pane using **Control-b ↓ Control-b ←** or **Control-b q 2**, and run: - -```shell -pgd nodes list -``` - -You'll see something like: - -```console -Node Name Group Name Node Kind Join State Node Status ---------- ------------ --------- ---------- ----------- -kaftan dc1_subgroup data ACTIVE Up -kaolin dc1_subgroup data ACTIVE Unreachable -kaboom dc1_subgroup data ACTIVE Up -``` - -The kaolin node is down, and updates are going to a different write leader. - -## Monitoring lag - -While kaolin is down, the logical replication at the heart of PGD is tracking how far out of sync kaolin is with the cluster. To see the details, run: - -```shell -psql bdrdb -c "select * from bdr.node_replication_rates;" -``` - -This command displays the current replication rates between servers: - -```console - peer_node_id | target_name | sent_lsn | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interval ---------------+-------------+-----------+------------+-----------------+------------------+-----------------+------------+------------------ - 2710197610 | kaboom | 0/769F650 | 0/769F650 | 00:00:00 | 0 | 0 bytes | 1861 | 00:00:00 - | kaolin | 0/7656648 | 0/7656648 | 00:03:07.252266 | 299016 | 292 kB | | -(2 rows) - ``` - -Looking at this output, you can see kaolin has a three-minute replay lag and around 292KB of data to catch up on if it came back now. The longer kaolin is down, the larger the replay lag gets. If you rerun the monitoring command, you'll see the numbers went up: - -```console - peer_node_id | target_name | sent_lsn | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interval ---------------+-------------+-----------+------------+-----------------+------------------+-----------------+------------+------------------ - 2710197610 | kaboom | 0/76B1D28 | 0/76B1D28 | 00:00:00 | 0 | 0 bytes | 1743 | 00:00:00 - | kaolin | 0/7656648 | 0/7656648 | 00:03:53.045704 | 374496 | 366 kB | | -(2 rows) -``` - -Another 46 seconds have passed, and the lag has grown by 74KB. Next, bring back the node, and see how the system recovers. - -## Restarting a node - -You can bring back the Postgres service on kaolin. Switch back to the top-right pane using **Control-b ↑ Control-b →** or **Control-b q 1**, and run: - -```shell -sudo systemctl start postgres.service -``` - -You won't see any change. Although the database service is back up and running, the cluster isn't holding an election, and so the leader remains in place. Switch to the lower-left pane using **Control-b ↓ Control-b ←** or **Control-b q 2**, and run: - -```shell -pgd nodes list -``` - -Now you'll see: - -```console -Node Name Group Name Node Kind Join State Node Status ---------- ------------ --------- ---------- ----------- -kaftan dc1_subgroup data ACTIVE Up -kaolin dc1_subgroup data ACTIVE Up -kaboom dc1_subgroup data ACTIVE Up -``` - -As soon as kaolin is back in the cluster, it begins synchronizing with the cluster. It does that by catching up on that replay data. 
Run:
-
-```shell
-psql bdrdb -c "select * from bdr.node_replication_rates;"
-```
-
-The output looks like this:
-
-```plain
- peer_node_id | target_name | sent_lsn  | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interval
---------------+-------------+-----------+------------+------------+------------------+-----------------+------------+------------------
-   2710197610 | kaboom      | 0/8092938 | 0/8092938  | 00:00:00   |                0 | 0 bytes         |       2321 | 00:00:00
-   2111777360 | kaolin      | 0/8092938 | 0/8092938  | 00:00:00   |                0 | 0 bytes         |     337426 | 00:00:00
-(2 rows)
-```
-
-As you can see, there's no replay lag now, as kaolin has completely caught up.
-
-With kaolin fully back in service, you can leave everything as it is. There's no need to change the server that's write leader. The failover mechanism is always ready to bring another server up to write leader when needed.
-
-If you want, you can make kaolin the leader again by running:
-
-```shell
-pgd group dc1_subgroup set-leader kaolin
-```
-
-This command returns kaolin to write leader. The application's updates will follow, as the proxies track the write leader.
-
-## Proxy failover
-
-!!!Note
-The command `pgd show-proxies` is deprecated as of PGD v5.7.0. It's still accessible as a legacy command.
-!!!
-
-Proxies can also fail over. To experience this, make sure your focus is still on the lower-left pane, and run:
-
-```shell
-pgd show-proxies
-```
-
-You'll see:
-
-```console
-Proxy  Group        Listen Addrs Listen Port Read Listen Addrs Read Listen Port
------- ------------ ------------ ----------- ----------------- ----------------
-kaboom dc1_subgroup [0.0.0.0]    6432        [0.0.0.0]         6433
-kaftan dc1_subgroup [0.0.0.0]    6432        [0.0.0.0]         6433
-kaolin dc1_subgroup [0.0.0.0]    6432        [0.0.0.0]         6433
-```
-
-Enter `exit` to exit the enterprisedb user and return to the admin/root shell. You can now stop the proxy service on this node by running:
-
-```shell
-systemctl stop pgd-proxy.service
-```
-
-A brief error appears in the lower-right window as the script switches to another proxy. The write leader doesn't change, though, so the switch of proxy doesn't show in the top-left pane where the monitor query is running.
-
-Bring the proxy service on kaftan back by running:
-
-```shell
-systemctl start pgd-proxy.service
-```
-
-!!!Tip Exiting tmux
-You can quickly exit tmux and all the associated sessions. First terminate any running processes, as they otherwise continue running after the session is killed. Press **Control-b** and then enter `:kill-session`. This approach is simpler than quitting each pane's session one at a time using **Control-D** or `exit`.
-!!!
-
-## Other scenarios
-
-This example uses the quick start configuration of three data nodes and one backup node. You can configure a cluster to have two data nodes and a witness node, which is less resilient to a node failing. Or you can configure five data nodes, which is much more resilient to a node failing. With this configuration, you can explore how failover works for your applications. For clusters with multiple locations, the same basic rules apply: taking a server down elects a new write leader that proxies now point to.
-
-## Further reading
-
-* Read more about the management capabilities of the [PGD CLI](../cli/).
-* Learn more about [monitoring replication using SQL](../monitoring/sql/#monitoring-replication-peers).
diff --git a/product_docs/docs/pgd/5.8/quickstart/further_explore_replication.mdx b/product_docs/docs/pgd/5.8/quickstart/further_explore_replication.mdx
deleted file mode 100644
index 17d628798f1..00000000000
--- a/product_docs/docs/pgd/5.8/quickstart/further_explore_replication.mdx
+++ /dev/null
@@ -1,209 +0,0 @@
----
-title: Exploring replication with PGD
-navTitle: Exploring replication
-deepToC: true
----
-
-## Explore replication with PGD
-
-With the cluster up and running, it's useful to run some basic checks to see how effectively it's replicating.
-
-The following example shows one quick way to do this, but you must ensure that any testing you perform is appropriate for your use case.
-
-### Preparation
-
-Ensure your cluster is ready to perform replication. If you haven't installed a cluster yet, use one of the [quick start](.) guides to get going.
-1. Log in to the database on the first host.
-1. Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`.
-
-When the query returns, the cluster is ready.
-
-### Create data
-The simplest way to test that the cluster is replicating is to log in to a node, create a table, populate it, and see the data you populated appear on a second node. On the first node:
-1. Create a table:
-    ```sql
-    CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
-    ```
-1. Populate the table:
-    ```sql
-    INSERT INTO quicktest (value) SELECT random()*10000
-    FROM generate_series(1,100000);
-    ```
-1. Monitor replication performance:
-    ```sql
-    select * from bdr.node_replication_rates;
-    ```
-1. Get a sum of the value column (for checking):
-    ```sql
-    select COUNT(*),SUM(value) from quicktest;
-    ```
-
-### Check data
-
-1. To confirm the data was successfully replicated, log in to a second node.
-    1. Get a sum of the value column (for checking):
-        ```sql
-        select COUNT(*),SUM(value) from quicktest;
-        ```
-    1. Compare with the result from the first node.
-1. Log in to a third node.
-    1. Get a sum of the value column (for checking):
-        ```sql
-        select COUNT(*),SUM(value) from quicktest;
-        ```
-    1. Compare with the result from the first and second nodes.
-
-## Worked example
-
-### Preparation
-
-The cluster in this example has three data nodes: kaboom, kaftan, and kaolin. The example uses kaboom as the first node. Log in to kaboom and then into kaboom's Postgres server:
-
-```
-cd democluster
-ssh -F ssh_config kaboom
-sudo -iu enterprisedb psql bdrdb
-```
-
-#### Ensure the cluster is ready
-
-To ensure that the cluster is ready to go, run:
-
-```
-select bdr.wait_slot_confirm_lsn(NULL, NULL);
-__OUTPUT__
- wait_slot_confirm_lsn
------------------------
-
-(1 row)
-```
-
-If the cluster is busy initializing, this query waits and returns when the cluster is ready.
-
-### Create data
-
-#### On the first node (kaboom), create a table
-
-Run:
-
-```sql
-CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
-__OUTPUT__
-CREATE TABLE
-```
-
-#### On kaboom, populate the table
-
-This command generates a table of 100000 rows of random values:
-
-```
-INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,100000);
-__OUTPUT__
-INSERT 0 100000
-```
-
-#### On kaboom, monitor performance
-
-As soon as possible, run the following command. It shows statistics about how quickly that data was replicated to the other two nodes.
-
-```sql
-select * from bdr.node_replication_rates;
-__OUTPUT__
- peer_node_id | target_name | sent_lsn  | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interval
---------------+-------------+-----------+------------+------------+------------------+-----------------+------------+------------------
-   3490219809 | kaftan      | 0/F57D120 | 0/F57D120  | 00:00:00   |                0 | 0 bytes         |       9158 | 00:00:00
-   2111777360 | kaolin      | 0/F57D120 | 0/F57D120  | 00:00:00   |                0 | 0 bytes         |       9293 | 00:00:00
-(2 rows)
-```
-
-The `replay_lag` values are 0, showing no lag. The LSN values are in sync, meaning the data is already replicated.
-
-#### On kaboom, get a checksum
-
-Run:
-
-```sql
-select COUNT(*),SUM(value) from quicktest;
-```
-
-This command calculates a sum of the values from the generated data:
-
-```sql
-bdrdb=# select COUNT(*),SUM(value) from quicktest;
-__OUTPUT__
- count |   sum
--------+----------
- 10000 | 49884606
-(1 row)
-```
-
-Your sum will be different because the values in the table are random numbers, but the count will be 10000.
-
-### Check data
-
-The second host is kaftan. In another window or session, log in to kaftan's Postgres server:
-
-```
-cd democluster
-ssh -F ssh_config kaftan
-sudo -iu enterprisedb psql bdrdb
-```
-
-#### On the second node (kaftan), get a checksum
-
-Run:
-
-```sql
-select COUNT(*),SUM(value) from quicktest;
-```
-
-This command gets the second node's values for the generated data:
-
-```sql
-bdrdb=# select COUNT(*),SUM(value) from quicktest;
-__OUTPUT__
- count |   sum
--------+----------
- 10000 | 49884606
-(1 row)
-```
-
-#### Compare with the result from the first node (kaboom)
-
-The values are identical.
-
-You can repeat the process with the third node (kaolin), or generate new data on any node and see it replicate to the other nodes.
-
-#### Log in to the third node (kaolin)
-
-The third and last node is kaolin. In another window or session, log in to kaolin and then to kaolin's Postgres server:
-
-```
-cd democluster
-ssh -F ssh_config kaolin
-sudo -iu enterprisedb psql bdrdb
-```
-
-#### On kaolin, get a checksum
-
-Run:
-
-```sql
-select COUNT(*),SUM(value) from quicktest;
-```
-
-This command gets kaolin's values for the generated data:
-
-```sql
-bdrdb=# select COUNT(*),SUM(value) from quicktest;
-__OUTPUT__
- count |   sum
--------+----------
- 10000 | 49884606
-(1 row)
-```
-
-#### Compare the results
-
-Compare the result from the first and second nodes (kaboom and kaftan) with the result from kaolin. The values are identical on all three nodes.
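-
-#### Optional: Compare all three nodes in one step
-
-If you want to repeat the comparison without opening three sessions, the following is a minimal sketch, not part of the worked example: run it from the machine where you created `democluster`, and it uses the quick start's `ssh_config` and the enterprisedb user to run the same checksum query on each node in turn. If replication is working, all three lines report the same count and sum.
-
-```shell
-cd democluster
-for node in kaboom kaftan kaolin; do
-  # -tA prints tuples only, unaligned, so each node yields one line.
-  printf '%s: ' "$node"
-  ssh -F ssh_config "$node" \
-    "sudo -iu enterprisedb psql -tA bdrdb -c 'select count(*), sum(value) from quicktest;'"
-done
-```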
- diff --git a/product_docs/docs/pgd/5.8/quickstart/images/4sessions.png b/product_docs/docs/pgd/5.8/quickstart/images/4sessions.png deleted file mode 100644 index 00c0f63515a..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/images/4sessions.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4eeb9c661e2e722489a63b1323c6b7920806ea71d3f2af827c67f88eeed81ec3 -size 617093 diff --git a/product_docs/docs/pgd/5.8/quickstart/images/4sessionsinsertconflict.png b/product_docs/docs/pgd/5.8/quickstart/images/4sessionsinsertconflict.png deleted file mode 100644 index 0f61ee77203..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/images/4sessionsinsertconflict.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:1f51ead5d78b4ac0cb0e1ed98929962c65f7c93150205d98af0b0afe489080b7 -size 708541 diff --git a/product_docs/docs/pgd/5.8/quickstart/images/4sessionsupdateconflict.png b/product_docs/docs/pgd/5.8/quickstart/images/4sessionsupdateconflict.png deleted file mode 100644 index eede0d289ec..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/images/4sessionsupdateconflict.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fbc5e3a8e525002694484ed2260752a20f89c217d6320238bddde4d83d758510 -size 493455 diff --git a/product_docs/docs/pgd/5.8/quickstart/images/4sshsessions.png b/product_docs/docs/pgd/5.8/quickstart/images/4sshsessions.png deleted file mode 100644 index f9909632e1b..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/images/4sshsessions.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a747b67161661d9157132809041ef428fb51bb09087f11eadbcad366ae5ac72e -size 779835 diff --git a/product_docs/docs/pgd/5.8/quickstart/index.mdx b/product_docs/docs/pgd/5.8/quickstart/index.mdx deleted file mode 100644 index 8ea592cd975..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/index.mdx +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: "Introducing PGD quick starts" -navTitle: "Quick start" -description: > - How to select your PGD quick start deployment and what to expect from the experience. -indexCards: none -navigation: -- quick_start_docker -- quick_start_linux -- quick_start_aws -- quick_start_cloud -- connecting_applications -- further_explore_replication -- further_explore_failover -- further_explore_conflicts -- next_steps ---- - -## Quick start - -EDB Postgres Distributed (PGD) is a multi-master replicating implementation of Postgres designed for high performance and availability. You can create database clusters made up of many bidirectionally synchronizing database nodes. The clusters can have a number of proxy servers that direct your query traffic to the most available nodes, adding further resilience to your cluster configuration. - -### Other deployment options - -- If you prefer to have a fully managed EDB Postgres Distributed experience, PGD is now available as an option on EDB Cloud Service, EDB's cloud platform for Postgres. See [EDB Cloud Service distributed high-availability clusters](/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability/). - -- If you prefer to deploy PGD on Kubernetes, you can use the EDB PGD Operator for Kubernetes. See [EDB PGD Operator for Kubernetes](/postgres_distributed_for_kubernetes/latest/quickstart). - -### What's in this quick start - -PGD is very configurable. To quickly evaluate and deploy PGD, use this quick start. 
It'll get you up and running with a fully configured PGD cluster using the same tools that you'll use to deploy to production. This quick start includes: - -- A short introduction to Trusted Postgres Architect (TPA) and how it helps you configure, deploy, and manage EDB Postgres Distributed -- A guide to selecting Docker, Linux hosts, or AWS quick starts - - The Docker quick start - - The Linux host quick start - - The AWS quick start -- Connecting applications to your cluster -- Further explorations with your cluster including - - Replication - - Conflicts - - Failover - -## Introducing PGD and TPA - -PGD is a multi-master replicating implementation of Postgres designed for high performance and availability. The installation of PGD is orchestrated by TPA. - -We created TPA to make installing and managing various Postgres configurations easily repeatable. TPA orchestrates creating and deploying Postgres. - -These quick starts are designed to let you quickly get a single region cluster. - -In these quick starts, you install TPA first. If you already have TPA installed, you can skip those steps. TPA is more of a tool than a simple installer. You can use the same installation of TPA to deploy many different configurations of Postgres clusters. - -You'll use TPA to generate a configuration file for a PGD demonstration cluster. This cluster will have three replicating database nodes, cohosting three high-availability proxies and one backup node. - -You will then use TPA to provision and deploy the required configuration and software to each node. - -## Selecting Docker, Linux hosts, or AWS quick starts - -Three quick starts are currently available: - -- Docker — Provisions, deploys, and hosts the cluster on Docker containers on a single machine. -- Linux hosts — Deploys and hosts the cluster on Linux servers that you already provisioned with an operating system and SSH connectivity. These can be actual physical servers or virtual machines, deployed on-premises or in the cloud. -- AWS — Provisions, deploys, and hosts the cluster on AWS. - -### Docker quick start - -The Docker quick start is ideal for those looking to initially explore PGD and its capabilities. This configuration of PGD isn't suitable for production use but can be valuable for testing the functionality and behavior of PGD clusters. You might also find it useful when familiarizing yourself with PGD commands and APIs to prepare for deploying on cloud, VM, or Linux hosts. - -- [Begin the Docker quick start](quick_start_docker). - -### Linux host quick start - -The Linux hosts quick start is suited if you intend to install PGD on your own hosts, where you have complete control of the hardware and software, or in a private cloud. The overall configuration is similar to the Docker configuration but is more persistent over system restarts and closer to a single-region production deployment of PGD. - -- [Begin the Linux host quick start](quick_start_linux). - -### AWS quick start - -The AWS quick start is more extensive and deploys the PGD cluster onto EC2 nodes on Amazon's cloud. The cluster's overall configuration is similar to the Docker quick start. However, instead of using Docker containers, it uses t3.micro instances of Amazon EC2 to provide the compute power. The AWS deployment is more persistent and not subject to the limitations of the Docker quick start deployment. However, it requires more initial setup to configure the AWS CLI. - -- [Begin the AWS quick start](quick_start_aws). 
-
-## Further explorations with your cluster
-
-- [Connect applications to your PGD cluster](connecting_applications/).
-- [Explore how PGD replicates data between the cluster's nodes](further_explore_replication/).
-- [Find out how a PGD cluster stands up to downtime of data nodes or proxies](further_explore_failover/).
-- [Learn about how EDB Postgres Distributed manages conflicting updates](further_explore_conflicts/).
-- [Move beyond the quick starts](next_steps/).
diff --git a/product_docs/docs/pgd/5.8/quickstart/next_steps.mdx b/product_docs/docs/pgd/5.8/quickstart/next_steps.mdx
deleted file mode 100644
index ceea92679da..00000000000
--- a/product_docs/docs/pgd/5.8/quickstart/next_steps.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Next steps with PGD"
-navTitle: "Next steps"
-description: >
-  Learning more about PGD.
----
-
-## Going further with your PGD cluster
-
-### Architecture
-
-This quick start created a single-region cluster of high-availability Postgres databases. This is the Always-on Single Location architecture, one of a range of available PGD architectures. Other architectures include Always-on Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. See [architectural options](../planning/architectures/).
-
-### Postgres versions
-
-This quick start deployed EDB Postgres Advanced Server to the database nodes. PGD can deploy three different kinds of Postgres distributions: EDB Postgres Advanced Server, EDB Postgres Extended Server, and open-source PostgreSQL. The choice of distribution affects PGD, offering [different capabilities](../planning/choosing_server/) depending on the server:
-
-* Open-source PostgreSQL doesn't support CAMO.
-* EDB Postgres Extended Server supports CAMO but doesn't offer Oracle compatibility.
-* EDB Postgres Advanced Server supports CAMO and offers optional Oracle compatibility.
-
-### Further reading
-
-* Learn PGD's [terminology](../terminology/), from asynchronous replication to write scalability.
-* Find out how [applications work](../appusage/) with PGD and how common Postgres features like [sequences](../sequences/) are globally distributed.
-* Discover how PGD supports [rolling upgrades](../upgrades/) of your clusters.
-* Take control of [routing](../routing/) and use SQL to control the PGD proxies.
-* Engage with the [PGD CLI](../cli/) to manage and monitor your cluster.
-
-
-## Deprovisioning the cluster
-
-When you're done testing the cluster, deprovision it.
-
-```shell
-tpaexec deprovision democluster
-```
-
-* With a Docker deployment, deprovisioning tears down the Docker containers, network, and other local configuration.
-* With an AWS deployment, deprovisioning removes the EC2 instances, VPC configuration, and other associated resources. Note that it leaves the S3 bucket it created. You must manually remove it.
diff --git a/product_docs/docs/pgd/5.8/quickstart/quick_start_aws.mdx b/product_docs/docs/pgd/5.8/quickstart/quick_start_aws.mdx
deleted file mode 100644
index 2afe7714d78..00000000000
--- a/product_docs/docs/pgd/5.8/quickstart/quick_start_aws.mdx
+++ /dev/null
@@ -1,299 +0,0 @@
----
-title: "Deploying an EDB Postgres Distributed example cluster on AWS"
-navTitle: "Deploying on AWS"
-description: >
-  A quick demonstration of deploying a PGD architecture using TPA on Amazon EC2
-redirects:
-  - /pgd/latest/deployments/tpaexec/quick_start/
-  - /pgd/latest/tpa/quick_start/
-  - /pgd/latest/quick_start_aws/
----
-
-
-This quick start sets up EDB Postgres Distributed with an Always-on Single Location architecture using Amazon EC2.
-
-## Introducing TPA and PGD
-
-We created TPA to make installing and managing various Postgres configurations easily repeatable. TPA orchestrates creating and deploying Postgres. In this quick start, you install TPA first. If you already have TPA installed, you can skip those steps. You can use TPA to deploy various configurations of Postgres clusters.
-
-PGD is a multi-master replicating implementation of Postgres designed for high performance and availability. The installation of PGD is orchestrated by TPA. You'll use TPA to generate a configuration file for a PGD demonstration cluster. This cluster uses Amazon EC2 instances and is configured with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) node for backup. You can then use TPA to provision and deploy the required configuration and software to each node.
-
-## Preparation
-
-!!! Note
-This set of steps is specifically for Ubuntu 22.04 LTS on Intel/AMD processors.
-!!!
-
-### EDB account
-
-To install both TPA and PGD, you need an EDB account.
-
-[Sign up for a free EDB account](https://www.enterprisedb.com/accounts/register) if you don't already have one. Signing up gives you a trial subscription to EDB's software repositories.
-
-After you're registered, go to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token.
-
-On your first visit to this page, select **Request Access** to generate your repo token. Copy the token using the **Copy Token** icon, and store it safely.
-
-### Install curl
-
-You use the `curl` command to retrieve installation scripts from repositories. On Ubuntu, curl isn't installed by default. To see if it's present, run `curl` in the terminal:
-
-```console
-$ curl
-Command 'curl' not found, but can be installed with:
-sudo apt install curl
-```
-
-If it's not found, run:
-
-```shell
-sudo apt -y install curl
-```
-
-### Setting environment variables
-
-First, set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the value of your EDB repo token, obtained in the [EDB account](#edb-account) step.
-
-```shell
-export EDB_SUBSCRIPTION_TOKEN=<your-token>
-```
-
-You can add this to your `.bashrc` script or similar shell profile to ensure it's always set.
-
-### Configure the repository
-
-All the software needed for this example is available from the EDB Postgres Distributed package repository. The following command downloads and runs a script to configure the EDB Postgres Distributed repository. This repository also contains the TPA packages.
-
-```shell
-curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash
-```
-
-!!! Tip "Troubleshooting repo access"
-    The script should produce output starting with:
-    ```text
-    Executing the setup script for the 'enterprisedb/postgres_distributed' repository ...
-    ```
-    If it produces no output or an error, double-check that you entered your token correctly. If the problem persists, [contact Support](https://support.enterprisedb.com) for assistance.
-
-## Installing Trusted Postgres Architect (TPA)
-
-You'll use TPA to provision and deploy PGD. If you previously installed TPA, you can move on to the [next step](#installing-pgd-using-tpa). You'll find full instructions for installing TPA in the [Trusted Postgres Architect documentation](/tpa/latest/INSTALL/), which we've also included here.
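-
-Before installing anything, you can check that the repository configuration from the previous step worked. This is a minimal sketch for Ubuntu, not a required step: it assumes apt and simply checks that the token is set in the current shell and that apt can see the `tpaexec` package.
-
-```shell
-# Fail early if the token isn't set in this shell.
-if [ -z "$EDB_SUBSCRIPTION_TOKEN" ]; then
-  echo "EDB_SUBSCRIPTION_TOKEN is not set" >&2
-else
-  # Shows which repository apt would install tpaexec from.
-  apt-cache policy tpaexec
-fi
-```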
- -### Linux environment - -[TPA supports several distributions of Linux](/tpa/latest/INSTALL/) as a host platform. These examples are written for Ubuntu 22.04, but steps are similar for other supported platforms. - -### Install the TPA package - -```shell -sudo apt install tpaexec -``` - -### Configuring TPA - -You need to configure TPA, which configures TPA's Python environment. Call `tpaexec` with the command `setup`: - -```shell -sudo /opt/EDB/TPA/bin/tpaexec setup -export PATH=$PATH:/opt/EDB/TPA/bin -``` - -You can add the `export` command to your shell's profile. - -### Testing the TPA installation - -You can verify TPA is correctly installed by running `selftest`: - -```shell -tpaexec selftest -``` -TPA is now installed. - -### AWS Credentials - -TPA uses your AWS credentials to perform the deployment onto AWS. Unless you -have a corporate-managed account, you need to [get your credentials from -AWS](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtogetcredentials.html). Corporate-managed accounts have their own process for obtaining credentials. - -Your credentials consist of an AWS Access Key ID and a Secret Access Key. You also need to select an AWS default region for your work. - -Set the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_DEFAULT_REGION` to the values of your AWS credentials. To ensure they're always set, you can add these to your `.bashrc` or similar shell profile. - -```shell -$ export AWS_ACCESS_KEY_ID=THISISJUSTANEXAMPLE -$ export AWS_SECRET_ACCESS_KEY=d0ntU5E/Th1SAs1ts/jUs7anEXAMPLEKEY -$ export AWS_DEFAULT_REGION=us-west-2 -``` - -Your account needs the necessary permissions to create and manage the resources that TPA uses. [TPA AWS platform](/tpa/latest/platform-aws/) details the permissions that you need. Consult your AWS administrator if you need help with this. - -## Installing PGD using TPA - -### Generating a configuration file - -Run the [`tpaexec configure`](/tpa/latest/tpaexec-configure/) command to generate a configuration folder: - -```shell-session -tpaexec configure democluster \ - --architecture PGD-Always-ON \ - --platform aws \ - --region eu-west-1 \ - --edb-postgres-advanced 16 \ - --redwood \ - --location-names dc1 \ - --pgd-proxy-routing local \ - --no-git \ - --hostnames-unsorted -``` - -You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD's Always-on architectures](../planning/architectures/). As part of the default architecture, -this configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) node for backup. - -Specify that you're using AWS (`--platform aws`) and eu-west-1 as the region (`--region eu-west-1`). - -TPA defaults to t3.micro instances on AWS. This is enough for this demonstration and also suitable for use with an [AWS free tier](https://aws.amazon.com/free/) account. - -!!! Warning AWS free tier limitations - AWS free tier limitations for EC2 are based on hours of instance usage. Depending on how much time you spend testing, you might exceed these limits and incur charges. - -By default, TPA configures Debian as the default OS for all nodes on AWS. - -!!! Note Deployment platforms - Other Linux platforms are supported as deployment targets for PGD. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details. 
-
-    Observe that you don't have to deploy PGD to the same platform you're using to run TPA!
-
-Specify that the data nodes will be running [EDB Postgres Advanced Server v16](/epas/latest/) (`--edb-postgres-advanced 16`) with Oracle compatibility (`--redwood`).
-
-You set the notional location of the nodes to `dc1` using `--location-names`. You then set `--pgd-proxy-routing` to `local` so that proxy routing can route traffic to all nodes in each location.
-
-By default, TPA commits configuration changes to a Git repository. For this example, you don't need to do that, so you pass the `--no-git` flag.
-
-Finally, you ask TPA to generate repeatable hostnames for the nodes by passing `--hostnames-unsorted`. Otherwise, it selects hostnames at random from a predefined list of suitable words.
-
-This command creates a subdirectory in the current working directory called `democluster`. It contains the `config.yml` configuration file TPA uses to create the cluster. You can view it using:
-
-```shell
-less democluster/config.yml
-```
-
-!!! SeeAlso "Further reading"
-    - View the full set of available options by running:
-      ```shell
-      tpaexec configure --architecture PGD-Always-ON --help
-      ```
-    - More details on PGD-Always-ON configuration options in [Deploying with TPA](../deploy-config/deploy-tpa/deploying/)
-    - [PGD-Always-ON](/tpa/latest/architecture-PGD-Always-ON/) in the TPA documentation
-    - [`tpaexec configure`](/tpa/latest/tpaexec-configure/) in the TPA documentation
-    - [AWS platform](/tpa/latest/platform-aws/) in the TPA documentation
-
-## Provisioning the cluster
-
-Next, allocate the resources needed to run the configuration you just created using the [`tpaexec provision`](/tpa/latest/tpaexec-provision/) command:
-
-```shell
-tpaexec provision democluster
-```
-
-Since you specified AWS as the platform (the default platform), TPA provisions EC2 instances, VPCs, subnets, routing tables, internet gateways, security groups, EBS volumes, elastic IPs, and so on.
-
-Because you didn't specify an existing S3 bucket when configuring, TPA also prompts you to confirm the creation of one.
-
-!!! Warning Remember to remove the bucket when you're done testing!
-    TPA doesn't remove the bucket that it creates in this step when you later deprovision the cluster. Take note of the name now, so that you can be sure to remove it later.
-
-!!! SeeAlso "Further reading"
-    - [`tpaexec provision`](/tpa/latest/tpaexec-provision/) in the Trusted Postgres Architect documentation
-
-## Deploying the cluster
-
-With configuration in place and infrastructure provisioned, you can now [deploy](/tpa/latest/tpaexec-deploy/) the distributed cluster:
-
-```shell
-tpaexec deploy democluster
-```
-
-TPA applies the configuration, installing the needed packages and setting up the actual EDB Postgres Distributed cluster.
-
-!!! SeeAlso "Further reading"
-    - [`tpaexec deploy`](/tpa/latest/tpaexec-deploy/) in the Trusted Postgres Architect documentation
-
-## Connecting to the cluster
-
-You're now ready to log in to one of the nodes of the cluster with SSH and then connect to the database. Part of the configuration process is to set up SSH logins for all the nodes, complete with keys. To use the SSH configuration, you need to be in the `democluster` directory created by the `tpaexec configure` command earlier:
-
-```shell
-cd democluster
-```
-
-From there, you can run `ssh -F ssh_config <hostname>` to establish an SSH connection.
You will connect to kaboom, the first database node in the cluster: - -```shell -ssh -F ssh_config kaboom -__OUTPUT__ -[admin@kaboom ~]# -``` - -Notice that you're logged in as admin on kaboom. - -You now need to adopt the identity of the enterprisedb user. This user is preconfigured and authorized to connect to the cluster's nodes. - -```shell -sudo -iu enterprisedb -__OUTPUT__ -enterprisedb@kaboom:~ $ -``` - -You can now run the `psql` command to access the bdrdb database: - -```shell -psql bdrdb -__OUTPUT__ -psql (16.2.0, server 16.2.0) -Type "help" for help. - -bdrdb=# -``` - -You're directly connected to the Postgres database running on the kaboom node and can start issuing SQL commands. - -To leave the SQL client, enter `exit`. - -### Using PGD CLI - -The pgd utility, also known as the PGD CLI, lets you control and manage your EDB Postgres Distributed cluster. It's already installed on the node. - -You can use it to check the cluster's health by running `pgd cluster show --health`: - -```shell -pgd cluster show --health -__OUTPUT__ -Check Status Details ------------------ ------ ----------------------------------------------- -Connections Ok All BDR nodes are accessible -Raft Ok Raft Consensus is working correctly -Replication Slots Ok All PGD replication slots are working correctly -Clock Skew Ok Clock drift is within permissible limit -Versions Ok All nodes are running the same PGD version -``` - -Or, you can use `pgd nodes list` to ask PGD to show you the data-bearing nodes in the cluster: - -```shell -pgd nodes list -__OUTPUT__ -Node Name Group Name Node Kind Join State Node Status ---------- ------------ --------- ---------- ----------- -kaftan dc1_subgroup data ACTIVE Up -kaolin dc1_subgroup data ACTIVE Up -kaboom dc1_subgroup data ACTIVE Up -``` - -## Explore your cluster - -* [Connect your database](connecting_applications) to applications. -* [Explore replication](further_explore_replication) with hands-on exercises. -* [Explore failover](further_explore_failover) with hands-on exercises. -* [Understand conflicts](further_explore_conflicts) by creating and monitoring them. -* Take the [next steps](next_steps) for working with your cluster. diff --git a/product_docs/docs/pgd/5.8/quickstart/quick_start_cloud.mdx b/product_docs/docs/pgd/5.8/quickstart/quick_start_cloud.mdx deleted file mode 100644 index 2e01eee8bf3..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/quick_start_cloud.mdx +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: "Deploying an EDB Postgres Distributed example cluster on Azure and Google cloud platforms" -navTitle: "Deploying on Azure and Google" -description: > - A quick guide to deploying a PGD architecture using TPA on Azure and Google clouds -redirects: - - /pgd/latest/quick_start_cloud/ -hideToC: True ---- - -## Deploying on Azure and Google clouds - -For most cloud platforms, such as Azure and Google Cloud Platform, you create Linux hosts on the cloud platform you're using. You can then use the [Deploying on Linux hosts](quick_start_linux) quick start to deploy PGD to those Linux hosts. - -* Azure users can follow [a Microsoft guide](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu) on how to provision Azure VMs loaded with Linux. -* Google Cloud Platform users can follow [a Google guide](https://cloud.google.com/compute/docs/create-linux-vm-instance) on how to provision GCP VMs with Linux loaded. - -Then continue with [Deploying on Linux hosts](quick_start_linux). 
- -For AWS users, see the [Deploying on AWS](quick_start_aws) quick start. Using this quick start, TPA -provisions the hosts needed to create your cluster. diff --git a/product_docs/docs/pgd/5.8/quickstart/quick_start_docker.mdx b/product_docs/docs/pgd/5.8/quickstart/quick_start_docker.mdx deleted file mode 100644 index 81940ae1b9c..00000000000 --- a/product_docs/docs/pgd/5.8/quickstart/quick_start_docker.mdx +++ /dev/null @@ -1,289 +0,0 @@ ---- -title: "Deploying an EDB Postgres Distributed example cluster on Docker" -navTitle: "Deploying on Docker" -description: > - A quick demonstration of deploying a PGD architecture using TPA on Docker -redirects: - - /pgd/latest/quick_start_docker/ ---- - - -This quick start uses TPA to set up PGD with an Always-on Single Location architecture using local Docker containers. - -## Introducing TPA and PGD - -We created TPA to make installing and managing various Postgres configurations easily repeatable. TPA orchestrates creating and deploying Postgres. In this quick start, you install TPA first. If you already have TPA installed, you can skip those steps. You can use TPA to deploy various configurations of Postgres clusters. - -PGD is a multi-master replicating implementation of Postgres designed for high performance and availability. The installation of PGD is orchestrated by TPA. You will use TPA to generate a configuration file for a PGD demonstration cluster. - -This cluster uses local Docker containers to host the cluster's nodes: three replicating database nodes, three cohosted connection proxies, and one backup node. You can then use TPA to provision and deploy the required configuration and software to each node. - -This configuration of PGD isn't suitable for production use but can be valuable for testing the functionality and behavior of PGD clusters. You might also find it useful when familiarizing yourself with PGD commands and APIs to prepare for deployment on cloud, VM, or Linux hosts. - -!!! Note -This set of steps is specifically for Ubuntu 22.04 LTS on Intel/AMD processors. -!!! - -## Prerequisites - -To complete this example, you need a system with enough RAM and free storage. You also need curl and Docker installed. - -### RAM requirements - -You need a minimum of 4GB of RAM on the system. You need this much RAM because you will be running four containers, three of which will be hosting Postgres databases. - -### Free disk space - -You need at least 5GB of free storage, accessible by Docker, to deploy the cluster described by this example. We recommend that you have a bit more than that. - -### The curl utility - -You will download and run scripts during this quick start using the curl utility, which might not be installed by default. To ensure that curl is installed, run: - -```shell -sudo apt update -sudo apt install curl -``` - -### Docker Engine - -You will use Docker containers as the target platform for this PGD deployment. Install Docker Engine: - -```shell -sudo apt update -sudo apt install docker.io -``` - -!!! Important Running as a non-root user - Once Docker Engine is installed, be sure to add your user to the Docker group: - - ```shell - sudo usermod -aG docker - newgrp docker - ``` - - -## Preparation - -### EDB account - -To install both TPA and PGD, you need an EDB account. - -[Sign up for a free EDB account](https://www.enterprisedb.com/accounts/register) if you don't already have one. Signing up gives you a trial subscription to EDB's software repositories. 
-
-After you're registered, go to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token.
-
-On your first visit to this page, select **Request Access** to generate your repo token. Copy the token using the **Copy Token** icon, and store it safely.
-
-### Setting environment variables
-
-First, set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the value of your EDB repo token, obtained in the [EDB account](#edb-account) step.
-
-```shell
-export EDB_SUBSCRIPTION_TOKEN=<your-token>
-```
-
-You can add this to your `.bashrc` script or similar shell profile to ensure it's always set.
-
-### Configure the repository
-
-All the software needed for this example is available from the EDB Postgres Distributed package repository. The following command downloads and runs a script to configure the EDB Postgres Distributed repository. This repository also contains the TPA packages.
-
-```shell
-curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash
-```
-
-!!! Tip "Troubleshooting repo access"
-    The script should produce output starting with:
-    ```text
-    Executing the setup script for the 'enterprisedb/postgres_distributed' repository ...
-    ```
-    If it produces no output or an error, double-check that you entered your token correctly. If the problem persists, [contact Support](https://support.enterprisedb.com) for assistance.
-
-## Installing Trusted Postgres Architect (TPA)
-
-You'll use TPA to provision and deploy PGD. If you previously installed TPA, you can move on to the [next step](#installing-pgd-using-tpa). You'll find full instructions for installing TPA in the [Trusted Postgres Architect documentation](/tpa/latest/INSTALL/), which we've also included here.
-
-### Linux environment
-
-[TPA supports several distributions of Linux](/tpa/latest/INSTALL/) as a host platform. These examples are written for Ubuntu 22.04, but steps are similar for other supported platforms.
-
-### Install the TPA package
-
-```shell
-sudo apt install tpaexec
-```
-
-### Configuring TPA
-
-You now need to configure TPA, which configures TPA's Python environment. Call `tpaexec` with the command `setup`:
-
-```shell
-sudo /opt/EDB/TPA/bin/tpaexec setup
-export PATH=$PATH:/opt/EDB/TPA/bin
-```
-
-You can add the `export` command to your shell's profile.
-
-### Testing the TPA installation
-
-You can verify TPA is correctly installed by running `selftest`:
-
-```shell
-tpaexec selftest
-```
-TPA is now installed.
-
-## Installing PGD using TPA
-
-### Generating a configuration file
-
-Run the [`tpaexec configure`](/tpa/latest/tpaexec-configure/) command to generate a configuration folder:

-```shell-session
-tpaexec configure democluster \
-    --architecture PGD-Always-ON \
-    --platform docker \
-    --edb-postgres-advanced 16 \
-    --redwood \
-    --location-names dc1 \
-    --pgd-proxy-routing local \
-    --no-git \
-    --hostnames-unsorted \
-    --keyring-backend legacy
-```
-
-You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) node for backup.
-
-Specify that you're using Docker (`--platform docker`). By default, TPA configures Rocky Linux as the default image for all nodes.
-
-!!! Note Deployment platforms
-    Other Linux platforms are supported as deployment targets for PGD. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details.
-
-    Observe that you don't have to deploy PGD to the same platform you're using to run TPA!
-
-Specify that the data nodes will be running [EDB Postgres Advanced Server v16](/epas/latest/) (`--edb-postgres-advanced 16`) with Oracle compatibility (`--redwood`).
-
-You set the notional location of the nodes to `dc1` using `--location-names`. You then set `--pgd-proxy-routing` to `local` so that proxy routing can route traffic to all nodes in each location.
-
-By default, TPA commits configuration changes to a Git repository. For this example, you don't need to do that, so pass the `--no-git` flag.
-
-You also ask TPA to generate repeatable hostnames for the nodes by passing `--hostnames-unsorted`. Otherwise, it selects hostnames at random from a predefined list of suitable words.
-
-Finally, `--keyring-backend legacy` tells TPA to use the legacy keyring backend to store secrets, because the version of Ubuntu this example is based on doesn't support the newer keyring backend.
-
-This command creates a subdirectory called `democluster` in the current working directory. It contains the `config.yml` configuration file TPA uses to create the cluster. You can view it using:
-
-```shell
-less democluster/config.yml
-```
-
-!!! SeeAlso "Further reading"
-    - View the full set of available options by running:
-      ```shell
-      tpaexec configure --architecture PGD-Always-ON --help
-      ```
-    - More details on PGD-Always-ON configuration options in [Deploying with TPA](../deploy-config/deploy-tpa/)
-    - [PGD-Always-ON](/tpa/latest/architecture-PGD-Always-ON/) in the TPA documentation
-    - [`tpaexec configure`](/tpa/latest/tpaexec-configure/) in the TPA documentation
-    - [Docker platform](/tpa/latest/platform-docker/) in the TPA documentation
-
-
-### Deploying the cluster
-
-You can now [deploy](/tpa/latest/tpaexec-deploy/) the distributed cluster. For Docker deployments, deploying both provisions the required Docker containers and deploys the software to those containers:
-
-```shell
-tpaexec deploy democluster
-```
-
-TPA applies the configuration, installing the needed packages and setting up the actual EDB Postgres Distributed cluster.
-
-!!! SeeAlso "Further reading"
-    - [`tpaexec deploy`](/tpa/latest/tpaexec-deploy/) in the Trusted Postgres Architect documentation
-
-## Connecting to the cluster
-
-You're now ready to log in to one of the nodes of the cluster with SSH and then connect to the database. Part of the configuration process is to set up SSH logins for all the nodes, complete with keys. To use the SSH configuration, you need to be in the `democluster` directory created by the `tpaexec configure` command earlier:
-
-```shell
-cd democluster
-```
-
-From there, you can run `ssh -F ssh_config <hostname>` to establish an SSH connection. You will connect to kaboom, the first database node in the cluster:
-
-```shell
-ssh -F ssh_config kaboom
-__OUTPUT__
-[root@kaboom ~]#
-```
-
-Notice that you're logged in as `root` on `kaboom`.
-
-You now need to adopt the identity of the enterprisedb user. This user is preconfigured and authorized to connect to the cluster's nodes.
-
-```shell
-sudo -iu enterprisedb
-__OUTPUT__
-enterprisedb@kaboom:~ $
-```
-
-You can now run the `psql` command to access the `bdrdb` database:
-
-```shell
-psql bdrdb
-__OUTPUT__
-psql (16.2.0, server 16.2.0)
-Type "help" for help.
-
-bdrdb=#
-```
-
-You're directly connected to the Postgres database running on the kaboom node and can start issuing SQL commands.
-
-To leave the SQL client, enter `exit`.
-
-### Using PGD CLI
-
-The pgd utility, also known as the PGD CLI, lets you control and manage your EDB Postgres Distributed cluster. It's already installed on the node.
-
-You can use it to check the cluster's health by running `pgd cluster show --health`:
-
-```shell
-pgd cluster show --health
-__OUTPUT__
-Check             Status Details
------------------ ------ -----------------------------------------------
-Connections       Ok     All BDR nodes are accessible
-Raft              Ok     Raft Consensus is working correctly
-Replication Slots Ok     All PGD replication slots are working correctly
-Clock Skew        Ok     Clock drift is within permissible limit
-Versions          Ok     All nodes are running the same PGD version
-```
-
-Or, you can use `pgd nodes list` to ask PGD to show you the data-bearing nodes in the cluster:
-
-```shell
-pgd nodes list
-__OUTPUT__
-Node Name Group Name   Node Kind Join State Node Status
---------- ------------ --------- ---------- -----------
-kaftan    dc1_subgroup data      ACTIVE     Up
-kaolin    dc1_subgroup data      ACTIVE     Up
-kaboom    dc1_subgroup data      ACTIVE     Up
-```
-
-## Explore your cluster
-
-* [Connect your database](connecting_applications) to applications.
-* [Explore replication](further_explore_replication) with hands-on exercises.
-* [Explore failover](further_explore_failover) with hands-on exercises.
-* [Understand conflicts](further_explore_conflicts) by creating and monitoring them.
-* Take the [next steps](next_steps) for working with your cluster.
diff --git a/product_docs/docs/pgd/5.8/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5.8/quickstart/quick_start_linux.mdx
deleted file mode 100644
index 2147435b16a..00000000000
--- a/product_docs/docs/pgd/5.8/quickstart/quick_start_linux.mdx
+++ /dev/null
@@ -1,375 +0,0 @@
----
-title: "Deploying an EDB Postgres Distributed example cluster on Linux hosts"
-navTitle: "Deploying on Linux hosts"
-description: >
-  A quick demonstration of deploying a PGD architecture using TPA on Linux hosts
-redirects:
-  - /pgd/latest/quick_start_bare/
----
-
-## Introducing TPA and PGD
-
-We created TPA to make installing and managing various Postgres configurations easily repeatable. TPA orchestrates creating and deploying Postgres. In this quick start, you install TPA first. If you already have TPA installed, you can skip those steps. You can use TPA to deploy various configurations of Postgres clusters.
-
-PGD is a multi-master replicating implementation of Postgres designed for high performance and availability. The installation of PGD is orchestrated by TPA. You will use TPA to generate a configuration file for a PGD demonstration cluster.
-
-The TPA Linux host option allows users of any cloud or VM platform to use TPA to configure EDB Postgres Distributed. All TPA requires is that the target systems are configured with a Linux operating system and are accessible using SSH. Unlike the other TPA platforms (Docker and AWS), the Linux host configuration doesn't provision the target machines. You need to provision them wherever you decide to deploy.
-
-This cluster uses Linux server instances to host the cluster's nodes.
The nodes include three replicating database nodes, three cohosted connection proxies, and one backup node. TPA can then provision, prepare, and deploy the required EDB Postgres Distributed software and configuration to each node. - -!!! Note On host compatibility -This set of steps is specifically for users running Ubuntu 22.04 LTS on Intel/AMD processors. -!!! - -## Prerequisites - -### Configure your Linux hosts - -You need to provision four hosts for this quick start. Each host must have a [supported Linux operating system](/tpa/latest/reference/distributions/) installed. To eliminate prompts for password, each host also needs to be SSH accessible using certificate key pairs. - -!!! Note On machine provisioning -Azure users can follow [a Microsoft guide](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu) on how to provision Azure VMs loaded with Linux. Google Cloud Platform users can follow [a Google guide](https://cloud.google.com/compute/docs/create-linux-vm-instance) on how to provision GCP VMs with Linux loaded. You can use any virtual machine technology to host a Linux instance, too. Refer to your virtualization platform's documentation for instructions on how to create instances with Linux loaded on them. - -Whichever cloud or VM platform you use, you need to make sure that each instance is accessible by SSH and that each instance can connect to the other instances. They can connect through either the public network or over a VPC for the cloud platforms. You can connect through your local network for on-premises VMs. - -If you can't do this, you might want to consider the Docker or AWS quick start. These configurations are easier to set up and quicker to tear down. The [AWS quick start](quick_start_aws), for example, automatically provisions compute instances and creates a VPC for those instances. -!!! - -In this quick start, you will install PGD nodes onto four hosts configured in the cloud. Each of these hosts in this example is installed with Rocky Linux. Each has a public IP address to go with its private IP address. - -| Host name | Public IP | Private IP | -| ----------- | ------------------------ | -------------- | -| linuxhost-1 | 172.19.16.27 | 192.168.2.247 | -| linuxhost-2 | 172.19.16.26 | 192.168.2.41 | -| linuxhost-3 | 172.19.16.25 | 192.168.2.254 | -| linuxhost-4 |172.19.16.15 | 192.168.2.30 | - -These are example IP addresses. Substitute them with your own public and private IP addresses as you progress through the quick start. - -### Set up a host admin user - -Each machine requires a user account to use for installation. For simplicity, use a user with the same name on all the hosts. On each host, also configure the user so that you can SSH into the host without being prompted for a password. Be sure to give that user sudo privileges on the host. On the four hosts, the user rocky is already configured with sudo privileges. - -## Preparation - -### EDB account - -You need an EDB account to install both TPA and PGD. - -[Sign up for a free EDB account](https://www.enterprisedb.com/accounts/register) if you don't already have one. Signing up gives you a trial subscription to EDB's software repositories. - -After you're registered, go to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token. - -On your first visit to this page, select **Request Access** to generate your repo token. Copy the token using the **Copy Token** icon, and store it safely. 
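-
-Before moving on, you can confirm that the admin user can reach all four hosts without prompts. This is a minimal sketch using the example IP addresses and the rocky user from the table above, not a required step; substitute your own values.
-
-```shell
-# BatchMode makes ssh fail rather than prompt for a password, and
-# sudo -n fails if passwordless sudo isn't configured on the host.
-for ip in 172.19.16.27 172.19.16.26 172.19.16.25 172.19.16.15; do
-  ssh -o BatchMode=yes rocky@"$ip" 'sudo -n true && echo "$(hostname): SSH and sudo OK"'
-done
-```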
-
-
-### Setting environment variables
-
-First, set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the value of your EDB repo token, obtained in the [EDB account](#edb-account) step.
-
-```shell
-export EDB_SUBSCRIPTION_TOKEN=<your-token>
-```
-
-You can add this to your `.bashrc` script or similar shell profile to ensure it's always set.
-
-### Configure the repository
-
-All the software needed for this example is available from the EDB Postgres Distributed package repository. Download and run a script to configure the EDB Postgres Distributed repository. This repository also contains the TPA packages.
-
-```shell
-curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash
-```
-## Installing Trusted Postgres Architect (TPA)
-
-You'll use TPA to provision and deploy PGD. If you previously installed TPA, you can move on to the [next step](#installing-pgd-using-tpa). You'll find full instructions for installing TPA in the [Trusted Postgres Architect documentation](/tpa/latest/INSTALL/), which we've also included here.
-
-### Linux environment
-
-[TPA supports several distributions of Linux](/tpa/latest/INSTALL/) as a host platform. These examples are written for Ubuntu 22.04, but steps are similar for other supported platforms.
-
-### Install the TPA package
-
-```shell
-sudo apt install tpaexec
-```
-
-### Configuring TPA
-
-You now need to configure TPA, which configures TPA's Python environment. Call `tpaexec` with the command `setup`:
-
-```shell
-sudo /opt/EDB/TPA/bin/tpaexec setup
-export PATH=$PATH:/opt/EDB/TPA/bin
-```
-
-You can add the `export` command to your shell's profile.
-
-### Testing the TPA installation
-
-You can verify TPA is correctly installed by running `selftest`:
-
-```shell
-tpaexec selftest
-```
-TPA is now installed.
-
-## Installing PGD using TPA
-
-### Generating a configuration file
-
-Run the [`tpaexec configure`](/tpa/latest/tpaexec-configure/) command to generate a configuration folder:
-
-```shell
-tpaexec configure democluster \
-    --architecture PGD-Always-ON \
-    --platform bare \
-    --edb-postgres-advanced 16 \
-    --redwood \
-    --no-git \
-    --location-names dc1 \
-    --pgd-proxy-routing local \
-    --hostnames-unsorted
-```
-
-You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](../backup/#physical-backup) node for backup.
-
-For Linux hosts, specify that you're targeting a "bare" platform (`--platform bare`). TPA determines the Linux version running on each host during deployment. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details about the supported operating systems.
-
-Specify that the data nodes will be running [EDB Postgres Advanced Server v16](/epas/latest/) (`--edb-postgres-advanced 16`) with Oracle compatibility (`--redwood`).
-
-You set the notional location of the nodes to `dc1` using `--location-names`. You then set `--pgd-proxy-routing` to `local` so that proxy routing can route traffic to all nodes in each location.
-
-By default, TPA commits configuration changes to a Git repository. For this example, you don't need to do that, so pass the `--no-git` flag.
- -Finally, you ask TPA to generate repeatable hostnames for the nodes by passing `--hostnames-unsorted`. Otherwise, it selects hostnames at random from a predefined list of suitable words. - -This command creates a subdirectory in the current working directory called `democluster`. It contains the `config.yml` configuration file TPA uses to create the cluster. You can view it using: - -```shell -less democluster/config.yml -``` - -You now need to edit the configuration file to add details related to your Linux hosts, such as admin user names and public and private IP addresses. - -## Editing your configuration - -Using your preferred editor, open `democluster/config.yml`. - -Search for the line containing `ansible_user: root`. Change `root` to the name of the user you configured with SSH access and sudo privileges. Follow that with this line: - -```yaml - manage_ssh_hostkeys: yes -``` - -Your `instance_defaults` section now looks like this: - -```yaml -instance_defaults: - platform: bare - vars: - ansible_user: rocky - manage_ssh_hostkeys: yes -``` -Next, search for `node: 1`, which is the configuration settings of the first node, kaboom. - -After the `node: 1` line, add the public and private IP addresses of your node. Use `linuxhost-1` as the host for this node. Add the following to the file, substituting your IP addresses. Align the start of each line with the start of the `node:` line. - -```yaml - public_ip: 172.19.16.27 - private_ip: 192.168.2.247 -``` - -The whole entry for kaboom looks like this but with your own IP addresses: - -```yaml -- Name: kaboom - backup: kapok - location: dc1 - node: 1 - public_ip: 172.19.16.27 - private_ip: 192.168.2.247 - role: - - bdr - - pgd-proxy - vars: - bdr_child_group: dc1_subgroup - bdr_node_options: - route_priority: 100 -``` -Repeat this process for the three other nodes. - -Search for `node: 2`, which is the configuration settings for the node kaftan. Use `linuxhost-2` for this node. Substituting your IP addresses, add: - -```yaml - public_ip: 172.19.16.26 - private_ip: 192.168.2.41 -``` - -Search for `node: 3`, which is the configuration settings for the node kaolin. Use `linuxhost-3` for this node. Substituting your IP addresses, add: - -```yaml - public_ip: 172.19.16.25 - private_ip: 192.168.2.254 -``` - -Finally, search for `node: 4`, which is the configuration settings for the node kapok. Use `linuxhost-4` for this node. Substituting your IP addresses, add: - -```yaml - public_ip: 172.19.16.15 - private_ip: 192.168.2.30 -``` - -## Provisioning the cluster - -You can now run: - -``` -tpaexec provision democluster -``` - -This command prepares for deploying the cluster. (On other platforms, such as Docker and AWS, this command also creates the required hosts. When using Linux hosts, your hosts must already be configured.) - -!!! SeeAlso "Further reading" - - [`tpaexec provision`](/tpa/latest/tpaexec-provision/) in the Trusted Postgres Architect documentation - - -One part of this process for Linux hosts is creating key-pairs for the hosts for SSH operations later. With those key-pairs created, you need to copy the public part of the key-pair to the hosts. You can do this with `ssh-copy-id`, giving the democluster identity (`-i`) and the login to each host. 
For this example, these are the commands:
-
-```shell
-ssh-copy-id -i democluster/id_democluster rocky@172.19.16.27
-ssh-copy-id -i democluster/id_democluster rocky@172.19.16.26
-ssh-copy-id -i democluster/id_democluster rocky@172.19.16.25
-ssh-copy-id -i democluster/id_democluster rocky@172.19.16.15
-```
-
-
-You can now create the `tpa_known_hosts` file, which allows the hosts to be verified. Use `ssh-keyscan` on each host (`-H`) and append its output to `tpa_known_hosts`:
-
-```shell
-ssh-keyscan -H 172.19.16.27 >> democluster/tpa_known_hosts
-ssh-keyscan -H 172.19.16.26 >> democluster/tpa_known_hosts
-ssh-keyscan -H 172.19.16.25 >> democluster/tpa_known_hosts
-ssh-keyscan -H 172.19.16.15 >> democluster/tpa_known_hosts
-```
-
-## Deploy your cluster
-
-You now have everything ready to deploy your cluster. To deploy, run:
-
-```shell
-tpaexec deploy democluster
-```
-
-TPA applies the configuration, installing the needed packages and setting up the actual EDB Postgres Distributed cluster.
-
-!!! SeeAlso "Further reading"
-    - [`tpaexec deploy`](/tpa/latest/tpaexec-deploy/) in the Trusted Postgres Architect documentation
-
-## Connecting to the cluster
-
-You're now ready to log in to one of the nodes of the cluster with SSH and then connect to the database. Part of the configuration process was to set up SSH logins for all the nodes, complete with keys. To use the SSH configuration, you need to be in the `democluster` directory created by the `tpaexec configure` command earlier:
-
-```shell
-cd democluster
-```
-
-From there, you can run `ssh -F ssh_config <hostname>` to establish an SSH connection. Connect to kaboom, the first database node in the cluster:
-
-```shell
-ssh -F ssh_config kaboom
-__OUTPUT__
-[rocky@kaboom ~]#
-```
-
-Notice that you're logged in as rocky, the admin user and ansible user you configured earlier, on kaboom.
-
-You now need to adopt the identity of the enterprisedb user. This user is preconfigured and authorized to connect to the cluster's nodes.
-
-```shell
-sudo -iu enterprisedb
-__OUTPUT__
-enterprisedb@kaboom:~ $
-```
-
-You can now run the `psql` command to access the `bdrdb` database:
-
-```shell
-psql bdrdb
-__OUTPUT__
-psql (16.2.0, server 16.2.0)
-Type "help" for help.
-
-bdrdb=#
-```
-
-You're directly connected to the Postgres database running on the kaboom node and can start issuing SQL commands.
-
-To leave the SQL client, enter `exit`.
-
-### Using PGD CLI
-
-The pgd utility, also known as the PGD CLI, lets you control and manage your EDB Postgres Distributed cluster. It's already installed on the node.
-
-You can use it to check the cluster's health by running `pgd cluster show --health`:
-
-```shell
-pgd cluster show --health
-__OUTPUT__
-Connections
------------
-Checks if all BDR nodes are accessible.
-
-Result: Ok, all BDR nodes are accessible
-
-
-Raft
-----
-Raft Consensus status. Checks if all data and witness nodes are participating
-in raft and have the same leader.
-
-Result: Ok, raft Consensus is working correctly
-
-
-Replication Slots
------------------
-Checks if all PGD replication slots are working correctly.
-
-Result: Ok, all PGD replication slots are working correctly
-
-
-Clock Skew
-----------
-Clock drift between nodes. Uses raft leader as reference node to calculate
-clock drift. High clock drift can affect conflict resolution and potentially
-cause inconsistency.
-
-Result: Ok, clock drift is within permissible limit
-
-
-Versions
---------
-Checks if all nodes are running the same PGD version.
-
-Result: Ok, all nodes are running the same PGD version
-```
-
-Or, you can use `pgd nodes list` to ask PGD to show you the data-bearing nodes in the cluster:
-
-```shell
-pgd nodes list
-__OUTPUT__
-Node Name Group Name   Node Kind Join State Node Status
---------- ------------ --------- ---------- -----------
-kaftan    dc1_subgroup data      ACTIVE     Up
-kaolin    dc1_subgroup data      ACTIVE     Up
-kaboom    dc1_subgroup data      ACTIVE     Up
-```
-
-## Explore your cluster
-
-* [Connect your database](connecting_applications) to applications.
-* [Explore replication](further_explore_replication) with hands-on exercises.
-* [Explore failover](further_explore_failover) with hands-on exercises.
-* [Understand conflicts](further_explore_conflicts) by creating and monitoring them.
-* Take the [next steps](next_steps) for working with your cluster.
diff --git a/product_docs/docs/pgd/5.8/reference/index.json b/product_docs/docs/pgd/5.8/reference/index.json
deleted file mode 100644
index cfee353bdf6..00000000000
--- a/product_docs/docs/pgd/5.8/reference/index.json
+++ /dev/null
@@ -1,376 +0,0 @@
-{
-    "bdrcamo_decision_journal": "/pgd/5.8/reference/catalogs-visible#bdrcamo_decision_journal",
-    "bdrcommit_scopes": "/pgd/5.8/reference/catalogs-visible#bdrcommit_scopes",
-    "bdrconflict_history": "/pgd/5.8/reference/catalogs-visible#bdrconflict_history",
-    "bdrconflict_history_summary": "/pgd/5.8/reference/catalogs-visible#bdrconflict_history_summary",
-    "bdrconsensus_kv_data": "/pgd/5.8/reference/catalogs-visible#bdrconsensus_kv_data",
-    "bdrcrdt_handlers": "/pgd/5.8/reference/catalogs-visible#bdrcrdt_handlers",
-    "bdrddl_replication": "/pgd/5.8/reference/pgd-settings#bdrddl_replication",
-    "bdrdepend": "/pgd/5.8/reference/catalogs-visible#bdrdepend",
-    "bdrfailover_replication_slots": "/pgd/5.8/reference/catalogs-visible#bdrfailover_replication_slots",
-    "bdrglobal_consensus_journal": "/pgd/5.8/reference/catalogs-visible#bdrglobal_consensus_journal",
-    "bdrglobal_consensus_journal_details": "/pgd/5.8/reference/catalogs-visible#bdrglobal_consensus_journal_details",
-    "bdrglobal_consensus_response_journal": "/pgd/5.8/reference/catalogs-visible#bdrglobal_consensus_response_journal",
-    "bdrglobal_lock": "/pgd/5.8/reference/catalogs-visible#bdrglobal_lock",
-    "bdrglobal_locks": "/pgd/5.8/reference/catalogs-visible#bdrglobal_locks",
-    "bdrgroup_camo_details": "/pgd/5.8/reference/catalogs-visible#bdrgroup_camo_details",
-    "bdrgroup_raft_details": "/pgd/5.8/reference/catalogs-visible#bdrgroup_raft_details",
-    "bdrgroup_replslots_details": "/pgd/5.8/reference/catalogs-visible#bdrgroup_replslots_details",
-    "bdrgroup_subscription_summary": "/pgd/5.8/reference/catalogs-visible#bdrgroup_subscription_summary",
-    "bdrgroup_versions_details": "/pgd/5.8/reference/catalogs-visible#bdrgroup_versions_details",
-    "bdrleader": "/pgd/5.8/reference/catalogs-visible#bdrleader",
-    "bdrlocal_consensus_snapshot": "/pgd/5.8/reference/catalogs-visible#bdrlocal_consensus_snapshot",
-    "bdrlocal_consensus_state": "/pgd/5.8/reference/catalogs-visible#bdrlocal_consensus_state",
-    "bdrlocal_node": "/pgd/5.8/reference/catalogs-visible#bdrlocal_node",
-    "bdrlocal_node_summary": "/pgd/5.8/reference/catalogs-visible#bdrlocal_node_summary",
-    "bdrlocal_sync_status": "/pgd/5.8/reference/catalogs-visible#bdrlocal_sync_status",
-    "bdrnode": "/pgd/5.8/reference/catalogs-visible#bdrnode",
-    "bdrnode_catchup_info": "/pgd/5.8/reference/catalogs-visible#bdrnode_catchup_info",
-    "bdrnode_catchup_info_details":
"/pgd/5.8/reference/catalogs-visible#bdrnode_catchup_info_details", - "bdrnode_conflict_resolvers": "/pgd/5.8/reference/catalogs-visible#bdrnode_conflict_resolvers", - "bdrnode_group": "/pgd/5.8/reference/catalogs-visible#bdrnode_group", - "bdrnode_group_replication_sets": "/pgd/5.8/reference/catalogs-visible#bdrnode_group_replication_sets", - "bdrnode_group_summary": "/pgd/5.8/reference/catalogs-visible#bdrnode_group_summary", - "bdrnode_local_info": "/pgd/5.8/reference/catalogs-visible#bdrnode_local_info", - "bdrnode_log_config": "/pgd/5.8/reference/catalogs-visible#bdrnode_log_config", - "bdrnode_peer_progress": "/pgd/5.8/reference/catalogs-visible#bdrnode_peer_progress", - "bdrnode_replication_rates": "/pgd/5.8/reference/catalogs-visible#bdrnode_replication_rates", - "bdrnode_slots": "/pgd/5.8/reference/catalogs-visible#bdrnode_slots", - "bdrnode_summary": "/pgd/5.8/reference/catalogs-visible#bdrnode_summary", - "bdrqueue": "/pgd/5.8/reference/catalogs-visible#bdrqueue", - "bdrreplication_set": "/pgd/5.8/reference/catalogs-visible#bdrreplication_set", - "bdrreplication_set_table": "/pgd/5.8/reference/catalogs-visible#bdrreplication_set_table", - "bdrreplication_set_ddl": "/pgd/5.8/reference/catalogs-visible#bdrreplication_set_ddl", - "bdrreplication_sets": "/pgd/5.8/reference/catalogs-visible#bdrreplication_sets", - "bdrschema_changes": "/pgd/5.8/reference/catalogs-visible#bdrschema_changes", - "bdrsequence_alloc": "/pgd/5.8/reference/catalogs-visible#bdrsequence_alloc", - "bdrsequences": "/pgd/5.8/reference/catalogs-visible#bdrsequences", - "bdrstat_activity": "/pgd/5.8/reference/catalogs-visible#bdrstat_activity", - "bdrstat_commit_scope": "/pgd/5.8/reference/catalogs-visible#bdrstat_commit_scope", - "bdrstat_commit_scope_state": "/pgd/5.8/reference/catalogs-visible#bdrstat_commit_scope_state", - "bdrstat_raft_followers_state": "/pgd/5.8/reference/catalogs-visible#bdrstat_raft_followers_state", - "bdrstat_raft_state": "/pgd/5.8/reference/catalogs-visible#bdrstat_raft_state", - "bdrstat_receiver": "/pgd/5.8/reference/catalogs-visible#bdrstat_receiver", - "bdrstat_relation": "/pgd/5.8/reference/catalogs-visible#bdrstat_relation", - "bdrstat_routing_candidate_state": "/pgd/5.8/reference/catalogs-visible#bdrstat_routing_candidate_state", - "bdrstat_routing_state": "/pgd/5.8/reference/catalogs-visible#bdrstat_routing_state", - "bdrstat_subscription": "/pgd/5.8/reference/catalogs-visible#bdrstat_subscription", - "bdrstat_worker": "/pgd/5.8/reference/catalogs-visible#bdrstat_worker", - "bdrstat_writer": "/pgd/5.8/reference/catalogs-visible#bdrstat_writer", - "bdrsubscription": "/pgd/5.8/reference/catalogs-visible#bdrsubscription", - "bdrsubscription_summary": "/pgd/5.8/reference/catalogs-visible#bdrsubscription_summary", - "bdrtables": "/pgd/5.8/reference/catalogs-visible#bdrtables", - "bdrtaskmgr_work_queue": "/pgd/5.8/reference/catalogs-visible#bdrtaskmgr_work_queue", - "bdrtaskmgr_workitem_status": "/pgd/5.8/reference/catalogs-visible#bdrtaskmgr_workitem_status", - "bdrtaskmgr_local_work_queue": "/pgd/5.8/reference/catalogs-visible#bdrtaskmgr_local_work_queue", - "bdrtaskmgr_local_workitem_status": "/pgd/5.8/reference/catalogs-visible#bdrtaskmgr_local_workitem_status", - "bdrtrigger": "/pgd/5.8/reference/catalogs-visible#bdrtrigger", - "bdrtriggers": "/pgd/5.8/reference/catalogs-visible#bdrtriggers", - "bdrworkers": "/pgd/5.8/reference/catalogs-visible#bdrworkers", - "bdrwriters": "/pgd/5.8/reference/catalogs-visible#bdrwriters", - "bdrworker_tasks": 
"/pgd/5.8/reference/catalogs-visible#bdrworker_tasks", - "bdrbdr_version": "/pgd/5.8/reference/functions#bdrbdr_version", - "bdrbdr_version_num": "/pgd/5.8/reference/functions#bdrbdr_version_num", - "bdrget_relation_stats": "/pgd/5.8/reference/functions#bdrget_relation_stats", - "bdrget_subscription_stats": "/pgd/5.8/reference/functions#bdrget_subscription_stats", - "bdrlocal_node_id": "/pgd/5.8/reference/functions#bdrlocal_node_id", - "bdrlast_committed_lsn": "/pgd/5.8/reference/functions#bdrlast_committed_lsn", - "transaction_id": "/pgd/5.8/reference/functions#transaction_id", - "bdris_node_connected": "/pgd/5.8/reference/functions#bdris_node_connected", - "bdris_node_ready": "/pgd/5.8/reference/functions#bdris_node_ready", - "bdrconsensus_disable": "/pgd/5.8/reference/functions#bdrconsensus_disable", - "bdrconsensus_enable": "/pgd/5.8/reference/functions#bdrconsensus_enable", - "bdrconsensus_proto_version": "/pgd/5.8/reference/functions#bdrconsensus_proto_version", - "bdrconsensus_snapshot_export": "/pgd/5.8/reference/functions#bdrconsensus_snapshot_export", - "bdrconsensus_snapshot_import": "/pgd/5.8/reference/functions#bdrconsensus_snapshot_import", - "bdrconsensus_snapshot_verify": "/pgd/5.8/reference/functions#bdrconsensus_snapshot_verify", - "bdrget_consensus_status": "/pgd/5.8/reference/functions#bdrget_consensus_status", - "bdrget_raft_status": "/pgd/5.8/reference/functions#bdrget_raft_status", - "bdrraft_leadership_transfer": "/pgd/5.8/reference/functions#bdrraft_leadership_transfer", - "bdrwait_slot_confirm_lsn": "/pgd/5.8/reference/functions#bdrwait_slot_confirm_lsn", - "bdrwait_node_confirm_lsn": "/pgd/5.8/reference/functions#bdrwait_node_confirm_lsn", - "bdrwait_for_apply_queue": "/pgd/5.8/reference/functions#bdrwait_for_apply_queue", - "bdrget_node_sub_receive_lsn": "/pgd/5.8/reference/functions#bdrget_node_sub_receive_lsn", - "bdrget_node_sub_apply_lsn": "/pgd/5.8/reference/functions#bdrget_node_sub_apply_lsn", - "bdrreplicate_ddl_command": "/pgd/5.8/reference/functions#bdrreplicate_ddl_command", - "bdrrun_on_all_nodes": "/pgd/5.8/reference/functions#bdrrun_on_all_nodes", - "bdrrun_on_nodes": "/pgd/5.8/reference/functions#bdrrun_on_nodes", - "bdrrun_on_group": "/pgd/5.8/reference/functions#bdrrun_on_group", - "bdrglobal_lock_table": "/pgd/5.8/reference/functions#bdrglobal_lock_table", - "bdrwait_for_xid_progress": "/pgd/5.8/reference/functions#bdrwait_for_xid_progress", - "bdrlocal_group_slot_name": "/pgd/5.8/reference/functions#bdrlocal_group_slot_name", - "bdrnode_group_type": "/pgd/5.8/reference/functions#bdrnode_group_type", - "bdralter_node_kind": "/pgd/5.8/reference/functions#bdralter_node_kind", - "bdralter_subscription_skip_changes_upto": "/pgd/5.8/reference/functions#bdralter_subscription_skip_changes_upto", - "bdrglobal_advisory_lock": "/pgd/5.8/reference/functions#bdrglobal_advisory_lock", - "bdrglobal_advisory_unlock": "/pgd/5.8/reference/functions#bdrglobal_advisory_unlock", - "bdrmonitor_group_versions": "/pgd/5.8/reference/functions#bdrmonitor_group_versions", - "bdrmonitor_group_raft": "/pgd/5.8/reference/functions#bdrmonitor_group_raft", - "bdrmonitor_local_replslots": "/pgd/5.8/reference/functions#bdrmonitor_local_replslots", - "bdrwal_sender_stats": "/pgd/5.8/reference/functions#bdrwal_sender_stats", - "bdrget_decoding_worker_stat": "/pgd/5.8/reference/functions#bdrget_decoding_worker_stat", - "bdrlag_control": "/pgd/5.8/reference/functions#bdrlag_control", - "bdris_camo_partner_connected": "/pgd/5.8/reference/functions#bdris_camo_partner_connected", - 
"bdris_camo_partner_ready": "/pgd/5.8/reference/functions#bdris_camo_partner_ready", - "bdrget_configured_camo_partner": "/pgd/5.8/reference/functions#bdrget_configured_camo_partner", - "bdrwait_for_camo_partner_queue": "/pgd/5.8/reference/functions#bdrwait_for_camo_partner_queue", - "bdrcamo_transactions_resolved": "/pgd/5.8/reference/functions#bdrcamo_transactions_resolved", - "bdrlogical_transaction_status": "/pgd/5.8/reference/functions#bdrlogical_transaction_status", - "bdradd_commit_scope": "/pgd/5.8/reference/functions#bdradd_commit_scope", - "bdrcreate_commit_scope": "/pgd/5.8/reference/functions#bdrcreate_commit_scope", - "bdralter_commit_scope": "/pgd/5.8/reference/functions#bdralter_commit_scope", - "bdrdrop_commit_scope": "/pgd/5.8/reference/functions#bdrdrop_commit_scope", - "bdrremove_commit_scope": "/pgd/5.8/reference/functions#bdrremove_commit_scope", - "bdrdefault_conflict_detection": "/pgd/5.8/reference/pgd-settings#bdrdefault_conflict_detection", - "bdrdefault_sequence_kind": "/pgd/5.8/reference/pgd-settings#bdrdefault_sequence_kind", - "bdrdefault_replica_identity": "/pgd/5.8/reference/pgd-settings#bdrdefault_replica_identity", - "bdrrole_replication": "/pgd/5.8/reference/pgd-settings#bdrrole_replication", - "bdrddl_locking": "/pgd/5.8/reference/pgd-settings#bdrddl_locking", - "bdrtruncate_locking": "/pgd/5.8/reference/pgd-settings#bdrtruncate_locking", - "bdrglobal_lock_max_locks": "/pgd/5.8/reference/pgd-settings#bdrglobal_lock_max_locks", - "bdrglobal_lock_timeout": "/pgd/5.8/reference/pgd-settings#bdrglobal_lock_timeout", - "bdrglobal_lock_statement_timeout": "/pgd/5.8/reference/pgd-settings#bdrglobal_lock_statement_timeout", - "bdrglobal_lock_idle_timeout": "/pgd/5.8/reference/pgd-settings#bdrglobal_lock_idle_timeout", - "bdrlock_table_locking": "/pgd/5.8/reference/pgd-settings#bdrlock_table_locking", - "bdrpredictive_checks": "/pgd/5.8/reference/pgd-settings#bdrpredictive_checks", - "bdrreplay_progress_frequency": "/pgd/5.8/reference/pgd-settings#bdrreplay_progress_frequency", - "bdrstandby_slot_names": "/pgd/5.8/reference/pgd-settings#bdrstandby_slot_names", - "bdrwriters_per_subscription": "/pgd/5.8/reference/pgd-settings#bdrwriters_per_subscription", - "bdrmax_writers_per_subscription": "/pgd/5.8/reference/pgd-settings#bdrmax_writers_per_subscription", - "bdrxact_replication": "/pgd/5.8/reference/pgd-settings#bdrxact_replication", - "bdrpermit_unsafe_commands": "/pgd/5.8/reference/pgd-settings#bdrpermit_unsafe_commands", - "bdrbatch_inserts": "/pgd/5.8/reference/pgd-settings#bdrbatch_inserts", - "bdrmaximum_clock_skew": "/pgd/5.8/reference/pgd-settings#bdrmaximum_clock_skew", - "bdrmaximum_clock_skew_action": "/pgd/5.8/reference/pgd-settings#bdrmaximum_clock_skew_action", - "bdraccept_connections": "/pgd/5.8/reference/pgd-settings#bdraccept_connections", - "bdrstandby_slots_min_confirmed": "/pgd/5.8/reference/pgd-settings#bdrstandby_slots_min_confirmed", - "bdrwriter_input_queue_size": "/pgd/5.8/reference/pgd-settings#bdrwriter_input_queue_size", - "bdrwriter_output_queue_size": "/pgd/5.8/reference/pgd-settings#bdrwriter_output_queue_size", - "bdrmin_worker_backoff_delay": "/pgd/5.8/reference/pgd-settings#bdrmin_worker_backoff_delay", - "bdrcrdt_raw_value": "/pgd/5.8/reference/pgd-settings#bdrcrdt_raw_value", - "bdrcommit_scope": "/pgd/5.8/reference/pgd-settings#bdrcommit_scope", - "bdrcamo_local_mode_delay": "/pgd/5.8/reference/pgd-settings#bdrcamo_local_mode_delay", - "bdrcamo_enable_client_warnings": 
"/pgd/5.8/reference/pgd-settings#bdrcamo_enable_client_warnings", - "bdrdefault_streaming_mode": "/pgd/5.8/reference/pgd-settings#bdrdefault_streaming_mode", - "bdrlag_control_max_commit_delay": "/pgd/5.8/reference/pgd-settings#bdrlag_control_max_commit_delay", - "bdrlag_control_max_lag_size": "/pgd/5.8/reference/pgd-settings#bdrlag_control_max_lag_size", - "bdrlag_control_max_lag_time": "/pgd/5.8/reference/pgd-settings#bdrlag_control_max_lag_time", - "bdrlag_control_min_conforming_nodes": "/pgd/5.8/reference/pgd-settings#bdrlag_control_min_conforming_nodes", - "bdrlag_control_commit_delay_adjust": "/pgd/5.8/reference/pgd-settings#bdrlag_control_commit_delay_adjust", - "bdrlag_control_sample_interval": "/pgd/5.8/reference/pgd-settings#bdrlag_control_sample_interval", - "bdrlag_control_commit_delay_start": "/pgd/5.8/reference/pgd-settings#bdrlag_control_commit_delay_start", - "bdrtimestamp_snapshot_keep": "/pgd/5.8/reference/pgd-settings#bdrtimestamp_snapshot_keep", - "bdrdebug_level": "/pgd/5.8/reference/pgd-settings#bdrdebug_level", - "bdrtrace_level": "/pgd/5.8/reference/pgd-settings#bdrtrace_level", - "bdrtrack_subscription_apply": "/pgd/5.8/reference/pgd-settings#bdrtrack_subscription_apply", - "bdrtrack_relation_apply": "/pgd/5.8/reference/pgd-settings#bdrtrack_relation_apply", - "bdrtrack_apply_lock_timing": "/pgd/5.8/reference/pgd-settings#bdrtrack_apply_lock_timing", - "bdrenable_wal_decoder": "/pgd/5.8/reference/pgd-settings#bdrenable_wal_decoder", - "bdrreceive_lcr": "/pgd/5.8/reference/pgd-settings#bdrreceive_lcr", - "bdrlcr_cleanup_interval": "/pgd/5.8/reference/pgd-settings#bdrlcr_cleanup_interval", - "bdrglobal_connection_timeout": "/pgd/5.8/reference/pgd-settings#bdrglobal_connection_timeout", - "bdrglobal_keepalives": "/pgd/5.8/reference/pgd-settings#bdrglobal_keepalives", - "bdrglobal_keepalives_idle": "/pgd/5.8/reference/pgd-settings#bdrglobal_keepalives_idle", - "bdrglobal_keepalives_interval": "/pgd/5.8/reference/pgd-settings#bdrglobal_keepalives_interval", - "bdrglobal_keepalives_count": "/pgd/5.8/reference/pgd-settings#bdrglobal_keepalives_count", - "bdrglobal_tcp_user_timeout": "/pgd/5.8/reference/pgd-settings#bdrglobal_tcp_user_timeout", - "bdrforce_full_mesh": "/pgd/5.8/reference/pgd-settings#bdrforce_full_mesh", - "bdrraft_global_election_timeout": "/pgd/5.8/reference/pgd-settings#bdrraft_global_election_timeout", - "bdrraft_group_election_timeout": "/pgd/5.8/reference/pgd-settings#bdrraft_group_election_timeout", - "bdrraft_response_timeout": "/pgd/5.8/reference/pgd-settings#bdrraft_response_timeout", - "bdrraft_keep_min_entries": "/pgd/5.8/reference/pgd-settings#bdrraft_keep_min_entries", - "bdrraft_log_min_apply_duration": "/pgd/5.8/reference/pgd-settings#bdrraft_log_min_apply_duration", - "bdrraft_log_min_message_duration": "/pgd/5.8/reference/pgd-settings#bdrraft_log_min_message_duration", - "bdrraft_group_max_connections": "/pgd/5.8/reference/pgd-settings#bdrraft_group_max_connections", - "bdrbackwards_compatibility": "/pgd/5.8/reference/pgd-settings#bdrbackwards_compatibility", - "bdrtrack_replication_estimates": "/pgd/5.8/reference/pgd-settings#bdrtrack_replication_estimates", - "bdrlag_tracker_apply_rate_weight": "/pgd/5.8/reference/pgd-settings#bdrlag_tracker_apply_rate_weight", - "bdrenable_auto_sync_reconcile": "/pgd/5.8/reference/pgd-settings#bdrenable_auto_sync_reconcile", - "list-of-node-states": "/pgd/5.8/reference/nodes#list-of-node-states", - "node-management-commands": "/pgd/5.8/reference/nodes#node-management-commands", - "bdr_init_physical": 
"/pgd/5.8/reference/nodes#bdr_init_physical", - "bdr_config": "/pgd/5.8/reference/nodes#bdr_config", - "bdralter_node_group_option": "/pgd/5.8/reference/nodes-management-interfaces#bdralter_node_group_option", - "bdralter_node_interface": "/pgd/5.8/reference/nodes-management-interfaces#bdralter_node_interface", - "bdralter_node_option": "/pgd/5.8/reference/nodes-management-interfaces#bdralter_node_option", - "bdralter_subscription_enable": "/pgd/5.8/reference/nodes-management-interfaces#bdralter_subscription_enable", - "bdralter_subscription_disable": "/pgd/5.8/reference/nodes-management-interfaces#bdralter_subscription_disable", - "bdrcreate_node": "/pgd/5.8/reference/nodes-management-interfaces#bdrcreate_node", - "bdrcreate_node_group": "/pgd/5.8/reference/nodes-management-interfaces#bdrcreate_node_group", - "bdrdrop_node_group": "/pgd/5.8/reference/nodes-management-interfaces#bdrdrop_node_group", - "bdrjoin_node_group": "/pgd/5.8/reference/nodes-management-interfaces#bdrjoin_node_group", - "bdrpart_node": "/pgd/5.8/reference/nodes-management-interfaces#bdrpart_node", - "bdrpromote_node": "/pgd/5.8/reference/nodes-management-interfaces#bdrpromote_node", - "bdrswitch_node_group": "/pgd/5.8/reference/nodes-management-interfaces#bdrswitch_node_group", - "bdrwait_for_join_completion": "/pgd/5.8/reference/nodes-management-interfaces#bdrwait_for_join_completion", - "bdralter_node_group_config": "/pgd/5.8/reference/nodes-management-interfaces#bdralter_node_group_config", - "bdrcreate_proxy": "/pgd/5.8/reference/routing#bdrcreate_proxy", - "bdralter_proxy_option": "/pgd/5.8/reference/routing#bdralter_proxy_option", - "bdrdrop_proxy": "/pgd/5.8/reference/routing#bdrdrop_proxy", - "bdrrouting_leadership_transfer": "/pgd/5.8/reference/routing#bdrrouting_leadership_transfer", - "cs.commit-scope-syntax": "/pgd/5.8/reference/commit-scopes#commit-scope-syntax", - "cs.commit_scope_degrade_operation": "/pgd/5.8/reference/commit-scopes#commit_scope_degrade_operation", - "cs.commit-scope-targets": "/pgd/5.8/reference/commit-scopes#commit-scope-targets", - "cs.origin_group": "/pgd/5.8/reference/commit-scopes#origin_group", - "cs.commit-scope-groups": "/pgd/5.8/reference/commit-scopes#commit-scope-groups", - "cs.any": "/pgd/5.8/reference/commit-scopes#any", - "cs.any-not": "/pgd/5.8/reference/commit-scopes#any-not", - "cs.majority": "/pgd/5.8/reference/commit-scopes#majority", - "cs.majority-not": "/pgd/5.8/reference/commit-scopes#majority-not", - "cs.all": "/pgd/5.8/reference/commit-scopes#all", - "cs.all-not": "/pgd/5.8/reference/commit-scopes#all-not", - "cs.confirmation-level": "/pgd/5.8/reference/commit-scopes#confirmation-level", - "cs.on-received": "/pgd/5.8/reference/commit-scopes#on-received", - "cs.on-replicated": "/pgd/5.8/reference/commit-scopes#on-replicated", - "cs.on-durable": "/pgd/5.8/reference/commit-scopes#on-durable", - "cs.on-visible": "/pgd/5.8/reference/commit-scopes#on-visible", - "cs.commit-scope-kinds": "/pgd/5.8/reference/commit-scopes#commit-scope-kinds", - "cs.synchronous-commit": "/pgd/5.8/reference/commit-scopes#synchronous-commit", - "cs.degrade-on-parameters": "/pgd/5.8/reference/commit-scopes#degrade-on-parameters", - "cs.group-commit": "/pgd/5.8/reference/commit-scopes#group-commit", - "cs.group-commit-parameters": "/pgd/5.8/reference/commit-scopes#group-commit-parameters", - "cs.abort-on-parameters": "/pgd/5.8/reference/commit-scopes#abort-on-parameters", - "cs.transaction_tracking-settings": "/pgd/5.8/reference/commit-scopes#transaction_tracking-settings", - 
"cs.conflict_resolution-settings": "/pgd/5.8/reference/commit-scopes#conflict_resolution-settings", - "cs.commit_decision-settings": "/pgd/5.8/reference/commit-scopes#commit_decision-settings", - "cs.commit_scope_degrade_operation-settings": "/pgd/5.8/reference/commit-scopes#commit_scope_degrade_operation-settings", - "cs.camo": "/pgd/5.8/reference/commit-scopes#camo", - "cs.lag-control": "/pgd/5.8/reference/commit-scopes#lag-control", - "cs.lag-control-parameters": "/pgd/5.8/reference/commit-scopes#lag-control-parameters", - "conflict-detection": "/pgd/5.8/reference/conflicts#conflict-detection", - "list-of-conflict-types": "/pgd/5.8/reference/conflicts#list-of-conflict-types", - "conflict-resolution": "/pgd/5.8/reference/conflicts#conflict-resolution", - "list-of-conflict-resolvers": "/pgd/5.8/reference/conflicts#list-of-conflict-resolvers", - "default-conflict-resolvers": "/pgd/5.8/reference/conflicts#default-conflict-resolvers", - "list-of-conflict-resolutions": "/pgd/5.8/reference/conflicts#list-of-conflict-resolutions", - "conflict-logging": "/pgd/5.8/reference/conflicts#conflict-logging", - "bdralter_table_conflict_detection": "/pgd/5.8/reference/conflict_functions#bdralter_table_conflict_detection", - "bdralter_node_set_conflict_resolver": "/pgd/5.8/reference/conflict_functions#bdralter_node_set_conflict_resolver", - "bdralter_node_set_log_config": "/pgd/5.8/reference/conflict_functions#bdralter_node_set_log_config", - "bdrcreate_replication_set": "/pgd/5.8/reference/repsets-management#bdrcreate_replication_set", - "bdralter_replication_set": "/pgd/5.8/reference/repsets-management#bdralter_replication_set", - "bdrdrop_replication_set": "/pgd/5.8/reference/repsets-management#bdrdrop_replication_set", - "bdralter_node_replication_sets": "/pgd/5.8/reference/repsets-management#bdralter_node_replication_sets", - "bdrreplication_set_add_table": "/pgd/5.8/reference/repsets-membership#bdrreplication_set_add_table", - "bdrreplication_set_remove_table": "/pgd/5.8/reference/repsets-membership#bdrreplication_set_remove_table", - "bdrreplication_set_add_ddl_filter": "/pgd/5.8/reference/repsets-ddl-filtering#bdrreplication_set_add_ddl_filter", - "bdrreplication_set_remove_ddl_filter": "/pgd/5.8/reference/repsets-ddl-filtering#bdrreplication_set_remove_ddl_filter", - "pgd_bench": "/pgd/5.8/reference/testingandtuning#pgd_bench", - "bdralter_sequence_set_kind": "/pgd/5.8/reference/sequences#bdralter_sequence_set_kind", - "bdrextract_timestamp_from_snowflakeid": "/pgd/5.8/reference/sequences#bdrextract_timestamp_from_snowflakeid", - "bdrextract_nodeid_from_snowflakeid": "/pgd/5.8/reference/sequences#bdrextract_nodeid_from_snowflakeid", - "bdrextract_localseqid_from_snowflakeid": "/pgd/5.8/reference/sequences#bdrextract_localseqid_from_snowflakeid", - "bdrtimestamp_to_snowflakeid": "/pgd/5.8/reference/sequences#bdrtimestamp_to_snowflakeid", - "bdrextract_timestamp_from_timeshard": "/pgd/5.8/reference/sequences#bdrextract_timestamp_from_timeshard", - "bdrextract_nodeid_from_timeshard": "/pgd/5.8/reference/sequences#bdrextract_nodeid_from_timeshard", - "bdrextract_localseqid_from_timeshard": "/pgd/5.8/reference/sequences#bdrextract_localseqid_from_timeshard", - "bdrtimestamp_to_timeshard": "/pgd/5.8/reference/sequences#bdrtimestamp_to_timeshard", - "bdrgalloc_chunk_info": "/pgd/5.8/reference/sequences#bdrgalloc_chunk_info", - "bdrgen_ksuuid_v2": "/pgd/5.8/reference/sequences#bdrgen_ksuuid_v2", - "bdrksuuid_v2_cmp": "/pgd/5.8/reference/sequences#bdrksuuid_v2_cmp", - 
"bdrextract_timestamp_from_ksuuid_v2": "/pgd/5.8/reference/sequences#bdrextract_timestamp_from_ksuuid_v2", - "bdrgen_ksuuid": "/pgd/5.8/reference/sequences#bdrgen_ksuuid", - "bdruuid_v1_cmp": "/pgd/5.8/reference/sequences#bdruuid_v1_cmp", - "bdrextract_timestamp_from_ksuuid": "/pgd/5.8/reference/sequences#bdrextract_timestamp_from_ksuuid", - "bdrautopartition": "/pgd/5.8/reference/autopartition#bdrautopartition", - "bdrdrop_autopartition": "/pgd/5.8/reference/autopartition#bdrdrop_autopartition", - "bdrautopartition_wait_for_partitions": "/pgd/5.8/reference/autopartition#bdrautopartition_wait_for_partitions", - "bdrautopartition_wait_for_partitions_on_all_nodes": "/pgd/5.8/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes", - "bdrautopartition_find_partition": "/pgd/5.8/reference/autopartition#bdrautopartition_find_partition", - "bdrautopartition_enable": "/pgd/5.8/reference/autopartition#bdrautopartition_enable", - "bdrautopartition_disable": "/pgd/5.8/reference/autopartition#bdrautopartition_disable", - "internal-functions": "/pgd/5.8/reference/autopartition#internal-functions", - "bdrautopartition_create_partition": "/pgd/5.8/reference/autopartition#bdrautopartition_create_partition", - "bdrautopartition_drop_partition": "/pgd/5.8/reference/autopartition#bdrautopartition_drop_partition", - "bdrcreate_conflict_trigger": "/pgd/5.8/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger", - "bdrcreate_transform_trigger": "/pgd/5.8/reference/streamtriggers/interfaces#bdrcreate_transform_trigger", - "bdrdrop_trigger": "/pgd/5.8/reference/streamtriggers/interfaces#bdrdrop_trigger", - "bdrtrigger_get_row": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrtrigger_get_row", - "bdrtrigger_get_committs": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrtrigger_get_committs", - "bdrtrigger_get_xid": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrtrigger_get_xid", - "bdrtrigger_get_type": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrtrigger_get_type", - "bdrtrigger_get_conflict_type": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrtrigger_get_conflict_type", - "bdrtrigger_get_origin_node_id": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrtrigger_get_origin_node_id", - "bdrri_fkey_on_del_trigger": "/pgd/5.8/reference/streamtriggers/rowfunctions#bdrri_fkey_on_del_trigger", - "tg_name": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_name", - "tg_when": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_when", - "tg_level": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_level", - "tg_op": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_op", - "tg_relid": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_relid", - "tg_table_name": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_table_name", - "tg_table_schema": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_table_schema", - "tg_nargs": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_nargs", - "tg_argv": "/pgd/5.8/reference/streamtriggers/rowvariables#tg_argv", - "bdrautopartition_partitions": "/pgd/5.8/reference/catalogs-internal#bdrautopartition_partitions", - "bdrautopartition_rules": "/pgd/5.8/reference/catalogs-internal#bdrautopartition_rules", - "bdrddl_epoch": "/pgd/5.8/reference/catalogs-internal#bdrddl_epoch", - "bdrevent_history": "/pgd/5.8/reference/catalogs-internal#bdrevent_history", - "bdrevent_summary": "/pgd/5.8/reference/catalogs-internal#bdrevent_summary", - "bdrlocal_leader_change": "/pgd/5.8/reference/catalogs-internal#bdrlocal_leader_change", - 
"bdrnode_config": "/pgd/5.8/reference/catalogs-internal#bdrnode_config", - "bdrnode_config_summary": "/pgd/5.8/reference/catalogs-internal#bdrnode_config_summary", - "bdrnode_group_config": "/pgd/5.8/reference/catalogs-internal#bdrnode_group_config", - "bdrnode_group_routing_config_summary": "/pgd/5.8/reference/catalogs-internal#bdrnode_group_routing_config_summary", - "bdrnode_group_routing_info": "/pgd/5.8/reference/catalogs-internal#bdrnode_group_routing_info", - "bdrnode_group_routing_summary": "/pgd/5.8/reference/catalogs-internal#bdrnode_group_routing_summary", - "bdrnode_routing_config_summary": "/pgd/5.8/reference/catalogs-internal#bdrnode_routing_config_summary", - "bdrproxy_config": "/pgd/5.8/reference/catalogs-internal#bdrproxy_config", - "bdrproxy_config_summary": "/pgd/5.8/reference/catalogs-internal#bdrproxy_config_summary", - "bdrsequence_kind": "/pgd/5.8/reference/catalogs-internal#bdrsequence_kind", - "bdrsync_node_requests": "/pgd/5.8/reference/catalogs-internal#bdrsync_node_requests", - "bdrsync_node_requests_summary": "/pgd/5.8/reference/catalogs-internal#bdrsync_node_requests_summary", - "bdrbdr_get_commit_decisions": "/pgd/5.8/reference/functions-internal#bdrbdr_get_commit_decisions", - "bdrbdr_track_commit_decision": "/pgd/5.8/reference/functions-internal#bdrbdr_track_commit_decision", - "bdrconsensus_kv_fetch": "/pgd/5.8/reference/functions-internal#bdrconsensus_kv_fetch", - "bdrconsensus_kv_store": "/pgd/5.8/reference/functions-internal#bdrconsensus_kv_store", - "bdrdecode_message_payload": "/pgd/5.8/reference/functions-internal#bdrdecode_message_payload", - "bdrdecode_message_response_payload": "/pgd/5.8/reference/functions-internal#bdrdecode_message_response_payload", - "bdrdifference_fix_origin_create": "/pgd/5.8/reference/functions-internal#bdrdifference_fix_origin_create", - "bdrdifference_fix_session_reset": "/pgd/5.8/reference/functions-internal#bdrdifference_fix_session_reset", - "bdrdifference_fix_session_setup": "/pgd/5.8/reference/functions-internal#bdrdifference_fix_session_setup", - "bdrdifference_fix_xact_set_avoid_conflict": "/pgd/5.8/reference/functions-internal#bdrdifference_fix_xact_set_avoid_conflict", - "bdrdrop_node": "/pgd/5.8/reference/functions-internal#bdrdrop_node", - "bdrget_global_locks": "/pgd/5.8/reference/functions-internal#bdrget_global_locks", - "bdrget_node_conflict_resolvers": "/pgd/5.8/reference/functions-internal#bdrget_node_conflict_resolvers", - "bdrget_slot_flush_timestamp": "/pgd/5.8/reference/functions-internal#bdrget_slot_flush_timestamp", - "bdrinternal_alter_sequence_set_kind": "/pgd/5.8/reference/functions-internal#bdrinternal_alter_sequence_set_kind", - "bdrinternal_replication_set_add_table": "/pgd/5.8/reference/functions-internal#bdrinternal_replication_set_add_table", - "bdrinternal_replication_set_remove_table": "/pgd/5.8/reference/functions-internal#bdrinternal_replication_set_remove_table", - "bdrinternal_submit_join_request": "/pgd/5.8/reference/functions-internal#bdrinternal_submit_join_request", - "bdrisolation_test_session_is_blocked": "/pgd/5.8/reference/functions-internal#bdrisolation_test_session_is_blocked", - "bdrlocal_node_info": "/pgd/5.8/reference/functions-internal#bdrlocal_node_info", - "bdrmsgb_connect": "/pgd/5.8/reference/functions-internal#bdrmsgb_connect", - "bdrmsgb_deliver_message": "/pgd/5.8/reference/functions-internal#bdrmsgb_deliver_message", - "bdrnode_catchup_state_name": "/pgd/5.8/reference/functions-internal#bdrnode_catchup_state_name", - "bdrnode_kind_name": 
"/pgd/5.8/reference/functions-internal#bdrnode_kind_name", - "bdrpeer_state_name": "/pgd/5.8/reference/functions-internal#bdrpeer_state_name", - "bdrpg_xact_origin": "/pgd/5.8/reference/functions-internal#bdrpg_xact_origin", - "bdrrequest_replay_progress_update": "/pgd/5.8/reference/functions-internal#bdrrequest_replay_progress_update", - "bdrreset_relation_stats": "/pgd/5.8/reference/functions-internal#bdrreset_relation_stats", - "bdrreset_subscription_stats": "/pgd/5.8/reference/functions-internal#bdrreset_subscription_stats", - "bdrresynchronize_table_from_node": "/pgd/5.8/reference/functions-internal#bdrresynchronize_table_from_node", - "bdrseq_currval": "/pgd/5.8/reference/functions-internal#bdrseq_currval", - "bdrseq_lastval": "/pgd/5.8/reference/functions-internal#bdrseq_lastval", - "bdrseq_nextval": "/pgd/5.8/reference/functions-internal#bdrseq_nextval", - "bdrshow_subscription_status": "/pgd/5.8/reference/functions-internal#bdrshow_subscription_status", - "bdrshow_workers": "/pgd/5.8/reference/functions-internal#bdrshow_workers", - "bdrshow_writers": "/pgd/5.8/reference/functions-internal#bdrshow_writers", - "bdrsync_status_name": "/pgd/5.8/reference/functions-internal#bdrsync_status_name", - "bdrtaskmgr_set_leader": "/pgd/5.8/reference/functions-internal#bdrtaskmgr_set_leader", - "bdrtaskmgr_get_last_completed_workitem": "/pgd/5.8/reference/functions-internal#bdrtaskmgr_get_last_completed_workitem", - "bdrtaskmgr_work_queue_check_status": "/pgd/5.8/reference/functions-internal#bdrtaskmgr_work_queue_check_status", - "bdrpglogical_proto_version_ranges": "/pgd/5.8/reference/functions-internal#bdrpglogical_proto_version_ranges", - "bdrget_min_required_replication_slots": "/pgd/5.8/reference/functions-internal#bdrget_min_required_replication_slots", - "bdrget_min_required_worker_processes": "/pgd/5.8/reference/functions-internal#bdrget_min_required_worker_processes", - "bdrstat_get_activity": "/pgd/5.8/reference/functions-internal#bdrstat_get_activity", - "bdrworker_role_id_name": "/pgd/5.8/reference/functions-internal#bdrworker_role_id_name", - "bdrlag_history": "/pgd/5.8/reference/functions-internal#bdrlag_history", - "bdrget_raft_instance_by_nodegroup": "/pgd/5.8/reference/functions-internal#bdrget_raft_instance_by_nodegroup", - "bdrmonitor_camo_on_all_nodes": "/pgd/5.8/reference/functions-internal#bdrmonitor_camo_on_all_nodes", - "bdrmonitor_raft_details_on_all_nodes": "/pgd/5.8/reference/functions-internal#bdrmonitor_raft_details_on_all_nodes", - "bdrmonitor_replslots_details_on_all_nodes": "/pgd/5.8/reference/functions-internal#bdrmonitor_replslots_details_on_all_nodes", - "bdrmonitor_subscription_details_on_all_nodes": "/pgd/5.8/reference/functions-internal#bdrmonitor_subscription_details_on_all_nodes", - "bdrmonitor_version_details_on_all_nodes": "/pgd/5.8/reference/functions-internal#bdrmonitor_version_details_on_all_nodes", - "bdrnode_group_member_info": "/pgd/5.8/reference/functions-internal#bdrnode_group_member_info", - "bdrcolumn_timestamps_create": "/pgd/5.8/reference/clcd#bdrcolumn_timestamps_create" -} \ No newline at end of file diff --git a/product_docs/docs/pgd/5.8/reference/routing.mdx b/product_docs/docs/pgd/5.8/reference/routing.mdx deleted file mode 100644 index 600be7e26fd..00000000000 --- a/product_docs/docs/pgd/5.8/reference/routing.mdx +++ /dev/null @@ -1,104 +0,0 @@ ---- -navTitle: Routing functions -title: Routing functions -indexdepth: 3 -rootisheading: false ---- - -### `bdr.create_proxy` - -Create a proxy configuration. 
- -#### Synopsis - -```sql -bdr.create_proxy(proxy_name text, node_group text, proxy_mode text); -``` - -#### Parameters - -| Name | Type | Default | Description | -|--------------|------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `proxy_name` | text | | Name of the new proxy. | -| `node_group` | text | | Name of the group to be used by the proxy. | -| `proxy_mode` | text | `'default'` | Mode of the proxy. It can be `'default'` (listen_port connections follow write leader, no read_listen_port), `'read-only'` (no listen_port, read_listen_port connections follow read-only nodes), or `'any'` (listen_port connections follow write_leader, read_listen_port connections follow read-only nodes). Default is `'default'`. | - -When proxy_mode is set to `'default'`, all read options in the proxy config are set to NULL. When it's set to `'read-only'`, all write options in the proxy config are set to NULL. When set to `'any'` all options are set to their defaults. - - -### `bdr.alter_proxy_option` - -Change a proxy configuration. - -#### Synopsis - -```sql -bdr.alter_proxy_option(proxy_name text, config_key text, config_value text); -``` - -#### Parameters - -| Name | Type | Default | Description | -|----------------|------|---------|-----------------------------------------------| -| `proxy_name` | text | | Name of the proxy to change. | -| `config_key` | text | | Key of the option in the proxy to change. | -| `config_value` | text | | New value to set for the given key. | - -The table shows the proxy options (`config_key`) that can be changed using this function. - -| Option | Description | -|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `listen_address` | Address for the proxy to listen on. Default is '{0.0.0.0}'. | -| `listen_port` | Port for the proxy to listen on. Default is '6432' in 'default' or 'any' mode and '0' in 'read-only' mode, which disables the write leader following port. | -| `max_client_conn` | Maximum number of connections for the proxy to accept. Default is '32767'. | -| `max_server_conn` | Maximum number of connections the proxy can make to the Postgres node. Default is '32767'. | -| `server_conn_timeout` | Connection timeout for server connections. Default is '2' (seconds). | -| `server_conn_keepalive` | Keepalive interval for server connections. Default is '10' (seconds). | -| `consensus_grace_period` | Duration for which proxy continues to route even upon loss of a Raft leader. If set to 0s, proxy stops routing immediately. Default is generally '6' (seconds) for local proxies and '12' (seconds) for global proxies. These values will be overridden if `raft_response_timeout`, `raft_global_election_timeout`, or `raft_group_election_timeout` are changed from their defaults. | -| `read_listen_address` | Address for the read-only proxy to listen on. Default is '{0.0.0.0}'. 
| -| `read_listen_port` | Port for the read-only proxy to listen on. Default is '6433' in 'read-only' or 'any' mode and '0' in 'default' mode, which disables the read-only port. | -| `read_max_client_conn` | Maximum number of connections for the read-only proxy to accept. Default is '32767'. | -| `read_max_server_conn` | Maximum number of connections the read-only proxy can make to the Postgres node. Default is '32767'. | -| `read_server_conn_keepalive` | Keepalive interval for read-only server connections. Default is '10' (seconds). | -| `read_server_conn_timeout` | Connection timeout for read-only server connections. Default is '2' (seconds). | -| `read_consensus_grace_period` | Duration for which read-only proxy continues to route even upon loss of a Raft leader. Default is 1 hour. | - -Changing any of these values requires a restart of the proxy. - -### `bdr.drop_proxy` - -Drop a proxy configuration. - -#### Synopsis - -```sql -bdr.drop_proxy(proxy_name text); -``` - -#### Parameters - -| Name | Type | Default | Description | -|--------------|------|---------|-----------------------------------------------| -| `proxy_name` | text | | Name of the proxy to drop. | - -### `bdr.routing_leadership_transfer` - -Changing the routing leader transfers the leadership of the node group to another node. - -#### Synopsis - -```sql -bdr.routing_leadership_transfer(node_group_name text, - leader_name text, - transfer_method text DEFAULT 'strict', - transfer_timeout interval DEFAULT '10s'); -``` - -#### Parameters - -| Name | Type | Default | Description | -|--------------------|----------|----------|---------------------------------------------------------------------------------------------| -| `node_group_name` | text | | Name of group where the leadership transfer is requested. | -| `leader_name` | text | | Name of node that will become write leader. | -| `transfer_method` | text | `'strict'` | Type of the transfer. It can be `'fast'` or the default, `'strict'`, which checks the maximum lag. | -| `transfer_timeout` | interval | '10s' | Timeout of the leadership transfer. Default is 10 seconds. | diff --git a/product_docs/docs/pgd/5.8/rel_notes/index.mdx b/product_docs/docs/pgd/5.8/rel_notes/index.mdx deleted file mode 100644 index 5834d3be413..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/index.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: EDB Postgres Distributed 5 release notes -navTitle: Release notes -description: Release notes for EDB Postgres Distributed 5 and later -indexCards: none -navigation: - - pgd_5.8.0_rel_notes - - pgd_5.7.0_rel_notes - - pgd_5.6.1_rel_notes - - pgd_5.6.0_rel_notes - - pgd_5.5.1_rel_notes - - pgd_5.5.0_rel_notes - - pgd_5.4.1_rel_notes - - pgd_5.4.0_rel_notes - - pgd_5.3.0_rel_notes - - pgd_5.2.0_rel_notes - - pgd_5.1.0_rel_notes - - pgd_5.0.1_rel_notes - - pgd_5.0.0_rel_notes -originalFilePath: product_docs/docs/pgd/5.8/rel_notes/src/meta.yml -editTarget: originalFilePath ---- - - -The EDB Postgres Distributed documentation describes the latest version of EDB Postgres Distributed 5, including minor releases and patches. The release notes provide information on what was new in each release. For new functionality introduced in a minor or patch release, the content also indicates the release that introduced the feature. 
-
-
-| Release Date | EDB Postgres Distributed | BDR extension | PGD CLI | PGD Proxy |
-|---|---|---|---|---|
-| 22 May 2025 | [5.8.0](./pgd_5.8.0_rel_notes) | 5.8.0 | 5.8.0 | 5.8.0 |
-| 25 Feb 2025 | [5.7.0](./pgd_5.7.0_rel_notes) | 5.7.0 | 5.7.0 | 5.7.0 |
-| 25 Nov 2024 | [5.6.1](./pgd_5.6.1_rel_notes) | 5.6.1 | 5.6.1 | 5.6.1 |
-| 15 Oct 2024 | [5.6.0](./pgd_5.6.0_rel_notes) | 5.6.0 | 5.6.0 | 5.6.0 |
-| 31 May 2024 | [5.5.1](./pgd_5.5.1_rel_notes) | 5.5.1 | 5.5.0 | 5.5.0 |
-| 16 May 2024 | [5.5.0](./pgd_5.5.0_rel_notes) | 5.5.0 | 5.5.0 | 5.5.0 |
-| 03 April 2024 | [5.4.1](./pgd_5.4.1_rel_notes) | 5.4.1 | 5.4.0 | 5.4.0 |
-| 05 March 2024 | [5.4.0](./pgd_5.4.0_rel_notes) | 5.4.0 | 5.4.0 | 5.4.0 |
-| 14 November 2023 | [5.3.0](./pgd_5.3.0_rel_notes) | 5.3.0 | 5.3.0 | 5.3.0 |
-| 04 August 2023 | [5.2.0](./pgd_5.2.0_rel_notes) | 5.2.0 | 5.2.0 | 5.2.0 |
-| 16 May 2023 | [5.1.0](./pgd_5.1.0_rel_notes) | 5.1.0 | 5.1.0 | 5.1.0 |
-| 21 Mar 2023 | [5.0.1](./pgd_5.0.1_rel_notes) | 5.0.0 | 5.0.1 | 5.0.1 |
-| 21 Feb 2023 | [5.0.0](./pgd_5.0.0_rel_notes) | 5.0.0 | 5.0.0 | 5.0.0 |
diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.0.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.0.0_rel_notes.mdx
deleted file mode 100644
index 8de3704586c..00000000000
--- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.0.0_rel_notes.mdx
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: "EDB Postgres Distributed 5.0.0 release notes"
-navTitle: "Version 5.0.0"
----
-
-Released: 21 Feb 2023
-
-EDB Postgres Distributed version 5.0.0 is a new major version of EDB Postgres Distributed.
-This version brings major new features and compatibility changes.
-
-The highlights of this release include:
-
- * Flexible deployment architectures
- * Enhanced routing capabilities
- * Unified replication durability configuration
- * Support for EDB Advanced Storage Pack
- * Support for TDE with EDB Postgres Advanced 15 and EDB Postgres Extended 15
- * Integration with OpenTelemetry
- * Improved transaction tracking performance (Group Commit, CAMO)
- * Postgres 12 to 15 compatibility
-
-
-| Component | Version | Type | Description |
-|-----------|---------|---------|-------------|
-| PGD | 5.0.0 | Feature | Flexible Deployment Architectures<br/>
Redefined Always-ON to support a wider variety of deployments.<br/>
| -| BDR | 5.0.0 | Feature | Enhanced routing capabilities
The BDR cluster elects a write leader for every group (and associated location) using per-group Raft when routing is enabled for the group. It takes care of write leader failover and provides SQL commands to change the write leader.<br/>
| -| BDR | 5.0.0 | Feature | Support for EDB Advanced Storage Pack
[EDB Advanced Storage Pack](/pg_extensions/advanced_storage_pack/) provides advanced storage options for PostgreSQL databases in the form of table access method (TAM) extensions. These storage options can enhance the performance and reliability of databases without requiring application changes.
| -| BDR | 5.0.0 | Feature | Unified replication durability configuration
The durability options, such as Group Commit, CAMO, Eager Replication, and Lag Control, are now all configured through commit scope configuration.<br/>
| -| BDR | 5.0.0 | Feature | EDB Postgres Advanced and EDB Postgres Extended TDE support
EDB Postgres Distributed 5 fully supports the [Transparent Data Encryption](/tde/latest) feature in EDB Postgres Advanced and EDB Postgres Extended.
| -| BDR | 5.0.0 | Feature | Integration with OpenTelemetry
The BDR extension can now send monitoring metrics as well as traces to the OpenTelemetry collector for better integration with existing monitoring solutions.<br/>
| -| BDR | 5.0.0 | Feature | Postgres 15 compatibility
EDB Postgres Distributed 5 is compatible with Postgres 12 to 15.
| -| BDR | 5.0.0 | Feature | Improved Cluster Event Management
The `bdr.worker_errors` and `bdr.state_journal_details` views were replaced by the unified `bdr.event_summary`, which also includes changes in the Raft role for the local node. In the future, additional events may be added to it.<br/>
| -| BDR | 5.0.0 | Change | Improved transaction tracking performance
Transaction tracking now uses shared memory instead of the `bdr.internal_node_pre_commit` catalog, which considerably improves performance because it doesn't incur additional I/O.<br/>
| -| BDR | 5.0.0 | Feature | Support non-default replication sets with Decoding Worker
Allows the Decoding Worker feature to be used in clusters using non-default replication sets, such as an asymmetric replication setup.<br/>
| -| BDR | 5.0.0 | Feature | Add support for HASH partitioning in Autopartition
Extend autopartition/autoscale to support HASH partitioning. Many of the things that are required for RANGE partitioning are not needed for HASH partitioning. For example, we expect to create all HASH partitions in one go (at least for the current work; later we may change this). We don't expect HASH partitions to be moved to a different tablespace or dropped, so data retention policies don't apply to HASH partitioning.<br/>
| -| BDR | 5.0.0 | Feature | Add a new benchmarking utility `pgd_bench`
The utility supports benchmarking CAMO transactions and, in future releases, will be used for benchmarking PGD-specific workloads.<br/>
| -| BDR | 5.0.0 | Change | Nodes now have a node kind
This distinguishes the different kinds of nodes, such as data, witness, subscriber-only, and standby.<br/>
-| BDR | 5.0.0 | Change | Separate Task Management from Autopartition
In this release, the autopartition work queue mechanism has been moved to a separate module called Task Manager (taskmgr). The task manager is responsible for creating new tasks and executing the ones created by the local node or the task manager leader node. The autopartition worker is thus renamed the taskmgr worker process.<br/>

In the older PGD releases, the Raft leader was responsible for creating new work items. But that created a problem, because a witness node can become a Raft leader while it doesn't have the full view of the cluster objects. This release introduces the concept of a Task Manager Leader node. The node is selected automatically by PGD, but for upgraded clusters, it's important to set the `node_kind` property for all nodes in the cluster. The user is expected to do this manually after upgrading to the latest PGD version by calling the `bdr.alter_node_kind()` SQL function for each node.<br/>
| -| BDR | 5.0.0 | Deprecation | `bdr.assess_lock_statement` and `bdr.assess_update_replica_identity` are deprecated. | -| Proxy | 5.0.0 | Feature | PGD built-in proxy
A TCP layer 4 pass-through proxy for PGD clusters, using the routing capabilities of BDR.<br/>

| -| CLI | 5.0.0 | Feature | PGD cluster verification
CLI supports two new commands, `verify-settings` and `verify-cluster`. `verify-settings` verifies the PostgreSQL configuration of each node in a PGD cluster against the recommendations. `verify-cluster` verifies the PGD cluster architecture against the flexible architecture deployment recommendations.<br/>

-| CLI | 5.0.0 | Feature | Proxy management and configuration
`pgd` supports `create-proxy`, `delete-proxy`, `set-group-options`, `set-node-options`, `set-proxy-options`, `show-proxies`, `show-groups`, and `switchover` to configure and manage the proxy per group.<br/>
|
-| CLI | 5.0.0 | Change | Remove `show-camo` command and remove CAMO check from `check-health` command. Support for `commit scopes` in CLI will be added in a future release.|
-| CLI | 5.0.0 | Change | Modify output of `show-nodes` and `show-raft` commands to accommodate routing capabilities. |
diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.0.1_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.0.1_rel_notes.mdx
deleted file mode 100644
index 4c2be1d67c7..00000000000
--- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.0.1_rel_notes.mdx
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: "EDB Postgres Distributed 5.0.1 release notes"
-navTitle: "Version 5.0.1"
----
-
-Released: 21 Mar 2023
-
-EDB Postgres Distributed version 5.0.1 is a patch version of EDB Postgres Distributed.
-This version addresses security vulnerabilities in dependencies for PGD-Proxy and PGD-CLI.
-
-| Component | Version | Type | Description |
-|-----------|---------|---------|-------------|
-| CLI | 5.0.1 | Change | Upgrade 3rd-party dependencies to fix GitHub Dependabot alerts |
-| Proxy | 5.0.1 | Change | Upgrade 3rd-party dependencies to fix GitHub Dependabot alerts |
diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.1.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.1.0_rel_notes.mdx
deleted file mode 100644
index ebadaa671bb..00000000000
--- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.1.0_rel_notes.mdx
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "EDB Postgres Distributed 5.1.0 release notes"
-navTitle: "Version 5.1.0"
----
-
-Released: 16 May 2023
-
-EDB Postgres Distributed version 5.1.0 is a minor version of EDB Postgres Distributed.
-This version addresses security vulnerabilities in dependencies for PGD Proxy and PGD CLI.
-
-## Highlights of EDB Postgres Distributed 5.1
-
-* **Synchronous Commit** is now available in PGD’s unified COMMIT SCOPE syntax. Modeled on Postgres’s legacy synchronous commit option, PGD Synchronous Commit allows DBAs to take advantage of finer-grained commit and sync management. This addition complements the existing Group Commit, CAMO, and Lag Control commit scope options.
-
-* Fixes to **priority-based proxy routing** now enable better handling of failover. You can now configure the grace period for proxies through PGD CLI, allowing you to tune proxy response to losing the Raft leader. To accompany that, Raft events are visible in the PGD CLI’s `show-events` command, showing the event, source, and subtype.
-
-* **`bdr.drop_node_group()`** adds support for removing empty node groups using the PGD SQL interface.
-
-!!! Important Recommended upgrade
- We recommend that users of PGD 5.0 upgrade to PGD 5.1.
-
-!!! Note PostgreSQL version compatibility
-This version is required for EDB Postgres Advanced Server versions 12.15, 13.11, 14.8, and later.
-!!!
-
-
-| Component | Version | Type | Description |
-|-----------|---------|-----------------|--------------|
-| BDR | 5.1.0 | Feature | Added pid to the log message emitted upon writer process death. |
-| BDR | 5.1.0 | Feature | Added group name in the bdr.event_summary view. |
-| BDR | 5.1.0 | Feature | Added SLES support. SLES 15sp4 is now supported. |
-| BDR | 5.1.0 | Feature | Added subscription status column to the group subscription summary view.<br/>
This feature allows you to distinguish whether NULL values are due to a node being down or a subscription being disabled.| -| BDR | 5.1.0 | Feature | Added NOT group filter to Group Commit.
This feature allows you to invert the meaning of a group filter to include all nodes except the ones in specified groups.| -| BDR | 5.1.0 | Feature | Added `bdr.drop_node_group`.
You can now drop empty node groups.| -| BDR | 5.1.0 | Feature | Added SYNCHRONOUS_COMMIT to commit scopes.
This feature allows dynamic synchronous_commit-like behavior for replication.| -| BDR | 5.1.0 | Feature | Added event_node_name column to `bdr.event_summary`. | -| BDR | 5.1.0 | Feature | Added write leader election to event history.
Added information about the node that's elected as a write leader for each group in the event_history catalog. Also improved the reporting of raft instance ids in the event_detail of event_history.| -| BDR | 5.1.0 | Feature | Added ability to allow exporting and importing of other Raft instance snapshots.
This feature allows exporting and importing snapshots for other Raft instances, not only the top Raft instance.|
-| BDR | 5.1.0 | Bug fix | Fixed memory leak in consensus process. (RT91830) |
-| BDR | 5.1.0 | Bug fix | Fixed issue where a node can be inconsistent with the group after rejoining.<br/>
If a node was part of a subgroup, parted, and then rejoined the group, it could be inconsistent with the group. The changes from some nodes of the group were replayed from the wrong starting point, resulting in potential data loss. |
-| BDR | 5.1.0 | Bug fix | Fixed join and replication when SDW and standby_slot_names are set. (RT89702, RT89536)|
-| BDR | 5.1.0 | Bug fix | Fixed spurious warnings when processing sequence options. (RT90668) |
-| BDR | 5.1.0 | Bug fix | Fixed upgrades for nodes with CRDTs. |
-| BDR | 5.1.0 | Bug fix | Adjusted lag tracking parameters for LCRs from pg_stat_replication. |
-| BDR | 5.1.0 | Bug fix | Adjusted node routing option defaults based on node kind.<br/>
This change is related only to the display of the information and not the behavior. For example, witness nodes aren't marked as candidates for receiving writes. | -| BDR | 5.1.0 | Bug fix | All sequences are now converted to "distributed" during create node. | -| BDR | 5.1.0 | Bug fix | Fixed streaming transactions with `standby_slot_names`.
This might have led to a subscriber-only node getting ahead of a data node. | -| BDR | 5.1.0 | Bug fix | Made priority work properly for routing selection.
Previously, node priority was working only if there wasn't a previous leader, which is never the case on failover.| -| BDR | 5.1.0 | Bug fix | Fixed the recording of its join as complete for the first node. | -| BDR | 5.1.0 | Bug fix | Disabled tracing by default.
Tracing was enabled only for initial debugging and was meant to be disabled before the 5.0 release. |
-| BDR | 5.1.0 | Bug fix | Added support for reloading configuration for the pglogical receiver.<br/>
When the server receives a reload signal, the pglogical receiver reloads and applies the configuration changes. | -| BDR | 5.1.0 | Bug fix | Improved missing instance error message in `bdr.monitor_group_raft()`. | -| BDR | 5.1.0 | Bug fix | Implemented consistent use of tcp keepalives across all BDR connections.
This change added the following GUCs:
`bdr.global_connection_timeout`
`bdr.global_keepalives`
`bdr.global_keepalives_idle`
`bdr.global_keepalives_interval`
`bdr.global_keepalives_count`
`bdr.global_tcp_user_timeout`
The defaults are set to fairly conservative values and are subject to adjustment in future patches. |
-| BDR | 5.1.0 | Bug fix | Closed Raft connections on no activity after a timeout.<br/>
This uses wal_sender_timeout/wal_receiver_timeout underneath. | -| BDR | 5.1.0 | Bug fix | Made backends that receive Raft messages easily identifiable.
Added information in the log message related to Raft backends. |
-| BDR | 5.1.0 | Bug fix | Fixed issue whereby Parallel Apply might slow down under heavy load. |
-| BDR | 5.1.0 | Enhancement | Restarting sync workers is now avoided.<br/>
This fix is to prevent the node join from failing when config changes are made that signal the restart of subscription workers. |
-| PGD Proxy | 5.1.0 | Enhancement | `application_name` is now set to the proxy name if it wasn't set by the user in the connection string for internal db connections. |
-| PGD Proxy | 5.1.0 | Enhancement | Implemented the new `consensus_grace_period` proxy option, which is the duration for which a proxy continues to route to the current write leader (if it's available) upon loss of a Raft leader. If the new Raft leader isn't elected during this period, the proxy stops routing. If set to `0s`, the proxy stops routing immediately. |
-| PGD Proxy | 5.1.0 | Bug fix | Changed from blocking when the write leader is unavailable to closing new client connections. |
-| CLI | 5.1.0 | Enhancement | Enhanced the `show-events` command to show Raft events, event source, and subtype. |
-| CLI | 5.1.0 | Enhancement | Improved clockskew estimation in `show-clockskew` and `check-health` commands. |
-| CLI | 5.1.0 | Feature | Added support to view and set the `consensus_grace_period` proxy option. |
-| Utilities | 1.1.0 | Bug fix | Implemented handling of the uninitialized physical replication slots issue. |
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.2.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.2.0_rel_notes.mdx
deleted file mode 100644
index e45207586d6..00000000000
--- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.2.0_rel_notes.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: "EDB Postgres Distributed 5.2.0 release notes"
-navTitle: "Version 5.2.0"
----
-
-Released: 04 Aug 2023
-
-EDB Postgres Distributed version 5.2.0 is a minor version of EDB Postgres Distributed.
-
-## Highlights of EDB Postgres Distributed 5.2.0
-
-* Parallel Apply is now available for PGD’s Commit at Most Once (CAMO) synchronous commit scope, improving replication performance.
-* Parallel Apply for native Postgres asynchronous and synchronous replication has been improved for workloads where the same key is modified concurrently by multiple transactions, to maintain commit sequence and avoid deadlocks.
-* PGD Proxy has added HTTP(S) APIs to allow the health of the proxy to be monitored directly for readiness and liveness. See [Proxy health check](../routing/monitoring/#proxy-health-check).
-
-!!! Important Recommended upgrade
- We recommend that users of PGD 5.1 upgrade to PGD 5.2.
-
-!!! Note PostgreSQL version compatibility
-This version is required for EDB Postgres Advanced Server versions 12.15, 13.11, 14.8, and later.
-!!!
-
-
-| Component | Version | Type | Description |
-|-----------|---------|-----------------|--------------|
-| BDR | 5.2.0 | Feature | Added Parallel Apply for synchronous commit scopes with CAMO. |
-| BDR | 5.2.0 | Feature | Allow multiple SYNCHRONOUS_COMMIT clauses in a commit scope rule. |
-| BDR | 5.2.0 | Enhancement | The BDR extension now allows transaction streaming with SYNCHRONOUS_COMMIT and LAG CONTROL commit scopes. |
-| BDR | 5.2.0 | Enhancement | Improved handling of concurrent workloads with Parallel Apply. |
-| BDR | 5.2.0 | Enhancement | Modified `bdr.stat_subscription` for new columns. |
-| BDR | 5.2.0 | Bug fix | Fixed an issue by allowing a logical join of a node when there are foreign key constraint violations. (RT91745) |
-| BDR | 5.2.0 | Bug fix | Changed `group_raft_details` view to avoid deadlock possibility. |
-| BDR | 5.2.0 | Bug fix | Fixed an issue by adding the ability to log the extension upgrade.<br/>
|
-| BDR | 5.2.0 | Bug fix | Added check for conflicting node names. |
-| BDR | 5.2.0 | Bug fix | Fixed a crash during Raft manual snapshot restore. |
-| BDR | 5.2.0 | Bug fix | Fixed an issue whereby the BDR extension was attempting to establish consensus connections to parting or parted nodes. |
-| BDR | 5.2.0 | Bug fix | Fixed `tcp_user_timeout` GUC to use the correct unit. |
-| BDR | 5.2.0 | Bug fix | Fixed the consensus snapshot compatibility with PGD 3.7. (RT93022) |
-| BDR | 5.2.0 | Bug fix | Fixed an issue whereby a crash occurred when the BDR extension is used with pgaudit. |
-| BDR | 5.2.0 | Bug fix | Fixed an issue by skipping parting synchronization to the witness node. |
-| BDR | 5.2.0 | Bug fix | Fixed an issue by now generating correct keepalive parameters in connection strings. |
-| BDR | 5.2.0 | Bug fix | Enabled various scenarios of switching nodes between groups and their subgroups, for example, transitioning a node from a group to any of its nested subgroups.|
-| BDR | 5.2.0 | Bug fix | Reduced the amount of WAL produced by consensus on an idle server. |
-| BDR | 5.2.0 | Bug fix | Fixed deadlock on autopartition catalogs when a concurrent `DROP EXTENSION` is executing. |
-| BDR | 5.2.0 | Bug fix | Fixed sporadic failure when dropping extension after node restart. |
-| BDR | 5.2.0 | Bug fix | Added a workaround for a crash due to a pgaudit bug. |
-| BDR | 5.2.0 | Bug fix | Fixed deadlock between consensus and global monitoring queries. |
-| BDR | 5.2.0 | Bug fix | Fixed query cancellation propagation across `bdr.run_on_all_nodes`. |
-| BDR | 5.2.0 | Bug fix | Fixed an issue by disallowing invoking `bdr.run_on_nodes()`, `bdr.run_on_group()`, and `bdr.run_on_all_nodes()` on parted nodes. |
-| CLI | 5.2.0 | Enhancement | Added verification of new GUCs in the `verify-settings` command. |
-| CLI | 5.2.0 | Bug fix | Fixed an issue by truncating long GUC values in the tabular output of `verify-settings`. |
-| CLI | 5.2.0 | Bug fix | Fixed `connect_timeout` issue when `sslmode=allow` or `sslmode=prefer` by upgrading database driver library version. |
-| Proxy | 5.2.0 | Feature | Added HTTP(S) APIs for Proxy health check. |
-| Proxy | 5.2.0 | Enhancement | Improved route change events handling mechanism. |
-| Proxy | 5.2.0 | Enhancement | Added retry mechanism on consensus query error. |
-| Proxy | 5.2.0 | Bug fix | Fixed `connect_timeout` issue when `sslmode=allow` or `sslmode=prefer` by upgrading database driver library version. |
diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.3.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.3.0_rel_notes.mdx
deleted file mode 100644
index 012322c6942..00000000000
--- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.3.0_rel_notes.mdx
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: "EDB Postgres Distributed 5.3.0 release notes"
-navTitle: "Version 5.3.0"
----
-
-Released: 14 Nov 2023
-
-EDB Postgres Distributed version 5.3.0 is a minor version of EDB Postgres Distributed.
-
-!!! Important Recommended upgrade
-We recommend that all users of PGD 5 upgrade to PGD 5.3. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details.
-!!!
-
-## Highlights of EDB Postgres Distributed 5.3.0
-
-* Support for PostgreSQL 16 server, EDB Postgres Extended Server 16, and EDB Postgres Advanced Server 16
-
-## Compatibility
-
-!!! Note EDB Server version compatibility
-This version requires the recently released Postgres versions 14.10, 15.4,
-or 16.1 (or later) of EDB Postgres Advanced Server or EDB Postgres Extended
-Server. No such restrictions exist for PostgreSQL Server.
- -Package managers on Debian, RHEL, and SLES pull in the required EDB Postgres Advanced Server or EDB Postgres Extended -upgrades with an upgrade of EDB Postgres Distributed. -!!! - -## Features - -| Component | Version | Description | Addresses | -|-----------|---------|-------------|-----------| -| PGD | 5.3.0 | Added support for PostgreSQL 16 server, EDB Postgres Extended Server 16, and EDB Postgres Advanced Server 16. | | - -## Enhancements - -| Component | Version |Description | Addresses | -|-----------|---------|-------------|-----------| -| Proxy | 5.3.0 | Added the default service unit file to the package. | | -| BDR | 5.3.0 | Dependencies on EDB Postgres Advanced Server or EDB Postgres Extended are now reflected in packages. | | - -## Bug fixes - -| Component | Version |Description | Addresses | -|-----------|---------|-------------|-----------| -| BDR | 5.3.0 | Ensure that the WalSender process doesn't request locks on the partitions, thus avoiding a deadlock with the user process waiting on sync commit. | RT97952 | -| BDR | 5.3.0 | Consider only CAMO transactions to be asynchronous when the CAMO setup was degraded to local mode. This solves the split-brain problem when deciding the fate of transactions that happened during failover. | RT78928 | -| BDR | 5.3.0 | Handle partitions with different attribute numbers when batch inserting. | RT99115 | -| BDR | 5.3.0 | Fixed unsafe CAMO decisions in remote_write mode. || -| BDR | 5.3.0 | Taskmgr process now respects SIGINT. || -| BDR | 5.3.0 | Sped up manager process startup by limiting the amount of WAL read for loading commit decisions. || -| BDR | 5.3.0 | Physical joins now clean up stale records in `bdr.taskmgr_local_work_queue`. || -| BDR | 5.3.0 | Fixed a bug in copying `bdr.autopartition_rules` during logical join. || -| BDR | 5.3.0 | Override `bdr.ddl_replication=off` in taskmgr worker. || -| BDR | 5.3.0 | Avoid a potential deadlock between `bdr.autopartition_wait_for_partitions()` and taskmgr. || -| BDR | 5.3.0 | Fixed writer missing updates in streaming mode with TDE enabled. || -| BDR | 5.3.0 | Block new EDB Postgres Advanced Server automatic partitioning on a PGD cluster. || -| BDR | 5.3.0 | Allow existing automatically partitioned tables when the cluster is created or upgraded. || -| BDR | 5.3.0 | Block PGD autopartition on EDB Postgres Advanced Server INTERVAL partitioned tables. || -| BDR | 5.3.0 | Ensure that replication doesn't break when `bdr.autopartition()` is invoked on a mixed-version cluster running with 3.7.23 and 4.3.3/5.3. || -| BDR | 5.3.0 | Fixed default selective replication behavior for data groups. The data groups are supposed to publish only replication sets of the group or any parent groups by default, which mirrors what they subscribe to. || -| BDR | 5.3.0 | Fixed memory leak in LCR file TDE encryption. || -| BDR | 5.3.0 | Fixed memory leak in streaming transaction processing. || -| BDR | 5.3.0 | Allow force parting of nodes that are already being parted normally. || -| BDR | 5.3.0 | PART_CATCHUP is now more resilient to replication slot and node_catchup_info conflicts. || -| BDR | 5.3.0 | Fixed row filter failure for partitions created after a table column was dropped. | RT95149 | -| BDR | 5.3.0 | Avoid aborting a group commit transaction on receiving the first abort response. The other nodes are given a chance to respond, and the transaction can succeed if the required responses are received. || -| BDR | 5.3.0 | Ensure that the replication receiver worker reloads configuration correctly when `pg_reload_conf()` is called.
|| -| BDR | 5.3.0 | Prevent duplicate Raft request ID generation which could break replication. || -| BDR | 5.3.0 | Fixed several rare memory access issues that could potentially cause crashes of workers. || -| BDR | 5.3.0 | Fixed an issue where explicit 2PC transactions aborted early when encountering conflicts. | RT92897 | -| CLI | 5.3.0 | Fixed `verify-settings` command if the `shared_preload_libraries` GUC contains a file path. || -| CLI | 5.3.0 | Show replslots status as `Critical` in `check-health` command when the PGD cluster is reduced to just a witness node. | | -| Proxy | 5.3.0 | Always set the server connection close behavior (setLinger(0)). Earlier, it was set only on client error. | | -| Proxy | 5.3.0 | Fixed an issue where logs filled up when all nodes are down in a PGD cluster. | | -| Utilities | 1.2.0 | Removed the preupgrade step in `bdr_pg_upgrade` that sets the CONNECTION LIMIT to 0 at the database level. | | - diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.4.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.4.0_rel_notes.mdx deleted file mode 100644 index 2ba18fc19cf..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.4.0_rel_notes.mdx +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: "EDB Postgres Distributed 5.4.0 release notes" -navTitle: "Version 5.4.0" ---- - -Released: 05 Mar 2024 - -EDB Postgres Distributed version 5.4.0 is a minor version of EDB Postgres Distributed. - -!!! Important Recommended upgrade -We recommend that all users of PGD 5 upgrade to PGD 5.4. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details. -!!! - - -## Highlights of EDB Postgres Distributed 5.4.0 - -Highlights of this 5.4.0 release include improvements to: - -* Group Commit, aiming to optimize performance by minimizing the effect of a node's downtime and simplifying overall operation of PGD clusters. -* `apply_delay`, enabling the creation of a delayed read-only [replica](/pgd/latest/nodes/subscriber_only/overview/), providing additional options for disaster recovery and mitigating the impact of human error, such as accidental DROP TABLE statements. - -## Compatibility - -!!! Note EDB Server version compatibility -This version requires the recently released Postgres versions 14.10, 15.4, -or 16.1 (or later) of EDB Postgres Advanced Server or EDB Postgres Extended -Server. No such restrictions exist for PostgreSQL Server. - -Package managers on Debian, RHEL, and SLES pull in the required EDB Postgres -Advanced Server or EDB Postgres Extended upgrades with an upgrade of EDB -Postgres Distributed. -!!! - -## Features - -| Component | Version | Description | Addresses | -|-----------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| -| BDR | 5.4.0 | PGD now automatically detects and synchronizes all available nodes to the furthest ahead node for transactions originating from a failed or disconnected node. | | -| BDR | 5.4.0 | PGD now automatically resolves pending Group Commit transactions when the originating node fails or disconnects, ensuring uninterrupted transaction processing within the cluster. | | -| BDR | 5.4.0 | Added the ability to set the `apply_delay` group option on subgroups, enabling the addition of delayed subscriber-only nodes. | | -| BDR | 5.4.0 | Loading data using EDB\*Loader (except direct mode) is now supported.
| | - -## Bug fixes - -| Component | Version | Description | Addresses | -|-----------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------| -| BDR | 5.4.0 | Fixed memory leaks when running a query on some or all nodes. | | -| BDR | 5.4.0 | Resolved an issue of high CPU usage for consensus processes. | RT97649 | -| BDR | 5.4.0 | Improved WAL retention logic when a node part occurs. | | -| BDR | 5.4.0 | Witness nodes now automatically skip structure synchronization when joining a group. | | -| BDR | 5.4.0 | bdr.create_node() / bdr.alter_node() now give a hint when an invalid node kind is used. | | -| BDR | 5.4.0 | Fixed transaction PREPARE/COMMIT/ABORT ordering with Parallel Apply enabled. | | -| BDR | 5.4.0 | DDL replication now takes into account more of the Postgres configuration options that are set in the original session or transaction to provide more consistent results of the DDL execution. Added `standard_conforming_strings`, `edb_redwood_date`, `default_with_rowids`, and `check_function_bodies`. | | -| BDR | 5.4.0 | Improved `pgd_bench` cluster initialization and command line help output. | | -| BDR | 5.4.0 | Restoring a node group from a consensus snapshot now correctly applies option changes (number of writers, streaming, and apply_delay) to local subscriptions. | | -| BDR | 5.4.0 | Fixed debug logging of pg_ctl enabling output capture for debugging purposes in `bdr_init_physical`. | | -| BDR | 5.4.0 | Fixed assertion failure when TargetColumnMissing conflict occurs in a Group Commit transaction. | | -| BDR | 5.4.0 | Fixed detection of UpdateOriginChange conflict to be more accurate. | | -| BDR | 5.4.0 | Added support for a timeout for normal Group Commit transactions. | | -| BDR | 5.4.0 | Fixed error handling in writer when there are lock timeouts, conflicts, or deadlocks with and without Group Commit transactions. | | -| BDR | 5.4.0 | Now allow the origin of Group Commit transactions to wait for responses from all the required nodes before taking an abort decision. | | -| BDR | 5.4.0 | Eager transactions now abort correctly after Raft was disabled or not working and has since recovered. | RT101055 | -| BDR | 5.4.0 | Increased default `bdr.raft_keep_min_entries` to 1000 from 100. | | -| BDR | 5.4.0 | Now run ANALYZE on the internal Raft tables. | RT97735 | -| BDR | 5.4.0 | Fixed segfault in the I2PC concurrent abort case. | RT93962 | -| BDR | 5.4.0 | Now avoid bypassing other extensions in BdrProcessUtility when processing COPY..TO. | RT99345 | -| BDR | 5.4.0 | Ensured that consensus connections are handled correctly. | RT97649 | -| BDR | 5.4.0 | Fixed memory leaks while running monitoring queries. | RT99231, RT95314 | -| BDR | 5.4.0 | The `bdr.metrics_otel_http_url` and `bdr.trace_otel_http_url` options are now validated at assignment time. | | -| BDR | 5.4.0 | When `bdr.metrics_otel_http_url` and `bdr.trace_otel_http_url` don't include paths, `/v1/metrics` and `/v1/traces` are used, respectively. | | -| BDR | 5.4.0 | Setting `bdr.trace_enable` to `true` is no longer required to enable OTEL metrics collection.
| | -| Proxy | 5.4.0 | Now use route_dsn and perform sslpassword processing while extracting write leader address. | RT99700 | -| Proxy | 5.4.0 | Now log client and server addresses at debug level in proxy logs. | | diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.4.1_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.4.1_rel_notes.mdx deleted file mode 100644 index 9db05402b11..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.4.1_rel_notes.mdx +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: "EDB Postgres Distributed 5.4.1 release notes" -navTitle: "Version 5.4.1" ---- - -Released: 03 Apr 2024 - -EDB Postgres Distributed version 5.4.1 is a patch release containing bug fixes for EDB Postgres Distributed. - -!!! Important Recommended upgrade -We recommend that all users of PGD 5 upgrade to PGD 5.4.1. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details. -!!! - - -## Bug fixes - -| Component | Version | Description | Tickets | -|-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| -| BDR | 5.4.1 |
Fixed WAL retention logic to prevent disk space issues on a PGD node running version 5.4.0 with Postgres 16, PGE 16, or EDB Postgres Advanced Server 16.
A fix was implemented to ensure proper cleanup of write-ahead logs (WAL) even after reaching a size of 4 GB on a node. A change in version 5.4.0 resulted in WAL being retained indefinitely after reaching this threshold. This issue is specific to PGD 5.4.0 in conjunction with Postgres 16, PGE 16, and EDB Postgres Advanced Server 16.
| | - - - diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.5.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.5.0_rel_notes.mdx deleted file mode 100644 index 7b47fd026e0..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.5.0_rel_notes.mdx +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: "EDB Postgres Distributed 5.5.0 release notes" -navTitle: "Version 5.5.0" ---- - -Released: 16 May 2024 - -EDB Postgres Distributed version 5.5.0 is a minor version of EDB Postgres Distributed. - -!!! Important Recommended upgrade -We recommend that all users of PGD 5 upgrade to PGD 5.5. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details. -!!! - - -## Highlights of EDB Postgres Distributed 5.5.0 - -Highlights of this 5.5.0 release include: - -* Read scalability enhancements in PGD Proxy which allow [read-only queries to be routed](/pgd/latest/routing/readonly/) to nodes that are members of a read-only pool. This feature can improve the overall performance of the PGD cluster. - -## Compatibility - -!!! Note EDB server version compatibility -This version requires the recently released Postgres versions 14.10, 15.4, -or 16.1 (or later) of EDB Postgres Advanced Server or EDB Postgres Extended -Server. No such restrictions exist for Community Postgres Server. - -Package managers on Debian, RHEL, and SLES pull in the required EDB Postgres -Advanced Server or EDB Postgres Extended upgrades with an upgrade of EDB -Postgres Distributed. -!!! - -## Features - -| Component | Version | Description | Ticket | -|-----------|---------|------------------------------------------------------------------------------------------------|--------| -| BDR | 5.5.0 | Added support for read-only proxy routing. | | -| BDR | 5.5.0 | Improved stability of routing leader selection by using Raft heartbeat for connectivity checks. | | -| CLI | 5.5.0 | Added PGD CLI binaries for macOS. | | -| Proxy | 5.5.0 | Added support for read-only proxy routing. | | - - -## Enhancements - -| Component | Version | Description | Ticket |
|-----------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------|----------------| -| BDR | 5.5.0 | Improved bulk INSERT/UPDATE/DELETE performance by sending multiple messages together in a group rather than individually. | | -| BDR | 5.5.0 | Changes received by the writer now aren't saved to a temporary file. | | -| BDR | 5.5.0 | BDR now logs completion of an extension upgrade. | | -| BDR | 5.5.0 | Added restrictions for group commit options. | | -| BDR | 5.5.0 | Each autopartition task is now executed in its own transaction. | RT101407/35476 | -| BDR | 5.5.0 | DETACH CONCURRENTLY is now used to drop partitions. | RT101407/35476 | -| BDR | 5.5.0 | Node group creation on a node in a bad state is now disallowed. | | -| BDR | 5.5.0 | Granted additional object permissions to role `bdr_read_all_stats`. | | -| BDR | 5.5.0 | Improved stability of manager worker and Raft consensus by not throwing an error on non-fatal dynamic shared memory read failures. | | -| BDR | 5.5.0 | Improved stability of Raft consensus and workers by handling dynamic shared memory errors in the right place. | | -| BDR | 5.5.0 | The number of changes processed by the writer in a large transaction is now exposed in [`bdr.writers`](/pgd/latest/reference/catalogs-visible#bdrwriters). | | -| BDR | 5.5.0 | `bdr_init_physical` now stops the initial replication connection and starts it only when needed.
| RT102828/35305 | -| BDR | 5.5.0 | `bdr_superuser` is now granted use of `pg_file_settings` and `pg_show_all_file_settings()`. | | -| CLI | 5.5.0 | Added new read-scalability-related options to the JSON output of the `show-proxies` and `show-groups` commands. | | -| CLI | 5.5.0 | Added a new option called `proxy-mode` to the `create-proxy` command for read scalability support. | | -| CLI | 5.5.0 | Added the Raft leader to the tabular output of the `show-groups` command. | | - - -## Bug fixes - -| Component | Version | Description | Ticket | -|-----------|---------|------------------------------------------------------------------------------------------------------------------------------|----------------| -| BDR | 5.5.0 | Improved handling of the node group configuration parameter "check_constraints". | RT99956/31896 | -| BDR | 5.5.0 | Fixed incorrect parsing of pre-commit message that caused nodes to diverge on commit decision for group commit transactions. | | -| BDR | 5.5.0 | Fixed an issue to prevent potential segfault in `bdr.monitor_group_versions()`. | RT102290/34051 | -| BDR | 5.5.0 | BDR now correctly elects a new leader when the current leader gets route_writes turned off. | | -| BDR | 5.5.0 | `bdr.remove_commit_scope()` now handles a non-existent commit scope. | | -| BDR | 5.5.0 | An improved queue flush process now prevents unexpected writer terminations. | RT98966/35447 | -| BDR | 5.5.0 | Fixed multi-row conflict accidentally deleting the wrong tuple multiple times. | | -| BDR | 5.5.0 | Fixed the receiver to send a status update when the writer is blocked, avoiding slot disconnect. | | -| BDR | 5.5.0 | Fixed minor memory leak during `bdr_join_node_group_sql`. | | -| BDR | 5.5.0 | Node joining with witness and standby nodes as source nodes is now disallowed. | | -| BDR | 5.5.0 | Now use `bdr.default_sequence_kind` when updating sequence kind of existing sequences upon node creation. | | -| BDR | 5.5.0 | Fixed a bug preventing some trusted extension management commands (CREATE/ALTER) from being replicated. | | -| BDR | 5.5.0 | Fixed a non-critical segfault which could occur in upgrades from BDR 3.7. | | -| BDR | 5.5.0 | Fixed an issue to manage rights elevation for trusted extensions. | | diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.5.1_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.5.1_rel_notes.mdx deleted file mode 100644 index 4379675a399..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.5.1_rel_notes.mdx +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: "EDB Postgres Distributed 5.5.1 release notes" -navTitle: "Version 5.5.1" ---- - -Released: 31 May 2024 - -EDB Postgres Distributed version 5.5.1 is a patch release containing bug fixes for EDB Postgres Distributed. - -!!! Important Recommended upgrade -We recommend that all users of PGD 5 upgrade to PGD 5.5.1. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details. -!!! - - -## Bug fixes - -| Component | Version | Description | Ticket | -|-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| -| BDR | 5.5.1 |
Fixed potential data inconsistency issue with mixed-version usage during a rolling upgrade.
A backward-incompatible change in PGD 5.5.0 may lead to inconsistencies when replicating from a newer PGD 5.5.0 node to a node running an older PGD version, specifically during a mixed-version rolling upgrade.
This release addresses a backward-compatibility issue in mixed-version operation, enabling seamless rolling upgrades.
| | -| BDR | 5.5.1 |
Disabled auto-triggering of node sync by default.
Automatically triggered synchronization of data from a down node caused issues by failing to resume once it came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/pgd-settings#bdrenable_auto_sync_reconcile)).
| 11510 | diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.6.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.6.0_rel_notes.mdx deleted file mode 100644 index e3171bef93b..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.6.0_rel_notes.mdx +++ /dev/null @@ -1,166 +0,0 @@ ---- -title: EDB Postgres Distributed 5.6.0 release notes -navTitle: Version 5.6.0 -originalFilePath: product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.0.yml -editTarget: originalFilePath ---- - -Released: 15 October 2024 - -EDB Postgres Distributed 5.6.0 includes a number of enhancements and bug fixes. - -## Highlights - -- Improved observability with new monitoring functions and SQL views. -- Improvements to commit scopes including: - - GROUP COMMIT and SYNCHRONOUS COMMIT support graceful degradation using DEGRADE ON. - - ORIGIN_GROUP support and commit scope inheritance simplify commit scope creation. - - Improved synchronous commit behavior around deadlocks. - - Metrics for commit scope performance and state. -- Optimized Topology support for Subscriber-only groups and nodes. (preview) -- Improved Postgres compliance with support for: - - Exclusion Constraints - - REINDEX replication - - createrole_self_grant - - column reference in DEFAULT expressions - - CREATE SCHEMA AUTHORIZATION -- Streaming Transaction support with Decoding Worker. - -## Enhancements - -
ComponentVersionDescriptionAddresses
BDR5.6.0
Decoding Worker supports Streaming Transactions

One of the main advantages of streaming is that the WAL sender sends the partial transaction before it commits, which reduces replication lag. Now, with streaming support, the WAL decoder does the same thing, but it streams to the LCR segments. Eventually, the WAL sender will read the LCRs and mimic the same behavior of streaming large transactions before they commit. This provides the benefits of the decoding worker, such as reduced CPU usage and disk space, as well as the benefits of streaming, such as reduced lag and disk space, since ".spill" files are not generated. -The WAL decoder always streams the transaction to LCRs, but based on downstream requests, the WAL sender either streams the transaction or just mimics the normal BEGIN..COMMIT scenario. -In addition to the normal LCR segment files, we create streaming files with the starting names TR_TXN_<file-name-format> and CAS_TXN_<file-name-format> for each streamed transaction.

-
BDR5.6.0
Introduce several new monitoring views

There are several views providing new information as well as making some -existing information easier to discover:

-
    -
  • bdr.stat_commit_scope : Cumulative statistics for commit scopes.
  • -
  • bdr.stat_commit_scope_state : Information about current use of commit scopes by backends.
  • -
  • bdr.stat_receiver : Per subscription receiver statistics.
  • -
  • bdr.stat_writer : Per writer statistics. There can be multiple writers for each subscription. This also includes additional information about the currently applied transaction.
  • -
  • bdr.stat_raft_state : The state of the Raft consensus on the local node.
  • -
  • bdr.stat_raft_followers_state : The state of the followers on the Raft leader node (empty on other nodes), also includes approximate clock drift between nodes.
  • -
  • bdr.stat_worker : Detailed information about PGD workers, including what the operation manager worker is currently doing.
  • -
  • bdr.stat_routing_state : The state of the connection routing which PGD Proxy uses to route the connections.
  • -
  • bdr.stat_routing_candidate_state : Information about routing candidate nodes on the Raft leader node (empty on other nodes).
  • -
-
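For a quick orientation, a monitoring pass over the new views might look like the following sketch; the view names come from this note, while sampling them together and the `SELECT *` shape are just illustrative assumptions:

```sql
-- Illustrative only: sample the new observability views.
-- Exact columns may vary by build, so SELECT * is used here.
SELECT * FROM bdr.stat_receiver;       -- per-subscription receiver statistics
SELECT * FROM bdr.stat_writer;         -- per-writer statistics and current transaction
SELECT * FROM bdr.stat_raft_state;     -- local Raft consensus state
SELECT * FROM bdr.stat_commit_scope;   -- cumulative commit scope statistics
```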
BDR5.6.0
Support conflict detection for exclusion constraints

This allows defining an EXCLUDE constraint on a table replicated by PGD, either with -CREATE TABLE or with ALTER TABLE, and uses conflict detection similar to that for -UNIQUE constraints to resolve conflicts.

-
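As a hedged illustration of what this enables, the following standard Postgres DDL (the table and the use of btree_gist are assumptions for the example, not from the release) defines an EXCLUDE constraint that PGD can now replicate and resolve much like a UNIQUE constraint:

```sql
-- btree_gist provides gist equality support for plain int columns.
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Hypothetical table: no two bookings for the same room may overlap.
CREATE TABLE room_booking (
    room   int,
    during tsrange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);
```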
BDR5.6.0
Detect and resolve deadlocks between synchronous replication wait-for-disconnected sessions and replication writer.

This will cancel synchronous replication wait on disconnected sessions if it deadlocks against replication, preventing deadlocks on failovers when using synchronous replication. This only affects commit scopes, not synchronous replication configured via synchronous_standby_names.

-
BDR5.6.0
Add bdr.bdr_show_all_file_settings() and bdr.bdr_file_settings view

This fix corrects privileges for bdr_superuser by creating wrapper SECURITY DEFINER functions in the bdr schema and granting bdr_superuser access to use them:

-
    -
  • bdr.bdr_show_all_file_settings
  • -
  • bdr.bdr_file_settings
  • -
-
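A minimal sketch of querying the new objects, assuming only the names given above (the output shape is not specified here):

```sql
-- Sketch: inspect file-based settings as bdr_superuser.
SELECT * FROM bdr.bdr_file_settings;
SELECT * FROM bdr.bdr_show_all_file_settings();
```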
BDR5.6.0
Add create/drop_commit_scope functions

Added functions for creating and dropping commit scopes that will eventually replace the non-standard functions for adding and removing commit scopes. The older functions now notify the user that they will be deprecated in a future version and suggest using the new ones.

-
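A minimal sketch, assuming the new function names above; the named parameters and the rule text mirror the style of the older add/remove-style functions and are assumptions here, not a confirmed signature:

```sql
-- Hedged sketch: create, then drop, a commit scope with the new functions.
SELECT bdr.create_commit_scope(
    commit_scope_name := 'example_scope',       -- placeholder name
    origin_node_group := 'topgroup',            -- placeholder group
    rule              := 'ANY 2 (topgroup) GROUP COMMIT'  -- assumed grammar
);

SELECT bdr.drop_commit_scope(
    commit_scope_name := 'example_scope',
    origin_node_group := 'topgroup'
);
```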
BDR5.6.0
Grant additional object permissions to role "bdr_monitor".

Permissions for the following objects have been updated to include SELECT permissions for role "bdr_monitor": bdr.node_config

-
BDR5.6.0
Add bdr.raft_vacuum_interval and bdr.raft_vacuum_full_interval GUCs to control frequency of automatic Raft catalog vacuuming.

This update introduces GUCs to regulate the frequency of automatic vacuuming of the specified catalogs. The GUC bdr.raft_vacuum_interval determines how often tables are examined for VACUUM and ANALYZE; autovacuum GUCs and table reloptions determine whether VACUUM/ANALYZE is actually needed. -The bdr.raft_vacuum_full_interval controls how often VACUUM FULL runs on those tables. Users can disable VACUUM FULL if regular VACUUM suffices to manage bloat.

-
40412
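A hedged sketch of tuning these GUCs; the GUC names come from this note, but the interval values, their units, and the assumption that a zero setting disables VACUUM FULL are all illustrative:

```sql
-- Assumed interval-style values; adjust to taste.
ALTER SYSTEM SET bdr.raft_vacuum_interval = '5min';
-- Assumption: a zero setting disables the VACUUM FULL pass.
ALTER SYSTEM SET bdr.raft_vacuum_full_interval = 0;
SELECT pg_reload_conf();
```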
BDR5.6.0
Add "node_name" to "bdr.node_config_summary"

Add "node_name" to the view "bdr.node_config_summary". This makes it consistent with other summary views, which report the name of the object (node, group, etc.) for which the summary is being generated.

-
BDR5.6.0
bdr_init_physical: improve local node connection failure logging

Ensure that bdr_init_physical emits details about connection failure if the "--local-dsn" parameter is syntactically correct but invalid, e.g., due to an incorrect host or port setting.

-
BDR5.6.0
bdr_config: add PG_FLAVOR output

bdr_config now shows the PostgreSQL "flavor" which BDR was built against, one of:

-
    -
  • COMMUNITY
  • -
  • EPAS
  • -
  • EXTENDED
  • -
  • BDRPG
  • -
-
BDR5.6.0
Enhance warning messages

Enhance messages issued during DML and DDL lock acquisition.

-
BDR5.6.0
Do not send Raft snapshot very aggressively

Avoid sending Raft snapshots too frequently as it can slow down follower nodes. Limit the snapshot rate to once in every election timeout, unless there is no other communication between the nodes, in which case send a snapshot every 1/3rd of the election timeout. This will help all nodes keep pace with the leader and improve CPU utilization.

-
37725
BDR5.6.0
Group-Specific Configuration Options

It is now possible to set options at both the top-level group and subgroup level. The following options are available for both kinds of groups:

-
    -
  • check_constraints
  • -
  • enable_wal_decoder
  • -
  • num_writers
  • -
  • streaming_mode
  • -
  • enable_raft -Subgroups inherit settings from their parent group, but can override them if set in the subgroup.
  • -
-
37725
BDR5.6.0
Subscriber-only node groups have a leader

Subscriber-only node groups have a leader elected by top-level Raft. There is now a bdr.leader catalog that tracks leadership of subgroups and subscriber-only nodes. If the node that is the leader of a subscriber-only node group goes down or becomes unreachable, a new leader is elected from that group.

-
BDR5.6.0
Optimized topology for subscriber-only nodes via the leader of the subscriber-only node group

Subscriber-only nodes previously had subscriptions to each data node. Now, if optimized topology is enabled, only the leaders of subscriber-only node groups have subscriptions to the routing leaders of data node subgroups. The subscriber-only node group leaders route data to the other nodes of that subscriber-only node group. This reduces the load on all data nodes so they do not have to send data to all subscriber-only nodes. The GUC bdr.force_full_mesh=off enables this optimized topology. This GUC variable is on by default, retaining pre-5.6.0 behavior.

-
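A hedged sketch of opting in to this preview; the GUC name comes from the note above, while the assumption that a configuration reload (rather than a restart) is sufficient is mine:

```sql
-- Turn off full mesh to enable the optimized topology preview.
ALTER SYSTEM SET bdr.force_full_mesh = off;
SELECT pg_reload_conf();  -- assumption: reload suffices; a restart may be needed
```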
BDR5.6.0
Introduce new subscription types to support optimized topology

Two new subscription types are introduced: those that forward data from all nodes of the subgroup via a routing leader (mode: l), and those that forward data from the entire cluster via a subscriber-only group leader (mode: w).

-
BDR5.6.0
Introduce version number and timestamp for write leader

A write leader has a version. Every time a new leader is elected, the version is incremented and the timestamp is noted via Raft. This builds a foundation for better conflict resolution.

-
BDR5.6.0
Allow use of column reference in DEFAULT expressions

Using column references in default expressions is now supported. This is particularly -useful with generated columns, for example: -ALTER TABLE gtest_tableoid ADD COLUMN c regclass GENERATED ALWAYS AS (tableoid) STORED;

-
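To make the inline example runnable, a hypothetical table can be created first; only the ALTER TABLE statement comes from the note:

```sql
-- Hypothetical table for the example above.
CREATE TABLE gtest_tableoid (a int);

ALTER TABLE gtest_tableoid
    ADD COLUMN c regclass GENERATED ALWAYS AS (tableoid) STORED;
```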
BDR5.6.0
Support replication of REINDEX

Both REINDEX and REINDEX CONCURRENTLY are now replicated commands.

-
BDR5.6.0
Fix receiver worker being stuck when exiting

The receiver worker could get stuck when exiting, waiting for a writer that never -actually started. On rare occasions, this could break replication after -configuration changes until Postgres was restarted.

-
BDR5.6.0
Reduce performance impact of PGD specific configuration parameters that are sent to client

Changes to values of variables bdr.last_committed_lsn, transaction_id -and bdr.local_node_id are automatically reported to clients when using -CAMO or GROUP COMMIT. This has now been optimized to use fewer resources.

-
BDR5.6.0
Allow use of commit scopes defined in parent groups

When a commit scope is defined for the top-level group, it can be used by -any node in a subgroup and no longer needs to be redefined for every subgroup. This is particularly useful when combined with the ORIGIN\_GROUP -keyword to reduce the complexity of commit scope setup.

-
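A hedged sketch of defining such a scope once on the top-level group; the function signature and the rule grammar shown are assumptions for illustration:

```sql
-- One commit scope on the top group, written with ORIGIN_GROUP so that
-- each subgroup resolves it against its own origin without redefinition.
SELECT bdr.create_commit_scope(
    commit_scope_name := 'inherited_scope',                    -- placeholder
    origin_node_group := 'topgroup',                           -- placeholder
    rule              := 'MAJORITY (ORIGIN_GROUP) GROUP COMMIT' -- assumed grammar
);
```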
PGD CLI5.6.0
Use bdr.bdr_file_settings view in verify-settings

Use bdr.bdr_file_settings view to get the current settings for the proxy.

-
- - -## Bug Fixes - - - - - - - - - - - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.6.0
Fixed buffer overrun in the writer

Include an extra zero byte at the end of a column value allocation in shared memory queue insert/update/delete messages.

-
98966
BDR5.6.0Fixes for some race conditions to prevent node sync from entering a hung state with the main subscription disabled.
BDR5.6.0
Do not accidentally drop the autopartition rule when a column of the autopartitioned table is dropped.

When ALTER TABLE .. DROP COLUMN is used, the object_access_hook is fired with classId set to RelationRelationId, but the subId is set to the attribute number to differentiate it from the DROP TABLE command.

-

Therefore, we need to check the subId field to make sure that we are not performing actions that should only be triggered when a table is dropped.

-
40258
BDR5.6.0
Adjust bdr.alter_table_conflict_detection() to propagate correctly to all nodes

Ensure that the propagation of bdr.alter_table_conflict_detection() (as well as the related, deprecated bdr.column_timestamps_(en|dis)able() functions) is carried out correctly to all logical standbys. Previously, this propagation did not occur if the logical standby was not directly attached to the node on which the functions were executed.

-
40258
BDR5.6.0
Prevent a node group from being created with a duplicate name

Ensure that a nodegroup is not inadvertently created with the same name as an existing nodegroup. Failure to do so may result in a complete shutdown of the top-level Raft on all nodes, with no possibility of recovery.

-
BDR5.6.0
Prevent spurious "local info ... not found" errors when parting nodes

Handle the absence of the expected node record gracefully. When a node is being removed, the local node record might have already been deleted, but an attempt could be made to update it anyway, resulting in harmless "BDR node local info for node ... not found" errors.

-
BDR5.6.0
Prevent a corner-case situation from being misdiagnosed as a PGD version problem

Improve Raft error messages to handle cases where nodes may not be correctly participating in Raft.

-
BDR5.6.0
Handling duplicate requests in RAFT preventing protocol breakage

When processing Raft entries, it's crucial to handle duplicate requests properly to prevent Raft protocol issues. Duplicate requests can occur when a client retries a request that has already been accepted and applied by the Raft leader. The problem arose when the leader failed to detect the duplicate request due to historical evidence being pruned.

-
37725
BDR5.6.0
Handling Raft Snapshots: Consensus Log

When installing or importing a Raft snapshot, discard the consensus log unless it contains an entry matching the snapshot's last included entry and term.

-
37725
BDR5.6.0
Be more restrictive about which index to use during replication for REPLICA IDENTITY FULL tables

This fixes various index-related errors during replication, such as: -'could not lookup equality operator for type, optype in opfamily' -or 'function "amgettuple" is not defined for index "brinidx"'

-
BDR5.6.0
Support createrole_self_grant

The createrole_self_grant configuration option affects inherited grants -by newly created roles. In previous versions, CREATE ROLE/CREATE USER -replication didn't take this into consideration, resulting in different -role privileges on different nodes.

-
BDR5.6.0
Allow CREATE SCHEMA AUTHORIZATION ... combined with other create operations

Previously, this would throw a "cannot change current role within security-restricted operation" error.

-
BDR5.6.0
Use base type instead of domain type while casting values

This prevents errors when replicating UPDATEs for domains defined as NOT VALID -where tables contain data that would not be allowed by the current definition -of the domain.

-
Utilities5.6.0bdr_pg_upgrade - Create logical slot with twophase set to true for PG 14+
- - diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.6.1_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.6.1_rel_notes.mdx deleted file mode 100644 index a758841cb3a..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.6.1_rel_notes.mdx +++ /dev/null @@ -1,92 +0,0 @@ ---- -title: EDB Postgres Distributed 5.6.1 release notes -navTitle: Version 5.6.1 -originalFilePath: product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.1.yml -editTarget: originalFilePath ---- - -Released: 25 November 2024 - -EDB Postgres Distributed 5.6.1 includes a number of enhancements and bug fixes. - -## Highlights - -- Postgres 17 support -- ARM64 processor support - -## Features - - - - -
ComponentVersionDescriptionAddresses
BDR5.6.1
Added Postgres 17 support

Support for Postgres 17 has been added for all flavors (PostgreSQL, EDB Postgres Extended, -and EDB Postgres Advanced Server) starting with version 17.2.

-
BDR5.6.1
Added ARM64 processor Support

Support ARM architecture for EDB Postgres Distributed on Debian 12 and RHEL 9.

-
- - -## Enhancements - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.6.1
Added bdr.wait_node_confirm_lsn().

The function bdr.wait_node_confirm_lsn() has been introduced to wait until a specific node -reaches a designated Log Sequence Number (LSN). It first checks the confirmed_flush_lsn of -the replication slot for the specified node. If that information is not available, the function -connects to the node and queries pg_replication_origin_progress(), using the invoking node as -the origin. -If the nodename parameter is NULL, the function will wait for all nodes to reach the specified -LSN. If the target LSN is NULL, it will wait for the current wal_flush_lsn.

-
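A hedged example of calling the new function; the node name and LSN are placeholders, and the positional two-argument signature is an assumption based on the description above:

```sql
-- Wait for one node to confirm a specific LSN.
SELECT bdr.wait_node_confirm_lsn('node-a', '0/4001234'::pg_lsn);

-- Per this note: NULL node name waits on all nodes,
-- NULL LSN waits for the current wal_flush_lsn.
SELECT bdr.wait_node_confirm_lsn(NULL, NULL);
```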
BDR5.6.1
Improvements made in SO Node Management and Progress Tracking.

An update addresses the movement of group slots in SO nodes, ensuring they don't appear as peers in -progress updates. Improvements include enhanced watermark management for SO leaders in the Optimized Topology -configuration, where write leaders now include watermarks in their updates. Watermarks are broadcasted -to simplify progress tracking on idle clusters. The peer progress mapping for SO nodes has been corrected, -and the tap test for group slot movement has been revised. -Additionally, the bdr_get_all_origins function now considers SO node origins.

-
BDR5.6.1
LSN Progress in Optimized Topology Configurations is now communicated.

While there are no connections from non-leader data nodes to subscriber-only nodes in an optimized -topology configuration, the LSN progress of all data nodes is periodically communicated to these -subscriber-only nodes through logical replication.

-
BDR5.6.1
Some DDL commands are now allowed by bdr.permit_unsafe_commands when set.

The bdr.permit_unsafe_commands parameter now allows some DDL commands that were previously disallowed. Specifically, ALTER COLUMN ... TYPE ... USING can now be permitted if the user knows the operation is safe.

-
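A hedged illustration; the table and column are hypothetical, and this should only be done after verifying the rewrite is safe for your cluster:

```sql
-- Permit a previously disallowed column-type rewrite for this session only.
SET bdr.permit_unsafe_commands = on;
ALTER TABLE orders
    ALTER COLUMN amount TYPE numeric USING amount::numeric;  -- hypothetical
RESET bdr.permit_unsafe_commands;
```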
- - -## Bug Fixes - - - - - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.6.1
Addressed walsender crash that happened during configuration reload.

Ensure that pglogical GUCs are overridden only when operating within the pglogical worker. -If this is not the case, MyPGLogicalWorker will be NULL, resulting in a segmentation fault -when the walsender attempts a configuration reload from the -pgl_wait_for_standby_confirmation() function.

-
42100
BDR5.6.1
Fixed unintended eager consensus connections among Subscriber-Only group members

The msgbroker module used to establish consensus connections lazily, meaning that connections -were created only when the first message was sent to a specific destination. This method -negatively affected the latency of Raft leader elections. The behavior was modified to create -connections to consensus peers eagerly. However, this change resulted in an unintended -consequence: a fully meshed consensus network among subscriber-only nodes, which may conflict -with customer network designs. This patch keeps the eager connection setup but limits it to -voting nodes only, reverting to a lazy connection setup for non-voting nodes.

-
42041
BDR5.6.1
Fixed autopartition task scheduling.

To improve reliability, shuffle the scheduling of autopartition tasks. This way, tasks -that are prone to failure won't consistently impact the success of other tasks.

-
41998
BDR5.6.1
Fixed parting subscription with standbys.

The parting subscription used to hang, failing to wait for standbys when the -bdr.standby_slot_names parameter was defined.

-
41821
BDR5.6.1
Fixed parting SO node with multiple origins.

All relevant origins must be removed when parting an SO node. -With Optimized Topology, parting an SO node should result in removing all origins it -has, not just the one related to its SO group leader. -When parting a data node, even though there is no subscription to it -from an SO node, the origin should be removed. -Do not make an SO node the target of a part catchup subscription when Optimized Topology is enabled.

-
BDR5.6.1
Stopped creation of slots for subscriber-only nodes on witness nodes.

Subscriber-only nodes should not have slots on witness nodes.

-
BDR5.6.1
Ensure no waiting for DEGRADE timeout when in an already degraded state.

When using a commit scope with a DEGRADE clause, if the system detects that it's in a degraded state, transactions start in DEGRADE mode. This ensures that the timeout isn't applied on every commit.

-
PGD Proxy5.6.1
Fixed routing strategy for read nodes.

Corrected routing strategy for read nodes after a network partition.

-
- - diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.7.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.7.0_rel_notes.mdx deleted file mode 100644 index 094d7ebc4e8..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.7.0_rel_notes.mdx +++ /dev/null @@ -1,123 +0,0 @@ ---- -title: EDB Postgres Distributed 5.7.0 release notes -navTitle: Version 5.7.0 -originalFilePath: product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.7.0.yml -editTarget: originalFilePath ---- - -Released: 25 February 2025 - -Updated: 26 March 2025 - -EDB Postgres Distributed 5.7.0 includes a number of enhancements and bug fixes. - -## Highlights - -- **Improved 3rd Party CDC Tool Integration**: PGD 5.7.0 now supports [failover of logical slots used by CDC tools](/pgd/latest/cdc-failover) with standard plugins (such as test_decoding, pgoutput, and wal2json) within a PGD cluster. This enhancement eliminates the need for 3rd party subscribers to reseed their tables during a lead Primary change. -- **PGD Compatibility Assessment**: Ensure a seamless migration to PGD with the new [Assess](/pgd/latest/cli/command_ref/assess/) command in the PGD CLI. This tool proactively reports any PostgreSQL incompatibilities—especially those affecting logical replication—so you can address them before upgrading to PGD. -- **Upgrade PGD and Postgres with a Single Command**: Leverage the new [`pgd node upgrade`](/pgd/latest/cli/command_ref/node/upgrade -) command in the PGD CLI to upgrade a node to the latest versions of PGD and Postgres. -- **Ubuntu 24.04 supported**: PGD 5.7.0 now supports Ubuntu 24.04. (23 March 2025) - -## Features - - - - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.7.0
Added support for failover slots for logical replication.

When a PGD node fails or becomes unavailable, consumers can now continue to consume changes from some other node in the cluster. The feature can be turned on by setting the top group option failover_slot_scope to global. The consumer needs to handle duplicate transactions, but PGD -guarantees that every transaction is decoded and sent at least once.

-
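A hedged sketch of enabling this feature; the option name failover_slot_scope comes from the note above, while using bdr.alter_node_group_option() to set it and the group name 'topgroup' are assumptions:

```sql
-- Turn on failover of logical slots at the top group level.
SELECT bdr.alter_node_group_option(
    'topgroup',              -- placeholder top group name
    'failover_slot_scope',
    'global'
);
```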
BDR5.7.0Ensured that the `remote_commit_time` and `remote_commit_lsn` are properly reported in the conflict reports.42273
PGD CLI5.7.0
Added new CLI command structure for easier access.

The new CLI command structure is more intuitive and easier to use. The new structure is a "noun-verb" format, where the noun is the object you want to work with and the verb is the action you want to perform. Full details are available in the CLI command reference.

-
PGD CLI5.7.0
Added a new local assessment feature for non-PGD nodes to the CLI

The new feature allows you to assess the local node for compatibility with PGD. The feature is available as pgd assess. Full details are available in the CLI command reference.

-
PGD CLI5.7.0
Added pgd node upgrade functionality to the PGD CLI.

The new command allows you to upgrade a node to the latest version of PGD and Postgres. It integrates the operation of bdr_pg_upgrade into the CLI and is run locally. See pgd node upgrade and inplace upgrades for more information.

-
BDR5.7.0
Fixed an issue whereby concurrent joins of subscriber-only nodes occasionally stopped responding.

A node could end up waiting for the local state of another concurrently joined node to advance, which caused the system to stop responding.

-
42964
BDR5.7.0
Ubuntu 24.04 is now supported.

Packages are now available for Ubuntu 24.04 for all PGD components.

-
- - -## Enhancements - - - - - - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.7.0
Narrowed down bdr.node_slots output to list only relevant slots.

The slots are displayed based on the node type, its role, and the cluster topology.

-
BDR5.7.0Added `target_type` column to `bdr.node_slots` view to indicate slot purpose/status.
BDR5.7.0
Improved performance of DML and other operations on partitioned tables.

Many code paths touching partitioned tables were performing costly checks to support additional table access methods. This code was adjusted to make the most common case (heap) the fastest.

-
BDR5.7.0
Improved bdr_init_physical to be able to run without superuser.

Now only the bdr_superuser is required.

-
PGD CLI5.7.0
Added new CLI commands for adding, removing, and updating commit scopes.

The new commands are pgd commit-scope show, pgd commit-scope create, pgd commit-scope update and pgd commit-scope drop. Full details are available in the CLI command reference.

-
PGD CLI5.7.0
Added support for legacy CLI command structure in the updated PGD CLI.

The legacy CLI command structure is still supported in the updated PGD CLI. The legacy command support is available for a limited time and will be removed in a future release. It is implemented as a wrapper around the new commands.

-
PGD CLI5.7.0
Added new subcommands to PGD CLI node and group for getting options.

The new subcommands are pgd node get-options and pgd group get-options. Full details are available in the CLI command reference.

-
PGD CLI5.7.0
Added new output formatting options psql and markdown to the PGD CLI.

The new options allow you to format the output of the CLI commands in a psql-like or markdown format. The format options are now json, psql, modern, markdown, and simple, with simple as the default.

-
BDR5.7.0
Removed redundant WARNING when executing ALTER SEQUENCE.

The message now only appears when creating the sequence.

-
- - -## Bug Fixes - - - - - - - - - - - - - - - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.7.0Fixed a server panic by ensuring that catalog lookups aren't performed in a callback that gets called very late in the commit cycle.
BDR5.7.0Fixed an unintentional timed wait for a synchronous logical replication that resulted in unusually high latency for some transactions.42273
BDR5.7.0
The bdr.group_camo_details view now only lists data nodes belonging to the CAMO group.

The bdr.group_camo_details view now only lists data nodes belonging to the CAMO commit scope group. Previously, the view could include data nodes that were not part of the CAMO group, logical standby nodes, subscriber-only nodes, and witness nodes.

-
45354
BDR5.7.0
Improved PostgreSQL 17 compatibility.

Internal tables managed by autopartition that were created before the upgrade to PG17 were missing a rowtype entry in pg_depend. This issue caused errors after upgrading to PG17.

-
44401
BDR5.7.0
Fixed a bug where parting stopped responding with consensus timeout or consensus errors.

This fix also ensures parting doesn't stop responding when any of the nodes restarts while a part is in progress. Origin LSNs no longer show up as 0/0 in log messages.

-
42056
BDR5.7.0
Fixed a memory leak in a global hook.

This bug could cause memory to leak even if bdr.so is just loaded and the extension isn't installed.

-
43678
BDR5.7.0
Fixed a bug where subgroup member part prevented new nodes from joining.

This fix ensures that if a subgroup member is parted while Raft subgroup routing is active, then another node can subsequently join that subgroup.

-
BDR5.7.0
Fixed a case where galloc sequence overflow wasn't being caught correctly.

This bug resulted in the nextval call getting stuck.

-
44755
BDR5.7.0
Fixed group slot movement for witness and data nodes in presence of subscriber-only nodes and witness nodes.

For PGD 5.6.0 and later, there are no subscriptions from a subscriber-only node to a witness node. This caused a problem with movement of group slot on data nodes and witness nodes in the presence of subscriber-only nodes in the cluster. This bug could cause WAL to be held on both witness nodes and data nodes when there's a subscriber-only node in the cluster.

-
BDR5.7.0Fixed a bug where conflict resolution functions were executed also on the witness nodes.
BDR5.7.0
Fixed a bug where bdr_init_physical stopped responding when synchronous_commit is set to remote_write/remote_apply.

bdr_init_physical disables synchronous replication on a new node by resetting synchronous_standby_names to an empty string. A warning message reminds you to set synchronous_standby_names as needed.

-
44760
PGD Proxy5.7.0
Fixed a proxy regression by improving DSN name support for read nodes

A regression in the way read nodes were identified in the proxy in 5.6.1 was fixed -by enabling support for different values in the dsn field's host and the node_name.

-
BDR5.7.0
Fixed a bug to handle additional replication sets on witness nodes.

The witness node now only cares about the top replication set, allowing it to miss other replication sets without erroring out.

-
41776, 44527
BDR5.7.0
Changed origin deletion to be done asynchronously when optimized topology is enabled.

In an optimized topology, origin names now use the generation ID of the node. This fixes an inconsistency in which some transactions can be lost or sent twice when a node is parted.

-
BDR5.7.0
Fixed a crash during upgrades in a mixed-version cluster.

Upgrading from versions earlier than 5.6.0 to 5.6.0 and later in a mixed-version cluster with a standby or a node joining/parting could cause a crash.

-
BDR5.7.0
Unsupported ALTER TABLE command on materialized views is no longer replicated.

Replication no longer becomes stuck when the command is issued.

-
BDR5.7.0Disallowed `GRANT` on BDR objects to non-BDR users.
BDR5.7.0Improved maintenance of the `bdr.leader` table.
- - -## Deprecations - - - - -
ComponentVersionDescriptionAddresses
PGD CLI5.7.0
Deprecated proxy commands in new PGD CLI command structure.

The proxy commands are deprecated in the new CLI command structure. The proxy commands are still available in the legacy CLI command structure. Proxy options can be set using the pgd group set-option command.

-
PGD CLI5.7.0
Removed yaml output as an option in PGD CLI

The yaml output option is removed from the PGD CLI. The json output option is still available.

-
- - -## Other - - - -
ComponentVersionDescriptionAddresses
BDR5.7.0
Added a GUC to support upgrade to 5.7.0 for clusters with optimized topology (a preview feature).

Clusters that have bdr.force_full_mesh set to off to enable optimized topology -(a preview feature) must first set this GUC to on and then upgrade to 5.7.0. After the entire cluster upgrades, -this GUC can be set to off again to re-enable optimized topology. -Having this GUC set to off during the upgrade isn't supported.

-
- - diff --git a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.8.0_rel_notes.mdx b/product_docs/docs/pgd/5.8/rel_notes/pgd_5.8.0_rel_notes.mdx deleted file mode 100644 index fced287651b..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/pgd_5.8.0_rel_notes.mdx +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: EDB Postgres Distributed 5.8.0 release notes -navTitle: Version 5.8.0 -originalFilePath: product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.8.0.yml -editTarget: originalFilePath ---- - -Released: 22 May 2025 - -EDB Postgres Distributed 5.8.0 includes a number of enhancements and bug fixes. - -## Highlights - -- PGD CLI improvements enhance usability and functionality. -- Additional functions simplify sequence management. -- Includes a fix for CVE-2025-2506. - -## Enhancements - - - - - - - - - - -
ComponentVersionDescriptionAddresses
PGD CLI5.8.0
Added the --summary and --options flags to the pgd node show CLI command.

Add the --summary and --options flags to the pgd node show command to filter its output. -This also maintains symmetry with other show commands.

-
PGD CLI5.8.0
More GUCs verified in pgd cluster verify CLI command.

Add the bdr.lock_table_locking and bdr.truncate_locking GUCs to the list of GUCs verified by the pgd cluster verify command.

-
BDR5.8.0
We now ensure that commit scope logic only runs on data nodes.

While commit scope processing does not have any direct effect on -non-data nodes, by skipping it altogether we can avoid potentially -confusing error messages.

-
BDR5.8.0
Added the bdr.galloc_chunk_info() function to simplify sequence management.

The bdr.galloc_chunk_info() function provides information about the chunk -allocation for a given sequence. It returns the chunk ID, the -sequence ID, and the chunk size, and is useful for debugging and -understanding how sequences are allocated in BDR.

-
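A hedged example of the new function; 'my_seq' is a placeholder, and the argument type is an assumption based on the description above:

```sql
-- Inspect chunk allocation for a galloc sequence.
-- Per this note, the output includes chunk ID, sequence ID, and chunk size.
SELECT * FROM bdr.galloc_chunk_info('my_seq'::regclass);
```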
PGD CLI5.8.0
Improve the CLI debug messages.

Improve the formatting of the log messages to be more readable and symmetrical with Postgres log messages.

-
PGD CLI5.8.0
Added a new column for pgd cluster verify --settings CLI command output.

Add the recommended_value column to the result of the pgd cluster verify --settings command. -The column will not be displayed in tabular output but will be displayed in JSON output.

-
PGD CLI5.8.0
Display sorted output for CLI.

The output for the commands with tabular output will be sorted by the resource name. -For the commands that display more than one resource, the output will be sorted by each resource column in order.

-
BDR5.8.0
Improved pgd_bench error message related to CAMO.

If executed with --mode=camo, but the provided test script is not wrapped -in an explicit transaction, pgd_bench will not be able to retrieve the -expected transaction_id value. Now the emitted error message contains -a hint about a possible missing transaction.

-
- - -## Security Fixes - - - -
ComponentVersionDescriptionAddresses
BDR5.8.0
Addressed CVE-2025-2506, which could enable a user with CONNECT access to obtain read access to replicated tables.

An issue, CVE-2025-2506, was discovered in pglogical which is present in later versions of BDR and PGD. The issue could enable a user with CONNECT access to obtain read access to replicated tables.

-
CVE-2025-2506
- - -## Bug Fixes - - - - - - - - - - - - - - - - -
ComponentVersionDescriptionAddresses
BDR5.8.0
Added "safety nets" for CDC failover.

CDC failover now has additional safety nets to ensure that the consumer -does not start replication from a node that is not the creator of the -replication slot. This is to prevent data loss or duplicate transactions. -The changes also add additional checks to ensure that the consumer does -not start replication from a node that does not have the required WAL -files to decode the transactions that are missing on the consumer but -were included in the initial snapshot that the newly joined node had -obtained (physical or logical).

-
BDR5.8.0
Fixed replication failure with concurrent updates on a non-unique index.

Updated tuple comparison on lookup to ensure the same tuple is matched when handling non-unique indexes.

-
43523, 43802, 45244, 47815, 48007
BDR5.8.0
Improved handling of connection information and password obfuscation.

The connection information kept in shared memory may get obfuscated when a password is present -and become useless. Instead of reading from there, we now use the primary_conninfo GUC, -which is available on all supported PG versions.

-
41776
PGD CLI5.8.0
Fixed the CLI pgd cluster show command's behavior with clock drift errors and a degraded cluster.

The pgd cluster show command would exit with an error regarding clock drift if only one node was up and running in an N-node cluster, and not show the associated health and summary information. -The command now returns output for health and summary while reporting an appropriate error for clock drift.

-
PGD CLI5.8.0
Fixed the CLI pgd node show command crashing if a non-existent node is given.

The pgd node show command crashed if a non-existent node was given to the command. -The command now fails gracefully with an appropriate error message.

-
PGD CLI5.8.0
Fixed the timestamp parsing issue for pgd replication show CLI command.

The pgd replication show command previously crashed when formatting EPAS timestamps.

-
47280
BDR5.8.0
Prevent segfault in bdr.taskmgr_set_leader.

The node_name argument to bdr.taskmgr_set_leader is required. The function now throws an appropriate error in case node_name := NULL is passed in.

-
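A hedged example of the required argument; 'node-a' is a placeholder, and the parameter name node_name comes from the note above:

```sql
-- Passing a real node name works; per this fix, node_name := NULL now
-- raises a clear error instead of crashing the server.
SELECT bdr.taskmgr_set_leader(node_name := 'node-a');
```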
BDR5.8.0
Improved deadlock avoidance where bdr_init_physical and monitoring queries are running concurrently.

We have replaced TRUNCATEs with DELETEs on all BDR catalogs during a local node drop. -This avoids a deadlock in bdr_init_physical if the user happens to run monitoring -queries while the node is joining or cleaning unwanted source node data.

-
46952
BDR5.8.0
Ensure a new joiner processes the watermark message in the CATCHUP phase.

Set nci->min_lsn to the XactLastCommitEnd of the watermark message's transaction to ensure -the CATCHUP phase finishes on the new joiner only after the watermark is processed.

-
BDR5.8.0
Fixed Raft leader election timeout/failure after upgrade

Ensure that any custom value set in the deprecated GUC bdr.raft_election_timeout -is applied to its replacement bdr.raft_global_election_timeout.

-
BDR5.8.0
Ensure that disabled subscriptions on subscriber-only nodes are not re-enabled

During subscription reconfiguration, if there is no change required to a subscription, -do not enable it since it could have been disabled explicitly by the user. -Skip reconfiguring subscriptions if there are no leadership changes.

-
46519
BDR5.8.0
Subscriber-only nodes will not take a lock when running DDL

Subscriber-only nodes will no longer attempt to take a lock on the cluster when running DDL. The DDL will be executed locally and not replicated to other nodes.

-
47233
BDR5.8.0
Fixed deadlock issue in bdr_init_physical.

Fixed deadlock between bdr_init_physical cleaning unwanted node data and concurrent monitoring queries.

-
46952
BDR5.8.0
Fixed a consistency issue in node join where a joining node could possibly miss some data sent to it from the source node.

Fixed an issue when a new node joining the cluster finishes CATCHUP phase before getting its replication progress against all data nodes. This could have caused a new node to be out of sync with the cluster.

-
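To illustrate the `bdr.taskmgr_set_leader` fix above, a quick SQL sketch (the node name is hypothetical; check your version's reference for the exact signature):

```sql
-- assumes a node named 'node-a' exists in the cluster (hypothetical name)
SELECT bdr.taskmgr_set_leader('node-a');

-- previously this could segfault; it now raises an error because
-- the node_name argument is required and must not be NULL
SELECT bdr.taskmgr_set_leader(node_name := NULL);
```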
- - diff --git a/product_docs/docs/pgd/5.8/rel_notes/src/meta.yml b/product_docs/docs/pgd/5.8/rel_notes/src/meta.yml deleted file mode 100644 index 966a486f925..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/src/meta.yml +++ /dev/null @@ -1,73 +0,0 @@ -# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/meta-schema.json - -product: EDB Postgres Distributed -shortname: pgd -title: EDB Postgres Distributed 5 release notes -description: Release notes for EDB Postgres Distributed 5 and later -intro: | - The EDB Postgres Distributed documentation describes the latest version of EDB Postgres Distributed 5, including minor releases and patches. The release notes provide information on what was new in each release. For new functionality introduced in a minor or patch release, the content also indicates the release that introduced the feature. -columns: -- 0: - label: Release Date - key: shortdate -- 1: - label: "EDB Postgres Distributed" - key: version-link -- 2: - label: "BDR extension" - key: "$BDR" -- 3: - label: "PGD CLI" - key: "$PGD CLI" -- 4: - label: "PGD Proxy" - key: "$PGD Proxy" -components: [ "BDR", "PGD CLI", "PGD Proxy", "Utilities" ] -precursor: -- date: 31 May 2024 - version: "5.5.1" - "BDR": "5.5.1" - "PGD CLI": 5.5.0 - "PGD Proxy": 5.5.0 -- date: 16 May 2024 - version: "5.5.0" - "BDR": 5.5.0 - "PGD CLI": 5.5.0 - "PGD Proxy": 5.5.0 -- date: 03 April 2024 - version: "5.4.1" - "BDR": 5.4.1 - "PGD CLI": 5.4.0 - "PGD Proxy": 5.4.0 -- date: 05 March 2024 - version: "5.4.0" - "BDR": 5.4.0 - "PGD CLI": 5.4.0 - "PGD Proxy": 5.4.0 -- date: 14 November 2023 - version: "5.3.0" - "BDR": 5.3.0 - "PGD CLI": 5.3.0 - "PGD Proxy": 5.3.0 -- date: 04 August 2023 - version: "5.2.0" - "BDR": 5.2.0 - "PGD CLI": 5.2.0 - "PGD Proxy": 5.2.0 -- date: 16 May 2023 - version: "5.1.0" - "BDR": 5.1.0 - "PGD CLI": 5.1.0 - "PGD Proxy": 5.1.0 -- date: 21 Mar 2023 - version: "5.0.1" - "BDR": 5.0.0 - "PGD CLI": 5.0.1 - "PGD Proxy": 5.0.1 -- date: 21 Feb 2023 - version: "5.0.0" - "BDR": 5.0.0 - "PGD CLI": 5.0.0 - "PGD Proxy": 5.0.0 - - \ No newline at end of file diff --git a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.0.yml b/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.0.yml deleted file mode 100644 index f51a4e4d786..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.0.yml +++ /dev/null @@ -1,355 +0,0 @@ -# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/relnote-schema.json -product: EDB Postgres Distributed -version: 5.6.0 -date: 15 October 2024 -components: - "BDR": 5.6.0 - "PGD CLI": 5.6.0 - "PGD Proxy": 5.6.0 - "Utilities": 5.6.0 -intro: | - EDB Postgres Distributed 5.6.0 includes a number of enhancements and bug fixes. -highlights: | - - Improved observability with new monitoring functions and SQL views. - - Improvements to commit scopes including: - - GROUP COMMIT and SYNCHRONOUS COMMIT support graceful degrading using DEGRADE ON. - - ORIGIN_GROUP support and commit scope inheritance simplify commit scope creation. - - Improved synchronous commit behavior around deadlocks. - - Metrics for commit scope performance and state. - - Optimized Topology support for Subscriber-only groups and nodes. 
(preview) - - Improved Postgres compliance with support for: - - Exclusion Constraints - - REINDEX replication - - createrole_self_grant - - column reference in DEFAULT expressions - - CREATE SCHEMA AUTHORIZATION - - Streaming Transaction support with Decoding Worker. -relnotes: -- relnote: Decoding Worker supports Streaming Transactions - component: BDR - details: | - One of the main advantages of streaming is that the WAL sender sends the partial transaction before it commits, which reduces replication lag. Now, with streaming support, the WAL decoder does the same thing, but it streams to the LCRs segments. Eventually, the WAL sender will read the LCRs and mimic the same behavior of streaming large transactions before they commit. This provides the benefits of decoding worker, such as reduced CPU and disk space, as well as the benefits of streaming, such as reduced lag and disk space, since ".spill" files are not generated. - The WAL decoder always streams the transaction to LCRs, but based on downstream requests, the WAL sender either streams the transaction or just mimics the normal BEGIN..COMMIT scenario. - In addition to the normal LCRs segment files, we create streaming files with the starting names `TR_TXN_` and `CAS_TXN_` for each streamed transaction. - jira: BDR-5123 - addresses: "" - type: Enhancement - impact: High -- relnote: Introduce several new monitoring views - component: BDR - details: | - There are several views providing new information as well as making some - existing information easier to discover: - - [`bdr.stat_commit_scope`](/pgd/latest/reference/catalogs-visible#bdrstat_commit_scope) : Cumulative statistics for commit scopes. - - [`bdr.stat_commit_scope_state`](/pgd/latest/reference/catalogs-visible#bdrstat_commit_scope_state) : Information about current use of commit scopes by backends. - - [`bdr.stat_receiver`](/pgd/latest/reference/catalogs-visible#bdrstat_receiver) : Per subscription receiver statistics. - - [`bdr.stat_writer`](/pgd/latest/reference/catalogs-visible#bdrstat_writer) : Per writer statistics. There can be multiple writers for each subscription. This also includes additional information about the currently applied transaction. - - [`bdr.stat_raft_state`](/pgd/latest/reference/catalogs-visible#bdrstat_raft_state) : The state of the Raft consensus on the local node. - - [`bdr.stat_raft_followers_state`](/pgd/latest/reference/catalogs-visible#bdrstat_raft_followers_state) : The state of the followers on the Raft leader node (empty on other nodes), also includes approximate clock drift between nodes. - - [`bdr.stat_worker`](/pgd/latest/reference/catalogs-visible#bdrstat_worker) : Detailed information about PGD workers, including what the operation manager worker is currently doing. - - [`bdr.stat_routing_state`](/pgd/latest/reference/catalogs-visible#bdrstat_routing_state) : The state of the connection routing which PGD Proxy uses to route the connections. - - [`bdr.stat_routing_candidate_state`](/pgd/latest/reference/catalogs-visible#bdrstat_routing_candidate_state) : Information about routing candidate nodes on the Raft leader node (empty on other nodes). - jira: BDR-5316 - type: Enhancement - impact: High -- relnote: Support conflict detection for exclusion constraints - component: BDR - details: | - This allows defining an `EXCLUDE` constraint on a table replicated by PGD either with - `CREATE TABLE` or with `ALTER TABLE` and uses similar conflict detection to resolve - conflicts as for `UNIQUE` constraints.
- jira: BDR-4851 - type: Enhancement - impact: High -- relnote: Fixed buffer overrun in the writer - component: BDR - details: | - Include an extra zero byte at the end of a column value allocation in shared memory queue insert/update/delete messages. - jira: BDR-5188 - addresses: 98966 - type: Bug Fix - impact: High -- relnote: Fixes for some race conditions to prevent node sync from entering a hung state with the main subscription disabled. - component: BDR - jira: BDR-5041 - addresses: "" - type: Bug Fix - impact: High -- relnote: Detect and resolve deadlocks between synchronous replication wait-for-disconnected sessions and replication writer. - component: BDR - details: | - This will cancel synchronous replication wait on disconnected sessions if it deadlocks against replication, preventing deadlocks on failovers when using synchronous replication. This only affects commit scopes, not synchronous replication configured via `synchronous_standby_names`. - jira: BDR-5445, BDR-5445, BDR-4104 - addresses: "" - type: Enhancement - impact: High -- relnote: Do not accidentally drop the autopartition rule when a column of the autopartitioned table is dropped. - component: BDR - details: | - When ALTER TABLE .. DROP COLUMN is used, the object_access_hook is fired with `classId` set to RelationRelationId, but the `subId` is set to the attribute number to differentiate it from the DROP TABLE command. - - Therefore, we need to check the subId field to make sure that we are not performing actions that should only be triggered when a table is dropped. - jira: BDR-5418 - addresses: 40258 - type: Bug Fix - impact: High -- relnote: Adjust `bdr.alter_table_conflict_detection()` to propagate correctly to all nodes - component: BDR - details: | - Ensure that the propagation of `bdr.alter_table_conflict_detection()` (as well as the related, deprecated `bdr.column_timestamps_(en|dis)able()` functions) is carried out correctly to all logical standbys. Previously, this propagation did not occur if the logical standby was not directly attached to the node on which the functions were executed. - jira: BDR-3850 - addresses: 40258 - type: Bug Fix - impact: High -- relnote: Prevent a node group from being created with a duplicate name - component: BDR - details: | - Ensure that a nodegroup is not inadvertently created with the same name as an existing nodegroup. Failure to do so may result in a complete shutdown of the top-level Raft on all nodes, with no possibility of recovery. - jira: BDR-5355 - addresses: "" - type: Bug Fix - impact: High -- relnote: Add bdr.bdr_show_all_file_settings() and bdr.bdr_file_settings view - component: BDR - details: | - Fix: Correct privileges for bdr_superuser. Creating wrapper SECURITY DEFINER functions in the bdr schema and granting access to bdr_superuser to use those: - - bdr.bdr_show_all_file_settings - - bdr.bdr_file_settings - jira: BDR-5070 - addresses: "" - type: Enhancement - impact: High -- relnote: Add create/drop_commit_scope functions - component: BDR - details: | - Add functions for creating and dropping commit scopes that will eventually deprecate the non-standard functions for adding and removing commit scopes. Notify the user that these will be deprecated in a future version, suggesting the use of the new versions. - jira: BDR-4073 - addresses: "" - type: Enhancement - impact: High -- relnote: Grant additional object permissions to role "bdr_monitor". 
- component: BDR - details: | - Permissions for the following objects have been updated to include SELECT permissions for role "bdr_monitor": bdr.node_config - jira: BDR-4885, BDR-5354 - addresses: "" - type: Enhancement - impact: High -- relnote: Add `bdr.raft_vacuum_interval` and `bdr.raft_vacuum_full_interval` GUCs to control frequency of automatic Raft catalog vacuuming. - component: BDR - details: | - This update introduces GUCs to regulate the frequency of automatic vacuuming on the specified catalogs. The GUC `bdr.raft_vacuum_interval` determines the frequency at which tables are examined for VACUUM and ANALYZE. Autovacuum GUCs and table reloptions are utilized to ascertain the necessity of VACUUM/ANALYZE. - The `bdr.raft_vacuum_full_interval` initiates VACUUM FULL on the tables. Users have the ability to deactivate VACUUM FULL if regular VACUUM suffices to manage bloat. - jira: BDR-5424 - addresses: 40412 - type: Enhancement - impact: High -- relnote: Prevent spurious "local info ... not found" errors when parting nodes - component: BDR - details: | - Handle the absence of the expected node record gracefully when a node is being removed, the local node record might have already been deleted, but an attempt could be made to update it anyway. This resulted in harmless "BDR node local info for node ... not found" errors. - jira: BDR-5350 - addresses: "" - type: Bug Fix - impact: High -- relnote: Prevent a corner-case situation from being misdiagnosed as a PGD version problem - component: BDR - details: | - Improve Raft error messages to handle cases where nodes may not be correctly participating in Raft. - jira: BDR-5362 - addresses: "" - type: Bug Fix - impact: High -- relnote: Add "node_name" to "bdr.node_config_summary" - component: BDR - details: | - Add "node_name" to the view "bdr.node_config_summary". This makes it consistent with other summary views, which report the name of the object (node, group, etc.) for which the summary is being generated. - jira: BDR-4818 - addresses: "" - type: Enhancement - impact: High -- relnote: "bdr_init_physical: improve local node connection failure logging" - component: BDR - details: | - Ensure that bdr_init_physical emits details about connection failure if the "--local-dsn" parameter is syntactically correct but invalid, e.g., due to an incorrect host or port setting. - jira: - addresses: "" - type: Enhancement - impact: High -- relnote: "`bdr_config`: add PG_FLAVOR output" - component: BDR - details: | - `bdr_config` now shows the PostgreSQL "flavor" which BDR was built against, one of: - - COMMUNITY - - EPAS - - EXTENDED - - BDRPG - jira: BDR-4428 - addresses: - type: Enhancement - impact: High -- relnote: Enhance warning messages - component: BDR - details: | - Enhance messages issued during DML and DDL lock acquisition. - jira: BDR-4200 - addresses: "" - type: Enhancement - impact: High -- relnote: Handling duplicate requests in RAFT preventing protocol breakage - component: BDR - details: | - When processing RAFT entries, it's crucial to handle duplicate requests properly to prevent Raft protocol issues. Duplicate requests can occur when a client retries a request that has already been accepted and applied by the Raft leader. The problem arose when the leader failed to detect the duplicate request due to historical evidence being pruned. 
- jira: BDR-5275, BDR-4091 - addresses: 37725 - type: Bug Fix - impact: High -- relnote: "Handling Raft Snapshots: Consensus Log" - component: BDR - details: | - When installing or importing a Raft snapshot, discard the consensus log unless it contains an entry matching the snapshot's last included entry and term. - jira: BDR-5285 - addresses: 37725 - type: Bug Fix - impact: High -- relnote: Do not send Raft snapshot very aggressively - component: BDR - details: | - Avoid sending Raft snapshots too frequently as it can slow down follower nodes. Limit the snapshot rate to once in every election timeout, unless there is no other communication between the nodes, in which case send a snapshot every 1/3rd of the election timeout. This will help all nodes keep pace with the leader and improve CPU utilization. - jira: BDR-5288 - addresses: 37725 - type: Enhancement - impact: High -- relnote: Group-Specific Configuration Options - component: BDR - details: | - It is now possible to set all top-level and subgroup-level options. The following options are available for both top-level groups and subgroups: - - check\_constraints - - enable\_wal\_decoder - - num\_writers - - streaming\_mode - - enable\_raft - Subgroups inherit settings from their parent group, but can override them if set in the subgroup. - jira: BDR-4954 - addresses: 37725 - type: Enhancement - impact: High -- relnote: Subscriber-only node groups have a leader - component: BDR - details: | - Subscriber-only node groups have a leader elected by top-level Raft. There is now a bdr.leader catalog that tracks leadership of subgroups and subscriber-only nodes. If the node that is the leader of a subscriber-only node group goes down or becomes unreachable, a new leader is elected from that group. - jira: BDR-5089 - type: Enhancement - impact: High -- relnote: Optimized topology for subscriber-only nodes via the leader of the subscriber-only node group - component: BDR - details: | - Subscriber-only nodes earlier used to have subscriptions to each data node. Now if optimized topology is enabled, only the leaders of subscriber-only node groups have subscriptions to routing leaders of data node subgroups. The subscriber-only nodegroup leaders route data to other nodes of that subscriber-only nodegroup. This reduces the load on all data nodes so they do not have to send data to all subscriber-only nodes. The GUC `bdr.force_full_mesh=off` enables this optimized topology. This GUC variable is on by default, retaining pre-5.6.0 behavior. - jira: BDR-5214 - type: Enhancement - impact: High -- relnote: Introduce new subscription types to support optimized topology - component: BDR - details: | - New subscription types that forward data from all nodes of the subgroup via a routing leader (mode: l), and those that forward data from the entire cluster via a subscriber-only group leader (mode: w) are introduced. - jira: BDR-5186 - type: Enhancement - impact: High -- relnote: Introduce version number and timestamp for write leader - component: BDR - details: | - A write leader has a version. Every time a new leader is elected, the version is incremented and the timestamp is noted via Raft. This is to build a foundation for better conflict resolution.
- jira: BDR-3589 - type: Enhancement - impact: High -- relnote: Be more restrictive about which index to use during replication for REPLICA IDENTITY FULL tables - component: BDR - details: | - This fixes various index related errors during replication like: - 'could not lookup equality operator for type, optype in opfamily' - or 'function "amgettuple" is not defined for index "brinidx"' - jira: BDR-5523 , BDR-5361 - type: Bug Fix - impact: High -- relnote: Allow use of column reference in DEFAULT expressions - component: BDR - details: | - Using column references in default expressions is now supported, this is particularly - useful with generated columns, for example: - `ALTER TABLE gtest_tableoid ADD COLUMN c regclass GENERATED ALWAYS AS (tableoid) STORED;` - jira: BDR-5385 - type: Enhancement - impact: High -- relnote: Support `createrole_self_grant` - component: BDR - details: | - The `createrole_self_grant` configuration option affects inherited grants - by newly created roles. In previous versions `CREATE ROLE`/`CREATE USER` - replication would not take this into consideration, resulting in different - role privileges on different nodes. - jira: BDR-5403 - type: Bug fix - impact: High -- relnote: Allow `CREATE SCHEMA AUTHORIZATION ...` combined with other create operations - component: BDR - details: | - Previously, this would throw "cannot change current role within security-restricted operation" error - jira: BDR-5368 - type: Bug fix - impact: High -- relnote: Support replication of REINDEX - component: BDR - details: | - Both REINDEX and REINDEX CONCURRENTLY are now replicated commands. - jira: BDR-5363 - type: Enhancement - impact: High -- relnote: Use base type instead of domain type while casting values - component: BDR - details: | - This prevents errors when replicating UPDATEs for domains defined as NOT VALID - where tables contain data which would not be allowed by current definition - of such domain. - jira: BDR-5369 - type: Bug fix - impact: High -- relnote: Fix receiver worker being stuck when exiting - component: BDR - details: | - Receiver worker could get stuck when exiting, waiting for a writer that never - actually started. This could on rare occasions break replication after - configuration changes until Postgres was restarted. - jira: - type: Enhancement - impact: High -- relnote: Reduce performance impact of PGD specific configuration parameters that are sent to client - component: BDR - details: | - Changes to values of variables `bdr.last_committed_lsn`, `transaction_id` - and `bdr.local_node_id` are automatically reported to clients when using - CAMO or GROUP COMMIT. This has now been optimized to use less resources. - jira: BDR-3212 - type: Enhancement - impact: High -- relnote: Allow use of commit scopes defined in parent groups - component: BDR - details: | - When there is a commit scope defined for top-level group, it can be used by - any node in a subgroup and does not need to be redefined for every subgroup - anymore. This is particularly useful when combined with `ORIGIN\_GROUP` - keyword to reduce the complexity of commit scope setup. - jira: BDR-5433 - type: Enhancement - impact: High -- relnote: bdr_pg_upgrade - Create logical slot with twophase set to true for PG 14+ - component: Utilities - jira: BDR-5306 - type: Bug Fix - impact: High -- relnote: Use bdr.bdr_file_settings view in verify-settings - component: PGD CLI - details: | - Use bdr.bdr_file_settings view to get the current settings for the proxy. 
- jira: BDR-5049 - type: Enhancement - impact: High \ No newline at end of file diff --git a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.1.yml b/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.1.yml deleted file mode 100644 index f2722a32ac9..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.6.1.yml +++ /dev/null @@ -1,157 +0,0 @@ -# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/relnote-schema.json -product: EDB Postgres Distributed -version: 5.6.1 -date: 25 November 2024 -components: - "BDR": 5.6.1 - "PGD CLI": 5.6.1 - "PGD Proxy": 5.6.1 - "Utilities": 5.6.1 -intro: | - EDB Postgres Distributed 5.6.1 includes a number of enhancements and bug fixes. -highlights: | - - Postgres 17 support - - ARM64 processor support -relnotes: -- relnote: Added Postgres 17 support - component: BDR - details: | - Support for Postgres 17 has been added for all flavors (PostgreSQL, EDB Postgres Extended, - and EDB Postgres Advanced Server) starting with version 17.2. - jira: BDR-5410 - addresses: "" - type: Feature - impact: High -- relnote: Added ARM64 processor support - component: BDR - details: | - Support ARM architecture for EDB Postgres Distributed on Debian 12 and RHEL 9. - jira: BDR-5410 - addresses: "" - type: Feature - impact: High -- relnote: Addressed walsender crash that happened during configuration reload. - component: BDR - details: | - Ensure that pglogical GUCs are overridden only when operating within the pglogical worker. - If this is not the case, MyPGLogicalWorker will be NULL, resulting in a segmentation fault - when the walsender attempts a configuration reload from the - pgl_wait_for_standby_confirmation() function. - jira: BDR-5661 - addresses: "42100" - type: Bug-fix - impact: High -- relnote: Fixed unintended eager connection related to consensus connections among Subscriber Only group members - component: BDR - details: | - The msgbroker module used to establish consensus connections lazily, meaning that connections - were created only when the first message was sent to a specific destination. This method - negatively affected the latency of Raft leader elections. The behavior was modified to create - connections to consensus peers eagerly. However, this change resulted in an unintended - consequence: a fully meshed consensus network among subscriber-only nodes, which may conflict - with customer network designs. This patch keeps the eager connection setup but limits it to - voting nodes only, reverting to a lazy connection setup for non-voting nodes. - jira: BDR-5666 - addresses: "42041" - type: Bug-fix - impact: High -- relnote: Fixed autopartition task scheduling. - component: BDR - details: | - To improve reliability, shuffle the scheduling of autopartition tasks. This way, tasks - that are prone to failure won't consistently impact the success of other tasks. - jira: BDR-5638 - addresses: "41998" - type: Bug-fix - impact: High -- relnote: Fixed parting subscription with standbys. - component: BDR - details: | - The parting subscription used to hang, failing to wait for standbys when the - bdr.standby_slot_names parameter was defined. - jira: BDR-5658 - addresses: "41821" - type: Bug-fix - impact: High -- relnote: Added `bdr.wait_node_confirm_lsn()`. - component: BDR - details: | - The function `bdr.wait_node_confirm_lsn()` has been introduced to wait until a specific node - reaches a designated Log Sequence Number (LSN).
It first checks the `confirmed_flush_lsn` of - the replication slot for the specified node. If that information is not available, the function - connects to the node and queries `pg_replication_origin_progress()`, using the invoking node as - the origin. - If the `nodename` parameter is NULL, the function will wait for all nodes to reach the specified - LSN. If the `target` LSN is NULL, it will wait for the current `wal_flush_lsn`. - jira: BDR-5200 - addresses: "" - type: Enhancement - impact: High -- relnote: Improvements made in SO Node Management and Progress Tracking. - component: BDR - details: | - An update addresses the movement of group slots in SO nodes, ensuring they don't appear as peers in - progress updates. Improvements include enhanced watermark management for SO leaders in the Optimized Topology - configuration, where write leaders now include watermarks in their updates. Watermarks are broadcasted - to simplify progress tracking on idle clusters. The peer progress mapping for SO nodes has been corrected, - and the tap test for group slot movement has been revised. - Additionally, the `bdr_get_all_origins` function now considers SO node origins. - jira: BDR-5549 - addresses: "" - type: Enhancement - impact: High -- relnote: LSN Progress in Optimized Topology Configurations is now communicated. - component: BDR - details: | - While there are no connections from non-leader data nodes to subscriber-only nodes in an optimized - topology configuration, the LSN progress of all data nodes is periodically communicated to these - subscriber-only nodes through logical replication. - jira: BDR-5549 - addresses: "" - type: Enhancement - impact: High -- relnote: Fixed parting SO node with multiple origins. - component: BDR - details: | - All relevant origins must be removed when parting SO node. - With Optimized Topology, parting an SO node should result in removing all origins it - has, not just the one related to its SO group leader. - When parting a data node, even though there is no subscription to it - from SO node, the origin should be removed. - DO not make SO node target of a part catchup subscription when Optimized Topology is enabled. - jira: BDR-5552 - addresses: "" - type: Bug-fix - impact: High -- relnote: Stopped creation of slots for subscriber only nodes on witness nodes. - component: BDR - details: | - Subscriber only nodes should not have slots on witness nodes. - jira: BDR-5618 - addresses: "" - type: Bug-fix - impact: High -- relnote: Some DDL commands are now allowed by `bdr.permit_unsafe_commands` when set. - component: BDR - details: | - The `bdr.permit_unsafe_commands` parameter now allows some DDL commands that were previously disallowed. Specifically `ALTER COLUMN...TYPE...USING` can now be permitted if the user knows the operation is safe. - jira: "" - addresses: "" - type: Enhancement - impact: High -- relnote: Ensure no waiting for DEGRADE timeout when in an already degraded state. - component: BDR - details: | - When using commit scope with DEGRADE clause, if system detects that it's in degraded state, transactions should start in the DEGRADE mode. This ensures that the timeout is not applied on every commit. - jira: BDR-5651 - addresses: "" - type: Bug-fix - impact: High -- relnote: Fixed routing strategy for read nodes. - component: PGD Proxy - details: | - Corrected routing strategy for read nodes after a network partition. 
- jira: BDR-5216 - addresses: "" - type: Bug-fix - impact: Medium diff --git a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.7.0.yml b/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.7.0.yml deleted file mode 100644 index cf310d475e5..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.7.0.yml +++ /dev/null @@ -1,306 +0,0 @@ -# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/relnote-schema.json -product: EDB Postgres Distributed -version: 5.7.0 -date: 25 February 2025 -updated: 26 March 2025 -components: - "BDR": 5.7.0 - "PGD CLI": 5.7.0 - "PGD Proxy": 5.7.0 - "Utilities": 5.7.0 -intro: | - EDB Postgres Distributed 5.7.0 includes a number of enhancements and bug fixes. -highlights: | - - **Improved 3rd Party CDC Tool Integration**: PGD 5.7.0 now supports [failover of logical slots used by CDC tools](/pgd/latest/cdc-failover) with standard plugins (such as test_decoding, pgoutput, and wal2json) within a PGD cluster. This enhancement eliminates the need for 3rd party subscribers to reseed their tables during a lead Primary change. - - **PGD Compatibility Assessment**: Ensure a seamless migration to PGD with the new [Assess](/pgd/latest/cli/command_ref/assess/) command in the PGD CLI. This tool proactively reports any PostgreSQL incompatibilities—especially those affecting logical replication—so you can address them before upgrading to PGD. - - **Upgrade PGD and Postgres with a Single Command**: Leverage the new [`pgd node upgrade`](/pgd/latest/cli/command_ref/node/upgrade - ) command in the PGD CLI to upgrade a node to the latest versions of PGD and Postgres. - - **Ubuntu 24.04 supported**: PGD 5.7.0 now supports Ubuntu 24.04. (23 March 2025) -relnotes: -- relnote: Improved performance of DML and other operations on partitioned tables. - component: BDR - details: | - Many code paths touching partitioned tables were performing costly checks to support additional table access methods. This code was adjusted to make the most common case (heap) the fastest. - jira: BDR-5860 - addresses: "" - type: Enhancement - impact: Medium -- relnote: Unsupported `ALTER TABLE` command on materialized views is no longer replicated. - component: BDR - details: | - Replication no longer becomes stuck when the command is issued. - jira: BDR-5997 - addresses: "" - type: Bug Fix - impact: Lowest -- relnote: Improved PostgreSQL 17 compatibility. - component: BDR - details: | - Internal tables managed by autopartition that were created before the upgrade to PG17 were missing a `rowtype` entry into `pg_depend`. This issue caused errors after upgrading to PG17. - jira: BDR-5893 - addresses: 44401 - type: Bug Fix - impact: Medium -- relnote: Removed redundant `WARNING` when executing `ALTER SEQUENCE`. - component: BDR - details: | - The message now only appears when creating the sequence. - jira: BDR-6066 - addresses: "" - type: Enhancement - impact: Lowest -- relnote: Fixed a server panic by ensuring that catalog lookups aren't performed in a callback that gets called very late in the commit cycle. - component: BDR - jira: BDR-5832 - addresses: "" - type: Bug Fix - impact: High -- relnote: Fixed an unintentional timed wait for a synchronous logical replication that resulted in unusually high latency for some transactions. - component: BDR - jira: BDR-5809 - addresses: 42273 - type: Bug Fix - impact: High -- relnote: Added support for failover slots for logical replication. 
- component: BDR - details: | - When a PGD node fails or becomes unavailable, consumers can now continue to consume changes from some other node in the cluster. The feature can be turned on by setting the top group option `failover_slot_scope` to `global`. The consumer needs to handle duplicate transactions, but PGD - guarantees that every transaction is decoded and sent at least once. - jira: BDR-5673, BDR-5925 - addresses: "" - type: Feature - impact: High -- relnote: Ensured that the `remote_commit_time` and `remote_commit_lsn` are properly reported in the conflict reports. - component: BDR - jira: BDR-5808 - addresses: 42273 - type: Feature - impact: High -- relnote: Fixed an issue whereby concurrent joins of subscriber-only nodes occasionally stopped responding. - component: BDR - details: | - A node could end up waiting for the local state of another concurrently joined node to advance, which caused the system to stop responding. - jira: BDR-5789 - addresses: 42964 - type: Feature - impact: Medium -- relnote: Fixed a bug where parting stopped responding with consensus timeout or consensus errors. - component: BDR - details: | - This fix also ensures parting doesn't stop responding when any of the nodes restart when part is in progress. Origin LSNs don't show up as 0/0 in log messages. - jira: BDR-5777 - addresses: 42056 - type: Bug Fix - impact: Medium -- relnote: Fixed a memory leak in a global hook. - component: BDR - details: | - This bug could cause memory to leak even if `bdr.so` is just loaded and the extension isn't installed. - jira: BDR-5821 - addresses: 43678 - type: Bug Fix - impact: Medium -- relnote: Fixed a bug where subgroup member part prevented new nodes from joining. - component: BDR - details: | - This fix ensures that if a subgroup member is parted while Raft subgroup routing is active, then another node can subsequently join that subgroup. - jira: BDR-5781 - addresses: "" - type: Bug Fix - impact: Medium -- relnote: Fixed a case where galloc sequence overflow wasn't being caught correctly. - component: BDR - details: | - This bug resulted in `nextval` call being stuck. - jira: BDR-5930 - addresses: 44755 - type: Bug Fix - impact: Medium -- relnote: Fixed group slot movement for witness and data nodes in presence of subscriber-only nodes and witness nodes. - component: BDR - details: | - For PGD 5.6.0 and later, there are no subscriptions from a subscriber-only node to a witness node. This caused a problem with movement of group slot on data nodes and witness nodes in the presence of subscriber-only nodes in the cluster. This bug could cause WAL to be held on both witness nodes and data nodes when there's a subscriber-only node in the cluster. - jira: BDR-5992 - addresses: - type: Bug Fix - impact: Medium -- relnote: Disallowed `GRANT` on BDR objects to non-BDR users. - component: BDR - jira: BDR-5759 - addresses: "" - type: Bug Fix - impact: Lowest -- relnote: Improved `bdr_init_physical` to be able to run without superuser. - component: BDR - details: | - Now only the `bdr_superuser` is required. - jira: BDR-5231 - addresses: "" - type: Enhancement - impact: Medium -- relnote: Fixed a bug where conflict resolution functions were executed also on the witness nodes. - component: BDR - jira: BDR-5807 - addresses: "" - type: Bug Fix - impact: Medium -- relnote: Fixed a bug to handle additional replication sets on witness nodes. 
- component: BDR - details: | - The witness node now only cares about the top replication set and thus allows it to miss replication sets and not error out. - jira: BDR-5880 - addresses: "41776, 44527" - type: Bug Fix - impact: Low -- relnote: Improved maintenance of the `bdr.leader` table. - component: BDR - jira: BDR-5703 - addresses: "" - type: Bug Fix - impact: Lowest -- relnote: Narrowed down `bdr.node_slots` output to list only relevant slots. - component: BDR - details: | - The slots are displayed based on the node type, its role, and the cluster topology. - jira: BDR-5253 - addresses: "" - type: Enhancement - impact: High -- relnote: Added `target_type` column to `bdr.node_slots` view to indicate slot purpose/status. - component: BDR - jira: BDR-5253 - addresses: "" - type: Enhancement - impact: High -- relnote: Fixed a bug where `bdr_init_physical` stopped responding when `synchronous_commit` is set to `remote_write/remote_apply`. - component: BDR - details: | - `bdr_init_physical` disables synchronous replication on a new node by resetting `synchronous_standby_names` to an empty string. A warning message reminds you to set `synchronous_standby_names` as needed. - jira: BDR-5918 - addresses: 44760 - type: Bug Fix - impact: Medium -- relnote: Added a GUC to support upgrade to 5.7.0 for clusters with optimized topology (a preview feature). - component: BDR - details: | - An upgrade to 5.7.0 from clusters that have `bdr.force_full_mesh` set to `off` to enable optimized topology - (a preview feature) must first set this GUC to `on` and then upgrade. After the entire cluster upgrades, - this GUC can be set to `off` again to enable optimized topology. - Having this GUC set to `off` during upgrade isn't supported. - jira: BDR-5872 - addresses: "" - type: Other - impact: Low -- relnote: Changed origin deletion to be done asynchronously when optimized topology is enabled. - component: BDR - details: | - In an optimized topology, origin names now use the generation ID of the node. This fixes an inconsistency in which some transactions can be lost or sent twice when a node is parted. - jira: BDR-5872 - addresses: "" - type: Bug Fix - impact: Low -- relnote: Fixed a crash during upgrades in a mixed-version cluster. - component: BDR - details: | - Upgrading from versions earlier than 5.6.0 to 5.6.0 and later in a mixed-version cluster with a standby or a node joining/parting could cause a crash. - jira: BDR-6087 - addresses: "" - type: Bug Fix - impact: Low -- relnote: Added new CLI command structure for easier access. - component: PGD CLI - details: | - The new CLI command structure is more intuitive and easier to use. The new structure is a "noun-verb" format, where the noun is the object you want to work with and the verb is the action you want to perform. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). - jira: "" - addresses: "" - type: Feature - impact: High -- relnote: Added new CLI commands for adding, removing, and updating commit scopes. - component: PGD CLI - details: | - The new commands are `pgd commit-scope show`, `pgd commit-scope create`, `pgd commit-scope update` and `pgd commit-scope drop`. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). - jira: "" - addresses: "" - type: Enhancement - impact: Medium -- relnote: Added support for legacy CLI command structure in the updated PGD CLI. - component: PGD CLI - details: | - The legacy CLI command structure is still supported in the updated PGD CLI.
The legacy command support is available for a limited time and will be removed in a future release. It is implemented as a wrapper around the new commands. - jira: "" - addresses: "" - type: Enhancement - impact: Medium -- relnote: Added a new assessment feature for local non-PGD nodes to the CLI - component: PGD CLI - details: | - The new feature allows you to assess the local node for compatibility with PGD. The feature is available as `pgd assess`. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). - jira: "" - addresses: "" - type: Feature - impact: High -- relnote: Added `pgd node upgrade` functionality to the PGD CLI. - component: PGD CLI - details: | - The new command allows you to upgrade a node to the latest version of PGD and Postgres. It integrates the operation of `bdr_pg_upgrade` into the CLI and is run locally. See [pgd node upgrade](/pgd/latest/cli/command_ref/node/upgrade) and [inplace upgrades](/pgd/latest/upgrades/inplace_upgrade) for more information. - jira: "" - addresses: "" - type: Feature - impact: High -- relnote: Added new subcommands to PGD CLI `node` and `group` for getting options. - component: PGD CLI - details: | - The new subcommands are `pgd node get-options` and `pgd group get-options`. Full details are available in [the CLI command reference](/pgd/latest/cli/command_ref). - jira: "" - addresses: "" - type: Enhancement - impact: Medium -- relnote: Deprecated proxy commands in new PGD CLI command structure. - component: PGD CLI - details: | - The proxy commands are deprecated in the new CLI command structure. The proxy commands are still available in the legacy CLI command structure. Proxy options can be set using the `pgd group set-option` command. - jira: "" - addresses: "" - type: Deprecation - impact: Medium -- relnote: Added new output formatting options `psql` and `markdown` to the PGD CLI. - component: PGD CLI - details: | - The new options allow you to format the output of the CLI commands in a psql-like or markdown format. Format options are now `json`, `psql`, `modern`, `markdown`, and `simple`, and the default is `simple`. - jira: "" - addresses: "" - type: Enhancement - impact: Medium -- relnote: Removed `yaml` output as an option in PGD CLI - component: PGD CLI - details: | - The `yaml` output option is removed from the PGD CLI. The `json` output option is still available. - jira: "" - addresses: "" - type: Deprecation - impact: Low -- relnote: Fixed proxy regression by improving dsn name support for read nodes - component: PGD Proxy - details: | - A regression in the way read nodes were identified in the proxy in 5.6.1 was fixed - by enabling support for different values in the `dsn` field's host and the node_name. - jira: BDR-5795 - addresses: "" - type: Bug Fix - impact: Medium -- relnote: The `bdr.group_camo_details` view now only lists data nodes belonging to the CAMO group. - component: BDR - details: | - The `bdr.group_camo_details` view now only lists data nodes belonging to the CAMO commit scope group. Previously, the view could include data nodes that were not part of the CAMO group, logical standby nodes, Subscriber-Only nodes, and witness nodes. - jira: BDR-6049 - addresses: 45354 - type: Bug Fix - impact: High -- relnote: Ubuntu 24.04 is now supported. - component: BDR - details: | - Packages are now available for Ubuntu 24.04 for all PGD components.
- jira: BDR-5790 - addresses: "" - type: Feature - impact: Medium diff --git a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.8.0.yml b/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.8.0.yml deleted file mode 100644 index facbeb436b4..00000000000 --- a/product_docs/docs/pgd/5.8/rel_notes/src/relnote_5.8.0.yml +++ /dev/null @@ -1,251 +0,0 @@ -# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/relnote-schema.json -product: EDB Postgres Distributed -version: 5.8.0 -date: 22 May 2025 - -components: - "BDR": 5.8.0 - "PGD CLI": 5.8.0 - "PGD Proxy": 5.8.0 - "Utilities": 5.8.0 -intro: | - EDB Postgres Distributed 5.8.0 includes a number of enhancements and bug fixes. -highlights: | - - PGD CLI improvements enhance usability and functionality. - - Additional functions simplify sequence management. - - Includes a fix for CVE-2025-2506. -relnotes: -- relnote: Improved handling of connection information and password obfuscation. - component: BDR - details: | - The shared memory information for connection information may get obfuscated when a password is present - and become useless. Instead of reading from there, we are now using the `primary_conninfo` GUC - which is now available on all supported PG versions. - jira: BDR-5923 - addresses: "41776" - type: Bug fix - impact: High - -- relnote: We now ensure that commit scope logic only runs on data nodes. - component: BDR - details: | - While commit scope processing does not have any direct effect on - non-data nodes, by skipping it altogether we can avoid potentially - confusing error messages. - jira: BDR-6325 - addresses: "" - type: Enhancement - impact: Medium - -- relnote: Prevent segfault in bdr.taskmgr_set_leader. - component: BDR - details: | - The node_name argument to bdr.taskmgr_set_leader is required. The function now throws an appropriate error in case node_name := NULL is passed in. - jira: BDR-6401 - addresses: "" - type: Bug fix - impact: Medium - -- relnote: Added "safety nets" for CDC failover. - details: | - CDC failover now has additional safety nets to ensure that the consumer - does not start replication from a node that is not the creator of the - replication slot. This is to prevent data loss or duplicate transactions. - The changes also add additional checks to ensure that the consumer does - not start replication from a node that does not have the required WAL - files to decode the transactions that are missing on the consumer but - were included in the initial snapshot that the newly joined node had - obtained (physical or logical). - component: BDR - jira: BDR-6125 - addresses: "" - type: Bug fix - impact: Highest - -- relnote: Added `bdr.galloc_chunk_info()` function to simplify sequences. - details: | - The `bdr.galloc_chunk_info()` function provides information about the chunk - allocation for a given sequence. This function returns the chunk ID, the - sequence ID, and the chunk size. This function is useful for debugging and - understanding how sequences are allocated in BDR. - component: BDR - impact: Medium - jira: BDR-6144 - addresses: "" - type: Enhancement - -- relnote: Fixed replication failure with concurrent updates on a non-unique index. - component: BDR - details: | - Updated to compare tuples on lookup to ensure it is the same when handling non-unique indexes.
- jira: BDR-5811 - addresses: "43523, 43802, 45244, 47815, 48007" - type: Bug fix - impact: Highest - -- relnote: Improved deadlock avoidance where bdr_init_physical and monitoring queries are running concurrently. - component: BDR - details: | - We have replaced TRUNCATEs with DELETEs from all BDR catalogs on a local node drop. - This is to avoid deadlock in bdr_init_physical if the user happens to run monitoring - queries during node joining / cleaning unwanted source node data. - jira: BDR-6313 - addresses: "46952" - type: Bug fix - impact: Medium - -- relnote: Ensure a new joiner processes the watermark message in the CATCHUP phase. - component: BDR - details: | - Setting nci->min_lsn to XactLastCommitEnd of watermark message Tx to ensure - CATCHUP phase finishes on new joiner only after watermark is processed. - jira: BDR-6397 - addresses: "" - type: Bug fix - impact: Medium - -- relnote: Improve the CLI debug messages. - details: | - Improve the formatting of the log messages to be more readable and symmetrical with Postgres log messages. - component: PGD CLI - jira: BDR-6101 - type: Enhancement - impact: Medium - -- relnote: Added the `--summary` and `--options` flags to the `pgd node show` CLI command. - details: | - Add the `--summary` and `--options` flags to `pgd node show` command to filter the output of the `pgd node show` command. - This also maintains symmetry with other `show` commands. - component: PGD CLI - jira: BDR-6145 - addresses: "" - type: Enhancement - impact: High - -- relnote: More GUCs verified in `pgd cluster verify` CLI command. - details: | - Add the `bdr.lock_table_locking` and `bdr.truncate_locking` GUCs to the list of GUCs verified in the `pgd cluster verify` command. - component: PGD CLI - jira: BDR-5308 - addresses: "" - type: Enhancement - impact: High - -- relnote: Added a new column for `pgd cluster verify --settings` CLI command output. - details: | - Add the `recommended_value` column to the result of the `pgd cluster verify --settings` command. - The column will not be displayed in tabular output but will be displayed in JSON output. - component: PGD CLI - jira: BDR-5308 - addresses: "" - type: Enhancement - impact: Medium - -- relnote: Display sorted output for CLI. - details: | - The output for the commands with tabular output will be sorted by the resource name. - For the commands that display more than one resource, the output will be sorted by each resource column in order. - component: PGD CLI - jira: BDR-6094 - addresses: "" - type: Enhancement - impact: Medium - -- relnote: Fixed the CLI `pgd cluster show` command's behavior with clock drift errors and a degraded cluster. - details: | - The `pgd cluster show` command would exit with an error regarding clock drift if only one node was up and running in an N node cluster, and not show the associated `health` and `summary` information. - The command now returns output for `health` and `summary`, while reporting an appropriate error for `clock-drift`. - component: PGD CLI - jira: BDR-6135 - addresses: "" - type: Bug Fix - impact: High - -- relnote: Fixed the CLI `pgd node show` command crashing if a non-existent node is given. - details: | - The `pgd node show` command crashed if a non-existent node was given to the command. - The command now fails gracefully with an appropriate error message. - component: PGD CLI - jira: BDR-6292 - addresses: "" - type: Bug Fix - impact: High - -- relnote: Improved pgd_bench error message related to CAMO.
- details: | - If executed with `--mode=camo`, but the provided test script is not wrapped - in an explicit transaction, pgd_bench will not be able to retrieve the - expected `transaction_id` value. Now the emitted error message contains - a hint about a possible missing transaction. - component: BDR - impact: Low - jira: BDR-6411 - type: Enhancement - -- relnote: Fixed deadlock issue in bdr_init_physical. - component: BDR - details: | - Fixed deadlock between bdr_init_physical cleaning unwanted node data and concurrent monitoring queries. - jira: BDR-6313 - addresses: 46952 - type: Bug Fix - impact: Low - -- relnote: Fixed a consistency issue in node join where a joining node could possibly miss some data sent to it from the source node. - component: BDR - details: | - Fixed an issue when a new node joining the cluster finishes CATCHUP phase before getting its replication progress against all data nodes. This could have caused a new node to be out of sync with the cluster. - jira: BDR-6397 - addresses: "" - type: Bug Fix - impact: Low - -- relnote: Fix Raft leader election timeout/failure after upgrade - component: BDR - details: | - Ensure that any custom value set in the deprecated GUC `bdr.raft_election_timeout` - is applied to its replacement `bdr.raft_global_election_timeout`. - jira: BDR-6068 - addresses: "" - type: Bug Fix - impact: Medium - -- relnote: Ensure that disabled subscriptions on subscriber-only nodes are not re-enabled - component: BDR - details: | - During subscription reconfiguration, if there is no change required to a subscription, - do not enable it since it could have been disabled explicitly by the user. - Skip reconfiguring subscriptions if there are no leadership changes. - jira: BDR-6270 - addresses: "46519" - type: Bug Fix - impact: Medium - -- relnote: Fixed the timestamp parsing issue for `pgd replication show` CLI command. - details: | - The `pgd replication show` command previously crashed when formatting EPAS timestamps. - component: PGD CLI - jira: BDR-6347 - addresses: "47280" - type: Bug Fix - impact: High - -- relnote: Subscriber-only nodes will not take a lock when running DDL - details: | - Subscriber-only nodes will no longer attempt to take a lock on the cluster when running DDL. The DDL will be executed locally and not replicated to other nodes. - component: BDR - jira: BDR-3767 - addresses: "47233" - type: Bug Fix - impact: Medium - - -- relnote: Addressed CVE-2025-2506, which could enable a user with CONNECT access to obtain read access to replicated tables. - component: BDR - details: | - An issue, [CVE-2025-2506](/security/advisories/cve20252506/), was discovered in pglogical which is present in later versions of BDR and PGD. The issue could enable a user with CONNECT access to obtain read access to replicated tables. - jira: BDR-6274 - addresses: "CVE-2025-2506" - type: Security - impact: Highest - diff --git a/product_docs/docs/pgd/5.8/routing/administering.mdx b/product_docs/docs/pgd/5.8/routing/administering.mdx deleted file mode 100644 index ca8631f0b24..00000000000 --- a/product_docs/docs/pgd/5.8/routing/administering.mdx +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: Administering PGD Proxy -navTitle: Administering ---- - -## Switching the write leader - -Switching the write leader is a manual operation that you can perform to change the node that's the write leader. -It can be useful when you want to perform maintenance on the current write leader node or when you want to change the write leader for any other reason. 
-When changing write leader, there are two modes: `strict` and `fast`. -In `strict` mode, the lag is checked before switching the write leader. It waits until the lag is less than `route_writer_max_lag` before starting the switchover. This is the default. -In `fast` mode, the write leader is switched immediately. -You can also set a timeout parameter to specify how long to wait for the switchover when the mode is `strict`. (Defaults to 30s) - -!!!Note -The set-leader operation is not a guaranteed operation. If, due to a timeout or for other reasons, the switch to the given target node fails, PGD may elect another node as write leader in its place. This other node can include the current write leader node. PGD always tries to elect a new write leader if the set-leader operation fails. -!!! - -### Using SQL - -You can perform a switchover operation that explicitly changes the node that's the write leader to another node. - -Use the [`bdr.routing_leadership_transfer()`](/pgd/latest/reference/routing#bdrrouting_leadership_transfer) function. - -For example, to switch the write leader to node `node1` in group `group1`, use the following SQL command: - -```sql -SELECT bdr.routing_leadership_transfer('group1', 'node1','strict','10s'); -``` - -This command switches the write leader using `strict` mode and waits for up to 10 seconds for the switchover to complete. Those are default settings, so you can omit them, as follows: - -```sql -SELECT bdr.routing_leadership_transfer('group1', 'node1'); -``` - -### Using PGD CLI - -You can use the [`group set-leader`](/pgd/latest/cli/command_ref/group/set-leader/) command to perform a switchover operation. - -For example, to switch the write leader to node `node1` in group `group1`, use the following command: - -```sh -pgd group group1 set-leader node1 --strict --timeout 10s -``` - -This command switches the write leader using `strict` mode and waits for up to 10 seconds for the switchover to complete. Those are default settings, so you can omit them, as follows: - -```sh -pgd group group1 set-leader node1 -``` diff --git a/product_docs/docs/pgd/5.8/routing/configuration.mdx b/product_docs/docs/pgd/5.8/routing/configuration.mdx deleted file mode 100644 index 3e11ee0964c..00000000000 --- a/product_docs/docs/pgd/5.8/routing/configuration.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: "PGD Proxy configuration" -navTitle: "Configuration" ---- - -## Group-level configuration - -Configuring the routing is done either through SQL interfaces or through -PGD CLI. - -You can enable routing decisions by calling the [`bdr.alter_node_group_option()`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option) function. -For example: - -```text -SELECT bdr.alter_node_group_option('region1-group', 'enable_proxy_routing', 'true') -``` - -You can disable it by setting the same option to `false`. - -Additional group-level options affect the routing decisions: - -- `route_writer_max_lag` — Maximum lag in bytes of the new write candidate to be - selected as write leader. If no candidate passes this, no writer is - selected automatically. -- `route_reader_max_lag` — Maximum lag in bytes for a node to be considered a viable - read-only node (PGD 5.5.0 and later). - -## Node-level configuration - -Set per-node configuration of routing using [`bdr.alter_node_option()`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option). The -available options that affect routing are: - -- `route_dsn` — The dsn used by proxy to connect to this node.
-- `route_priority` — Relative routing priority of the node against other nodes in - the same node group. Used only when electing a write leader. -- `route_fence` — Determines whether the node is fenced from routing. When fenced, the node can't receive connections - from PGD Proxy. It therefore can't become the write leader or be available in the read-only node pool. -- `route_writes` — Determines whether writes can be routed to this node, that is, whether the node - can become write leader. -- `route_reads` — Determines whether read-only connections can be routed to this node (PGD 5.5.0 and later). - -## Proxy-level configuration - -You can configure the proxies using SQL interfaces. - -### Creating and dropping proxy configurations - -You can add a proxy configuration using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy). -For example, `SELECT bdr.create_proxy('region1-proxy1', 'region1-group');` -creates the default configuration for a proxy named `region1-proxy1` in the PGD group `region1-group`. - -The name of the proxy given here must be same as the name given in the proxy configuration file. - -You can remove a proxy configuration using `SELECT bdr.drop_proxy('region1-proxy1')`. -Dropping a proxy deactivates it. - -### Altering proxy configurations - -You can configure options for each proxy using the [`bdr.alter_proxy_option()`](/pgd/latest/reference/routing#bdralter_proxy_option) function. - -The available options are: - -- `listen_address` — Address for the proxy to listen on. -- `listen_port` — Port for the proxy to listen on. -- `max_client_conn` — Maximum number of connections for the proxy to accept. -- `max_server_conn` — Maximum number of connections the proxy can make to the - Postgres node. -- `server_conn_timeout` — Connection timeout for server connections. -- `server_conn_keepalive` — Keepalive interval for server connections. -- `consensus_grace_period` — Duration for which proxy continues to route even upon loss -of a Raft leader. If set to `0s`, proxy stops routing immediately. -- `read_listen_address` — Address for the read-only proxy to listen on. -- `read_listen_port` — Port for the read-only proxy to listen on. -- `read_max_client_conn` — Maximum number of connections for the read-only proxy to accept. -- `read_max_server_conn` — Maximum number of connections the read-only proxy can make to the - Postgres node. -- `read_server_conn_keepalive` — Keepalive interval for read-only server connections. -- `read_server_conn_timeout` — Connection timeout for read-only server connections. -- `read_consensus_grace_period` — Duration for which read-only proxy continues to route even upon loss of a Raft leader. diff --git a/product_docs/docs/pgd/5.8/routing/index.mdx b/product_docs/docs/pgd/5.8/routing/index.mdx deleted file mode 100644 index 27308c310ab..00000000000 --- a/product_docs/docs/pgd/5.8/routing/index.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: "PGD Proxy" -navTitle: "PGD Proxy" -indexCards: none -description: How to use PGD Proxy to maintain consistent connections to the PGD cluster. -navigation: - - proxy - - installing_proxy - - configuration - - administering - - monitoring - - readonly - - raft ---- - -Managing application connections is an important part of high availability. PGD Proxy offers a way to manage connections to the EDB Postgres Distributed cluster. It acts as a proxy layer between the client application and the Postgres database. 
-
-* [PGD Proxy overview](/pgd/latest/routing/proxy) provides an overview of the PGD Proxy, its processes, and how it interacts with the EDB Postgres Distributed cluster.
-
-* [Installing the PGD Proxy service](/pgd/latest/routing/installing_proxy) covers installation of the PGD Proxy service on a host.
-
-* [Configuring PGD Proxy](/pgd/latest/routing/configuration) details the three levels (group, node, and proxy) of configuration on a cluster that control how the PGD Proxy service behaves.
-
-* [Administering PGD Proxy](/pgd/latest/routing/administering) shows how to switch the write leader and manage the PGD Proxy.
-
-* [Monitoring PGD Proxy](/pgd/latest/routing/monitoring) looks at how to monitor PGD Proxy through the cluster and at a service level.
-
-* [Read-only routing](/pgd/latest/routing/readonly) explains how the read-only routing feature in PGD Proxy enables read scalability.
-
-* [Raft](/pgd/latest/routing/raft) provides an overview of the Raft consensus mechanism used to coordinate PGD Proxy.
diff --git a/product_docs/docs/pgd/5.8/routing/installing_proxy.mdx b/product_docs/docs/pgd/5.8/routing/installing_proxy.mdx
deleted file mode 100644
index 4d0876270da..00000000000
--- a/product_docs/docs/pgd/5.8/routing/installing_proxy.mdx
+++ /dev/null
@@ -1,129 +0,0 @@
----
-title: "Installing PGD Proxy"
-navTitle: "Installing PGD Proxy"
----
-
-## Installing PGD Proxy
-
-You can use two methods to install and configure PGD Proxy to manage an EDB Postgres Distributed cluster. The recommended way to install and configure PGD Proxy is to use the EDB Trusted Postgres Architect (TPA) utility for cluster deployment and management.
-
-### Installing through TPA
-
-If the PGD cluster is being deployed through TPA, then TPA installs and configures PGD Proxy automatically as per the recommended architecture. If you want to install PGD Proxy on any other node in a PGD cluster, then you need to attach the `pgd-proxy` role to that instance in the TPA configuration file. Also set the `bdr_child_group` parameter before deploying, as this example shows. See [Trusted Postgres Architect](../deploy-config/deploy-tpa/) for more information.
-
-```yaml
-- Name: proxy-a1
-  location: a
-  node: 4
-  role:
-  - pgd-proxy
-  vars:
-    bdr_child_group: group_a
-  volumes:
-  - device_name: /dev/sdf
-    volume_type: none
-```
-
-#### Configuration
-
-PGD Proxy connects to the PGD database for its internal operations, like getting proxy options and getting write leader details. Therefore, it needs a list of endpoints/DSNs for connecting to the PGD nodes. PGD Proxy expects these configurations in a local config file, `pgd-proxy-config.yml`. Following is a working example of the `pgd-proxy-config.yml` file:
-
-```yaml
-log-level: debug
-cluster:
-  name: cluster-name
-  endpoints:
-    - "host=bdr-a1 port=5432 dbname=bdrdb user=pgdproxy"
-    - "host=bdr-a3 port=5432 dbname=bdrdb user=pgdproxy"
-    - "host=bdr-a2 port=5432 dbname=bdrdb user=pgdproxy"
-  proxy:
-    name: "proxy-a1"
-```
-
-By default, in a cluster created through TPA, `pgd-proxy-config.yml` is located in the `/etc/edb/pgd-proxy` directory. PGD Proxy searches for `pgd-proxy-config.yml` in the following locations, in order of precedence from high to low:
-
-  1. `/etc/edb/pgd-proxy` (default)
-  2. `$HOME/.edb/pgd-proxy`
-
-If you rename the file or move it to another location, specify the new name and location using the optional `-f` or `--config-file` flag when starting the service, as the sketch that follows shows. See the [sample service file](#pgd-proxy-service).
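-
-For example, a minimal invocation of the proxy with a config file in the second search location might look like the following. This is a sketch; the binary path and config file location depend on your installation:
-
-```sh
-# Start PGD Proxy, pointing it at a relocated configuration file
-/usr/bin/pgd-proxy -f $HOME/.edb/pgd-proxy/pgd-proxy-config.yml
-```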
-
-You can set the log level for the PGD Proxy service using the top-level config parameter `log-level`, as shown in the sample config. The valid values for `log-level` are `debug`, `info`, `warn`, and `error`.
-
-`cluster.endpoints` and `cluster.proxy.name` are mandatory fields in the config file. PGD Proxy always tries to connect to the first endpoint in the list. If it fails, it tries the next endpoint, and so on.
-
-PGD Proxy uses the endpoints given in the local config file only at proxy startup. After that, PGD Proxy retrieves the list of actual endpoints (`route_dsn`) from the PGD catalog. Therefore, the node option `route_dsn` must be set for each PGD Proxy node. See [route_dsn](configuration) for more information.
-
-##### Configuring health check
-
-PGD Proxy provides [HTTP(S) health check APIs](monitoring/#proxy-health-check). If the health checks are required, you can enable them by adding the following configuration parameters to the pgd-proxy configuration file. By default, they're disabled.
-
-```yaml
-cluster:
-  name: cluster-name
-  endpoints:
-    - "host=bdr-a1 port=5432 dbname=bdrdb user=pgdproxy"
-    - "host=bdr-a3 port=5432 dbname=bdrdb user=pgdproxy"
-    - "host=bdr-a2 port=5432 dbname=bdrdb user=pgdproxy"
-  proxy:
-    name: "proxy-a1"
-    endpoint: "host=proxy-a1 port=6432 dbname=bdrdb user=pgdproxy"
-    http:
-      enable: true
-      host: "0.0.0.0"
-      port: 8080
-      secure: false
-      cert_file: ""
-      key_file: ""
-      probes:
-        timeout: 10s
-```
-
-You can enable the API by adding the config `cluster.proxy.http.enable: true`. When enabled, an HTTP server listens on the default port, `8080`, with a 10-second `timeout` and no HTTPS support.
-
-To enable HTTPS, set the config parameter `cluster.proxy.http.secure: true`. If it's set to `true`, you must also set the `cert_file` and `key_file`.
-
-The `cluster.proxy.endpoint` is an endpoint used by the proxy to connect to the current write leader as part of its checks. When `cluster.proxy.http.enable` is `true`, `cluster.proxy.endpoint` must also be set. It can be the same as the BDR node [route_dsn](configuration), with the host set to the `listen_address` and the port set to the `listen_port` from the [proxy options](configuration). If required, you can add connection string parameters in this endpoint, like `sslmode`, `sslrootcert`, `user`, and so on.
-
-#### PGD Proxy user
-
-The database user specified in the endpoint doesn't need to be a superuser. Typically, in the TPA environment, `pgdproxy` is an OS user as well as a database user with the `bdr_superuser` role.
-
-#### PGD Proxy service
-
-We recommend running PGD Proxy as a systemd service. The `pgd-proxy` service unit file is located at `/etc/systemd/system/pgd-proxy.service` by default. Following is the sample service file created by TPA:
-
-```text
-[Unit]
-Description=PGD Proxy
-
-[Service]
-Type=simple
-User=pgdproxy
-Group=pgdproxy
-Restart=on-failure
-RestartSec=1s
-ExecStart=/usr/bin/pgd-proxy -f /etc/edb/pgd-proxy/pgd-proxy-config.yml
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=pgd-proxy
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Use these commands to manage the `pgd-proxy` service:
-
-```sh
-systemctl status pgd-proxy
-systemctl stop pgd-proxy
-systemctl restart pgd-proxy
-```
-
-### Installing manually
-
-You can manually install PGD Proxy on any Linux machine using `.deb` and `.rpm` packages available from the PGD repository. The package name is `edb-pgd5-proxy`. For example:
-
-```sh
-# for Debian
-sudo apt-get install edb-pgd5-proxy
-```
diff --git a/product_docs/docs/pgd/5.8/routing/monitoring.mdx b/product_docs/docs/pgd/5.8/routing/monitoring.mdx
deleted file mode 100644
index d8383e276be..00000000000
--- a/product_docs/docs/pgd/5.8/routing/monitoring.mdx
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: Monitoring PGD Proxy
-navTitle: Monitoring
----
-
-You can monitor proxies at the cluster and group level or at the process level.
-
-## Monitoring through the cluster
-
-### Using SQL
-
-The current configuration of every group is visible in the [`bdr.node_group_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_group_routing_config_summary) view.
-
-The [`bdr.node_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_routing_config_summary) view shows the current per-node routing configuration.
-
-[`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) shows per-proxy configuration.
-
-## Monitoring at the process level
-
-### Proxy health check
-
-PGD Proxy provides the following HTTP(S) health check API endpoints. The API endpoints respond to `GET` requests. You need to enable and configure the endpoints before using them. See [Configuration](installing_proxy#configuring-health-check).
-
-| Endpoint | Description |
-| --- | --- |
-| `/health/is-ready` | Checks if the proxy can successfully route connections to the current write leader. |
-| `/health/is-live` | Checks if the proxy is running. |
-| `/health/is-write-ready` | Checks if the proxy can successfully route connections to the current write leader (PGD 5.5.0 and later). |
-| `/health/is-read-only-ready` | Checks if the proxy can successfully route read-only connections (PGD 5.5.0 and later). |
-
-#### Readiness
-
-On receiving a valid `GET` request:
-
-* When in default (write) mode, the proxy checks if it can successfully route connections to the current write leader.
-* When in read-only mode, the proxy checks if it can successfully route read-only connections.
-* When in any mode, the proxy first checks if it can successfully route connections to the current write leader. If it can, the check is successful. If not, it checks if it can route a read-only connection. If it can, the check is successful. If not, the check fails.
-
-If the check returns successfully, the API responds with a body containing `true` and an HTTP status code `200 (OK)`. Otherwise, it returns a body containing `false` with the HTTP status code `500 (Internal Server Error)`.
-
-#### Liveness
-
-Liveness checks return either `true` with HTTP status code `200 (OK)` or an error. They never return `false` because the HTTP server listening for the request is stopped if the PGD Proxy service fails to start or exits.
-
-## Proxy log location
-
-Proxies also write logs to system logging, where they can be monitored with other system services.
-
-### syslog
-
-- Debian based - `/var/log/syslog`
-- Red Hat based - `/var/log/messages`
-
-Use the `journalctl` command to filter and view logs for troubleshooting PGD Proxy. The following are sample commands for quick reference:
-
-```sh
-journalctl -u pgd-proxy -n100 -f
-journalctl -u pgd-proxy --since today
-journalctl -u pgd-proxy --since "10 min ago"
-journalctl -u pgd-proxy --since "2022-10-20 16:21:50" --until "2022-10-20 16:21:55"
-```
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5.8/routing/proxy.mdx b/product_docs/docs/pgd/5.8/routing/proxy.mdx
deleted file mode 100644
index ec697d56914..00000000000
--- a/product_docs/docs/pgd/5.8/routing/proxy.mdx
+++ /dev/null
@@ -1,105 +0,0 @@
----
-title: "EDB Postgres Distributed Proxy overview"
-navTitle: "PGD Proxy overview"
-indexCards: simple
-directoryDefaults:
-  description: "The PGD Proxy service acts as a proxy layer between the client application and Postgres for your PGD cluster."
----
-
-
-Especially with asynchronous replication, having a consistent write leader node is important to avoid conflicts and guarantee availability for the application.
-
-The two parts of EDB Postgres Distributed's proxy layer are:
-
-* Proxy configuration and routing information, which is maintained by the PGD consensus mechanism.
-* The PGD Proxy service, which is installed on a host. It connects to the PGD cluster, where it reads its configuration and listens for changes to the routing information.
-
-This layer is normally installed in a highly available configuration (at least two instances of the proxy service per PGD group).
-
-Once configured, the PGD Proxy service monitors routing changes as decided by the EDB Postgres Distributed cluster. It acts on these changes to ensure that connections are consistently routed to the correct nodes.
-
-Configuration changes to the PGD Proxy service are made through the PGD cluster.
-The PGD Proxy service reads its configuration from the PGD cluster, but the proxy service must be restarted to apply those changes.
-
-The information about the currently selected write and read nodes is visible in `bdr.node_group_routing_summary`. This is a node-local view: the proxy always reads from the Raft leader to get a current and consistent view.
-
-## Leader selection
-
-The write leader is selected by the current Raft leader for the group the proxy is part of. This could be the Raft leader for a subgroup or the leader for the entire cluster. The leader is selected from candidate nodes that are reachable and meet the criteria based on the configuration as described in [PGD Proxy cluster configuration](#pgd-proxy-cluster-configuration). To be a viable candidate, the node must have `route_writes` enabled and `route_fence` disabled and be within `route_writer_max_lag` (if enabled) from the previous leader. The candidates are ordered by their `route_priority` in descending order and by the lag from the previous leader in ascending order.
-
-A new leader selection process starts either when there's currently no existing leader or when connectivity to the existing leader is lost. (If there's no existing write leader, it could be because there were no valid candidates or because Raft was down.)
-
-A node is considered connected if the last Raft protocol message received from the leader isn't older than the Raft election timeout (see [Internal settings - Raft timeouts](../reference/pgd-settings#internal-settings---raft-timeouts)).
-
-Since the Raft leader sends a heartbeat three times within each election timeout period, the leader node needs to miss the replies to three heartbeats before it's considered disconnected.
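-
-To check which node each group has currently selected as write leader, you can query the node-local view mentioned earlier. A minimal sketch, run on any data node:
-
-```sql
--- Shows the currently selected write and read nodes per group
-SELECT * FROM bdr.node_group_routing_summary;
-```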
-
-## PGD Proxy cluster configuration
-
-The PGD cluster always has at least one top-level group and one data group. PGD elects the write leader for each data group that has the `enable_proxy_routing` and `enable_raft` options set to true.
-
-The cluster also maintains proxy configurations for each group. Each configuration has a name and is associated with a group. You can attach a proxy to a top-level group or data group. You can attach multiple proxies to each group. When a PGD Proxy service starts running on a host, it has a name in its local configuration file, and it connects to a node in a group. From there, it uses the name to look up its complete configuration as stored on the group.
-
-## PGD Proxy service
-
-The EDB Postgres Distributed Proxy (PGD Proxy) service is a process that acts as an abstraction layer between the client application and Postgres. It interfaces with the PGD consensus mechanism to get the identity of the current write leader node and redirects traffic to that node. It also optionally supports a read-only mode where it can route read-only queries to nodes that aren't the write leader, improving the overall performance of the cluster.
-
-PGD Proxy is a TCP layer 4 proxy.
-
-## How they work together
-
-Upon starting, PGD Proxy connects to one of the endpoints given in the local config file. It fetches:
-
-- DB connection information for all nodes.
-- Proxy options like listen address and listen port.
-- Routing details, including the current write leader in default mode, read nodes in read-only mode, or both in any mode.
-
-The endpoints given in the config file are used only at startup. After that, actual endpoints are taken from the PGD catalog's `route_dsn` field in [`bdr.node_routing_config_summary`](/pgd/latest/reference/catalogs-internal#bdrnode_routing_config_summary).
-
-PGD manages write leader election. PGD Proxy interacts with PGD to get write leader change event notifications on Postgres notify/listen channels and routes client traffic to the current write leader. PGD Proxy disconnects all existing client connections on write leader change or when the write leader is unavailable. Write leader election is a Raft-backed activity and is subject to Raft leader availability. PGD Proxy closes new client connections if the write leader is unavailable.
-
-PGD Proxy responds to write leader change events that can be categorized into two modes of operation: *failover* and *switchover*.
-
-Automatic transfer of write leadership from the current write leader node to a new node in the event of a Postgres or operating system crash is called *failover*. PGD elects a new write leader when the current write leader goes down or becomes unresponsive. Once the new write leader is elected by PGD, PGD Proxy closes existing client connections to the old write leader and redirects new client connections to the newly elected write leader.
-
-User-controlled, manual transfer of write leadership from the current write leader to a new target leader is called *switchover*. Switchover is triggered through the [PGD CLI group set-leader](/pgd/latest/cli/command_ref/group/set-leader/) command. The command is submitted to PGD, which attempts to elect the given target node as the new write leader. Similar to failover, PGD Proxy closes existing client connections and redirects new client connections to the newly elected write leader. This is useful during server maintenance, for example, if the current write leader node needs to be stopped for maintenance like a server update or OS patch update.
-
-If the proxy is configured to support read-only routing, it can route read-only queries to a pool of nodes that aren't the write leader. The pool of nodes is maintained by the PGD cluster, and proxies listen for changes to the pool. When the pool changes, the proxy updates its routing configuration, starts routing read-only queries to the new pool of nodes, and disconnects existing client connections to nodes that have left the pool.
-
-### Consensus grace period
-
-PGD Proxy provides the `consensus_grace_period` proxy option that you can use to configure the routing behavior upon loss of a Raft leader. PGD Proxy continues to route to the current write leader (if it's available) for this duration. If a new Raft leader isn't elected during this period, the proxy stops routing. If set to `0s`, PGD Proxy stops routing immediately.
-
-The main purpose of this option is to allow users to configure the write behavior when the Raft leader is lost. When the Raft leader isn't present in the cluster, it's not always guaranteed that the current write leader seen by the proxy is the correct one. In some cases, like the network partition in the following example, it's possible for two write leaders to be seen by two different proxies attached to the same group, increasing the chances of write conflicts. If this isn't the behavior you want, you can set the previously mentioned `consensus_grace_period` to `0s`. This setting configures the proxy to stop routing and close existing open connections immediately when it detects that the Raft leader is lost.
-
-#### Network partition example
-
-Consider a 3-data-node group with a proxy on each data node. In this case, if the current write leader gets network partitioned or isolated, then the data nodes present in the majority partition elect a new write leader. If `consensus_grace_period` is set to a non-zero value, for example, `10s`, then the proxy present on the previous write leader continues to route writes for this duration.
-
-In this case, if the grace period is kept too high, then writes continue to happen on the two write leaders. This condition increases the chances of write conflicts.
-
-Having said that, most of the time, upon loss of the current Raft leader, the new Raft leader gets elected by BDR within a few seconds if more than half of the nodes (quorum) are still up. Hence, if the Raft leader is down but the write leader is still up, then the proxy can be configured to allow routing by setting `consensus_grace_period` to a non-zero, positive value. The proxy waits for the Raft leader to get elected during this period before stopping routing. This might be helpful in some cases where availability is more important.
-
-### Read consensus grace period
-
-Similar to `consensus_grace_period`, a `read_consensus_grace_period` option is available for read-only routing. This option configures the routing behavior for read-only queries upon loss of a Raft leader. PGD Proxy continues to route to the current read nodes for this duration. If a new Raft leader isn't elected during this period, the proxy stops routing read-only queries. If set to `0s`, PGD Proxy stops routing read-only queries immediately.
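-
-For example, to allow read-only routing to continue for 10 seconds after a Raft leader is lost, you can set this option with the [`bdr.alter_proxy_option()`](/pgd/latest/reference/routing#bdralter_proxy_option) function described in [PGD Proxy configuration](configuration). A sketch, assuming a proxy named `proxy-a1`; remember that the proxy must be restarted for configuration changes to take effect:
-
-```sql
-SELECT bdr.alter_proxy_option('proxy-a1', 'read_consensus_grace_period', '10s');
-```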
-
-### Multi-host connection strings
-
-The PostgreSQL C client library (libpq) allows you to specify multiple host names in a single connection string for simple failover. This ability is also supported by client libraries (drivers) in some other programming languages. It works well for failing over across PGD Proxy instances that are down or inaccessible.
-
-If an application connects to a proxy instance that doesn't have access to a write leader, the connection will simply fail. No other hosts in the multi-host connection string will be tried. This behavior is consistent with the behavior of PostgreSQL client libraries with other proxies like HAProxy or pgbouncer. Access to a write leader requires that the group the instance is part of has been able to select a write leader for the group.
diff --git a/product_docs/docs/pgd/5.8/routing/raft/01_raft_subgroups_and_tpa.mdx b/product_docs/docs/pgd/5.8/routing/raft/01_raft_subgroups_and_tpa.mdx
deleted file mode 100644
index d43f87bc205..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/01_raft_subgroups_and_tpa.mdx
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: "Creating Raft subgroups using TPA"
----
-
-The `tpaexec configure` command enables Raft subgroups if the `--enable_proxy_routing local` option is set. TPA uses the term *locations* to reflect the common use case of subgroups that map to physical/regional domains. When the configuration is generated, each location name given is stored under a generated group name that's based on the location name.
-
-## Creating Raft subgroups using TPA
-
-This example creates a two-location cluster with three data nodes in each location. The nodes in each location are part of a PGD Raft subgroup for the location.
-
-The top-level group's name is `pgdgroup`.
-
-The top-level group has two locations: `us_east` and `us_west`. These locations are mapped to two subgroups: `us_east_subgroup` and `us_west_subgroup`.
-
-Each location has four nodes: three data nodes and a barman backup node. The three data nodes also cohost PGD Proxy. The configuration can be visualized like this:
-
-![6 Node Cluster with 2 Raft Subgroups](images/Tmp6NodeRaftSubgroups.png)
-
-The barman nodes don't participate in the subgroup and, by extension, the Raft group. They're therefore not shown. This diagram is a snapshot of a potential state of the cluster, with the West Raft group having selected west_1 as write leader and west_2 as its own Raft leader. On the East, east_1 is write leader while east_3 is Raft leader. The entire cluster is contained within the top-level Raft group. There, west_3 is currently Raft leader.
-
-To create this configuration, you run:
-
-```
-tpaexec configure pgdgroup --architecture PGD-Always-ON --location-names us_east us_west --data-nodes-per-location 3 --epas 16 --no-redwood --enable_proxy_routing local --hostnames-from hostnames.txt
-```
-
-Where `hostnames.txt` contains:
-
-```
-east1
-east2
-east3
-eastbarman
-west1
-west2
-west3
-westbarman
-```
-
-## The configuration file
-
-The generated `config.yml` file has a `bdr_node_groups` section that contains the top-level group `pgdgroup` and the two subgroups `us_east_subgroup` and `us_west_subgroup`. Each of those subgroups has a location set (`us_east` and `us_west`) and two other options that are set to true:
-
-- `enable_raft`, which activates Raft in the subgroup
-- `enable_proxy_routing`, which enables the PGD Proxy routers to route traffic to the subgroup’s write leader
-
-Here's an example generated by the sample tpaexec command:
-
-
-```yaml
-cluster_vars:
-  apt_repository_list: []
-  bdr_database: bdrdb
-  bdr_node_group: pgdgroup
-  bdr_node_groups:
-  - name: pgdgroup
-  - name: us_east_subgroup
-    options:
-      enable_proxy_routing: true
-      enable_raft: true
-      location: us_east
-    parent_group_name: pgdgroup
-  - name: us_west_subgroup
-    options:
-      enable_proxy_routing: true
-      enable_raft: true
-      location: us_west
-    parent_group_name: pgdgroup
-  bdr_version: '5'
-```
-
-Every node instance has an entry in the instances list. In that entry, `bdr_child_group` appears in the variables section, set to the subgroup the node belongs to. Here's an example generated by the sample tpaexec command:
-
-```yaml
-instances:
-- Name: east1
-  backup: eastbarman
-  location: us_east
-  node: 1
-  role:
-  - bdr
-  - pgd-proxy
-  vars:
-    bdr_child_group: us_east_subgroup
-    bdr_node_options:
-      route_priority: 100
-- Name: east2
-  location: us_east
-  node: 2
-  role:
-  - bdr
-  - pgd-proxy
-  vars:
-    bdr_child_group: us_east_subgroup
-    bdr_node_options:
-      route_priority: 100
-- Name: east3
-  location: us_east
-  node: 3
-  role:
-  - bdr
-  - pgd-proxy
-  vars:
-    bdr_child_group: us_east_subgroup
-    bdr_node_options:
-      route_priority: 100
-- Name: eastbarman
-  location: us_east
-  node: 4
-  role:
-  - barman
-```
-
-The one node in this location that doesn't have a `bdr_child_group` setting is the barman node because it doesn't participate in the Raft decision-making process.
diff --git a/product_docs/docs/pgd/5.8/routing/raft/02_raft_subgroups_and_pgd_cli.mdx b/product_docs/docs/pgd/5.8/routing/raft/02_raft_subgroups_and_pgd_cli.mdx
deleted file mode 100644
index 2942fbdf400..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/02_raft_subgroups_and_pgd_cli.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: "Working with Raft subgroups and PGD CLI"
----
-
-You can view the status of your nodes and subgroups with the [pgd](../../cli/) CLI command. The examples here assume a cluster as configured in [Creating Raft subgroups with TPA](01_raft_subgroups_and_tpa).
-
-## Viewing nodes with pgd
-
-The pgd command is `nodes list`.
-
-```shell
-pgd nodes list
-__OUTPUT__
-Node Name Group Name Node Kind Join State Node Status
---------- ---------- --------- ---------- ------------
-east1     us_east    data      ACTIVE     Up
-east2     us_east    data      ACTIVE     Up
-east3     us_east    data      ACTIVE     Up
-west1     us_west    data      ACTIVE     Up
-west2     us_west    data      ACTIVE     Up
-west3     us_west    data      ACTIVE     Up
-```
-
-## Viewing groups (and subgroups) with pgd
-
-To show the groups in a PGD deployment, along with their names and some attributes, use the PGD CLI command `groups list`.
-
-```shell
-pgd groups list
-__OUTPUT__
-Group Name Parent Group Name Group Type Nodes
----------- ----------------- ---------- -----
-pgdgroup                     global     0
-us_east    pgdgroup          data       3
-us_west    pgdgroup          data       3
-```
-
diff --git a/product_docs/docs/pgd/5.8/routing/raft/03_migrating_to_raft_subgroups.mdx b/product_docs/docs/pgd/5.8/routing/raft/03_migrating_to_raft_subgroups.mdx
deleted file mode 100644
index 9b661025c7e..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/03_migrating_to_raft_subgroups.mdx
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Migrating to Raft subgroups
----
-
-You can introduce Raft subgroups in a running PGD installation.
-
-
-## Migrating to Raft subgroups (using SQL only)
-
-To enable Raft subgroups in an existing cluster, these configuration steps are needed:
-
-* Identify the top-level group for all nodes in the PGD cluster. An existing cluster already has a top-level group that all nodes belong to.
-* Create a subgroup for each location. Use `bdr.create_node_group` with a `parent_group_name` argument that gives the top-level group as its value.
-* Add each node at each location to its location’s subgroup using `bdr.switch_node_group()`.
-* Alter each location’s subgroup to enable Raft for the group. Use `bdr.alter_node_group_option()`, setting the `enable_raft` option to `true`.
-
-### Enabling subgroup Raft node group (using SQL only)
-
-```sql
-SELECT bdr.alter_node_group_option('$group_name', 'enable_raft', 'true');
-```
-
-
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5.8/routing/raft/04_raft_elections_in_depth.mdx b/product_docs/docs/pgd/5.8/routing/raft/04_raft_elections_in_depth.mdx
deleted file mode 100644
index e4bb9d78662..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/04_raft_elections_in_depth.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: Raft elections in depth
----
-
-The selection of a write leader in PGD relies on PGD's Raft mechanism. The Raft mechanism is completely internal to PGD's BDR Postgres extension and operates transparently. The nodes within a group begin by establishing a Raft leader among the nodes of the group.
-
-## Node interaction
-
-With the Raft leader established, the leader then queries the catalog to see if a write leader for proxy routing was designated.
-
-If no write leader is designated, the Raft leader takes steps to designate a new write leader. The process starts by querying all the nodes in the group to establish their state. The resulting list of nodes is then filtered for ineligible nodes (for example, witness nodes) and prioritized. The top entry on the list is then set as the new write leader in the Raft log.
-
-## Proxy interaction
-
-All proxies initially connect to any data node in their group. This behavior allows them to query the catalog for the current write leader and begin routing connections to that node.
-
-They connect to the Raft leader and listen for changes to the catalog entry for write leader. When notified of a change in write leader, they reconfigure routing and send connections to the new write leader.
-
-Both the node and proxy interaction are shown on the following sequence diagram. Two nodes and one proxy are involved: the nodes coordinate which node becomes write leader, and the proxy waits to learn which node is the write leader.
-
-![Sequence Diagram](images/PGD5sequencediagram.png)
-
diff --git a/product_docs/docs/pgd/5.8/routing/raft/images/PGD5sequencediagram.png b/product_docs/docs/pgd/5.8/routing/raft/images/PGD5sequencediagram.png
deleted file mode 100644
index 147b4ac32c0..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/images/PGD5sequencediagram.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7490ca693b1b776cd49f7c28ed397c7808282f6f21f3e3cea316a3df0940eb65
-size 59400
diff --git a/product_docs/docs/pgd/5.8/routing/raft/images/Tmp6NodeRaftSubgroups.png b/product_docs/docs/pgd/5.8/routing/raft/images/Tmp6NodeRaftSubgroups.png
deleted file mode 100644
index 9b839191f38..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/images/Tmp6NodeRaftSubgroups.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ae888b4eb72a94468b7a6b84b39aad08b7a26e3a02bae87cbf7540b932069c1a
-size 233200
diff --git a/product_docs/docs/pgd/5.8/routing/raft/index.mdx b/product_docs/docs/pgd/5.8/routing/raft/index.mdx
deleted file mode 100644
index 6f017cd5739..00000000000
--- a/product_docs/docs/pgd/5.8/routing/raft/index.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-
-title: Proxies, Raft, and Raft subgroups
-
----
-
-PGD manages its metadata using a Raft model where a top-level group spans all the data nodes in the PGD installation. A Raft leader is elected by the top-level group and propagates the state of the top-level group to all the other nodes in the group.
-
-!!! Hint What is Raft?
-Raft is an industry-accepted algorithm for making decisions through achieving *consensus* among a group of separate nodes in a distributed system.
-!!!
-
-For certain operations in the top-level group, it's essential that a Raft leader is both established and connected. Examples of these operations include adding and removing nodes and allocating ranges for [galloc](../../sequences/#pgd-global-sequences) sequences.
-
-It also means that an absolute majority of nodes in the top-level group (one half of them plus one) must be able to reach each other. So, in a top-level group with five nodes, at least three of the nodes must be reachable by each other to establish a Raft leader.
-
-## Proxy routing
-
-One function that also uses Raft is proxy routing. Proxy routing requires that the proxies can coordinate writing to a data node within their group of nodes. This data node is the write leader. If the write leader goes offline, the proxies need to be able to switch to a new write leader, selected by the data nodes, to maintain continuity for connected applications.
-
-You can configure proxy routing on a per-node group basis in PGD 5, but the recommended configurations are *global* and *local* routing.
-
-## Global routing
-
-Global routing uses the top-level group to manage the proxy routing. All writable data nodes (not witness or subscribe-only nodes) in the group are eligible to become write leader for all proxies. Connections to proxies within the top-level group are routed to data nodes within the top-level group.
-
-With global routing, there's only one write leader for the entire top-level group.
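-
-For example, enabling global routing is a matter of setting the `enable_proxy_routing` option on the top-level group, as described in [PGD Proxy configuration](../configuration). A sketch, assuming a top-level group named `pgdgroup`:
-
-```sql
-SELECT bdr.alter_node_group_option('pgdgroup', 'enable_proxy_routing', 'true');
-```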
-
-## Local routing
-
-Local routing uses subgroups, often mapped to locations, to manage the proxy routing within the subgroup. Local routing is often used for geographical separation of writes. It's important for these subgroups to continue routing writes even when the top-level consensus is lost.
-
-That's because PGD allows queries and asynchronous data manipulation (DML) to work even when the top-level consensus is lost. But using the top-level consensus, as is the case with global routing, means that new write leaders can't be elected when that consensus is lost. Local groups can't rely on the top-level consensus without an independent consensus mechanism and its added complexity.
-
-PGD 5 introduced subgroup Raft support to address this issue elegantly. Subgroup Raft support allows the subgroups in a PGD top-level group to elect the leaders they need independently. They do this by forming devolved Raft groups that can elect write leaders independently of other subgroups or the top-level Raft consensus. Connections to proxies in the subgroup then route to data nodes within the subgroup.
-
-With local routing, there's a write leader for each subgroup.
-
-
-## More information
-
-* [Raft subgroups and TPA](01_raft_subgroups_and_tpa) shows how Raft subgroups can be enabled in PGD when deploying with Trusted Postgres Architect.
-* [Raft subgroups and PGD CLI](02_raft_subgroups_and_pgd_cli) shows how the PGD CLI reports on the presence and status of Raft subgroups.
-* [Migrating to Raft subgroups](03_migrating_to_raft_subgroups) is a guide to migrating existing installations and enabling Raft subgroups without TPA.
-* [Raft elections in depth](04_raft_elections_in_depth) looks in detail at how the write leader is elected using Raft.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5.8/routing/readonly.mdx b/product_docs/docs/pgd/5.8/routing/readonly.mdx
deleted file mode 100644
index 5d4e6013266..00000000000
--- a/product_docs/docs/pgd/5.8/routing/readonly.mdx
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: Read-only routing with PGD Proxy
-navTitle: Read-only routing
----
-
-## Background
-
-By default, PGD Proxy routes connections to the currently selected write leader in the cluster. This allows write traffic conflicts to be rapidly and consistently resolved. Routing everything to a single node, the write leader, is a natural fit for traditional high-availability deployments where system throughput is typically limited to what a single node can handle.
-
-But for some use cases, this behavior also means that clients that are only querying the data are placing load on the current write leader. It's possible this read-only workload could be equally well served by one of the non-write-leader nodes in the cluster.
-
-
-If you could move read-only query traffic to the non-write-leader nodes, you could, at least in theory, handle a throughput that's a multiple of a single node's capability. An approach like this, though, usually requires changes to applications so that they're aware of the cluster topology and the current node status and can detect the write leader.
-
-## Read-only routing in PGD Proxy
-
-From PGD 5.5.0, PGD Proxy addresses this requirement to utilize read capacity while minimizing application exposure to the cluster status. It does this by offering a new `read_listen_port` on proxies that complements the existing listen port. Proxies can be configured with either or both of these ports.
-
-When a proxy is configured with a `read_listen_port`, connections to that particular port are routed to available data nodes that aren't the current write leader. If an application only queries and reads from the database, using a `read_listen_port` ensures that your queries aren't answered by the write leader.
-
-Because PGD Proxy is a TCP Layer 4 proxy, it doesn't interfere with traffic passing through it. That means it can't detect write attempts passing through the `read_listen_port` connections. As it can't distinguish between a SELECT and an INSERT, it's possible to write through a read-only port.
-
-The active-active nature of PGD means that any write operation is performed and replicated, and conflict resolution may or may not have to take place. It's up to the application to avoid this and make sure that it uses only `read_listen_ports` for read-only traffic.
-
-Where available, the problem can be mitigated on the client side by passing [`default_transaction_read_only=on`](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-READ-ONLY) in the connection string or the equivalent for the driver in use.
-
-### Valid read-only nodes
-
-Only data nodes that aren't the write leader are valid as read-only nodes. For reference, the following node types aren't eligible to be a read-only node:
-
-* Witness nodes, because they don't contain data
-* Logical standbys, because they're standbys and prioritize replicating
-* Subscriber-only nodes
-
-## Creating a proxy configuration
-
-SQL proxy creation functions in PGD take an optional `proxy_mode` parameter. You can set this parameter to one of the following values:
-
-* `default` — This is the default value. It creates a proxy that can handle traffic that follows the write leader on port 6432.
-* `read-only` — This option creates a read-only proxy that routes traffic to nodes that aren't the write leader. It handles this read-only traffic only on port 6433.
-* `any` — This option creates a proxy that can handle both read-only and write-leader-following traffic on separate ports: 6432 for write-leader-following traffic and 6433 for read-only traffic.
-
-PGD CLI proxy creation passes the `proxy_mode` value using the `--proxy-mode` flag.
-
-### Creating a read-only proxy
-
-#### Using SQL
-
-To create a new read-only proxy, use the `bdr.create_proxy` function:
-
-```sql
-SELECT bdr.create_proxy('proxy-ro1','group-a','read-only');
-```
-
-This command creates a read-only proxy named `proxy-ro1` in group `group-a`. By default, it listens on port 6433 for read-only traffic.
-
-#### Using PGD CLI
-
-To create a new read-only proxy, use the `pgd create-proxy` command with the optional `--proxy-mode` flag set to `read-only`:
-
-```sh
-pgd create-proxy --proxy-name proxy-ro1 --node-group group-a --proxy-mode read-only
-```
-
-## Configuring running proxies
-
-!!! Note
-After changing a proxy's configuration, restart the proxy to make the changes take effect.
-!!!
-
-You activate read-only routing on a proxy by setting the `read_listen_port` option to a port number. This port number is the port on which the proxy listens for read-only traffic. If the proxy already has a `listen_port` set, then the proxy listens on both ports, routing read/write and read-only traffic on the respective ports. This is equivalent to creating a proxy with `proxy-mode` set to `any`.
-
-If you set a `read_listen_port` on a proxy and then set the `listen_port` to 0, the proxy listens only on the `read_listen_port` and routes only read-only traffic. This is equivalent to creating a proxy with `proxy-mode` set to `read-only`. The configuration elements related to the read/write port are cleared (set to null).
-
-If you set a `listen_port` on a proxy and then set the `read_listen_port` to 0, the proxy listens only on the `listen_port` and routes only read/write traffic. This is equivalent to creating a proxy with `proxy-mode` set to `default`. The configuration elements related to the read-only port are cleared (set to null).
-
-### Configuring using SQL
-
-To configure a read-only proxy port on a proxy, use the `bdr.alter_proxy_option` function:
-
-```sql
-SELECT bdr.alter_proxy_option('proxy-a1','read_listen_port','6433');
-```
-
-This command configures a read-only proxy port on port 6433 in the `proxy-a1` configuration.
-
-To remove the read-only proxy port, set the port to 0:
-
-```sql
-SELECT bdr.alter_proxy_option('proxy-a1','read_listen_port','0');
-```
-
-### Configuring using PGD CLI
-
-To configure a read-only proxy port on a proxy, use the `pgd set-proxy-options` command:
-
-```sh
-pgd set-proxy-options --proxy-name proxy-a1 --option read_listen_port=6433
-```
-
-This command configures a read-only proxy port on port 6433 in the `proxy-a1` configuration.
-
-To remove the read-only proxy port, set the port to 0:
-
-```sh
-pgd set-proxy-options --proxy-name proxy-a1 --option read_listen_port=0
-```
diff --git a/product_docs/docs/pgd/5.8/security/pgd-predefined-roles.mdx b/product_docs/docs/pgd/5.8/security/pgd-predefined-roles.mdx
deleted file mode 100644
index 97e74fae2a0..00000000000
--- a/product_docs/docs/pgd/5.8/security/pgd-predefined-roles.mdx
+++ /dev/null
@@ -1,173 +0,0 @@
----
-title: PGD predefined roles
-description: Describes predefined roles in PGD
----
-
-PGD predefined roles are created when the BDR extension is installed. After the BDR extension is dropped from a database, the roles continue to exist. If you need to remove them, you must drop them manually.
-
-### bdr_superuser
-
-This is a role for an admin user that can manage anything PGD related. It allows you to separate management of the database and table access. Using it allows you to have a user that can manage the PGD cluster without giving them PostgreSQL superuser privileges.
-
-#### Privileges
-
-- ALL PRIVILEGES ON ALL TABLES IN SCHEMA BDR
-- ALL PRIVILEGES ON ALL ROUTINES IN SCHEMA BDR
-
-
-### bdr_read_all_stats
-
-This role provides read access to most of the tables, views, and functions that users or applications may need to observe the statistics and state of the PGD cluster.
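-
-For example, to give a monitoring user this access, grant it the role. A sketch; `metrics_user` is a hypothetical role name:
-
-```sql
--- Allow metrics_user to read PGD statistics and state
-GRANT bdr_read_all_stats TO metrics_user;
-```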
- -#### Privileges - -`SELECT` privilege on: - -- [`bdr.autopartition_partitions`](/pgd/latest/reference/catalogs-internal#bdrautopartition_partitions) -- [`bdr.autopartition_rules`](/pgd/latest/reference/catalogs-internal#bdrautopartition_rules) -- [`bdr.ddl_epoch`](/pgd/latest/reference/catalogs-internal#bdrddl_epoch) -- [`bdr.ddl_replication`](/pgd/latest/reference/pgd-settings#bdrddl_replication) -- [`bdr.global_consensus_journal_details`](/pgd/latest/reference/catalogs-visible#bdrglobal_consensus_journal_details) -- [`bdr.global_lock`](/pgd/latest/reference/catalogs-visible#bdrglobal_lock) -- [`bdr.global_locks`](/pgd/latest/reference/catalogs-visible#bdrglobal_locks) -- [`bdr.group_camo_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_camo_details) -- [`bdr.local_consensus_state`](/pgd/latest/reference/catalogs-visible#bdrlocal_consensus_state) -- [`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary) -- [`bdr.node`](/pgd/latest/reference/catalogs-visible#bdrnode) -- [`bdr.node_catchup_info`](/pgd/latest/reference/catalogs-visible#bdrnode_catchup_info) -- [`bdr.node_catchup_info_details`](/pgd/latest/reference/catalogs-visible#bdrnode_catchup_info_details) -- [`bdr.node_conflict_resolvers`](/pgd/latest/reference/catalogs-visible#bdrnode_conflict_resolvers) -- [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group) -- [`bdr.node_local_info`](/pgd/latest/reference/catalogs-visible#bdrnode_local_info) -- [`bdr.node_peer_progress`](/pgd/latest/reference/catalogs-visible#bdrnode_peer_progress) -- [`bdr.node_replication_rates`](/pgd/latest/reference/catalogs-visible#bdrnode_replication_rates) -- [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) -- [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary) -- [`bdr.replication_sets`](/pgd/latest/reference/catalogs-visible#bdrreplication_sets) -- `bdr.replication_status` -- [`bdr.sequences`](/pgd/latest/reference/catalogs-visible#bdrsequences) -- [`bdr.stat_activity`](/pgd/latest/reference/catalogs-visible#bdrstat_activity) -- [`bdr.stat_relation`](/pgd/latest/reference/catalogs-visible#bdrstat_relation) -- [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription) _deprecated_ -- [`bdr.state_journal_details`](/pgd/latest/reference/catalogs-visible#) -- [`bdr.subscription`](/pgd/latest/reference/catalogs-visible#bdrsubscription) -- [`bdr.subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrsubscription_summary) -- [`bdr.tables`](/pgd/latest/reference/catalogs-visible#bdrtables) -- [`bdr.taskmgr_local_work_queue`](/pgd/latest/reference/catalogs-visible#bdrtaskmgr_local_work_queue) -- [`bdr.taskmgr_work_queue`](/pgd/latest/reference/catalogs-visible#bdrtaskmgr_work_queue) -- [`bdr.worker_errors`](/pgd/latest/reference/catalogs-visible#) _deprecated_ -- [`bdr.workers`](/pgd/latest/reference/catalogs-visible#bdrworkers) -- [`bdr.writers`](/pgd/latest/reference/catalogs-visible#bdrwriters) -- `bdr.xid_peer_progress` - -EXECUTE privilege on: - -- `bdr.bdr_edition` _deprecated_ -- [`bdr.bdr_version`](/pgd/latest/reference/functions#bdrbdr_version) -- [`bdr.bdr_version_num`](/pgd/latest/reference/functions#bdrbdr_version_num) -- [`bdr.decode_message_payload`](/pgd/latest/reference/functions-internal#bdrdecode_message_payload) -- [`bdr.get_consensus_status`](/pgd/latest/reference/functions#bdrget_consensus_status) -- [`bdr.get_decoding_worker_stat`](/pgd/latest/reference/functions#bdrget_decoding_worker_stat) -- 
[`bdr.get_global_locks`](/pgd/latest/reference/functions-internal#bdrget_global_locks) -- [`bdr.get_min_required_replication_slots`](/pgd/latest/reference/functions-internal#bdrget_min_required_replication_slots) -- [`bdr.get_min_required_worker_processes`](/pgd/latest/reference/functions-internal#bdrget_min_required_worker_processes) -- [`bdr.get_raft_status`](/pgd/latest/reference/functions#bdrget_raft_status) -- [`bdr.get_relation_stats`](/pgd/latest/reference/functions#bdrget_relation_stats) -- [`bdr.get_slot_flush_timestamp`](/pgd/latest/reference/functions-internal#bdrget_slot_flush_timestamp) -- `bdr.get_sub_progress_timestamp` -- [`bdr.get_subscription_stats`](/pgd/latest/reference/functions#bdrget_subscription_stats) -- [`bdr.lag_control`](/pgd/latest/reference/functions#bdrlag_control) -- [`bdr.lag_history`](/pgd/latest/reference/functions-internal#bdrlag_history) -- [`bdr.node_catchup_state_name`](/pgd/latest/reference/functions-internal#bdrnode_catchup_state_name) -- [`bdr.node_kind_name`](/pgd/latest/reference/functions-internal#bdrnode_kind_name) -- [`bdr.peer_state_name`](/pgd/latest/reference/functions-internal#bdrpeer_state_name) -- [`bdr.pglogical_proto_version_ranges`](/pgd/latest/reference/functions-internal#bdrpglogical_proto_version_ranges) -- [`bdr.show_subscription_status`](/pgd/latest/reference/functions-internal#bdrshow_subscription_status) -- [`bdr.show_workers`](/pgd/latest/reference/functions-internal#bdrshow_workers) -- [`bdr.show_writers`](/pgd/latest/reference/functions-internal#bdrshow_writers) -- [`bdr.stat_get_activity`](/pgd/latest/reference/functions-internal#bdrstat_get_activity) -- [`bdr.wal_sender_stats`](/pgd/latest/reference/functions#bdrwal_sender_stats) -- [`bdr.worker_role_id_name`](/pgd/latest/reference/functions-internal#bdrworker_role_id_name) - -### bdr_monitor - -This role provides read access to any tables, views, and functions that users or applications may need to monitor the PGD cluster. It includes all the privileges of the [`bdr_read_all_stats`](#bdr_read_all_stats) role. 
- -#### Privileges - -All privileges from [`bdr_read_all_stats`](#bdr_read_all_stats) plus the following additional privileges: - -`SELECT` privilege on: - -- [`bdr.group_raft_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_raft_details) -- [`bdr.group_replslots_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_replslots_details) -- [`bdr.group_subscription_summary`](/pgd/latest/reference/catalogs-visible#bdrgroup_subscription_summary) -- [`bdr.group_versions_details`](/pgd/latest/reference/catalogs-visible#bdrgroup_versions_details) -- `bdr.raft_instances` - -`EXECUTE` privilege on: - -- [`bdr.get_raft_instance_by_nodegroup`](/pgd/latest/reference/functions-internal#bdrget_raft_instance_by_nodegroup) -- [`bdr.monitor_camo_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_camo_on_all_nodes) -- [`bdr.monitor_group_raft`](/pgd/latest/reference/functions#bdrmonitor_group_raft) -- [`bdr.monitor_group_versions`](/pgd/latest/reference/functions#bdrmonitor_group_versions) -- [`bdr.monitor_local_replslots`](/pgd/latest/reference/functions#bdrmonitor_local_replslots) -- [`bdr.monitor_raft_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_raft_details_on_all_nodes) -- [`bdr.monitor_replslots_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_replslots_details_on_all_nodes) -- [`bdr.monitor_subscription_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_subscription_details_on_all_nodes) -- [`bdr.monitor_version_details_on_all_nodes`](/pgd/latest/reference/functions-internal#bdrmonitor_version_details_on_all_nodes) -- [`bdr.node_group_member_info`](/pgd/latest/reference/functions-internal#bdrnode_group_member_info) - -### bdr_application - -This role is designed for applications that require access to PGD features, objects, and functions such as sequences, CRDT datatypes, CAMO status functions, or trigger management functions. 
- -#### Privileges - -`EXECUTE` privilege on: - -- All functions for column_timestamps datatypes -- All functions for CRDT datatypes -- [`bdr.alter_sequence_set_kind`](/pgd/latest/reference/sequences#bdralter_sequence_set_kind) -- [`bdr.create_conflict_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_conflict_trigger) -- [`bdr.create_transform_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrcreate_transform_trigger) -- [`bdr.drop_trigger`](/pgd/latest/reference/streamtriggers/interfaces#bdrdrop_trigger) -- [`bdr.get_configured_camo_partner`](/pgd/latest/reference/functions#bdrget_configured_camo_partner) -- [`bdr.global_lock_table`](/pgd/latest/reference/functions#bdrglobal_lock_table) -- [`bdr.is_camo_partner_connected`](/pgd/latest/reference/functions#bdris_camo_partner_connected) -- [`bdr.is_camo_partner_ready`](/pgd/latest/reference/functions#bdris_camo_partner_ready) -- [`bdr.logical_transaction_status`](/pgd/latest/reference/functions#bdrlogical_transaction_status) -- `bdr.ri_fkey_trigger` -- [`bdr.seq_nextval`](/pgd/latest/reference/functions-internal#bdrseq_nextval) -- [`bdr.seq_currval`](/pgd/latest/reference/functions-internal#bdrseq_currval) -- [`bdr.seq_lastval`](/pgd/latest/reference/functions-internal#bdrseq_lastval) -- [`bdr.trigger_get_committs`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_committs) -- [`bdr.trigger_get_conflict_type`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_conflict_type) -- [`bdr.trigger_get_origin_node_id`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_origin_node_id) -- [`bdr.trigger_get_row`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_row) -- [`bdr.trigger_get_type`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_type) -- [`bdr.trigger_get_xid`](/pgd/latest/reference/streamtriggers/rowfunctions#bdrtrigger_get_xid) -- [`bdr.wait_for_camo_partner_queue`](/pgd/latest/reference/functions#bdrwait_for_camo_partner_queue) -- [`bdr.wait_slot_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_slot_confirm_lsn) -- [`bdr.wait_node_confirm_lsn`](/pgd/latest/reference/functions#bdrwait_node_confirm_lsn) - -Many of these functions require additional privileges before you can use them. -For example, you must be the table owner to successfully execute -`bdr.alter_sequence_set_kind`. These additional rules are described with each -specific function. - -### bdr_read_all_conflicts - -PGD logs conflicts into the -[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) -table. Conflicts are visible only to table owners, so no extra privileges are -required for the owners to read the conflict history. - -If, though, it's useful to have a user that can see conflicts for all tables, -you can optionally grant the role `bdr_read_all_conflicts` to that user. - -#### Privileges - -An explicit policy is set on [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) that allows this role to read the `bdr.conflict_history` table. diff --git a/product_docs/docs/pgd/5.8/tssnapshots.mdx b/product_docs/docs/pgd/5.8/tssnapshots.mdx deleted file mode 100644 index bb20d573eb2..00000000000 --- a/product_docs/docs/pgd/5.8/tssnapshots.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: Timestamp-based snapshots -description: Learn how to use timestamp-based snapshots in EDB Postgres Distributed. 
-redirects:
-  - ../bdr/tssnapshots
-
----
-
-Timestamp-based snapshots allow reading data in a consistent manner by using a user-specified timestamp rather than the usual MVCC snapshot. You can use this feature to access data on different PGD nodes at a common point in time. For example, you can compare data on multiple nodes for data-quality checking.
-
-This feature doesn't currently work with write transactions.
-
-Enable the use of timestamp-based snapshots using the `snapshot_timestamp` parameter. This parameter accepts either a timestamp value or a special value, `'current'`, which represents the current timestamp (now). If `snapshot_timestamp` is set, queries use that timestamp to determine visibility of rows rather than the usual MVCC semantics.
-
-For example, the following query returns the state of the `customers` table at 2018-12-08 02:28:30 GMT:
-
-```sql
-SET snapshot_timestamp = '2018-12-08 02:28:30 GMT';
-SELECT count(*) FROM customers;
-```
-
-Without PGD, this query works only with future timestamps or the special `'current'` value, so you can't use it for historical queries.
-
-PGD works with and improves on that feature in a multi-node environment. First, PGD makes sure that all connections to other nodes replicate any outstanding data that was added to the database before the specified timestamp. This ensures that the timestamp-based snapshot is consistent across the whole multi-master group. Second, PGD adds a parameter called `bdr.timestamp_snapshot_keep`. This parameter specifies a window of time during which you can execute queries against the recent history on that node.
-
-You can specify any interval, but be aware that VACUUM (including autovacuum) doesn't clean dead rows that are newer than up to twice the specified interval. This also means that transaction IDs aren't freed for the same amount of time. As a result, using this feature can leave more bloat in user tables. Initially, we recommend 10 seconds as a typical setting, although you can change that as needed.
-
-Once the query is accepted for execution, the query might run for longer than `bdr.timestamp_snapshot_keep` without problem, just as normal.
-
-Also, information about how far back snapshots were kept doesn't survive a server restart. The oldest usable timestamp for a timestamp-based snapshot is the time of the last restart of the PostgreSQL instance.
-
-You can combine the use of `bdr.timestamp_snapshot_keep` with the `postgres_fdw` extension to get a consistent read across multiple nodes in a PGD group. When used with foreign tables, this combination lets you run parallel queries across nodes.
-
-There are no limits on the number of nodes in a multi-node query when using this feature.
-
-Use of timestamp-based snapshots doesn't increase inter-node traffic or bandwidth. Only the timestamp value is passed in addition to query data.
diff --git a/product_docs/docs/pgd/5.8/upgrades/app_upgrades.mdx b/product_docs/docs/pgd/5.8/upgrades/app_upgrades.mdx
deleted file mode 100644
index ff10a4c00fa..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/app_upgrades.mdx
+++ /dev/null
@@ -1,90 +0,0 @@
----
-title: "Application schema upgrades"
----
-
-Similar to the upgrade of EDB Postgres Distributed itself, there are two approaches to upgrading the application schema. The simpler option is to stop all affected applications, perform the schema upgrade, and restart the applications, now upgraded to use the new schema variant. This approach imposes some downtime.
-
-To eliminate this downtime, EDB Postgres Distributed offers useful tools to
-perform a rolling application schema upgrade.
-
-The following recommendations and tips reduce the impact of the
-application schema upgrade on the cluster.
-
-### Rolling application schema upgrades
-
-By default, DDL is automatically sent to all nodes. You can control this behavior
-manually, as described in
-[DDL replication](../ddl/). You can use this approach
-to create differences between database schemas across nodes.
-
-PGD is designed to allow replication to continue even with minor
-differences between nodes. These features allow
-application schema migration without downtime and allow logical
-standby nodes for reporting or testing.
-
-!!! Warning
-    You must manage rolling application schema upgrades outside of PGD.
-
-    Careful scripting is required to make this work correctly
-    on production clusters. We recommend extensive testing.
-
-See [Replicating between nodes with differences](../appusage/) for details.
-
-When one node runs DDL that adds a new table, nodes that haven't
-yet received the latest DDL need to handle the extra table.
-In view of this, the appropriate setting for rolling schema upgrades
-is to configure all nodes to apply the `skip` resolver in case of a
-`target_table_missing` conflict. Perform this configuration before adding tables to any
-node. This setting is intended to be permanent.
-
-Execute the following query **separately on each node**. Replace `node1` with the actual
-node name.
-
-```sql
-SELECT bdr.alter_node_set_conflict_resolver('node1',
-        'target_table_missing', 'skip');
-```
-
-When one node runs DDL that adds a column to a table, nodes that haven't
-yet received the latest DDL need to handle the extra columns.
-In view of this, the appropriate setting for rolling schema
-upgrades is to configure all nodes to apply the `ignore` resolver in
-case of a `target_column_missing` conflict. Perform this configuration before adding columns to
-any node. This setting is intended to be
-permanent.
-
-Execute the following query **separately on each node**. Replace `node1` with the actual
-node name.
-
-```sql
-SELECT bdr.alter_node_set_conflict_resolver('node1',
-        'target_column_missing', 'ignore');
-```
-
-When one node runs DDL that removes a column from a table, nodes that
-haven't yet received the latest DDL need to handle the missing column.
-This situation causes a `source_column_missing` conflict, which uses
-the `use_default_value` resolver. Thus, columns that don't
-accept NULLs and don't have a DEFAULT value require a two-step process:
-
-1. Remove the NOT NULL constraint, or add a DEFAULT value for the column
-   on all nodes.
-2. Remove the column.
-
-You can remove constraints in a rolling manner.
-There's currently no supported way to add table
-constraints in a rolling manner, one node at a time.
-
-When one node runs DDL that changes the type of an existing column,
-depending on the existence of binary coercibility between the current
-type and the target type, the operation might not rewrite the underlying
-table data. In that case, it's only a metadata update of the
-underlying column type. Rewriting a table is normally restricted.
-However, in controlled DBA environments, you can change
-the type of a column to an automatically castable one by adopting
-a rolling upgrade of the column's type in a non-replicated
-environment on all the nodes, one by one, as the sketch that follows shows. See [ALTER TABLE](../ddl/ddl-command-handling/#alter-table) for more details.
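-
-A minimal sketch of that node-by-node approach follows. The table and column names are hypothetical, and the example assumes the new type is binary coercible from the old one. Turning off DDL replication is an advanced operation, so test carefully before trying this on a production cluster:
-
-```sql
--- Run on each node in turn. With DDL replication turned off for this
--- transaction, the type change applies only to the local node.
-BEGIN;
-SET LOCAL bdr.ddl_replication = off;
-ALTER TABLE app_orders ALTER COLUMN order_code TYPE varchar(64);
-COMMIT;
-```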
diff --git a/product_docs/docs/pgd/5.8/upgrades/compatibility.mdx b/product_docs/docs/pgd/5.8/upgrades/compatibility.mdx
deleted file mode 100644
index ac30086652a..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/compatibility.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: Compatibility changes
----
-
-Many changes in PGD 5 aren't backward compatible with
-PGD 4 or PGD 3.7.
-
-## Connection routing
-
-HARP Manager doesn't exist anymore. It's been replaced by the new
-[connection management](../routing) configuration.
-
-HARP Proxy is replaced by the similarly functioning PGD Proxy, which removes any
-deprecated features and is configured through the connection
-management configuration.
-
-## Commit At Most Once
-
-CAMO configuration is now done through [commit scopes](../commit-scopes/commit-scopes). The
-`bdr.camo_pairs` catalog and any related manipulation functions don't exist
-anymore. The `bdr.enable_camo` GUC was removed.
-The `synchronous_replication_availability` GUC doesn't affect CAMO anymore.
-Use the `DEGRADE ON ... TO ASYNC` clause of a commit scope instead. (A sketch appears at the end of this page.)
-
-
-## Eager All-Node Replication
-
-The `global` scope no longer exists. To create a scope with the same
-behavior, use [Group Commit](../commit-scopes/group-commit).
-
-```sql
-SELECT bdr.create_commit_scope(
-    commit_scope_name := 'eager_scope',
-    origin_node_group := 'top_group',
-    rule := 'ALL (top_group) GROUP COMMIT (conflict_resolution = eager, commit_decision = raft) ABORT ON (timeout = 60s)',
-    wait_for_ready := true
-);
-```
-
-The `bdr.global_commit_timeout` GUC was removed. Use the `ABORT ON` clause for the
-commit scope.
-
-## Lag Control
-
-Similarly to CAMO and Eager, Lag Control configuration also moved to
-[commit scopes](../commit-scopes/commit-scopes) for more flexible durability configuration.
-
-## Catalogs
-
-- `bdr.workers` doesn't show worker-specific info like `worker_commit_timestamp` anymore.
-- `bdr.worker_errors` is deprecated and no longer carries most of its information.
-- `bdr.state_journal_details` is deprecated and no longer carries most of its information.
-- `bdr.event_summary` replaces `bdr.worker_errors` and
-  `bdr.state_journal_details` with additional info like Raft role changes.
-- The table `bdr.node_catchup_info` now has the user-consumable view
-  `bdr.node_catchup_info_details`, which shows the info in a friendlier way.
-- A witness node is no longer distinguished by the replication sets
-  it replicates but by the `node_kind` value in `bdr.node_summary`.
-- All the Raft (consensus) related tables and functions were adjusted to support
-  multiple Raft instances (sub-group Raft).
-- The `bdr.node_pre_commit` view and the underlying table were removed, as the
-  information is no longer stored in a table.
-- The `bdr.commit_decisions` view was added and replaces `bdr.node_pre_commit`.
-- Multiple internal autopartition tables were replaced by taskmgr ones, as the
-  mechanism behind them was generalized.
-- The `bdr.network_monitoring` view was removed along with its underlying tables and
-  functions.
-- Many catalogs were added and some have new columns, as described in
-  [Catalogs](/pgd/latest/reference/catalogs-visible/). Strictly speaking, these
-  aren't breaking changes, but we recommend reviewing them
-  when upgrading.
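-
-For illustration, here's a minimal sketch of a commit scope that takes over the role of a former CAMO pair. The node group name `left_dc` and the timeout value are assumptions for the example, not prescribed settings:
-
-```sql
--- Create a CAMO commit scope for the node group containing the pair.
--- DEGRADE ON ... TO ASYNC falls back to asynchronous commit when the
--- partner doesn't confirm within the timeout.
-SELECT bdr.create_commit_scope(
-    commit_scope_name := 'camo_scope',
-    origin_node_group := 'left_dc',
-    rule := 'ALL (left_dc) CAMO DEGRADE ON (timeout = 500ms) TO ASYNC'
-);
-```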
diff --git a/product_docs/docs/pgd/5.8/upgrades/index.mdx b/product_docs/docs/pgd/5.8/upgrades/index.mdx
deleted file mode 100644
index 782e30fed7d..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/index.mdx
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "Upgrading"
-description: Upgrading EDB Postgres Distributed and Postgres
-navigation:
-- tpa_overview
-- manual_overview
-- upgrade_paths
-- compatibility
-- bdr_pg_upgrade
-- app_upgrades
----
-
-While PGD and Postgres are closely related, they're separate products with separate upgrade paths. This section covers how to upgrade both PGD and Postgres.
-
-## Upgrading PGD
-
-EDB Postgres Distributed is a flexible platform. This means that your upgrade path depends largely on how you installed PGD.
-
-* **[Upgrading with TPA](tpa_overview)** — If you installed using TPA, you can use its automated upgrade feature to upgrade to the latest minor versions.
-
-* **[Upgrading manually](manual_overview)** — If you manually installed and configured your PGD cluster, you can move a cluster between versions, both minor and major.
-
-* **[Upgrade paths](upgrade_paths)** — Several supported upgrade paths are available.
-
-* **[Compatibility changes](compatibility)** — If you're upgrading from PGD 3.x or 4.x to PGD 5.x or later, you need to understand the compatibility changes between versions.
-
-
-## Upgrading Postgres or Postgres and PGD major versions
-
-* **[In-place Postgres major version upgrades](inplace_upgrade)** — How to use `pgd node upgrade` to manually upgrade the Postgres version, or the Postgres and PGD major versions, on one or more nodes.
-
-* **[Rolling major version upgrades](upgrading_major_rolling)** — How to perform a major version upgrade of Postgres on a cluster.
-
-
-## Other upgrades
-
-* **[Application schema upgrades](app_upgrades)** — A guide for safely upgrading your application's schema when running multiple distributed servers with PGD.
diff --git a/product_docs/docs/pgd/5.8/upgrades/inplace_upgrade.mdx b/product_docs/docs/pgd/5.8/upgrades/inplace_upgrade.mdx
deleted file mode 100644
index c1f71baa4f6..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/inplace_upgrade.mdx
+++ /dev/null
@@ -1,196 +0,0 @@
----
-title: In-place Postgres or Postgres and PGD major version upgrades
-redirects:
-  - pgd/latest/upgrades/bdr_pg_upgrade/
----
-
-You can upgrade a PGD node to a newer major version of Postgres, or to new major versions of both Postgres and PGD, using the command-line utility [pgd node upgrade](/pgd/5.8/cli/command_ref/node/upgrade/).
-
-!!!Note
-Before version 5.7.0, the command used for in-place major version upgrades was `bdr_pg_upgrade`.
-However, that command didn't have an option to upgrade both the Postgres and PGD major versions simultaneously, as `pgd node upgrade` does.
-!!!
-
-`pgd node upgrade` is a wrapper around the standard [pg_upgrade](https://www.postgresql.org/docs/current/pgupgrade.html) that
-adds PGD-specific logic to the process to ensure a smooth upgrade.
-
-## Terminology
-
-This terminology is used when describing the upgrade process and components involved:
-
-*Postgres cluster* — The database files, both executables and data, that make up a running Postgres database instance on a system.
-
-*Old Postgres cluster* — The existing Postgres cluster to upgrade, the one from which to migrate data.
-
-*New Postgres cluster* — The new Postgres cluster that data is migrated to.
-This Postgres cluster must be one major version ahead of the old cluster.
-
-## Precautions
-
-Standard Postgres major version upgrade precautions apply, including the fact that both Postgres clusters must meet
-all the requirements for [pg_upgrade](https://www.postgresql.org/docs/current/pgupgrade.html#id-1.9.5.12.7).
-
-Additionally, don't use `pgd node upgrade` if other tools are using replication slots and replication origins.
-Only PGD slots and origins are restored after the upgrade.
-
-You must meet several prerequisites for `pgd node upgrade`:
-
-- Disconnect applications using the old Postgres cluster. You can, for example,
-  redirect them to another node in the PGD cluster.
-- Configure peer authentication for both Postgres clusters. `pgd node upgrade`
-  requires peer authentication.
-- The same PGD version must be installed on both clusters.
-- The PGD version must be 4.1.0 or later, or 3.7.22 or later on the 3.7 series.
-- The new cluster must be in a shutdown state.
-- You must install PGD packages in the new cluster.
-- The new cluster must already be initialized and configured as needed to
-  match the old cluster configuration.
-- Databases, tables, and other objects must not exist in the new cluster.
-
-!!! Note
-When upgrading to PGD 5.7.0+, you don't need to have both clusters run the same PGD version.
-The new cluster must be running 5.7.0+.
-In that case, `pgd node upgrade` upgrades the PGD version to 5.7.x as well as the Postgres major version.
-!!!
-
-We also recommend having the old Postgres cluster up prior to running `pgd node upgrade`.
-The CLI starts the old Postgres cluster if it's shut down.
-
-## Usage
-
-To upgrade to a newer major version of Postgres, or of Postgres and PGD, you must first install the packages for the new versions.
-
-### `pgd node upgrade` command-line
-
-`pgd node upgrade` passes through to pg_upgrade any parameters it doesn't use itself.
-Therefore, you can specify any parameters supported by [pg_upgrade](https://www.postgresql.org/docs/current/pgupgrade.html#id-1.9.5.12.6).
-
-#### Synopsis
-
-```plaintext
-pgd node <node_name> upgrade [OPTION] ...
-```
-
-#### Options
-
-In addition to the options for pg_upgrade, you can pass the following parameters
-to `pgd node upgrade`.
-
-##### Required parameters
-
-Specify these parameters either on the command line or, for all but the `--database` parameter, in their equivalent environment variable. They're used by `pgd node upgrade`.
-
-- `-b, --old-bindir` — Old Postgres cluster bin directory.
-- `-B, --new-bindir` — New Postgres cluster bin directory.
-- `-d, --old-datadir` — Old Postgres cluster data directory.
-- `-D, --new-datadir` — New Postgres cluster data directory.
-- `--database` — PGD database name.
-
-##### Optional parameters
-
-These parameters are optional and are used by `pgd node upgrade`:
-
-- `-p, --old-port` — Old cluster port number.
-- `-s, --socketdir` — Directory to use for postmaster sockets during upgrade.
-- `--check` — Specify to only perform checks and not modify clusters.
-
-##### Other parameters
-
-Any other parameter that's not one of the above is passed to pg_upgrade. pg_upgrade accepts the following parameters:
-
-- `-j, --jobs` — Number of simultaneous processes or threads to use.
-- `-k, --link` — Use hard links instead of copying files to the new cluster.
-- `-o, --old-options` — Option to pass to the old postgres command. Multiple invocations are appended.
-- `-O, --new-options` — Option to pass to the new postgres command. Multiple invocations are appended.
-- `-N, --no-sync` — Don't wait for all files in the upgraded cluster to be written to disk.
-- `-P, --new-port` — New cluster port number.
-- `-r, --retain` — Retain SQL and log files even after successful completion.
-- `-U, --username` — Cluster's install user name.
-- `--clone` — Use efficient file cloning.
-
-#### Environment variables
-
-You can use these environment variables in place of command-line parameters:
-
-- `PGBINOLD` — Old Postgres cluster bin directory.
-- `PGBINNEW` — New Postgres cluster bin directory.
-- `PGDATAOLD` — Old Postgres cluster data directory.
-- `PGDATANEW` — New Postgres cluster data directory.
-- `PGPORTOLD` — Old Postgres cluster port number.
-- `PGSOCKETDIR` — Directory to use for postmaster sockets during upgrade.
-
-
-### Example
-
-Given a scenario where:
-
-- The name of the node you want to upgrade is kaolin.
-- Old Postgres cluster bin directory is `/usr/lib/postgresql/16/bin`.
-- New Postgres cluster bin directory is `/usr/lib/postgresql/17/bin`.
-- Old Postgres cluster data directory is `/var/lib/postgresql/16/main`.
-- New Postgres cluster data directory is `/var/lib/postgresql/17/main`.
-- Database name is `bdrdb`.
-
-
-You can use the following command to upgrade the node:
-
-```
-pgd node kaolin upgrade \
---old-bindir /usr/lib/postgresql/16/bin \
---new-bindir /usr/lib/postgresql/17/bin \
---old-datadir /var/lib/postgresql/16/main \
---new-datadir /var/lib/postgresql/17/main \
---database bdrdb
-```
-
-### Steps performed
-
-These steps are performed when running `pgd node upgrade`.
-
-!!! Note
-    When `--check` is supplied as an argument to `pgd node upgrade`, the CLI skips steps that modify the database.
-
-#### PGD Postgres checks
-
-
-| Steps                                           | `--check` supplied |
-| :-----------------------------------------------|:------------------:|
-| Collecting pre-upgrade new cluster control data | `run`              |
-| Checking new cluster state is shutdown          | `run`              |
-| Checking PGD versions                           | `run`              |
-| Starting old cluster (if shutdown)              | `skip`             |
-| Connecting to old cluster                       | `skip`             |
-| Checking if bdr schema exists                   | `skip`             |
-| Turning DDL replication off                     | `skip`             |
-| Terminating connections to database             | `skip`             |
-| Waiting for all slots to be flushed             | `skip`             |
-| Disconnecting from old cluster                  | `skip`             |
-| Stopping old cluster                            | `skip`             |
-| Starting old cluster with PGD disabled          | `skip`             |
-| Connecting to old cluster                       | `skip`             |
-| Collecting replication origins                  | `skip`             |
-| Collecting replication slots                    | `skip`             |
-| Disconnecting from old cluster                  | `skip`             |
-| Stopping old cluster                            | `skip`             |
-
-#### pg_upgrade steps
-
-Standard pg_upgrade steps are performed.
-
-!!! Note
-    If supplied, `--check` is passed to pg_upgrade.
-
-
-#### PGD post-upgrade steps
-
-| Steps                                                  | `--check` supplied |
-| :------------------------------------------------------|:------------------:|
-| Collecting old cluster control data                    | `skip`             |
-| Collecting new cluster control data                    | `skip`             |
-| Advancing LSN of new cluster                           | `skip`             |
-| Starting new cluster with PGD disabled                 | `skip`             |
-| Connecting to new cluster                              | `skip`             |
-| Creating replication origin, repeated for each origin  | `skip`             |
-| Advancing replication origin, repeated for each origin | `skip`             |
-| Creating replication slot, repeated for each slot      | `skip`             |
-| Stopping new cluster                                   | `skip`             |
diff --git a/product_docs/docs/pgd/5.8/upgrades/manual_overview.mdx b/product_docs/docs/pgd/5.8/upgrades/manual_overview.mdx
deleted file mode 100644
index cc366e32846..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/manual_overview.mdx
+++ /dev/null
@@ -1,231 +0,0 @@
----
-title: "Upgrading PGD clusters manually"
----
-
-Because EDB Postgres Distributed consists of multiple software components,
-the upgrade strategy depends partially on the components that are being upgraded.
-
-In general, you can upgrade the cluster with almost zero downtime by
-using an approach called *rolling upgrade*. Using this approach, nodes are upgraded one by one, and
-the application connections are switched over to already upgraded nodes.
-
-You can also stop all nodes, perform the upgrade on all nodes, and
-only then restart the entire cluster. This approach is the same as with a standard PostgreSQL setup.
-This strategy of upgrading all nodes at the same time avoids running with
-mixed versions of software and therefore is the simplest. However, it incurs
-downtime, and we don't recommend it unless you can't perform the rolling upgrade
-for some reason.
-
-To upgrade an EDB Postgres Distributed cluster:
-
-1. Plan the upgrade.
-2. Prepare for the upgrade.
-3. Upgrade the server software.
-4. Check and validate the upgrade.
-
-## Upgrade planning
-
-There are broadly two ways to upgrade each node:
-
-* Upgrade nodes in place to the newer software version. See [Rolling server
-  software upgrades](#rolling-server-software-upgrades).
-* Replace nodes with ones that have the newer version installed. See [Rolling
-  upgrade using node join](#rolling-upgrade-using-node-join).
-
-You can use both of these approaches in a rolling manner.
-
-### Rolling upgrade considerations
-
-While the cluster is going through a rolling upgrade, mixed versions of software
-are running in the cluster. For example, suppose nodeA has PGD 4.3.6, while
-nodeB and nodeC have 5.6.1. In this state, replication and group
-management use the protocol and features from the oldest version (4.3.6
-in this example), so any new features provided by the newer version
-that require changes in the protocol are disabled. Once all nodes are
-upgraded to the same version, the new features are enabled.
-
-Similarly, when a cluster with WAL-decoder-enabled nodes is going through a
-rolling upgrade, the WAL decoder on a higher-version PGD node produces
-[logical change records (LCRs)](../decoding_worker/#enabling) with a
-higher pglogical version, and the WAL decoder on a lower-version PGD node produces
-LCRs with a lower pglogical version. As a result, WAL senders on higher-version
-PGD nodes don't use LCRs, due to the mismatch in protocol
-versions, while WAL senders on lower-version PGD nodes can continue to use LCRs.
-Once all the PGD nodes are on the same PGD version, WAL senders use LCRs.
-
-A rolling upgrade starts with a cluster with all nodes at a prior release. It
-then proceeds by upgrading one node at a time to the newer release, until all
-nodes are at the newer release. There must be no more than two versions of the
-software running at the same time. An upgrade must be completed, with all nodes
-fully upgraded, before starting another upgrade.
-
-Where additional caution is required to reduce business risk, an upgrade may take more time.
-To reduce both the risk and the time spent upgrading production systems, we suggest performing the upgrades in a separate test environment first.
-
-Don't run with mixed versions of the software for any longer than is absolutely necessary to complete the upgrade.
-You can check on the versions in the cluster using the [`pgd nodes list --versions`](/pgd/5.8/cli/command_ref/nodes/list/) command.
-
-The longer you run with mixed versions, the more likely you are to encounter issues, and the more difficult they are to diagnose and resolve.
-We recommend upgrading during off-peak hours for your business, and over a short period of time.
-
-While you can use a rolling upgrade for upgrading a major version of the software, we don't support mixing PostgreSQL, EDB Postgres Extended, and EDB Postgres Advanced Server in one cluster. So you can't use this approach to change the Postgres variant.
-
-!!! Warning
-    Downgrades of EDB Postgres Distributed aren't supported. They require
-    that you manually rebuild the cluster.
-
-### Rolling server software upgrades
-
-A rolling upgrade is where the [server software](#server-software-upgrade) is upgraded sequentially on each node in a
-cluster without stopping the cluster. Each node is temporarily stopped from
-participating in the cluster and its server software is upgraded. Once updated, it's
-returned to the cluster, and it then catches up with the cluster's activity
-during its absence.
-
-The actual procedure depends on whether the Postgres component is being
-upgraded to a new major version.
-
-During the upgrade process, you can switch the application over to a node
-that's currently not being upgraded to provide continuous availability of
-the database for applications.
-
-### Rolling upgrade using node join
-
-The other method to upgrade the server software is to join a new node
-to the cluster and later drop one of the existing nodes running
-the older version of the software.
-
-For this approach, the procedure is always the same. However, because it
-includes a node join, a potentially large data transfer is required.
-
-Take care not to use features that are available only in
-the newer Postgres version until all nodes are upgraded to that
-same newer release of Postgres. This is especially true for any
-new DDL syntax that was added to a newer release of Postgres.
-
-!!! Note
-    `bdr_init_physical` makes a byte-by-byte copy of the source node,
-    so you can't use it while upgrading from one major Postgres version
-    to another. In fact, currently `bdr_init_physical` requires that even the
-    PGD version of the source and the joining node be exactly the same.
-    You can't use it for rolling upgrades by way of the node join method. Instead, use a logical join.
-
-### Upgrading a CAMO-enabled cluster
-
-Upgrading a CAMO-enabled cluster requires upgrading CAMO groups one by one while
-disabling the CAMO protection for the group being upgraded and reconfiguring it
-using the new [commit scope](../commit-scopes/commit-scopes)-based settings.
-
-We recommend the following approach for upgrading two BDR nodes that
-constitute a CAMO pair to PGD 5.0:
-
-1. Ensure `bdr.enable_camo` remains `off` for transactions on either of
-   the two nodes, or redirect clients away from the two nodes. Removing
-   the CAMO pairing while attempting to use CAMO leads to errors
-   and prevents further transactions.
-1. Uncouple the pair by deconfiguring CAMO either by resetting
-   `bdr.camo_origin_for` and `bdr.camo_partner_of` (when upgrading from
-   BDR 3.7.x) or by using `bdr.remove_camo_pair` (on BDR 4.x).
-1. Upgrade the two nodes to PGD 5.0.
-1. Create a dedicated node group for the two nodes and move them into
-   that node group.
-1. Create a [commit scope](../commit-scopes/commit-scopes) for this node
-   group, and thus the pair of nodes, to use CAMO.
-1. Reactivate CAMO protection either by setting a
-   `default_commit_scope` or by changing the clients to explicitly set
-   `bdr.commit_scope` instead of `bdr.enable_camo` for their sessions
-   or transactions.
-1. If necessary, allow clients to connect to the CAMO-protected nodes
-   again.
-
-## Upgrade preparation
-
-Each major release of the software contains several changes that might affect
-compatibility with previous releases. These might affect the Postgres
-configuration, deployment scripts, and applications using PGD. We
-recommend considering these changes and making any needed adjustments in advance of the upgrade.
-
-See individual changes mentioned in the [release notes](../rel_notes/) and any version-specific upgrade notes.
-
-## Server software upgrade
-
-Upgrading EDB Postgres Distributed on individual nodes happens in place.
-You don't need to back up and restore when upgrading the BDR extension.
-
-### BDR extension upgrade
-
-The BDR extension upgrade process consists of a few steps.
-
-#### Stop Postgres
-
-During the upgrade of binary packages, it's usually best to stop the running
-Postgres server first. Doing so ensures that mixed versions don't get loaded in case
-of an unexpected restart during the upgrade.
-
-#### Upgrade packages
-
-The first step in the upgrade is to install the new version of the BDR packages. This
-installs both the new binary and the extension SQL script. This step is specific to the operating system.
-
-#### Start Postgres
-
-Once packages are upgraded, you can start the Postgres instance. The BDR
-extension is upgraded at startup when the new binaries
-detect the older version of the extension.
-
-### Postgres upgrade
-
-The process of an in-place Postgres upgrade depends on whether you're
-upgrading to a new minor version or to a new major version of Postgres.
-
-#### Minor version Postgres upgrade
-
-Upgrading to a new minor version of Postgres is similar to [upgrading
-the BDR extension](#bdr-extension-upgrade). Stopping Postgres, upgrading packages,
-and starting Postgres again is typically all that's needed.
-
-However, sometimes more steps, like reindexing, might be recommended for
-specific minor version upgrades. Refer to the release notes of the
-version of Postgres you're upgrading to.
-
-#### Major version Postgres upgrade
-
-Upgrading to a new major version of Postgres is more complicated than upgrading to a minor version.
-
-EDB Postgres Distributed provides a `pgd node upgrade` command line utility,
-which you can use to do [in-place Postgres major version upgrades](inplace_upgrade).
-
-!!! Note
-    When upgrading to a new major version of any software, including Postgres, the
-    BDR extension, and others, it's always important to ensure
-    your application is compatible with the target version of the software you're upgrading to.
-
-## Upgrade check and validation
-
-After you upgrade your PGD node, you can verify the current
-version of the binary:
-
-```sql
-SELECT bdr.bdr_version();
-```
-
-Always check your [monitoring](../monitoring) after upgrading a node to confirm
-that the upgraded node is working as expected.
-
-## Moving from HARP to PGD Proxy
-
-HARP can temporarily coexist with the new
-[connection management](../routing) configuration. This means you can:
-
-- Upgrade a whole pre-5 cluster to a PGD 5 cluster.
-- Set up the connection routing.
-- Replace HARP Proxy with PGD Proxy.
-- Move application connections to PGD Proxy instances.
-- Remove the HARP Manager from all servers.
-
-We strongly recommend doing this as soon as possible after upgrading nodes to
-PGD 5. HARP isn't certified for long-term use with PGD 5.
-
-TPA provides some useful tools for this and will eventually provide a single-command
-upgrade path between PGD 4 and PGD 5.
diff --git a/product_docs/docs/pgd/5.8/upgrades/tpa_overview.mdx b/product_docs/docs/pgd/5.8/upgrades/tpa_overview.mdx
deleted file mode 100644
index e8f60c80300..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/tpa_overview.mdx
+++ /dev/null
@@ -1,63 +0,0 @@
----
-title: Upgrading PGD clusters with TPA
----
-
-!!! Note No Postgres major version upgrades
-TPA doesn't currently support major version upgrades of Postgres.
-
-To perform a major version upgrade of Postgres, see [In-place Postgres major version upgrades](inplace_upgrade).
-!!!
-
-If you used TPA to install your cluster, you can also use TPA to upgrade it. The techniques outlined here can perform minor and major version upgrades of the PGD software. They can also perform minor version upgrades of Postgres.
-
-You can read more about the capabilities of TPA upgrades in [Upgrading your cluster](/tpa/latest/tpaexec-upgrade/) in the TPA documentation.
-
-!!! Warning Always test first
-Always test upgrade processes in a QA environment first. This helps to ensure that there are no unexpected factors to take into account. TPA's ability to reproducibly deploy a PGD configuration makes it much easier to build a test environment to work with.
-!!!
-
-## Minor and major PGD upgrades
-
-TPA automatically manages minor version upgrades of PGD.
-
-Major version upgrades of PGD require changes to the TPA `config.yml` file, which contains the deployment configuration.
-
-When upgrading to PGD 5 from previous PGD major versions, you can use [`tpaexec reconfigure`](/tpa/latest/reference/tpaexec-reconfigure/). This command helps you make the appropriate modifications to your deployment configuration.
-
-The `reconfigure` command requires settings for the architecture (the only supported setting is `PGD_Always_ON`) and PGD Proxy routing (`--pgd-proxy-routing`) to run. Remember to back up your deployment configuration before running it, and use the `--describe` and `--output` options to preview the reconfiguration.
-
-## Prerequisites
-
-* You need the cluster configuration directory created when TPA deployed your PGD cluster.
-
-* If performing a major version upgrade of PGD, ensure that `tpaexec reconfigure` was run and [appropriate configuration changes](#minor-and-major-pgd-upgrades) were made. A sketch of a preview invocation follows.
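-
-The following is a minimal sketch of a preview-only `tpaexec reconfigure` invocation. The cluster directory `~/clusters/pgdcluster` and the routing choice are assumptions for illustration, and it assumes the architecture value is spelled `PGD-Always-ON` on the command line; check the TPA documentation for the values that fit your deployment:
-
-```bash
-# Preview the configuration changes without writing them, using --describe.
-tpaexec reconfigure ~/clusters/pgdcluster \
-  --architecture PGD-Always-ON \
-  --pgd-proxy-routing local \
-  --describe
-```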
-
-
-## Upgrading
-
-Run:
-
-```
-tpaexec upgrade clustername
-```
-
-Where `clustername` is the path to the cluster configuration directory, which is typically named after the cluster. By default, TPA upgrades each node of the cluster to the latest minor versions of the software the nodes were configured with.
-
-
-## TPA's automated rolling upgrade procedure
-
-TPA first tests the cluster and then the nodes.
-
-Each node is then isolated from the cluster, upgraded, and returned to operation within the cluster.
-
-### TPA upgrades — step by step
-
-* Checks that all preconditions for upgrading the cluster are met.
-* For each instance in the cluster:
-   * Checks that it has the correct repositories configured.
-   * Checks that the required Postgres packages are available in those repositories.
-* For each BDR node in the cluster, one at a time:
-   * Fences the node off to ensure that pgd-proxy doesn't send any connections to it.
-   * Stops, updates, and restarts Postgres.
-   * Unfences the node so it can receive connections again.
-   * Updates pgbouncer, pgd-proxy, and pgd-cli, as applicable for this node.
diff --git a/product_docs/docs/pgd/5.8/upgrades/upgrade_paths.mdx b/product_docs/docs/pgd/5.8/upgrades/upgrade_paths.mdx
deleted file mode 100644
index 6292bb1033f..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/upgrade_paths.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Supported PGD upgrade paths
----
-
-## Upgrading within version 5
-
-EDB Postgres Distributed uses [semantic versioning](https://semver.org/).
-All changes within the same major version are backward compatible, lowering the risk when upgrading and allowing you to choose any later minor or patch release as the upgrade target.
-
-You can upgrade from any version 5.x release to a later 5.x release.
-
-## Upgrading from version 4 to version 5
-
-Upgrades from PGD 4 to PGD 5 are supported from version 4.3.0. For earlier
-versions, upgrade to 4.3.0 before upgrading to 5. See [Upgrading within
-4](/pgd/4/upgrades/upgrade_paths/#upgrading-within-version-4) for more
-information.
-
-Generally, we recommend that you upgrade to the latest version 4
-release before upgrading to the latest version 5 release. After upgrading to
-4.3.0 or later, the following upgrade paths are possible.
-
-| From version | To version |
-| ---- | -- |
-| 4.3.0 | 5.0.0 or later |
-| 4.3.1 | 5.1.0 or later |
-| 4.3.2 | 5.1.0 or later |
-| 4.3.3 | 5.1.0 or later |
-| 4.3.7 | 5.7.0 or later |
-
-## Upgrading from version 3.7 to version 5
-
-Starting with version 3.7.23, you can upgrade directly to version 5.3.0 or later.
-For earlier versions, upgrade to 3.7.23 before upgrading to 5.
-See [Upgrading within version 3.7](/pgd/3.7/bdr/upgrades/supported_paths/#upgrading-within-version-37)
-for more information.
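-
-Whichever path applies, it's worth confirming the PGD version each node is currently running before you plan the upgrade. A minimal check from a psql session on any node (the `bdr.bdr_version()` function exists in BDR 3.7, PGD 4, and PGD 5, though its output format varies by version):
-
-```sql
--- Report the PGD (BDR extension) version running on this node.
-SELECT bdr.bdr_version();
-```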
diff --git a/product_docs/docs/pgd/5.8/upgrades/upgrading_major_rolling.mdx b/product_docs/docs/pgd/5.8/upgrades/upgrading_major_rolling.mdx
deleted file mode 100644
index b3ad2a4a976..00000000000
--- a/product_docs/docs/pgd/5.8/upgrades/upgrading_major_rolling.mdx
+++ /dev/null
@@ -1,622 +0,0 @@
----
-title: Performing a Postgres major version rolling upgrade on a PGD cluster
-navTitle: Rolling Postgres major version upgrade
-deepToC: true
-redirects:
-  - /pgd/latest/install-admin/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-tpa/upgrading_major_rolling/ #generated for pgd deploy-config-planning reorg
----
-
-## Upgrading Postgres major versions
-
-Upgrading a Postgres database's major version to access improved features, performance enhancements, and security updates is a common administration task.
-For an EDB Postgres Distributed (PGD) cluster, it's essentially the same process but performed as a rolling upgrade.
-
-The rolling upgrade process allows updating individual cluster nodes to a new major Postgres version while maintaining cluster availability and operational continuity.
-This approach minimizes downtime and ensures data integrity by allowing the rest of the cluster to remain operational as each node is upgraded sequentially.
-
-The following overview of the general instructions, together with the [worked example](#worked-example), helps provide a smooth and controlled upgrade process.
-
-!!!Note
-The following overview and worked example assume you're upgrading to or from 5.7.0+.
-For upgrading to older versions of PGD, you need to use the command `bdr_pg_upgrade`, which has almost the same behavior and requirements as `pgd node upgrade`.
-The only difference is that with `bdr_pg_upgrade` you can only upgrade major versions of Postgres (community/PGE/EDB Postgres Advanced Server), but with `pgd node upgrade` you can upgrade major versions of PGD and Postgres at once.
-!!!
-
-### Prepare the upgrade
-
-To prepare for the upgrade, identify the subgroups and nodes you're trying to upgrade and note an initial upgrade order.
-
-To do this, connect to one of the nodes using SSH and run the `pgd nodes list` command:
-
-```bash
-sudo -u postgres pgd nodes list
-```
-
-The `pgd nodes list` command shows you all the nodes in your PGD cluster and the subgroup to which each node belongs.
-Then you want to find out which node is the write leader in each subgroup:
-
-```bash
-sudo -u postgres pgd group <group_name> show --summary
-```
-
-This command shows you information about the PGD group `<group_name>` running in your cluster, including which node is the write leader.
-To maintain operational continuity, you need to switch write leaders over to another node in their subgroup before you can upgrade them.
-To keep the number of planned switchovers to a minimum, when upgrading a subgroup of nodes, upgrade the write leaders last.
-
-Even though you verified which node is the current write leader for planning purposes, the write leader of a subgroup could change to another node at any moment for operational reasons before you upgrade that node.
-Therefore, you still need to verify that a node isn't the write leader just before upgrading that node.
-
-You now have enough information to determine your upgrade order, one subgroup at a time, aiming to upgrade the identified write leader node last in each subgroup.
-
-### Perform the upgrade on each node
-
-!!! Note
-To help prevent data loss, before starting the upgrade process, ensure that your databases and configuration files are backed up.
-!!!
-
-Using the [preliminary order](#prepare-the-upgrade), perform the following steps on each node while connected via SSH:
-
-* **Confirm the current Postgres version**
-  * View versions from PGD:
-
-    `sudo -u postgres pgd nodes list --versions`
-  * Ensure that the expected major version is running.
-
-
-* **Verify that the target node isn't the write leader**
-  * Check whether the target node is the write leader for the group you're upgrading:
-
-    `sudo -u postgres pgd group <group_name> show --summary`
-  * If the target node is the current write leader for the group/subgroup you're upgrading, perform a [planned switchover](#perform-a-planned-switchover) to another node:
-
-    `sudo -u postgres pgd group <group_name> set-leader <new_leader_node>`
-
-
-* **Stop Postgres on the target node**
-  * Stop the Postgres service on the current node:
-
-    `sudo systemctl stop postgres`
-
-    The target node is no longer actively participating as a node in the cluster.
-
-
-* **Install PGD and utilities**
-  * Install PGD and its utilities compatible with the Postgres version you're upgrading to:
-
-    `sudo apt install edb-bdr5-pg<version> edb-bdr-utilities`
-
-* **Initialize the new Postgres instance**
-  * Create a directory to house the database files for the new version of PostgreSQL:
-
-    `sudo mkdir -p /opt/postgres/datanew`
-  * Ensure that the user postgres has ownership of the directory using `chown`.
-  * Initialize a new PostgreSQL database cluster in the directory you just created. This step involves using the `initdb` command provided by the newly installed version of PostgreSQL.
-    Include the `--data-checksums` flag to ensure the cluster uses data checksums.
-
-    `sudo -u postgres <path_to_postgres_bin>/initdb -D /opt/postgres/datanew --data-checksums`
-
-    Replace `<path_to_postgres_bin>` with the path to the bin directory of the newly installed PostgreSQL version.
-
-    You may need to run this command as the postgres user or another user with appropriate permissions.
-
-* **Migrate configuration to the new Postgres version**
-  * Locate the following configuration files in your current PostgreSQL data directory:
-    * `postgresql.conf` — The main configuration file containing settings related to the database system.
-    * `postgresql.auto.conf` — Contains settings set by PostgreSQL, such as those modified by the `ALTER SYSTEM` command.
-    * `pg_hba.conf` — Manages client authentication, specifying which users can connect to which databases from which hosts.
-    * The entire `conf.d` directory (if present) — Allows for organizing configuration settings into separate files for better manageability.
-  * Copy these files and the `conf.d` directory to the new data directory you created for the upgraded version of PostgreSQL.
-
-
-* **Verify the Postgres service is inactive**
-  * Before proceeding, it's important to ensure that no PostgreSQL processes are active for either the old or the new data directory. This verification step prevents any data corruption or conflicts during the upgrade process.
-
-    Use the `sudo systemctl status postgres` command to verify that Postgres was stopped. If it isn't stopped, run `sudo systemctl stop postgres` and verify again that it was stopped.
-
-
-* **Swap PGDATA directories for version upgrade**
-  * Rename `/opt/postgres/data` to `/opt/postgres/dataold` and `/opt/postgres/datanew` to `/opt/postgres/data`.
-
-    This step readies your system for the next crucial phase: running `pgd node upgrade` to finalize the PostgreSQL version transition.
-
-
-* **Verify upgrade feasibility**
-  * The `pgd node upgrade` tool offers a `--check` option designed to perform a preliminary scan of your current setup, identifying any potential issues that could hinder the upgrade process.
-
-    You need to run this check from an upgrade directory with ownership given to user postgres, such as `/home/upgrade/`, so that the upgrade log files created by `pgd node upgrade` can be stored.
-    To initiate the safety check, append the `--check` option to your `pgd node upgrade` command.
-
-    This operation simulates the upgrade process without making any changes, providing insights into any compatibility issues, deprecated features, or configuration adjustments required for a successful upgrade.
-  * Address any warnings or errors indicated by this check to ensure an uneventful transition to the new version.
-
-
-* **Execute the Postgres major version upgrade**
-  * Execute the upgrade process by running the `pgd node upgrade` command without the `--check` option.
-  * It's essential to monitor the command output for any errors or warnings that require attention.
-  * The time the upgrade process takes depends on the size of your database and the complexity of your setup.
-
-
-* **Update the Postgres service configuration**
-  * Update the service configuration to reflect the new PostgreSQL version by updating the version number in the `postgres.service` file:
-
-    `sudo sed -i -e 's/<old_version>/<new_version>/g' /etc/systemd/system/postgres.service`
-  * Refresh the system's service manager to apply these changes:
-
-    `sudo systemctl daemon-reload`
-
-
-* **Restart Postgres**
-  * Proceed to restart the PostgreSQL service:
-
-    `sudo systemctl start postgres`
-
-
-* **Validate the new Postgres version**
-  * Verify that your PostgreSQL instance is now upgraded:
-
-    `sudo -u postgres pgd nodes list --versions`
-
-
-* **Clean up post-upgrade**
-  * Run `vacuumdb` with the `ANALYZE` option immediately after the upgrade but before introducing a heavy production load. Running this command minimizes the immediate performance impact, preparing the database for more accurate testing.
-  * Remove the old version's data directory, `/opt/postgres/dataold`.
-
-
-The worked example that follows shows in detail how to upgrade the Postgres major version from 16 to 17 on a PGD 5 cluster deployed with TPA.
-
-## Worked example
-
-This worked example starts with a TPA-managed PGD cluster deployed using the [AWS quick start](/pgd/latest/quickstart/quick_start_aws/), which creates Debian OS nodes. The cluster has three nodes: kaboom, kaolin, and kaftan, all running Postgres 16.
-
-This example starts with the node named `kaboom`.
-
-!!! Note
-Some steps of this process involve running commands as the Postgres owner. We refer to this user as postgres throughout, when appropriate. If you're running EDB Postgres Advanced Server, substitute the postgres user with enterprisedb in all relevant commands.
-!!!
- -### Confirm the current Postgres version - -SSH into kaboom to confirm the major version of Postgres is expected: - -```bash -sudo -u postgres pgd nodes list --versions -``` - -The output will be similar to this for your cluster: - -``` -Node Name BDR Version Postgres Version ---------- ------------------------------ -------------------------------- -kaboom 5.7.0-dev (snapshot 8516bb3ab) 16.6 (Debian 16.6-1EDB.bullseye) -kaftan 5.7.0-dev (snapshot 8516bb3ab) 16.6 (Debian 16.6-1EDB.bullseye) -kaolin 5.7.0-dev (snapshot 8516bb3ab) 16.6 (Debian 16.6-1EDB.bullseye) - -``` - -Confirm that the Postgres version is the expected version. - -### Verify that the target node isn't the write leader - -The cluster must be available throughout the process (that is, a *rolling* upgrade). There must always be an available write leader to maintain continuous cluster availability. -So, if the target node is the current write leader, you must [perform a planned switchover](#perform-a-planned-switchover) of the [write leader](../terminology/#write-leader) node before upgrading it so that a write leader is always available. - -While connected via SSH to kaboom, see which node is the current write leader of the group you're upgrading using the `pgd group show --summary` command: - -```bash -sudo -u postgres pgd group dc1_subgroup show --summary -``` - -In this case, you can see that kaboom is the current write leader of the sole subgroup `dc1_subgroup`: - -``` -Group Property Value ------------------ ------------ -Group Name dc1_subgroup -Parent Group Name democluster -Group Type data -Write Leader kaboom -Commit Scope -``` - -So you must perform a planned switchover of the write leader of `dc1_subgroup` to another node in the cluster. - -#### Perform a planned switchover - -Change the write leader to kaftan so kaboom's Postgres instance can be stopped: - -```bash -sudo -u postgres pgd group dc1_subgroup set-leader kaftan -``` - -After the switchover is successful, this message appears: - -``` -Command executed successfully -``` - -Then it's safe to stop Postgres on the target node. -Of course, if kaftan is switched back to being the write leader when you come to upgrading it, you'll need to perform another planned switchover at that time. - -### Stop Postgres on the target node - -While connected via SSH to the target node (in this case, kaboom), stop Postgres on the node by running: - -```bash -sudo systemctl stop postgres -``` - -This command halts the server on kaboom. Your cluster continues running using the other two nodes. - -### Install PGD and utilities - -Next, install the new version of Postgres (PG16) and the upgrade tool: - -```bash -sudo apt install edb-bdr5-pg17 edb-bdr-utilities -``` - -### Initialize the new Postgres instance - -Make a new data directory for the upgraded Postgres, and give the postgres user ownership of the directory: - -```bash -sudo mkdir /opt/postgres/datanew -sudo chown -R postgres:postgres /opt/postgres/datanew -``` - -Then, initialize Postgres 17 in the new directory: - -```bash -sudo -u postgres /usr/lib/postgresql/17/bin/initdb \ - -D /opt/postgres/datanew \ - -E UTF8 \ - --lc-collate=en_US.UTF-8 \ - --lc-ctype=en_US.UTF-8 \ - --data-checksums -``` - -This command populates the PG17 data directory for configuration, `/opt/postgres/datanew`. - -### Migrate configuration to the new Postgres version - -The next step copies the configuration files from the old Postgres version (PG16) to the new Postgres version's (PG17). 
-
-Copy over the `postgresql.conf`, `postgresql.auto.conf`, and `pg_hba.conf` files and the whole `conf.d` directory:
-
-```bash
-sudo -u postgres cp /opt/postgres/data/postgresql.conf /opt/postgres/datanew/
-sudo -u postgres cp /opt/postgres/data/postgresql.auto.conf /opt/postgres/datanew/
-sudo -u postgres cp /opt/postgres/data/pg_hba.conf /opt/postgres/datanew/
-sudo -u postgres cp -r /opt/postgres/data/conf.d/ /opt/postgres/datanew/
-```
-
-### Verify the Postgres service is inactive
-
-You [previously stopped the Postgres service on the target node](#stop-postgres-on-the-target-node), kaboom. To verify that it's stopped, run the `systemctl status postgres` command:
-
-```bash
-sudo systemctl status postgres
-```
-
-The output of the `status` command shows that the Postgres service has stopped running:
-
-```
-● postgres.service - Postgres 16 (TPA)
-     Loaded: loaded (/etc/systemd/system/postgres.service; enabled; vendor preset: enabled)
-     Active: inactive (dead) since Tue 2025-02-11 18:14:25 UTC; 7min ago
-   Main PID: 20370 (code=exited, status=0/SUCCESS)
-        CPU: 11.375s
-
-Feb 11 18:14:25 kaboom postgres[21066]: [22-1] 2025-02-11 18:14:25 UTC [pgdproxy@10.33.217.238(194>
-Feb 11 18:14:25 kaboom postgres[20372]: [24-1] 2025-02-11 18:14:25 UTC [@//:20372]: [15] LOG: che>
-Feb 11 18:14:25 kaboom postgres[21067]: [22-1] 2025-02-11 18:14:25 UTC [pgdproxy@10.33.217.237(426>
-Feb 11 18:14:25 kaboom postgres[21068]: [22-1] 2025-02-11 18:14:25 UTC [pgdproxy@10.33.217.237(426>
-Feb 11 18:14:25 kaboom postgres[20370]: [22-1] 2025-02-11 18:14:25 UTC [@//:20370]: [23] LOG: dat>
-Feb 11 18:14:25 kaboom systemd[1]: postgres.service: Succeeded.
-Feb 11 18:14:25 kaboom systemd[1]: Stopped Postgres 16 (TPA).
-```
-
-### Swap PGDATA directories for version upgrade
-
-Next, swap the PG16 and PG17 data directories:
-
-```bash
-sudo mv /opt/postgres/data /opt/postgres/dataold
-sudo mv /opt/postgres/datanew /opt/postgres/data
-```
-
-!!! Important
-If something goes wrong at some point during the procedure, you may want to roll back/revert a node to the older major version. To do this, rename the directories again so that the current data directory, `/opt/postgres/data`, becomes `/opt/postgres/datafailed` and the old data directory, `/opt/postgres/dataold`, becomes the current data directory:
-
-```bash
-sudo mv /opt/postgres/data /opt/postgres/datafailed
-sudo mv /opt/postgres/dataold /opt/postgres/data
-```
-
-This rolls back/reverts the node to the previous major version of Postgres.
-!!!
-
-### Verify upgrade feasibility
-
-The `pgd node upgrade` tool has a `--check` option, which performs a dry run of some of the upgrade process. You can use this option to ensure the upgrade goes smoothly.
-
-However, first, you need a directory for the files created by `pgd node upgrade`. For this example, create an `upgrade` directory in the `/home` directory. Then give ownership of the directory to the user postgres.
- -```bash -sudo mkdir /home/upgrade -sudo chown postgres:postgres /home/upgrade -``` - -Next, navigate to `/home/upgrade` and run: - -```bash -sudo -u postgres pgd node kaboom upgrade \ - --old-bindir /usr/lib/postgresql/16/bin/ \ - --new-bindir /usr/lib/postgresql/17/bin/ \ - --old-datadir /opt/postgres/dataold/ \ - --new-datadir /opt/postgres/data/ \ - --database bdrdb \ - --username postgres \ - --check -``` - -The following is the output: - -``` -Performing BDR Postgres Checks ------------------------------- -Getting old PG instance shared directory ok -Getting new PG instance shared directory ok -Collecting pre-upgrade new PG instance control data ok -Checking new cluster state is shutdown ok -Checking BDR extension versions ok -Checking Postgres versions ok - -Finished BDR pre-upgrade steps, calling pg_upgrade --------------------------------------------------- - -Performing Consistency Checks ------------------------------ -Checking cluster versions ok -Checking database user is the install user ok -Checking database connection settings ok -Checking for prepared transactions ok -Checking for contrib/isn with bigint-passing mismatch ok -Checking data type usage ok -Checking for presence of required libraries ok -Checking database user is the install user ok -Checking for prepared transactions ok -Checking for new cluster tablespace directories ok - -*Clusters are compatible* -``` - -!!! Note -If you didn't initialize Postgres 17 with checksums using the `--data-checksums` option but did initialize checksums with your Postgres 16 instance, an error tells you about the incompatibility: - -```bash -old cluster uses data checksums but the new one does not -``` -!!! - -### Execute the Postgres major version upgrade - -You're ready to run the upgrade. 
On the target node, run: - -```bash -sudo -u postgres pgd node kaboom upgrade \ - --old-bindir /usr/lib/postgresql/16/bin/ \ - --new-bindir /usr/lib/postgresql/17/bin/ \ - --old-datadir /opt/postgres/dataold/ \ - --new-datadir /opt/postgres/data/ \ - --database bdrdb \ - --username postgres -``` - -The following is the expected output: - -``` -Performing BDR Postgres Checks ------------------------------- -Getting old PG instance shared directory ok -Getting new PG instance shared directory ok -Collecting pre-upgrade new PG instance control data ok -Checking new cluster state is shutdown ok -Checking BDR extension versions ok -Checking Postgres versions ok - -Collecting Pre-Upgrade BDR Information --------------------------------------- -Collecting pre-upgrade old PG instance control data ok -Starting old PG instance ok -Connecting to the old PG instance ok -Checking for BDR extension ok -Checking BDR node name ok -Terminating connections to database ok -Waiting for all slots to be flushed ok -Disconnecting from old cluster PG instance ok -Stopping old PG instance ok -Starting old PG instance with BDR disabled ok -Connecting to the old PG instance ok -Collecting replication origins ok -Collecting replication slots ok -Disconnecting from old cluster PG instance ok -Stopping old PG instance ok - -Finished BDR pre-upgrade steps, calling pg_upgrade --------------------------------------------------- - -Performing Consistency Checks ------------------------------ -Checking cluster versions ok -Checking database user is the install user ok -Checking database connection settings ok -Checking for prepared transactions ok -Checking for contrib/isn with bigint-passing mismatch ok -Checking data type usage ok -Creating dump of global objects ok -Creating dump of database schemas - ok -Checking for presence of required libraries ok -Checking database user is the install user ok -Checking for prepared transactions ok -Checking for new cluster tablespace directories ok - -If pg_upgrade fails after this point, you must re-initdb the -new cluster before continuing. - -Performing Upgrade ------------------- -Setting locale and encoding for new cluster ok -Analyzing all rows in the new cluster ok -Freezing all rows in the new cluster ok -Deleting files from new pg_xact ok -Copying old pg_xact to new server ok -Setting oldest XID for new cluster ok -Setting next transaction ID and epoch for new cluster ok -Deleting files from new pg_multixact/offsets ok -Copying old pg_multixact/offsets to new server ok -Deleting files from new pg_multixact/members ok -Copying old pg_multixact/members to new server ok -Setting next multixact ID and offset for new cluster ok -Resetting WAL archives ok -Setting frozenxid and minmxid counters in new cluster ok -Restoring global objects in the new cluster ok -Restoring database schemas in the new cluster - ok -Copying user relation files - ok -Setting next OID for new cluster ok -Sync data directory to disk ok -Creating script to delete old cluster ok -Checking for extension updates notice - -Your installation contains extensions that should be updated -with the ALTER EXTENSION command. The file - update_extensions.sql -when executed by psql by the database superuser will update -these extensions. - -Upgrade Complete ----------------- -Optimizer statistics are not transferred by pg_upgrade. 
-Once you start the new server, consider running:
-    /usr/lib/postgresql/17/bin/vacuumdb -U postgres --all --analyze-in-stages
-Running this script will delete the old cluster's data files:
-    ./delete_old_cluster.sh
-
-pg_upgrade complete, performing BDR post-upgrade steps
-------------------------------------------------------
-Collecting post-upgrade old PG instance control data          ok
-Collecting post-upgrade new PG instance control data          ok
-Checking LSN of the new PG instance                           ok
-Starting new PG instance with BDR disabled                    ok
-Connecting to the new PG instance                             ok
-Creating replication origin bdr_bdrdb_democluster11_kaolin    ok
-Advancing replication origin bdr_bdrdb_democluster11_kaol...  ok
-Creating replication origin pgl_writer_origin_2_1             ok
-Advancing replication origin pgl_writer_origin_2_1 to 0/2...  ok
-Creating replication origin bdr_bdrdb_democluster11_kaftan    ok
-Advancing replication origin bdr_bdrdb_democluster11_kaft...  ok
-Creating replication origin pgl_writer_origin_4_1             ok
-Advancing replication origin pgl_writer_origin_4_1 to 0/2...  ok
-Creating replication slot bdr_bdrdb_democluster11_kaolin      ok
-Creating replication slot bdr_bdrdb_democluster11_kaftan      ok
-Creating replication slot bdr_bdrdb_democluster11             ok
-Stopping new PG instance
-```
-
-### Update the Postgres service configuration
-
-The Postgres service on the system is configured to start the old version of Postgres (PG16). You need to modify the `postgres.service` file to start the new version (PG17).
-
-You can do this using `sed` to replace the old version number `16` with `17` throughout the file.
-
-```bash
-sudo sed -i -e 's/16/17/g' /etc/systemd/system/postgres.service
-```
-
-After you've changed the version number, you can tell the systemd daemon to reload the configuration. On the target node, run:
-
-```bash
-sudo systemctl daemon-reload
-```
-
-### Restart Postgres
-
-Start the modified Postgres service:
-
-```bash
-sudo systemctl start postgres
-```
-
-### Validate the new Postgres version
-
-Repeating the first step, check the version of Postgres to confirm that you upgraded kaboom correctly. While still on kaboom, run:
-
-```bash
-sudo -u postgres pgd nodes list --versions
-```
-
-Use the output to confirm that kaboom is running the upgraded Postgres version:
-
-```
-Node Name BDR Version Postgres Version
----------- ----------- --------------------------------
-kaboom    5.7.0       17.3 (Debian 17.3-1EDB.bullseye)
-kaftan    5.7.0       16.7 (Debian 16.7-1EDB.bullseye)
-kaolin    5.7.0       16.7 (Debian 16.7-1EDB.bullseye)
-```
-
-Here kaboom has been upgraded to major version 17.
-
-### Clean up post-upgrade
-
-As a best practice, run a vacuum over the database at this point. When the upgrade ran, you may have noticed the post-upgrade report included:
-
-```
-Once you start the new server, consider running:
-    /usr/lib/postgresql/17/bin/vacuumdb --all --analyze-in-stages
-```
-
-You can run the vacuum now. On the target node, run:
-
-```bash
-sudo -u postgres /usr/lib/postgresql/17/bin/vacuumdb --all --analyze-in-stages
-```
-
-If you're sure you don't need to revert this node, you can also clean up the old data directory, `dataold`:
-
-```bash
-sudo rm -r /opt/postgres/dataold
-```
-
-Upgrading the target node is now complete.
-
-### Next steps
-
-After completing the upgrade on kaboom, run the same steps on kaolin and kaftan.
-
-If you followed along with this example and kaftan is the write leader, to ensure availability, you must [perform a planned switchover](#perform-a-planned-switchover) to another node that was already upgraded before running the upgrade steps on kaftan.
-
-#### Check Postgres versions across the cluster
-
-After completing the upgrade on all nodes, while connected to one of the nodes, you can check your versions again:
-
-```bash
-sudo -u postgres pgd nodes list --versions
-```
-
-The output will be similar to the following:
-
-```
-Node Name  BDR Version  Postgres Version
----------  -----------  --------------------------------
-kaboom     5.7.0        17.3 (Debian 17.3-1EDB.bullseye)
-kaftan     5.7.0        17.3 (Debian 17.3-1EDB.bullseye)
-kaolin     5.7.0        17.3 (Debian 17.3-1EDB.bullseye)
-```
-
-This output shows that all the nodes were successfully upgraded to Postgres 17.
diff --git a/product_docs/docs/pgd/6/compatibility.mdx b/product_docs/docs/pgd/6/compatibility.mdx
new file mode 100644
index 00000000000..1528c63ad52
--- /dev/null
+++ b/product_docs/docs/pgd/6/compatibility.mdx
@@ -0,0 +1,41 @@
+---
+title: PGD compatibility
+navTitle: Compatibility
+description: Compatibility of EDB Postgres Distributed with different versions of PostgreSQL
+deepToC: true
+---
+
+## PGD compatibility with PostgreSQL versions
+
+The following table shows the major versions of PostgreSQL that EDB Postgres Distributed (PGD) is compatible with.
+
+| PGD 6       | Postgres Version |
+|-------------|------------------|
+| [6](/pgd/6) | 17.5.0+          |
+| [6](/pgd/6) | 16.9.0+          |
+| [6](/pgd/6) | 15.13.0+         |
+| [6](/pgd/6) | 14.18.0+         |
+
+EDB recommends that you use the latest minor version of any Postgres major version with a supported PGD.
+
+## PGD compatibility with operating systems and architectures
+
+The following tables show the versions of EDB Postgres Distributed and their compatibility with various operating systems and architectures.
+
+### Linux
+
+| Operating System                   | x86_64 (amd64) | ppc64le | arm64/aarch64 |
+|------------------------------------|----------------|---------|---------------|
+| RHEL 8                             | Yes            | Yes     |               |
+| RHEL 9                             | Yes            | Yes     | Yes           |
+| Oracle Linux 8                     | Yes            |         |               |
+| Oracle Linux 9                     | Yes            |         |               |
+| Rocky Linux/AlmaLinux              | Yes            |         |               |
+| SUSE Linux Enterprise Server 15SP6 | Yes            | Yes     |               |
+| Ubuntu 22.04                       | Yes            |         |               |
+| Ubuntu 24.04                       | Yes            |         |               |
+| Debian 12                          | Yes            |         | Yes           |
+
+!!! Note
+See [PGD 5 Compatibility](/pgd/5.8/compatibility) for previous versions of PGD.
+!!!
diff --git a/product_docs/docs/pgd/6/concepts/advanced-durability.mdx b/product_docs/docs/pgd/6/concepts/advanced-durability.mdx
new file mode 100644
index 00000000000..d158bdd67c3
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/advanced-durability.mdx
@@ -0,0 +1,4 @@
+---
+title: Advanced Durability
+---
+
diff --git a/product_docs/docs/pgd/6/concepts/commit-scopes.mdx b/product_docs/docs/pgd/6/concepts/commit-scopes.mdx
new file mode 100644
index 00000000000..97ae4221773
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/commit-scopes.mdx
@@ -0,0 +1,4 @@
+---
+title: Commit Scopes
+---
+
diff --git a/product_docs/docs/pgd/6/concepts/conflict-management.mdx b/product_docs/docs/pgd/6/concepts/conflict-management.mdx
new file mode 100644
index 00000000000..7108d506f74
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/conflict-management.mdx
@@ -0,0 +1,9 @@
+---
+title: Conflict Management
+---
+
+With PGD Expanded, the presence of multiple writers leads to the possibility, or even the likelihood, of conflicts. Changes to the same rows from different nodes can arrive on a node at any time.
+PGD Expanded provides a conflict management system that allows you to define how conflicts are handled in your distributed database environment.
+
+Read more about conflict management in the [Conflict Management reference](/pgd/latest/reference/conflict-management/) documentation.
+
diff --git a/product_docs/docs/pgd/6/concepts/connection-management.mdx b/product_docs/docs/pgd/6/concepts/connection-management.mdx
new file mode 100644
index 00000000000..607283d40ca
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/connection-management.mdx
@@ -0,0 +1,11 @@
+---
+title: Connection Management
+---
+
+EDB Postgres Distributed (PGD) provides a connection management system that ensures clients connect to the appropriate nodes in the distributed cluster based on their needs.
+
+This system is designed to ensure that clients can access the data they need while maintaining the performance and availability of the cluster.
+Unlike proxy systems, this connection management system is built into the database instance itself, allowing for more efficient and reliable connections.
+
+Read more about the [Connection Management feature in PGD](/pgd/latest/reference/connection-manager/) for full details of the implementation.
+
diff --git a/product_docs/docs/pgd/6/concepts/durability.mdx b/product_docs/docs/pgd/6/concepts/durability.mdx
new file mode 100644
index 00000000000..26367c5e1c8
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/durability.mdx
@@ -0,0 +1,8 @@
+---
+title: Durability
+---
+
+How does EDB Postgres Distributed (PGD) ensure durability of transactions?
+
+Durability can be defined as the guarantee that once a transaction has been committed, it will remain so, even in the event of a system failure.
+In EDB Postgres Distributed (PGD), durability is achieved through a combination of write-ahead logging (WAL) and replication, together with the commit scopes available in the cluster and the configuration of its nodes.
+
diff --git a/product_docs/docs/pgd/6/concepts/expanded-commit-scopes.mdx b/product_docs/docs/pgd/6/concepts/expanded-commit-scopes.mdx
new file mode 100644
index 00000000000..e7a83eacfe4
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/expanded-commit-scopes.mdx
@@ -0,0 +1,6 @@
+---
+title: Expanded Commit Scopes
+---
+
+PGD Expanded allows you to define commit scopes that are more granular or more customized than the standard pre-defined commit scopes available in PGD. This feature is particularly useful for applications that require specific commit behaviors or need to manage complex transaction scenarios.
+
diff --git a/product_docs/docs/pgd/6/concepts/geo-distributed-clusters.mdx b/product_docs/docs/pgd/6/concepts/geo-distributed-clusters.mdx
new file mode 100644
index 00000000000..6b4f9b9e49a
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/geo-distributed-clusters.mdx
@@ -0,0 +1,6 @@
+---
+title: Geo-Distributed Clusters
+---
+
+Geo-distributed clusters are a powerful PGD feature that allows you to create a distributed database system spanning multiple geographic locations. This setup is particularly useful for applications that require high availability, low latency, and disaster recovery across different regions. As this feature needs multiple write nodes and multiple distributed groups, it is only available in PGD Expanded.
+
diff --git a/product_docs/docs/pgd/6/concepts/index.mdx b/product_docs/docs/pgd/6/concepts/index.mdx
new file mode 100644
index 00000000000..31116bc4d03
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/index.mdx
@@ -0,0 +1,34 @@
+---
+title: PGD concepts explained
+navTitle: PGD concepts explained
+description: The concepts behind EDB Postgres Distributed (PGD) 6.0 and how they work in practice.
+navigation:
+- replication
+- nodes-and-groups
+- connection-management
+- locking
+- durability
+- commit-scopes
+- lag-control
+- expanded-commit-scopes
+- geo-distributed-clusters
+- advanced-durability
+- conflict-management
+---
+
+## PGD concepts
+
+* [Replication](replication)
+* [Nodes and groups](nodes-and-groups)
+* [Connection management](connection-management)
+* [Locking](locking)
+* [Durability](durability)
+* [Commit scopes](commit-scopes)
+* [Lag Control](lag-control)
+
+## PGD Expanded concepts
+
+* [Commit scopes for PGD Expanded](expanded-commit-scopes)
+* [Geo-distributed clusters](geo-distributed-clusters)
+* [Advanced durability](advanced-durability)
+* [Conflict management](conflict-management)
diff --git a/product_docs/docs/pgd/6/concepts/lag-control.mdx b/product_docs/docs/pgd/6/concepts/lag-control.mdx
new file mode 100644
index 00000000000..0d8e08be5f7
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/lag-control.mdx
@@ -0,0 +1,11 @@
+---
+title: Lag Control
+---
+
+When a node is lagging behind the rest of the cluster, it can cause issues with data consistency and availability. Lag control is a mechanism to manage this situation by ensuring that the lagging node does not disrupt the overall performance of the cluster.
+
+## Lag Control in PGD
+
+When lag is detected in PGD, the Lag Control feature is activated, managing the lagging node so that it doesn't disrupt the rest of the cluster.
+It does this by transparently and temporarily slowing client connections, introducing a commit delay to clients. This allows the lagging node to catch up with the rest of the cluster without impacting the performance of the other nodes.
+
+Read more about the [Lag Control feature in PGD](/pgd/latest/reference/commit-scopes/lag-control/) for full details.
diff --git a/product_docs/docs/pgd/6/concepts/locking.mdx b/product_docs/docs/pgd/6/concepts/locking.mdx
new file mode 100644
index 00000000000..381f53bdc1f
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/locking.mdx
@@ -0,0 +1,26 @@
+---
+title: Locking
+---
+
+To prevent conflicts between various operations in the cluster, PGD uses a distributed locking mechanism to ensure that only one node can perform a specific operation at a time.
+
+This is particularly important in a distributed environment where multiple nodes may attempt to modify the same data concurrently.
+As PGD Essential is a single-write-node cluster, it doesn't have to deal with distributed locking in the same way, as there is only one node that can perform write operations at any time.
+PGD Expanded, however, has multiple write nodes, so it always uses distributed locking to ensure integrity.
+
+## Kinds of Locks
+
+PGD uses several kinds of locks to manage concurrent access to data and resources in the cluster:
+
+### DDL locking
+
+DDL (Data Definition Language) locks are used to manage access to database objects such as tables, indexes, and schemas. When a DDL operation is performed, a lock is acquired on the object being modified to prevent other operations from interfering with the change. This ensures that the structure of the database remains consistent and prevents conflicts between concurrent DDL operations. Read more about DDL locking in the [DDL Locking reference](/pgd/latest/reference/ddl/ddl-locking/) documentation.
+
+### DML locking
+
+DML locking is closely related to DDL locking. It adds an extra layer of protection to a DDL operation being replicated by also halting any DML operations that would conflict with it. Again, this is only needed in a multi-write-node cluster and isn't used in PGD Essential.
+
+### Which locks are used when?
+
+The locks used in PGD depend on the type of operation being performed and the configuration of the cluster. In general, DDL locks are used for schema changes, while DML locks are used for data modifications. A full list of the locks used in PGD can be found in the [DDL command handling matrix](/pgd/latest/reference/ddl/ddl-command-handling) documentation.
+
diff --git a/product_docs/docs/pgd/6/concepts/nodes-and-groups.mdx b/product_docs/docs/pgd/6/concepts/nodes-and-groups.mdx
new file mode 100644
index 00000000000..3b9cde4f228
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/nodes-and-groups.mdx
@@ -0,0 +1,51 @@
+---
+title: PGD Nodes and Groups
+navTitle: Nodes and Groups
+description: "Understanding nodes and groups in EDB Postgres Distributed (PGD)"
+---
+
+A PGD cluster is made up of one or more nodes, with each node being an instance of Postgres.
+
+Each node in the cluster is a full Postgres instance with the BDR extension installed and configured.
+Nodes can have different roles and responsibilities within the cluster.
+Nodes are then organized into groups, which are used to manage the replication of data between the nodes.
+There's also the "top level" group, which is the cluster itself; every node in the cluster is also a member of this group, and it is the parent of all other groups in the cluster.
+
+## Data Nodes
+
+The first kind of node to know about is the data node.
+This is the basic building block of PGD clusters.
+It is configured to replicate data to and from the other data nodes in the cluster (not just its group, the whole cluster).
+By design, all nodes in a PGD cluster replicate to all other nodes in the cluster.
+
+## Groups
+
+Groups are used to localize how the nodes **manage** themselves.
+Each group selects its own RAFT leader from the group members.
+If the group is a data group that is made up of data nodes, it also uses RAFT to elect a write leader node for that group.
+The write leader node is sent all the read/write client connections for that group and handles all write operations for that group, assuming that the client connections come in through the connection manager of a node in that group.
+
+!!! RAFT
+    RAFT is a consensus algorithm that is used to ensure that all nodes in a group agree on the state of the group.
+    It allows a group of nodes to elect a leader node, and to ensure that all nodes in the group are in sync with each other over decisions.
+    The most important thing to know about RAFT is that it needs an odd number of nodes in any group to function correctly.
+    That's because RAFT uses a majority vote between the nodes to reach agreement.
+
+## Witness Nodes
+
+Witness nodes are like data nodes, but they don't replicate or store any data.
+Their role is to provide a deadlock-breaking vote when a group of data nodes has lost enough members that it can't complete a majority vote on its own.
+
+Witness nodes don't participate in the normal data replication process, but they can be used to help resolve conflicts and ensure that the cluster remains available even in the face of network partitions or other failures.
+
+## Subscriber-only Nodes
+
+Subscriber-only nodes are used to provide a read-only replica of the data in the cluster. In PGD 6, you can configure a subscriber-only node as a member of a data group or a member of a subscriber-only group. The latter has no write leader node, and all nodes in the group are read-only and allow for some optional optimizations in the replication process. The former allows for a read-only replica of the data in the group, but it doesn't allow for any optimizations in the replication process.
+
+A subscriber-only node can be used to offload read queries from the write leader node, which can help to improve performance and reduce its load. It can also be used to provide a read-only replica of the data in the cluster for reporting or analytics purposes. You can connect to the read-only nodes using the connection manager's read-only connection string, which directs connections to the pool of read-only nodes in the cluster.
+
+## Logical Standby Nodes
+
+Logical standby nodes are used to provide a read-only replica of the data in the cluster. They are similar to subscriber-only nodes, but they are designed to be more flexible and can be used in a wider range of scenarios.
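+
+Whatever mix of node kinds and groups your cluster uses, you can inspect them with the PGD CLI. A minimal sketch (assuming you run the commands on a cluster node as the Postgres user; `pgd nodes list` appears elsewhere in these docs, and `pgd groups list` is assumed here to be the matching listing for groups):
+
+```bash
+# List the nodes in the cluster, including the kind of each node
+pgd nodes list
+
+# List the groups in the cluster, including the top-level group
+pgd groups list
+```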
+
+
diff --git a/product_docs/docs/pgd/6/concepts/replication.mdx b/product_docs/docs/pgd/6/concepts/replication.mdx
new file mode 100644
index 00000000000..faa43365c8a
--- /dev/null
+++ b/product_docs/docs/pgd/6/concepts/replication.mdx
@@ -0,0 +1,38 @@
+---
+title: Replication
+navTitle: Replication
+---
+
+At the heart of EDB Postgres Distributed (PGD) is the replication system, BDR. BDR stands for Bi-Directional Replication, and it is a multi-master replication system that allows you to create a distributed Postgres cluster with multiple write nodes. This means that you can write to any node in the cluster, and the changes will be replicated to all other nodes in the cluster.
+
+Just because you can write to any node in the cluster doesn't mean that you should. In most cases, you will want to write to a single node in the cluster, which is known as the write leader node. This is the node that is responsible for coordinating the replication of changes to all other nodes in the cluster. In fact, in PGD Essential, you can only write to the write leader node, and all other nodes in the cluster are read-only.
+
+There are, though, some cases where you may want to write to multiple nodes in the cluster, such as when you are using a geo-distributed cluster with multiple write nodes in different locations. In these cases, you can use the BDR replication system to replicate changes between the write nodes. These and other scenarios are what PGD Expanded is designed for, and it activates additional features and functionality to support them.
+
+## How replication works
+
+PGD uses logical replication to replicate changes between nodes in the cluster. This means that changes are replicated at the logical level, rather than at the physical level. This allows for more flexibility in how changes are replicated, and it also allows for more efficient replication of changes.
+
+When a change is made to a table in the cluster, it is first written to the write leader node's write-ahead log (WAL). The WAL is a log of all changes made to the database, and it is used to ensure durability and consistency of the database. Once the change is written to the WAL, it is then replicated to all other nodes in the cluster.
+
+The replication process is asynchronous by default, which means that changes are not immediately replicated to all nodes in the cluster. Instead, changes are sent to the other nodes in the cluster in batches, which allows for more efficient replication and reduces the load on the network.
+
+Once the changes are received by the other nodes in the cluster, they are applied to the local copy of the database. This process is known as replaying the WAL, and it ensures that all nodes in the cluster have a consistent view of the data.
+
+## Commit scopes and replication
+
+Asynchronous replication is the default mode of replication, but not the only one. PGD allows for definable replication configuration through what are called commit scopes. A commit scope can be applied to a transaction or to all transactions in a group, and it defines how changes are replicated to other nodes in the cluster. This allows you to control how the replication process works, and it can be used to optimize performance, ensure that changes are replicated in a specific way, or handle adverse network and server conditions gracefully.
+
+- PGD Expanded has fully definable commit scopes, which allow you to create custom replication configurations for your cluster.
Read about the [commit scopes in PGD Expanded](/pgd/latest/reference/commit-scopes/) for full details.
+- PGD Essential has four pre-defined commit scopes that you can use to control how changes are replicated. Read about the [commit scopes in PGD Essential](/pgd/latest/essential-how-to/durability/) for full details.
+
+## What is replicated?
+
+In PGD, the following types of changes are replicated:
+
+ - **Data changes**: Inserts, updates, and deletes to tables are replicated to all nodes in the cluster. This is called DML (Data Manipulation Language) replication.
+ - **Schema changes**: Changes to the structure of the database, such as creating or dropping tables, are also replicated to all nodes in the cluster. This is called DDL (Data Definition Language) replication. Not all DDL commands are replicated, though. For details of which commands are and aren't replicated, see the [DDL command handling matrix](/pgd/latest/reference/ddl/ddl-command-handling).
+ - **Configuration changes**: Changes to the configuration of the database, such as changing the replication settings, are also replicated to all nodes in the cluster.
+
+Currently, PGD only replicates one Postgres database per cluster. This means that if you have multiple databases in your Postgres instance, only the database that is configured for replication will be replicated to the other nodes in the cluster. This is the same for both PGD Essential and PGD Expanded.
+
diff --git a/product_docs/docs/pgd/6/essential-how-to/architectures/index.mdx b/product_docs/docs/pgd/6/essential-how-to/architectures/index.mdx
new file mode 100644
index 00000000000..8e66d9bebe7
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/architectures/index.mdx
@@ -0,0 +1,30 @@
+---
+title: PGD Essential architectures
+navTitle: Architectures
+navigation:
+- standard
+- near-far
+description: PGD Essential architectures for high availability and disaster recovery and how to implement them.
+---
+
+## Choosing an architecture
+
+PGD Essential supports two architectures, standard and near/far, covering the two major use cases for replication: high availability and disaster recovery. The architecture you choose depends on your use case.
+
+## Standard architecture - Ideal for a highly available single location
+
+The standard, or one-location, architecture is designed for a single location that needs to be highly available. Built around three data nodes, the Essential standard architecture ensures that data is replicated across all three nodes and that, in the event of a failure, the system can continue to operate without data loss.
+
+Learn more about the [Standard architecture](standard).
+
+## Near/far architecture - Ideal for disaster recovery
+
+The near/far architecture is designed for a primary location that needs to be reasonably highly available, combined with the ability to recover from a disaster. It does this by having a two-data-node cluster in the primary location and a single data node in a secondary location.
+
+Learn more about the [Near/far architecture](near-far).
+
+!!!Note For multi-region deployments
+    For multi-region deployments, geo-distributed architectures are available in [PGD Expanded](/pgd/latest/expanded-how-to/architectures/). These architectures are designed for use cases that require data to be distributed across multiple regions or data centers. They provide advanced features such as conflict resolution, data distribution, and support for large-scale deployments.
+    For more information on PGD Expanded, see the [Expanded how-to](/pgd/latest/expanded-how-to).
diff --git a/product_docs/docs/pgd/6/essential-how-to/architectures/near-far/index.mdx b/product_docs/docs/pgd/6/essential-how-to/architectures/near-far/index.mdx
new file mode 100644
index 00000000000..6a74df6073b
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/architectures/near-far/index.mdx
@@ -0,0 +1,23 @@
+---
+title: Near/far architecture
+navTitle: Near/far
+navigation:
+- manually-deploying-near-far
+---
+
+In the near/far architecture, there are two data nodes in the primary location and one data node in a secondary location. The primary location is where the majority of the data is stored and where most of the client connections are made. The secondary location is used for disaster recovery and isn't used for client connections by default.
+
+The data nodes are all configured in a multi-master replication configuration, just like the standard architecture. The difference is that the node at the secondary location is fenced off from the other nodes in the cluster and doesn't receive client connections by default. In this configuration, the secondary location node has a complete replica of the data in the primary location.
+
+Using a PGD commit scope, the data nodes in the primary location are configured to synchronously replicate data to the other node in the primary location and to the node in the secondary location. This ensures that the data is replicated to all nodes before it's committed on the primary location. If a node goes down, the commit scope rule detects the situation and degrades the replication to asynchronous replication. This behavior allows the system to continue to operate.
+
+In the event of a partial failure at the primary location, the system switches to the other data node, which also has a complete replica of the data, and continues to operate. It also continues replication to the secondary location. When the failed node at the primary location comes back, it rejoins and begins replicating data from the node that's currently primary.
+
+In the event of a complete failure in the primary location, the secondary location's database has a complete replica of the data. Depending on the failure, options for recovery include restoring the primary location from the secondary location or restoring the primary location from a backup of the secondary location. The secondary location can be configured to accept client connections, but this isn't the default configuration and requires some additional reconfiguration.
+
+## Synchronous replication in near/far architecture
+
+For best results, configure the near/far architecture with synchronous replication.
+This ensures that the data is replicated to the secondary location before it's committed on the primary location.
+
+See [manually deploying a near/far architecture](manually-deploying-near-far) for more information on how to configure the near/far architecture with synchronous replication.
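+
+As a sketch of what that configuration can look like, you can set one of PGD Essential's predefined commit scopes as the default for the primary-location group using `bdr.alter_node_group_option()`. The group name `active` is an assumption taken from the manual deployment example:
+
+```sql
+-- Set the default commit scope for the "active" group.
+-- "adaptive protect" commits synchronously through the majority of the
+-- origin group and degrades to asynchronous commit if confirmation
+-- times out, matching the degrade behavior described above.
+SELECT bdr.alter_node_group_option(
+         node_group_name:='active',
+         config_key:='default_commit_scope',
+         config_value:='adaptive protect');
+```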
diff --git a/product_docs/docs/pgd/6/essential-how-to/architectures/near-far/manually-deploying-near-far.mdx b/product_docs/docs/pgd/6/essential-how-to/architectures/near-far/manually-deploying-near-far.mdx
new file mode 100644
index 00000000000..436e4700ec4
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/architectures/near-far/manually-deploying-near-far.mdx
@@ -0,0 +1,44 @@
+---
+title: "Manually Deploying PGD Essential near-far architecture"
+navTitle: Manual near-far deployments
+description: How to manually deploy the PGD Essential near-far architecture.
+---
+
+The following instructions describe how to manually deploy the PGD Essential near-far architecture. This architecture is designed for a primary location that needs to be reasonably highly available, combined with the ability to recover from a disaster. It does this by having a two-data-node cluster in the primary location and a single data node in a secondary location.
+
+These instructions use the pgd command line tool to create the cluster and configure the nodes. They assume that you've already installed PGD Essential and have access to the PGD CLI.
+
+The primary location is referred to as the `active` location and the secondary location as the `dr` location.
+
+## PGD configuration
+
+The primary location is configured with two data nodes, in their own group "active". This location is where the majority of the client connections will be made.
+
+The secondary location is configured with one data node, in its own group "dr".
+
+They are all members of the same cluster.
+
+Once the cluster is created with the PGD CLI, you need to configure the routing and fencing of the nodes.
+
+First, disable the routing on both the "active" and "dr" groups:
+
+```shell
+pgd group dr set-option enable_routing off --dsn "host=localhost port=5432 dbname=pgddb user=pgdadmin"
+pgd group active set-option enable_routing off --dsn "host=localhost port=5432 dbname=pgddb user=pgdadmin"
+```
+
+Then, enable the routing on the "pgd" top-level group:
+
+```shell
+pgd group pgd set-option enable_routing on --dsn "host=localhost port=5432 dbname=pgddb user=pgdadmin"
+```
+
+Finally, enable the fencing on the "dr" group:
+
+```shell
+pgd group dr set-option enable_fencing on --dsn "host=localhost port=5432 dbname=pgddb user=pgdadmin"
+```
+
+This approach ensures that the "dr" group is fenced off from the other nodes in the cluster and doesn't receive client connections by default.
+The "active" group continues to operate normally and continues to replicate data to the "dr" group.
+
diff --git a/product_docs/docs/pgd/6/essential-how-to/architectures/standard/index.mdx b/product_docs/docs/pgd/6/essential-how-to/architectures/standard/index.mdx
new file mode 100644
index 00000000000..9faf0e05075
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/architectures/standard/index.mdx
@@ -0,0 +1,15 @@
+---
+title: Standard PGD architecture
+navTitle: Standard architecture
+navigation:
+---
+
+
+
+Using core PGD capabilities, the standard architecture configures the three nodes in a multi-master replication configuration. That is, each node operates as a master node and logically replicates its data to the other nodes. While PGD is capable of handling conflicts between data changes on nodes, the Essential standard architecture uses PGD's integrated connection manager to ensure that all writes are directed to a single node, the write leader. Conflicts are avoided by allowing that single leader to handle all updates to the data. Changes are then replicated to the other nodes in the cluster.
+
+If the write leader fails, the remaining nodes in the cluster elect a new write leader, and the connection managers in those nodes then fail over to send writes to the new leader. When the failed node comes back online, it rejoins the cluster and begins replicating data from the new write leader.
+
+The Essential standard architecture was created to be easy to deploy and manage, informed by user experience. Unlike with other high availability solutions, because Essential is built on PGD, moving to a more complex architecture is simple and straightforward: move to PGD Expanded, and then add new data groups to the cluster as needed.
+
+See [manually deploying a standard architecture](manually-deploying-standard) for more information on how to configure the standard architecture.
diff --git a/product_docs/docs/pgd/6/essential-how-to/architectures/standard/manually-deploying-standard.mdx b/product_docs/docs/pgd/6/essential-how-to/architectures/standard/manually-deploying-standard.mdx
new file mode 100644
index 00000000000..70d5087e737
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/architectures/standard/manually-deploying-standard.mdx
@@ -0,0 +1,78 @@
+---
+title: "Manually deploying PGD Essential standard architecture"
+navTitle: Manual standard deployments
+description: How to manually deploy the PGD Essential standard architecture.
+---
+
+Manually deploying the PGD Essential standard architecture is a straightforward process. This architecture is designed for a single location that needs to be highly available and can recover from a disaster. It does this by having three data nodes in a multi-master replication configuration, with one node acting as the write leader.
+
+## PGD configuration
+
+Install PGD on each of the three nodes using the instructions in the Essential install guide. Specifically:
+
+* [Configure repositories](/pgd/latest/essential-how-to/install/02-configure-repositories/) to enable installation of the PGD packages.
+* [Install PGD and Postgres](/pgd/latest/essential-how-to/install/03-installing-database-and-pgd/) to install the PGD packages.
+* [Configure the PGD cluster](/pgd/latest/essential-how-to/install/04-configuring-cluster/) to set up the cluster.
+
+## Worked example
+
+This example creates a three-node RHEL cluster with EDB Postgres Extended Server, using the PGD Essential standard architecture and the following parameters:
+
+* The first node is called `node1` and is located on `host-1`.
+* The second node is called `node2` and is located on `host-2`.
+* The third node is called `node3` and is located on `host-3`.
+* The cluster name is `pgd` (the default name).
+* The group name is `group1`.
+* The Postgres version is `17`.
+* The Postgres data directory is `/var/lib/edb-pge/17/main/`.
+* The Postgres executable files are in `/usr/edb/pge17/bin/`.
+* The Postgres database user is `postgres`.
+* The Postgres database port is `5432`.
+* The Postgres database name is `pgddb`.
+
+### For the first node
+
+This is the common setup for all three nodes, installing the software:
+
+```bash
+export EDB_SUBSCRIPTION_TOKEN=XXXXXXXXXXXXXX
+export EDB_SUBSCRIPTION_PLAN=enterprise
+export EDB_REPO_TYPE=rpm
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_PLAN/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+export PG_VERSION=17
+export PGD_EDITION=essential
+export EDB_PACKAGES="edb-postgresextended$PG_VERSION-server edb-postgresextended$PG_VERSION-contrib edb-pgd6-$PGD_EDITION-pgextended$PG_VERSION"
+sudo dnf install -y $EDB_PACKAGES
+```
+
+On the first node, the following command creates the cluster and the group. It also creates the data directory and initializes the database.
+
+```bash
+sudo su - postgres
+export PATH=$PATH:/usr/edb/pge17/bin/
+pgd node node1 setup "host=host-1 user=postgres port=5432 dbname=pgddb" --pgdata /var/lib/edb-pge/17/main/ --group-name group1 --cluster-name pgd --create-group --initial-node-count 3
+```
+
+### For the second node
+
+Repeat the software installation steps on the second node.
+
+Then run the following command to initialize the node and join the cluster and group:
+
+```bash
+sudo su - postgres
+export PATH=$PATH:/usr/edb/pge17/bin/
+pgd node node2 setup "host=host-2 user=postgres port=5432 dbname=pgddb" --pgdata /var/lib/edb-pge/17/main/ --cluster-dsn "host=host-1 user=postgres port=5432 dbname=pgddb"
+```
+
+### For the third node
+
+Repeat the software installation steps on the third node.
+
+The command to initialize the node and join the cluster and group is similar to the second node but with a different host and node name:
+
+```bash
+sudo su - postgres
+export PATH=$PATH:/usr/edb/pge17/bin/
+pgd node node3 setup "host=host-3 user=postgres port=5432 dbname=pgddb" --pgdata /var/lib/edb-pge/17/main/ --cluster-dsn "host=host-1 user=postgres port=5432 dbname=pgddb"
+```
diff --git a/product_docs/docs/pgd/6/essential-how-to/autopartition.mdx b/product_docs/docs/pgd/6/essential-how-to/autopartition.mdx
new file mode 100644
index 00000000000..f9c7d67ec84
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/autopartition.mdx
@@ -0,0 +1,68 @@
+---
+title: Autopartitioning
+navTitle: Autopartitioning
+description: A guide on how to use autopartitioning in PGD Essential.
+---
+
+Autopartitioning in PGD allows you to split tables into several partitions (themselves tables), creating and dropping those partitions as needed. Autopartitioning is useful for managing large tables that grow over time, as it allows you to separate the data into manageable chunks. For example, you can archive older data into its own partition, and then archive or drop the partition when the data is no longer needed.
+
+### Autopartitioning and replication
+
+By default, PGD autopartitioning is managed locally through the `bdr.autopartition` function. This function allows you to create or alter the definition of automatic range partitioning for a table. If no definition exists, it creates one; otherwise, it alters the existing definition.
+
+!!! Note EDB Postgres Advanced Server automatic partitioning isn't supported in PGD
+EDB Postgres Advanced Server has native automatic partitioning support, but this isn't available in EDB Postgres Distributed (PGD). PGD autopartitioning is a separate feature that allows you to manage partitions locally. If PGD is active on an EDB Postgres Advanced Server node, native automatic partitioning commands are rejected. See [Autopartition reference](/pgd/latest/reference/autopartition) for more information.
+!!!
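+
+For example, calling the function a second time for the same table alters the existing definition in place rather than creating a second one. A minimal sketch, using a hypothetical `mydata` table:
+
+```sql
+-- First call: creates an autopartition definition with daily partitions
+SELECT bdr.autopartition('mydata', '1 day');
+
+-- Second call: alters the existing definition to add a retention period
+SELECT bdr.autopartition('mydata', '1 day', data_retention_period := '30 days');
+```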
+
+### Range partitioning
+
+PGD autopartitioning supports range partitioning using the `RANGE` keyword. Range partitioning allows you to partition a table based on the ranges of values in a column. For example, you can partition a table by date, where each partition contains data for a specific date range. This is useful for managing large tables that grow over time, as it allows you to separate the data into manageable chunks.
+
+For example, you can create a table that is partitioned by date:
+
+```sql
+CREATE TABLE measurement (
+    logdate date not null,
+    peaktemp int,
+    unitsales int
+) PARTITION BY RANGE (logdate);
+```
+
+Then, you can use the `bdr.autopartition` function to create daily partitions and keep data for one month:
+
+```sql
+select bdr.autopartition('measurement', '1 day', data_retention_period := '30 days');
+```
+
+This function creates a partition for each day and retains the data for 30 days. After 30 days, the partitions are automatically dropped. If you look at the database tables, you'll see the partitions created for the `measurement` table:
+
+```console
+pgddb=# \dt
+__OUTPUT__
+                        List of relations
+ Schema |                  Name                   |       Type        |  Owner
+--------+-----------------------------------------+-------------------+----------
+ public | measurement                             | partitioned table | postgres
+ public | measurement_part_1231354915_2103027132 | table             | postgres
+ public | measurement_part_1520219330_1231354915 | table             | postgres
+ public | measurement_part_1670975046_3921991865 | table             | postgres
+ public | measurement_part_2103027132_2095358725 | table             | postgres
+ public | measurement_part_2877346473_1670975046 | table             | postgres
+ public | measurement_part_3921991865_1520219330 | table             | postgres
+(7 rows)
+```
+
+Why are there so many partitions? Because, by default, autopartition creates five advance partitions for future use, and it creates more whenever all but two of the existing partitions have been used. This means that there are always at least two partitions available for new data. You can change this behavior by setting the `minimum_advance_partitions` and `maximum_advance_partitions` parameters in the `bdr.autopartition` function.
+
+
+```sql
+bdr.autopartition(relation regclass,
+	partition_increment text,
+	partition_initial_lowerbound text DEFAULT NULL,
+	partition_autocreate_expression text DEFAULT NULL,
+	minimum_advance_partitions integer DEFAULT 2,
+	maximum_advance_partitions integer DEFAULT 5,
+	data_retention_period interval DEFAULT NULL,
+	enabled boolean DEFAULT on,
+	analytics_offload_period);
+```
diff --git a/product_docs/docs/pgd/6/essential-how-to/connections.mdx b/product_docs/docs/pgd/6/essential-how-to/connections.mdx
new file mode 100644
index 00000000000..318cc023013
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/connections.mdx
@@ -0,0 +1,34 @@
+---
+title: Connections
+navTitle: Connections
+description: How to connect to your PGD cluster.
+---
+
+PGD Essential uses the same connection methods as Postgres. The difference is that most of your connections to the cluster go through the connection manager that's built into every data node in the cluster.
+
+Although you can connect directly to the data nodes, we don't recommend it for anything other than maintenance when you want to work on a particular node's database instance.
+
+For PGD Essential, you must connect to the cluster through the connection manager. PGD Essential is designed to be simple to deploy and manage, and that means the cluster has a write leader node that handles all the writes to the cluster. The connection manager is then responsible for directing your read-write connections to the write leader. All your client or application needs to do is use the connection manager's port, and the connection manager handles the rest.
+
+The connection manager is responsible for directing your writes to the write leader and ensuring that your reads are directed to the correct node in the cluster. If you connect directly to a data node, you may not be able to take advantage of these features. For applications that only need to read data, the connection manager can direct your reads to a node that isn't the write leader. This can help to balance the load on the cluster and improve performance.
+
+## Connecting through the connection manager
+
+Postgres is very flexible for configuring ports and connections, so for simplicity, this example uses the default port settings for Postgres and the connection manager. The default port for Postgres is 5432, and the default port for the connection manager is 6432.
+
+You can use that port in your connection strings to connect to the cluster. For example, if you're using the psql command line tool, you can connect to the cluster like this:
+
+```bash
+psql -h host-1 -p 6432 -U pgdadmin -d pgddb
+```
+
+Where `host-1` is the hostname of the node you're connecting to. The connection manager then directs your connection to the write leader node in the cluster.
+
+## Connecting directly to a data node
+
+You can connect directly to a data node in the cluster, but we don't recommend it. If you do need to connect directly to a data node, you can use the following command:
+
+```bash
+psql -h host-1 -p 5432 -U pgdadmin -d pgddb
+```
+
diff --git a/product_docs/docs/pgd/6/essential-how-to/durability.mdx b/product_docs/docs/pgd/6/essential-how-to/durability.mdx
new file mode 100644
index 00000000000..24ec03ffd4f
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/durability.mdx
@@ -0,0 +1,76 @@
+---
+title: Durability in PGD Essential
+navTitle: Durability
+---
+
+By default, PGD Essential uses asynchronous replication between its nodes, but it can be configured to use synchronous replication as well. This allows for a high degree of flexibility in terms of data durability and availability. Asynchronous replication offers lower latency and higher throughput, while synchronous replication provides stronger consistency guarantees at the cost of performance. PGD Essential allows you to choose the replication strategy through the use of commit scopes.
+
+## Commit Scopes
+
+Commit scopes are a powerful feature of PGD Essential that allow you to control the durability and availability of your data. They enable you to specify the level of durability required for each transaction, allowing you to balance performance and consistency based on your application's needs. PGD Essential has four pre-defined commit scopes that you can use to control the durability of your transactions, among other things:
+
+- local protect
+- lag protect
+- majority protect
+- adaptive protect
+
+The predefined commit scopes in PGD Essential are designed to provide a balance between performance and data safety. You cannot add, remove, or modify a PGD Essential commit scope. In PGD Expanded, you can create and manage your own commit scopes, allowing for more flexibility and control over the durability guarantees.
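+
+You can check which commit scope your session is currently using by reading the `bdr.commit_scope` setting, the same setting that the examples later on this page modify:
+
+```sql
+-- Show the commit scope in effect for the current session.
+-- PGD Essential's default commit scope is "local protect".
+SHOW bdr.commit_scope;
+```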
+
+### `local protect`
+
+This is the default commit scope for PGD Essential. It provides asynchronous commit with no cluster-wide durability guarantees. This means that transactions are considered committed as soon as they are written to the local node's WAL, without waiting for any confirmation from other nodes in the cluster.
+
+### `lag protect`
+
+This commit scope ensures that transactions are considered committed only when the lag time is within a specified limit (30 seconds) and the commit delay is also within a specified limit (10 seconds). This helps to prevent data loss in case of network issues or node failures.
+
+### `majority protect`
+
+This commit scope provides a durability guarantee based on the majority origin group. It ensures that transactions are considered committed only when they are confirmed by the majority of nodes in the origin group. This helps to ensure data consistency and durability in case of node failures or network issues.
+
+### `adaptive protect`
+
+This commit scope provides a more flexible durability guarantee. It allows transactions to be considered committed based on the majority origin group synchronous commit, but it can degrade to asynchronous commit if the transaction cannot be confirmed within a specified timeout (10 seconds). This is useful in scenarios where network latency or node failures may cause delays in confirming transactions.
+
+For more information on commit scopes, see the [Commit Scopes](/pgd/latest/reference/commit-scopes/) reference section and the [Predefined Commit Scopes](/pgd/latest/reference/commit-scopes/predefined-commit-scopes/) reference page.
+
+## Using Commit Scopes
+
+To use commit scopes in PGD Essential, you can specify the desired commit scope when executing a transaction. This allows you to control the durability and availability of your data based on your application's needs. For example, you can use the `lag protect` commit scope for transactions that require a higher level of durability, while using the `local protect` commit scope for transactions that prioritize performance over durability.
+
+### Within a transaction
+
+You can specify the commit scope for a transaction using the `SET LOCAL` command. For example, to use the `lag protect` commit scope for a transaction, you can execute the following commands:
+
+```sql
+BEGIN;
+SET LOCAL bdr.commit_scope = 'lag protect';
+-- Your transaction statements here
+COMMIT;
+```
+
+This ensures that the transaction is committed with the specified commit scope, providing the desired level of durability and availability.
+
+### For a session
+
+You can also set the commit scope for the entire session using the `SET` command. For example, to set the `majority protect` commit scope for the entire session, you can execute the following command:
+
+```sql
+SET bdr.commit_scope = 'majority protect';
+```
+
+This ensures that all transactions executed in the session use the specified commit scope, providing the desired level of durability and availability.
+
+### For a group
+
+You can also set the default commit scope for a PGD group using the `bdr.alter_node_group_option()` function.
For example, to set the `adaptive protect` commit scope for a group, you can execute the following command:
+
+```sql
+SELECT bdr.alter_node_group_option(
+         node_group_name:='mygroup',
+         config_key:='default_commit_scope',
+         config_value:='adaptive protect');
+```
+
+This ensures that all transactions executed in the specified PGD group use the specified commit scope, providing the desired level of durability and availability, unless overridden by a session or transaction-level setting.
+
diff --git a/product_docs/docs/pgd/6/essential-how-to/index.mdx b/product_docs/docs/pgd/6/essential-how-to/index.mdx
new file mode 100644
index 00000000000..2feb2d1c4fd
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/index.mdx
@@ -0,0 +1,46 @@
+---
+title: Essential How-To
+navTitle: Essential How-To
+navigation:
+- architectures
+- install
+- connections
+- pgd-cli
+- durability
+- autopartition
+- production-best-practices
+- sops
+description: Essential how-to guides for deploying and managing your PGD cluster.
+---
+
+This section provides essential how-to guides for deploying and managing your PGD cluster. It includes information on architectures, deployment, durability, autopartition, production best practices, and standard operating procedures (SOPs).
+
+## Overview
+
+PGD Essential offers a simplified approach to deploying and managing your PGD cluster. It is designed to help you get started quickly and easily, while also providing a pathway to using advanced features as your use case becomes more complex.
+
+At the core of PGD are data nodes, Postgres databases that are part of a PGD cluster. PGD enables these databases to replicate data efficiently between nodes, ensuring that your data is always available and up-to-date. PGD Essential simplifies this process by providing a standard architecture that is easy to set up and manage.
+
+The standard architecture is built around a single data group, which is the basic architectural element for EDB Postgres Distributed systems. Within a group, nodes cooperate to select which nodes handle incoming write or read traffic, and identify when nodes are available or out of sync with the rest of the group. Groups are most commonly used in a single location, where the nodes are in the same data center. When you have just the one group in the cluster, we also call this the one-location architecture.
+
+## Essential features
+
+- [Standard Architecture](/pgd/latest/essential-how-to/architectures/standard): Learn about the standard architecture for PGD Essential, which consists of a single data group with three nodes in the same data center or region.
+
+- [Near/Far Architecture](/pgd/latest/essential-how-to/architectures/near-far): Understand the near/far architecture, which consists of two data groups in different locations, with one group handling client traffic and the other providing disaster recovery.
+
+- [Connection Management](/pgd/latest/essential-how-to/connections): Learn how to connect to your PGD cluster using the Connection Manager ports, which automatically route read and write transactions to the appropriate nodes.
+
+- [PGD CLI](/pgd/latest/essential-how-to/pgd-cli): Discover how to use the PGD CLI to manage your PGD cluster, including creating and managing data groups, nodes, and connections.
+
+- [Durability](/pgd/latest/essential-how-to/durability): Understand the durability features of PGD Essential, which ensure that your data is always available and up-to-date.
+
+- [Autopartition](/pgd/latest/essential-how-to/autopartition): Learn about the autopartition feature, which automatically partitions large tables for improved performance and manageability.
+
+## Essential How-To Guides
+
+- [Simple PGD Essential Installation](/pgd/latest/essential-how-to/install): Get step-by-step instructions for installing PGD Essential on your system using the PGD CLI.
+
+- [Production Best Practices](/pgd/latest/essential-how-to/production-best-practices): Get best practices for deploying and managing your PGD cluster in a production environment, including performance tuning and monitoring.
+
+- [Standard Operating Procedures (SOPs)](/pgd/latest/essential-how-to/sops): Explore standard operating procedures for managing your PGD cluster, including backup and recovery, monitoring, and troubleshooting.
diff --git a/product_docs/docs/pgd/6/essential-how-to/install/01-prerequisites.mdx b/product_docs/docs/pgd/6/essential-how-to/install/01-prerequisites.mdx
new file mode 100644
index 00000000000..1186ebfbffa
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/install/01-prerequisites.mdx
@@ -0,0 +1,83 @@
+---
+title: Step 1 - Prerequisites for Essential installation
+navTitle: Prerequisites
+---
+
+This guide takes you through the steps to install EDB Postgres Distributed (PGD) Essential on your systems.
+
+If you want to set up a learning/test environment, we recommend using the [PGD First Cluster](/pgd/latest/get-started/first-cluster).
+
+!!! Note
+If you want to install EDB Postgres Distributed (PGD) Expanded, consult the [Expanded installation guide](/pgd/latest/expanded-how-to/install/01-prerequisites).
+!!!
+
+## Provisioning hosts
+
+The first step in the process of deploying PGD is to provision and configure hosts.
+
+You can deploy to virtual machine instances in the cloud with Linux installed, on-premises virtual machines with Linux installed, or on-premises physical hardware, also with Linux installed.
+
+Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that you can access using SSH with a user that has superuser, administrator, or sudo privileges.
+
+Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster.
+
+On cloud deployments, you can do this over the public network or over a VPC.
+
+On-premises deployments must be able to connect over the local network.
+
+!!! Note Cloud provisioning guides
+
+If you're new to cloud provisioning, these guides may provide assistance:
+
+ Vendor | Platform | Guide
+ ------ | -------- | ------
+ Amazon | AWS | [Tutorial: Get started with Amazon EC2 Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html)
+ Microsoft | Azure | [Quickstart: Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu)
+ Google | GCP | [Create a Linux VM instance in Compute Engine](https://cloud.google.com/compute/docs/create-linux-vm-instance)
+
+!!!
+
+### Configuring hosts
+
+#### Create an admin user
+
+We recommend that you configure an admin user for each provisioned instance.
+The admin user must have superuser or sudo (to superuser) privileges.
+We also recommend that the admin user be configured for passwordless SSH access using certificates.
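+
+As a sketch of that setup on a RHEL-family host, where the user name `pgdadmin` and the key file name are this example's assumptions:
+
+```bash
+# Create the admin user and grant sudo privileges via the wheel group
+sudo useradd -m pgdadmin
+sudo usermod -aG wheel pgdadmin
+
+# Install the user's public key for passwordless, certificate-based SSH
+sudo mkdir -p /home/pgdadmin/.ssh
+sudo cp pgdadmin_key.pub /home/pgdadmin/.ssh/authorized_keys
+sudo chown -R pgdadmin:pgdadmin /home/pgdadmin/.ssh
+sudo chmod 700 /home/pgdadmin/.ssh
+sudo chmod 600 /home/pgdadmin/.ssh/authorized_keys
+```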
+
+#### Ensure networking connectivity
+
+With the admin user created, ensure that each machine can communicate with the other machines you're provisioning.
+
+In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced Server, 5432 for EDB Postgres Extended and community PostgreSQL) must be open
+to all machines in the cluster.
+The PGD Connection Manager must also be accessible to all nodes in the cluster. By default, the Connection Manager uses port 6432 (or 6444 for EDB Postgres Advanced Server).
+
+## Worked example
+
+For this series of worked examples, three hosts with Red Hat Enterprise Linux 9 were provisioned:
+
+* host-1
+* host-2
+* host-3
+
+These hosts were configured in the cloud. As such, each host has both a public and private IP address. We will use the private IP addresses for the cluster.
+
+The private IP addresses are:
+* host-1: 192.168.254.166
+* host-2: 192.168.254.247
+* host-3: 192.168.254.135
+
+
+For the example cluster, `/etc/hosts` was also edited to use those private IP addresses:
+
+```text
+192.168.254.166 host-1
+192.168.254.247 host-2
+192.168.254.135 host-3
+```
+
+In production environments, you should use DNS to resolve hostnames to IP addresses.
+
+
+
diff --git a/product_docs/docs/pgd/6/essential-how-to/install/02-configure-repositories.mdx b/product_docs/docs/pgd/6/essential-how-to/install/02-configure-repositories.mdx
new file mode 100644
index 00000000000..1cf9d300ded
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/install/02-configure-repositories.mdx
@@ -0,0 +1,61 @@
+---
+title: Step 2 - Configure repositories
+navTitle: Configure repositories
+description: Configuring the repositories for the database and pgd software on each host.
+deepToC: true
+---
+
+On each host that you want to use as a PGD data node, you need to install the database and the PGD software.
+
+## Configure repositories
+
+Set the following environment variables:
+
+### `EDB_SUBSCRIPTION_TOKEN`
+
+This is the token you received when you registered for the EDB subscription. It is used to authenticate your access to the EDB repository.
+
+```bash
+export EDB_SUBSCRIPTION_TOKEN=<your-token>
+```
+
+### `EDB_REPO_TYPE`
+
+This is the type of package manager you use, which informs the installer which type of package you need. This can be `deb` for Ubuntu/Debian or `rpm` for CentOS/RHEL.
+
+```bash
+export EDB_REPO_TYPE=<your-repo-type>
+```
+
+## Install the repositories
+
+There are two repositories you need to configure: one for the database software and one for the PGD software.
+
+The following command downloads and runs a script that configures your package manager to use the EDB repository for the database software:
+
+```bash
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterprise/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+```
+
+This installs the repository for the database software, which includes the EDB Postgres Extended Server and other related packages.
+
+```bash
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+```
+
+This command downloads and runs a script that configures your package manager to use the EDB repository for the PGD software. It also installs any necessary dependencies.
+
+## Worked example
+
+In this example, we configure the repositories on a CentOS/RHEL system to allow us to install EDB Postgres Extended Server 17 with PGD Essential using an enterprise subscription.
+
+### Set the environment variables
+
+```bash
+export EDB_SUBSCRIPTION_TOKEN=XXXXXXXXXXXXXX
+export EDB_REPO_TYPE=rpm
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterprise/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+```
+
+The next step is to [install the database and PGD software](03-installing-database-and-pgd/).
diff --git a/product_docs/docs/pgd/6/essential-how-to/install/03-installing-database-and-pgd.mdx b/product_docs/docs/pgd/6/essential-how-to/install/03-installing-database-and-pgd.mdx
new file mode 100644
index 00000000000..45c07f3bf3f
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/install/03-installing-database-and-pgd.mdx
@@ -0,0 +1,129 @@
+---
+title: Step 3 - Installing the database and PGD
+navTitle: Installing
+description: Installing the database and PGD software on each host.
+deepToC: true
+---
+
+On each host that you want to use as a PGD data node, you need to install the database and the PGD software.
+
+After you have [configured the EDB repository](02-configure-repositories), you can install the database and PGD software using your package manager.
+
+## Install the database and PGD software
+
+### Set the Postgres version
+
+Set an environment variable to specify the version of Postgres you want to install. This is typically `17` for Postgres 17.
+
+```bash
+export PG_VERSION=17
+```
+
+### Set the package names
+
+Set an environment variable to specify the package names for the database and PGD software. The package names vary depending on the database you're using and the platform you're on.
+
+
+
+```shell
+export EDB_PACKAGES="edb-as$PG_VERSION-server edb-pgd6-essential-epas$PG_VERSION"
+```
+
+
+
+```shell
+export EDB_PACKAGES="edb-as$PG_VERSION-server edb-pgd6-essential-epas$PG_VERSION"
+```
+
+
+
+```bash
+export EDB_PACKAGES="edb-postgresextended-$PG_VERSION edb-pgd6-essential-pgextended$PG_VERSION"
+```
+
+
+
+```bash
+export EDB_PACKAGES="edb-postgresextended$PG_VERSION-server edb-postgresextended$PG_VERSION-contrib edb-pgd6-essential-pgextended$PG_VERSION"
+```
+
+
+
+!!! note Not available
+
+Community PostgreSQL is supported only with PGD Expanded.
+
+ + +!!! + +
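+
+Optionally, before running the installation, you can check that the repositories configured in the previous step expose the PGD Essential packages. This is a quick sanity check only, not a required step; the command shown assumes a RHEL-family system and uses the package names listed above:
+
+```bash
+# List PGD Essential packages visible to the package manager
+dnf search edb-pgd6-essential
+```
+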
+ +
+ +


+
+### Run the installation command
+
+Run the installation command appropriate for your platform.
+
+
+```shell
+sudo apt install -y $EDB_PACKAGES
+```
+
+
+```shell
+sudo dnf install -y $EDB_PACKAGES
+```
+
+
+This command will install the specified packages and any dependencies they require. Once the installation is complete, you will have the database and PGD software installed on your system.
+
+## Worked example
+
+In this example, we will install EDB Postgres Extended Server 17 with PGD Essential on a CentOS/RHEL system, using the repository configuration we set up in the [previous step's worked example](02-configure-repositories#worked-example).
+
+```bash
+export PG_VERSION=17
+export EDB_PACKAGES="edb-postgresextended$PG_VERSION-server edb-postgresextended$PG_VERSION-contrib edb-pgd6-essential-pgextended$PG_VERSION"
+sudo dnf install -y $EDB_PACKAGES
+```
+
+The next step is to [configure the cluster](04-configuring-cluster/).
+
diff --git a/product_docs/docs/pgd/6/essential-how-to/install/04-configuring-cluster.mdx b/product_docs/docs/pgd/6/essential-how-to/install/04-configuring-cluster.mdx
new file mode 100644
index 00000000000..0dcb77c40f8
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/install/04-configuring-cluster.mdx
@@ -0,0 +1,256 @@
+---
+title: Step 4 - Configuring the cluster
+navTitle: Configuring
+deepToC: true
+---
+
+## Configuring the cluster
+
+The next step in the process is to configure the database and the cluster.
+
+This involves logging in to each host and, as the database user, running the `pgd` command to create the cluster.
+
+These steps will vary according to which platform and which version of Postgres you are using.
+
+## Cluster name
+
+You will need to choose a name for your cluster. This is the name that will be used to identify the cluster in the PGD CLI and in the database. It will be referred to as `` in the examples. If not specified, the default name is `pgd`.
+
+## Group names
+
+You will also need to choose a name for the group. This is the name that will be used to identify the group in the PGD CLI and in the database. It will be referred to as `` in the examples.
+
+The group name must be unique within the cluster.
+
+## Node names
+
+You will also need to choose a name for each node. This is the name that will be used to identify the node in the PGD CLI and in the database. It will be referred to as `` in the examples. This is separate from the host name, which is the name of the machine on which the node is running.
+
+The node name must be unique within the group and within the cluster.
+
+## Paths and users
+
+The paths and users used in the examples will vary according to which version of Postgres and which platform you are using.
+ + + + + + + + + +| | | +|---------------------------|-------------------------------------| +| Postgres User | `enterprisedb` | +| Postgres Port | `5444` | +| Postgres Executable files | `/usr/lib/edb-as/$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb-as/$PG_VERSION/main/` | + +```shell +sudo -iu enterprisedb +export PG_VERSION= +export PATH=$PATH:/usr/lib/edb-as/$PG_VERSION/bin/ +export PGDATA=/var/lib/edb-as/$PG_VERSION/main/ +export PGPORT=5444 +``` + + + + +| | | +|---------------------------|------------------------------------| +| Postgres User | `enterprisedb` | +| Postgres Port | `5444` | +| Postgres Executable files | `/usr/edb/as$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb/as$PG_VERSION/data/` | + +```shell +sudo -iu enterprisedb +export PG_VERSION= +export PATH=$PATH:/usr/edb/as$PG_VERSION/bin/ +export PGDATA=/var/lib/edb/as$PG_VERSION/data/ +export PGPORT=5444 +``` + + + + + + + + + + +| | | +|---------------------------|--------------------------------------| +| Postgres User | `postgres` | +| Postgres Port | `5432` | +| Postgres Executable files | `/usr/lib/edb-pge/$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb-pge/$PG_VERSION/main/` | + +```shell +sudo -iu postgres +export PG_VERSION= +export PATH=$PATH:/usr/lib/edb-pge/$PG_VERSION/bin/ +export PGDATA=/var/lib/edb-pge/$PG_VERSION/main/ +export PGPORT=5432 +``` + + + + +| | | +|---------------------------|--------------------------------------| +| Postgres User | `postgres` | +| Postgres Port | `5432` | +| Postgres Executable files | `/usr/edb/pge$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb-pge/$PG_VERSION/data/` | + +```shell +sudo -iu postgres +export PG_VERSION= +export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/ +export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/ +export PGPORT=5432 +``` + + + + + +


+ + +!!! note Not available + +

+
+
+Community PostgreSQL is supported only with PGD Expanded.
+

+ +!!! + +



+
+
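+
+If you don't want to re-export these variables in every new session, you can optionally append them to the database user's shell profile. This is a convenience only; the values shown assume the EDB Postgres Extended on RHEL layout from the table above, so substitute the user, paths, and port for your own platform:
+
+```bash
+# Persist the environment for future logins of the postgres user
+cat >> ~/.bash_profile <<'EOF'
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPORT=5432
+EOF
+```
+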
+
+## On each host
+
+Run the commands from the script/settings above to set the environment variables and paths for the Postgres user on each host.
+This ensures that the `pgd` command can find the Postgres executable files and data directory.
+
+1. Log in as the database user (`enterprisedb` for EDB Postgres Advanced Server, otherwise `postgres`).
+
+```bash
+sudo -iu 
+```
+
+1. Set the Postgres version environment variable. Don't forget to replace `` with the actual version number you are using, such as `17`.
+
+```bash
+export PG_VERSION=
+```
+
+1. Add the Postgres executable files to your path.
+
+```bash
+export PATH=$PATH:
+```
+
+1. Set the Postgres data directory environment variable.
+
+```bash
+export PGDATA=
+```
+
+1. Set the Postgres password environment variable. Don't forget to replace `` with the actual password you want for the database user.
+
+```bash
+export PGPASSWORD=
+```
+
+### On the first host
+
+The first host in the cluster is also the first node and is where we begin the cluster creation.
+On the first host, run the following command to create the cluster:
+
+```bash
+pgd node setup --dsn "host= user= port= dbname=" --group-name 
+```
+
+This command will create the data directory and initialize the database, then create the cluster and the group on the first node.
+
+### On the second host
+
+On the second host, run the following command to create a node and join it to the cluster:
+
+```bash
+pgd node setup --dsn "host= user= port= dbname=" --cluster-dsn "host= user= port= dbname="
+```
+
+This command will create the node on the second host and then join it to the cluster, using the `--cluster-dsn` setting to connect to the first host.
+
+### On the third host
+
+On the third host, run the following command to create a node and join it to the cluster:
+
+```bash
+pgd node setup --dsn "host= user= port= dbname=" --cluster-dsn "host= user= port= dbname="
+```
+
+This command will create the node on the third host and then join it to the cluster, using the `--cluster-dsn` setting to connect to the first host.
+
+## Worked example
+
+In this example, we will configure the PGD Essential cluster with EDB Postgres Extended Server 17 on a CentOS/RHEL system that we [configured](02-configure-repositories) and [installed](03-installing-database-and-pgd) in the previous steps.
+
+We will now create a cluster called `pgd` with three nodes called `node-1`, `node-2`, and `node-3`.
+
+* The group name will be `group-1`. The hosts are `host-1`, `host-2`, and `host-3`.
+* The Postgres version is 17.
+* The database user is `postgres`.
+* The database port is 5432.
+* The database name is `pgddb`.
+* The Postgres executable files are in `/usr/edb/pge17/bin/`.
+* The Postgres data directory is in `/var/lib/edb-pge/17/data/`.
+* The Postgres password is `secret`.
+
+(Note that the worked example exports PG_VERSION again after switching users, because `sudo -iu` starts a fresh login environment for the database user.)
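+
+Before running the setup commands below, you can optionally confirm that the environment is correct on each host. Both commands should succeed if the executables are on the PATH (`pgd --help` is used here because it doesn't need a running cluster):
+
+```bash
+psql --version   # confirms the Postgres client binaries are on the PATH
+pgd --help       # confirms the PGD CLI is installed
+```
+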
+
+#### On the first host
+
+```bash
+sudo -iu postgres
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPASSWORD=secret
+pgd node node-1 setup --dsn "host=host-1 user=postgres port=5432 dbname=pgddb" --group-name group-1
+```
+
+#### On the second host
+
+```bash
+sudo -iu postgres
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPASSWORD=secret
+pgd node node-2 setup --dsn "host=host-2 user=postgres port=5432 dbname=pgddb" --cluster-dsn "host=host-1 user=postgres port=5432 dbname=pgddb"
+```
+
+#### On the third host
+
+```bash
+sudo -iu postgres
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPASSWORD=secret
+pgd node node-3 setup --dsn "host=host-3 user=postgres port=5432 dbname=pgddb" --cluster-dsn "host=host-1 user=postgres port=5432 dbname=pgddb"
+```
+
+The next step is to [check the cluster](05-check-cluster).
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/06-check-cluster.mdx b/product_docs/docs/pgd/6/essential-how-to/install/05-check-cluster.mdx
similarity index 52%
rename from product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/06-check-cluster.mdx
rename to product_docs/docs/pgd/6/essential-how-to/install/05-check-cluster.mdx
index fc7938ce85c..eab0c54edf5 100644
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/06-check-cluster.mdx
+++ b/product_docs/docs/pgd/6/essential-how-to/install/05-check-cluster.mdx
@@ -1,10 +1,7 @@
 ---
-title: Step 6 - Checking the cluster
+title: Step 5 - Checking the cluster
 navTitle: Checking the cluster
 deepToC: true
-redirects:
-  - /pgd/latest/install-admin/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-manual/installing/06-check-cluster/ #generated for pgd deploy-config-planning reorg
 ---
 
 ## Checking the cluster
@@ -14,79 +11,86 @@ With the cluster up and running, it's worthwhile to run some basic checks to see
 
 The following example shows one quick way to do this, but you must ensure that any testing you perform is appropriate for your use case.
 
+On any of the installed and configured nodes, log in and run `psql` to connect to the database. If you are using EDB Postgres Advanced Server, use the `enterprisedb` user, otherwise use `postgres`:
+
+```bash
+sudo -iu postgres psql "host=host-1 port=5432 user=postgres dbname=pgddb"
+```
+
 * **Preparation**
   * Ensure the cluster is ready:
-    * Log in to the database on host-one/node-one.
+    * Log in to the database on host-1/node-1.
     * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`.
     * When the query returns, the cluster is ready.
 
* **Create data**
  The simplest way to test that the cluster is replicating is to log in to one node, create a table, and populate it.
-  * On node-one, create a table:
+  * On node-1, create a table:
    ```sql
    CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
    ```
-  * On node-one, populate the table:
+  * On node-1, populate the table:
    ```sql
    INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
    ```
-  * On node-one, monitor performance:
+  * On node-1, monitor performance:
    ```sql
    select * from bdr.node_replication_rates;
    ```
-  * On node-one, get a sum of the value column (for checking):
+  * On node-1, get a sum of the value column (for checking):
    ```sql
    select COUNT(*),SUM(value) from quicktest;
    ```
 * **Check data**
-  * Log in to node-two.
-    Log in to the database on host-two/node-two.
-  * On node-two, get a sum of the value column (for checking):
+  * Log in to node-2.
+    Log in to the database on host-2/node-2.
+  * On node-2, get a sum of the value column (for checking):
    ```sql
    select COUNT(*),SUM(value) from quicktest;
    ```
-  * Compare with the result from node-one.
-  * Log in to node-three.
-    Log in to the database on host-three/node-three.
-  * On node-three, get a sum of the value column (for checking):
+  * Compare with the result from node-1.
+  * Log in to node-3.
+    Log in to the database on host-3/node-3.
+  * On node-3, get a sum of the value column (for checking):
    ```sql
    select COUNT(*),SUM(value) from quicktest;
    ```
-  * Compare with the result from node-one and node-two.
+  * Compare with the result from node-1 and node-2.
 
 ## Worked example
 
 ### Preparation
 
-Log in to host-one's Postgres server.
-```
-ssh admin@host-one
-sudo -iu enterprisedb psql bdrdb
+Log in to host-1's Postgres server.
+
+```shell
+ssh admin@host-1
+sudo -iu postgres psql "host=host-1 port=5432 user=postgres dbname=pgddb"
 ```
 
-This is your connection to PGD's node-one.
+This is your connection to PGD's node-1.
 
 #### Ensure the cluster is ready
 
 To ensure that the cluster is ready to go, run:
 
-```
+```sql
 select bdr.wait_slot_confirm_lsn(NULL, NULL)
 ```
 
 This query blocks while the cluster is busy initializing and returns when the cluster is ready.
 
-In another window, log in to host-two's Postgres server:
+In another window, log in to host-2's Postgres server:
 
-```
-ssh admin@host-two
-sudo -iu enterprisedb psql bdrdb
+```shell
+ssh admin@host-2
+sudo -iu postgres psql "host=host-2 port=5432 user=postgres dbname=pgddb"
 ```
 
 ### Create data
 
-#### On node-one, create a table
+#### On node-1, create a table
 
 Run:
 
@@ -94,15 +98,15 @@ Run:
 CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
 ```
 
-#### On node-one, populate the table
+#### On node-1, populate the table
 
 ```
 INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
 ```
 
-This command generates a table of 10000 rows of random values.
+This command generates a table of 10000 rows of random values.
-#### On node-one, monitor performance
+#### On node-1, monitor performance
 
 As soon as possible, run:
 
@@ -113,19 +117,19 @@ select * from bdr.node_replication_rates;
 ```
 
 The command shows statistics about how quickly that data was replicated to the other two nodes:
 
 ```console
-bdrdb=# select * from bdr.node_replication_rates;
+pgddb=# select * from bdr.node_replication_rates;
  peer_node_id | target_name | sent_lsn  | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interv
al
--------------+-------------+-----------+------------+------------+------------------+-----------------+------------+---------------
---
- 1954860017 | node-three | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13682 | 00:00:00
- 2299992455 | node-two   | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13763 | 00:00:00
+ 1954860017 | node-3 | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13682 | 00:00:00
+ 2299992455 | node-2 | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13763 | 00:00:00
 (2 rows)
 ```
 
 And it's already replicated.
 
-#### On node-one get a checksum
+#### On node-1 get a checksum
 
 Run:
 
@@ -136,7 +140,7 @@
 select COUNT(*),SUM(value) from quicktest;
 ```
 
 This command gets some values from the generated data:
 
 ```sql
-bdrdb=# select COUNT(*),SUM(value) from quicktest;
+pgddb=# select COUNT(*),SUM(value) from quicktest;
 __OUTPUT__
  count | sum
--------+-----------
@@ -146,15 +150,16 @@ __OUTPUT__
 
 ### Check data
 
-#### Log in to host-two's Postgres server
-```
-ssh admin@host-two
-sudo -iu enterprisedb psql bdrdb
+#### Log in to host-2's Postgres server
+
+```shell
+ssh admin@host-2
+sudo -iu postgres psql "host=host-2 port=5432 user=postgres dbname=pgddb"
 ```
 
-This is your connection to PGD's node-two.
+This is your connection to PGD's node-2.
 
-#### On node-two, get a checksum
+#### On node-2, get a checksum
 
 Run:
 
@@ -162,10 +167,10 @@ Run:
 select COUNT(*),SUM(value) from quicktest;
 ```
 
-This command gets node-two's values for the generated data:
+This command gets node-2's values for the generated data:
 
 ```sql
-bdrdb=# select COUNT(*),SUM(value) from quicktest;
+pgddb=# select COUNT(*),SUM(value) from quicktest;
 __OUTPUT__
  count | sum
--------+-----------
@@ -177,17 +182,18 @@ __OUTPUT__
 
 The values are identical.
 
-You can repeat the process with node-three or generate new data on any node and see it replicate to the other nodes.
+You can repeat the process with node-3 or generate new data on any node and see it replicate to the other nodes.
 
-#### Log in to host-three's Postgres server
-```
-ssh admin@host-two
-sudo -iu enterprisedb psql bdrdb
+#### Log in to host-3's Postgres server
+
+```shell
+ssh admin@host-3
+sudo -iu postgres psql "host=host-3 port=5432 user=postgres dbname=pgddb"
 ```
 
-This is your connection to PGD's node-three.
+This is your connection to PGD's node-3.
-#### On node-three, get a checksum
+#### On node-3, get a checksum
 
 Run:
 
@@ -195,10 +201,10 @@ Run:
 select COUNT(*),SUM(value) from quicktest;
 ```
 
-This command gets node-three's values for the generated data:
+This command gets node-3's values for the generated data:
 
 ```sql
-bdrdb=# select COUNT(*),SUM(value) from quicktest;
+pgddb=# select COUNT(*),SUM(value) from quicktest;
 __OUTPUT__
  count | sum
--------+-----------
diff --git a/product_docs/docs/pgd/6/essential-how-to/install/images/edbrepos2.0.png b/product_docs/docs/pgd/6/essential-how-to/install/images/edbrepos2.0.png
new file mode 100644
index 00000000000..e2b61730574
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/install/images/edbrepos2.0.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1900876939036491c2604939cc7173fa347d5ee218656ef4e0f2d984c262231
+size 278800
diff --git a/product_docs/docs/pgd/6/essential-how-to/install/index.mdx b/product_docs/docs/pgd/6/essential-how-to/install/index.mdx
new file mode 100644
index 00000000000..4e139a07e6e
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/install/index.mdx
@@ -0,0 +1,12 @@
+---
+title: Installing and configuring EDB Postgres Distributed 6
+navTitle: Installing and configuring
+---
+
+This section covers how to manually deploy and configure EDB Postgres Distributed 6.
+
+* [Provisioning hosts](01-prerequisites)
+* [Configuring the EDB repository](02-configure-repositories)
+* [Installing the database and PGD software](03-installing-database-and-pgd)
+* [Configuring the cluster](04-configuring-cluster)
+* [Checking the cluster](05-check-cluster)
diff --git a/product_docs/docs/pgd/6/essential-how-to/pgd-cli.mdx b/product_docs/docs/pgd/6/essential-how-to/pgd-cli.mdx
new file mode 100644
index 00000000000..4917d86d893
--- /dev/null
+++ b/product_docs/docs/pgd/6/essential-how-to/pgd-cli.mdx
@@ -0,0 +1,245 @@
+---
+title: Using PGD CLI
+navTitle: PGD CLI
+deepToC: true
+redirects:
+
+---
+
+The PGD CLI is a powerful command line interface for managing your PGD cluster. It can be used to perform a variety of tasks, including:
+
+- Checking the health of the cluster
+- Listing the nodes in the cluster
+- Listing the groups in the cluster
+- Setting group options
+- Switching the write leader
+
+If you have used the [installation guide](install) to install PGD, you will have already installed PGD CLI and used it to create the cluster.
+
+## Using PGD CLI
+
+The PGD CLI uses a configuration file to determine the hosts to connect to.
+There are [options](/pgd/latest/reference/cli/using_cli) that allow you to override this, either by specifying an alternative configuration file or by pointing explicitly at a server. By default, though, PGD CLI looks for a configuration file in preset locations.
+
+The connection to the database is authenticated in the same way as for other command line utilities, such as psql.
+
+Unlike other commands, PGD CLI doesn't interactively prompt for your password. Therefore, you must pass your password using one of the following methods:
+
+- Adding an entry to your [`.pgpass` password file](https://www.postgresql.org/docs/current/libpq-pgpass.html), which includes the host, port, database name, user name, and password
+- Setting the password in the `PGPASSWORD` environment variable
+- Including the password in the connection string
+
+We recommend the first option, as the other options don't scale well with multiple database clusters, or they compromise password confidentiality.
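+
+For example, a `.pgpass` file for the three-node cluster used in these worked examples might contain entries like the following (illustrative values taken from the earlier examples; adjust the hosts, port, and password for your environment, and note that the file must not be group or world readable):
+
+```bash
+# Entries use the hostname:port:database:username:password format
+cat >> ~/.pgpass <<'EOF'
+host-1:5432:pgddb:postgres:secret
+host-2:5432:pgddb:postgres:secret
+host-3:5432:pgddb:postgres:secret
+EOF
+chmod 0600 ~/.pgpass
+```
+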
+ +### Configuring and connecting PGD CLI + +- Ensure PGD CLI is installed. + - If PGD CLI was already installed, move to the next step. + - For any system, repeat the [configure repositories](install/02-configure-repositories) step on that system. + - Then run the package installation command appropriate for that platform. + - RHEL and derivatives: `sudo dnf install edb-pgd6-cli` + - Debian, Ubuntu, and derivatives: `sudo apt-get install edb-pgd6-cli` +- Create a configuration file. + - This is a YAML file that specifies the cluster and endpoints for PGD CLI to use. +- Install the configuration file. + - Copy the YAML configuration file to a default config directory `/etc/edb/pgd-cli/` as `pgd-cli-config.yml`. + - Repeat this process on any system where you want to run PGD CLI. +- Run pgd-cli. + +### Use PGD CLI to explore the cluster + +- Check the health of the cluster with the `cluster show --health` command. +- Show the nodes in the cluster with the `nodes list` command. +- Show the groups in the cluster with the `groups list` command. +- Set a group option with the `group set-option` command. +- Switch write leader with the `group set-leader` command. + +For more details about these commands, see the worked example that follows. + +Also consult the [PGD CLI documentation](/pgd/latest/reference/cli/) for details of other configuration options and a full command reference. + +## Worked example + +### Ensure PGD CLI is installed + +In this worked example, you configure and use PGD CLI on host-1, where you've already installed Postgres and PGD. +You don't need to install PGD CLI again. + +### (Optionally) Create a configuration file + +The PGD CLI configuration file is a YAML file that contains a cluster object. +This has two properties: + +- The name of the PGD cluster's top-level group (as `name`) +- An array of endpoints of databases (as `endpoints`) + +``` +cluster: + name: pgd + endpoints: + - host=host-1 dbname=pgddb port=5444 + - host=host-2 dbname=pgddb port=5444 + - host=host-3 dbname=pgddb port=5444 +``` + +Note that the endpoints in this example specify `port=5444`. +This is necessary for EDB Postgres Advanced Server instances. +For EDB Postgres Extended and community PostgreSQL, you can omit this. + +Create the PGD CLI configuration directory: + +```shell +sudo mkdir -p /etc/edb/pgd-cli +``` + +Then, write the configuration to the `pgd-cli-config.yml` file in the `/etc/edb/pgd-cli` directory. + +For this example, you can run this on host-1 to create the file: + +```shell +cat < No - if 2 PGD data nodes | Yes - if 3 PGD data nodes
No - if 2 PGD data nodes | Yes - if 3 PGD data nodes
No - if 2 PGD data nodes | Yes - if 3 PGD data nodes
No - if 2 PGD data nodes |
-| Data protection in case of location failure | No (unless offsite backup) | Yes | Yes | Yes |
+| Data protection in case of location failure | No (unless offsite backup) | Yes | Yes | Yes |
 | Global consensus in case of location failure | N/A | No | Yes | Yes |
 | Data restore required after location failure | Yes | No | No | No |
 | Immediate failover in case of location failure | No - requires data restore from backup | Yes - alternate Location | Yes - alternate Location | Yes - alternate Location |
 | Cross-location network traffic | Only if backup is offsite | Full replication traffic | Full replication traffic | Full replication traffic |
 | License cost | 2 or 3 PGD data nodes | 4 or 6  PGD data nodes | 4 or 6 PGD data nodes | 6+ PGD data nodes |
-
-
 ## Adding flexibility to the standard architectures
 
 To provide the data resiliency needed and proximity to applications and to the users maintaining the data, you can deploy the single-location architecture in as many locations as you want.
 
 While EDB Postgres Distributed has a variety of conflict-handling approaches available, do take care to minimize the number of expected collisions if allowing write activity from geographically disparate locations.
 
 You can also expand the standard architectures with two additional types of nodes:
 
-- *Subscriber-only nodes*, which you can use to achieve additional read scalability and to have data closer to users when the majority of an application’s workload is read intensive with infrequent writes. You can also leverage them to publish a subset of the data for reporting, archiving, and analytic needs.
+- *Subscriber-only nodes*, which you can use to achieve additional read scalability and to have data closer to users when the majority of an application’s workload is read intensive with infrequent writes. You can also leverage them to publish a subset of the data for reporting, archiving, and analytic needs.
 
-- *Logical standbys*, which receive replicated data from another node in the PGD cluster but don't participate in the replication mesh or consensus. They contain all the same data as the other PGD data nodes and can quickly be promoted to a master if one of the data nodes fails to return the cluster to full capacity/consensus. You can use them in environments where network traffic between data centers is a concern. Otherwise, three PGD data nodes per location is always preferred.
+- *Logical standbys*, which receive replicated data from another node in the PGD cluster but don't participate in the replication mesh or consensus. They contain all the same data as the other PGD data nodes and can quickly be promoted to a master if one of the data nodes fails to return the cluster to full capacity/consensus. You can use them in environments where network traffic between data centers is a concern. Otherwise, three PGD data nodes per location is always preferred.
diff --git a/product_docs/docs/pgd/6/expanded-how-to/architectures/essential.mdx b/product_docs/docs/pgd/6/expanded-how-to/architectures/essential.mdx
new file mode 100644
index 00000000000..9c0fd37458d
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/architectures/essential.mdx
@@ -0,0 +1,11 @@
+---
+title: Essential Architectures
+navTitle: Essential
+---
+
+PGD 6 Expanded supports a wide range of architectures, including the Essential edition's standard and near-far architectures.
+
+With Expanded, you can deploy an Essential architecture and then add more groups to it or build out a more complex architecture as your needs grow. The Essential architectures are designed to be simple to deploy and manage, while still providing the core features of PGD.
+
+You can read about the Essential architectures in the [Essential How-to](/pgd/latest/essential-how-to/architectures/).
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/architectures/geo-distributed.mdx b/product_docs/docs/pgd/6/expanded-how-to/architectures/geo-distributed.mdx
new file mode 100644
index 00000000000..c73ebcb8b22
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/architectures/geo-distributed.mdx
@@ -0,0 +1,7 @@
+---
+title: Geo-Distributed Architectures
+navTitle: Geo-Distributed
+---
+
+PGD supports clusters that span multiple geographic, as well as logical, locations. These clusters are known as geo-distributed architectures.
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/architectures/images/1x3-cluster.svg b/product_docs/docs/pgd/6/expanded-how-to/architectures/images/1x3-cluster.svg
new file mode 100644
index 00000000000..d9aac6a7133
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/architectures/images/1x3-cluster.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/product_docs/docs/pgd/6/expanded-how-to/architectures/images/2x3-cluster.svg b/product_docs/docs/pgd/6/expanded-how-to/architectures/images/2x3-cluster.svg
new file mode 100644
index 00000000000..ba14b6bed03
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/architectures/images/2x3-cluster.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/product_docs/docs/pgd/6/expanded-how-to/architectures/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/architectures/index.mdx
new file mode 100644
index 00000000000..3bf1011cbf8
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/architectures/index.mdx
@@ -0,0 +1,17 @@
+---
+title: PGD Architectures
+navTitle: Architectures
+navigation:
+- always-on
+- essential
+- multi-location
+- geo-distributed
+---
+
+With PGD 6 Expanded, you can deploy a cluster in a wide range of architectures. Unlike PGD 6 Essential, which is limited to two architectures built from a limited number of groups, PGD 6 Expanded supports multiple architectures with a technically unlimited number of groups, including:
+
+* [**Always-on architecture**](always-on): A single PGD cluster with two or more groups in the same data center or availability zone. This architecture is designed for high availability and disaster recovery, ensuring that the database remains operational even if one group fails.
+* [**Essential's Standard/One-location architecture**](essential): A single PGD cluster with three nodes in the same data center or availability zone, the PGD 6 Essential architecture.
+* [**Multi-location architecture**](multi-location): A single PGD cluster with two or more groups in different data centers or availability zones.
+* [**Geo-distributed architecture**](geo-distributed): A single PGD cluster with two or more groups in different regions, like a multi-location architecture but with higher latency and potential network partitioning issues.
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/architectures/multi-location.mdx b/product_docs/docs/pgd/6/expanded-how-to/architectures/multi-location.mdx
new file mode 100644
index 00000000000..66dac43e59e
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/architectures/multi-location.mdx
@@ -0,0 +1,7 @@
+---
+title: Multi-Location Architectures
+navTitle: Multi-Location
+---
+
+PGD 6 Expanded inherently supports architectures that span multiple locations, such as data centers or availability zones. This is a key feature of the Expanded edition, allowing you to build robust and resilient distributed databases that can handle failures and maintain high availability across different geographic locations.
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/index.mdx
new file mode 100644
index 00000000000..fc8e947f063
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/index.mdx
@@ -0,0 +1,24 @@
+---
+title: Expanded How-to
+navTitle: Expanded How-to
+description: How to use PGD Expanded's advanced features and capabilities.
+---
+
+## Overview
+
+PGD Expanded offers users the full PGD capability set, whereas PGD Essential is a best-practice, controlled, and simplified version of PGD. The Expanded version is for users who want to take advantage of the full set of features and capabilities of PGD, including advanced architectures, custom configurations, and more complex use cases.
+
+PGD Expanded is designed for users who need the highest level of flexibility and control over their database environments. It provides a comprehensive set of tools and features that allow users to customize their deployments and optimize their performance.
+
+## Expanded Features
+
+The following features are enabled in PGD Expanded:
+
+- **Multi-master replication**: PGD Expanded supports multi-master replication, allowing users to create a highly available and fault-tolerant database environment. This feature enables users to write to any node in the cluster, providing maximum flexibility and scalability.
+
+- **Conflict resolution**: PGD Expanded's support for multi-master replication includes advanced conflict resolution capabilities, allowing users to handle conflicts that may arise during replication. This feature ensures that data consistency is maintained across all nodes in the cluster.
+
+- **Advanced durability**: PGD Expanded opens up the full set of durability options in PGD, with customizable commit scopes offering flexibility beyond PGD Essential's predefined commit scopes. This feature allows users to optimize their database performance and durability based on their specific needs.
+
+- **Custom configurations**: PGD Expanded allows users to customize their database configurations to meet their specific needs. Where PGD Essential supports two basic architectures with limited numbers of nodes and groups, there are no restrictions on the number of nodes, node types, or replication configurations that can be used in a PGD Expanded deployment.
+
diff --git a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx b/product_docs/docs/pgd/6/expanded-how-to/install/01-prerequisites.mdx
similarity index 65%
rename from product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx
rename to product_docs/docs/pgd/6/expanded-how-to/install/01-prerequisites.mdx
index 1c9b8d291ff..01970bb2116 100644
--- a/product_docs/docs/pgd/5.8/deploy-config/deploy-manual/deploying/01-provisioning-hosts.mdx
+++ b/product_docs/docs/pgd/6/expanded-how-to/install/01-prerequisites.mdx
@@ -1,23 +1,19 @@
 ---
-title: Step 1 - Provisioning hosts
-navTitle: Provisioning hosts
-deepToC: true
-redirects:
-  - /pgd/latest/install-admin/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg
-  - /pgd/latest/admin-manual/installing/01-provisioning-hosts/ #generated for pgd deploy-config-planning reorg
+title: 1 - Prerequisites for Expanded installation
+navTitle: Prerequisites
 ---
 
 ## Provisioning hosts
 
-The first step in the process of deploying PGD is to provision and configure hosts.
+The first step in the process of deploying PGD Expanded is to provision and configure hosts.
 
 You can deploy to virtual machine instances in the cloud with Linux installed, on-premises virtual machines with Linux installed, or on-premises physical hardware, also with Linux installed.
 
 Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that you can access using SSH with a user that has superuser, administrator, or sudo privileges.
 
-Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster. 
+Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster.
 
-On cloud deployments, you can do this over the public network or over a VPC. 
+On cloud deployments, you can do this over the public network or over a VPC.
 
 On-premises deployments must be able to connect over the local network.
 
@@ -47,32 +43,30 @@ With the admin user created, ensure that each machine can communicate with the o
 
 In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced Server, 5432 for EDB
 Postgres Extended and community PostgreSQL) must be open
-to all machines in the cluster. If you plan to deploy PGD Proxy, its port must be
-open to any applications that will connect to the cluster. Port 6432 is typically
-used for PGD Proxy.
+to all machines in the cluster. The PGD Connection Manager must also be
+accessible to all nodes in the cluster. By default, the Connection Manager uses port 6432 (or 6444 for EDB Postgres Advanced Server).
 
 ## Worked example
 
-For this example, three hosts with Red Hat Enterprise Linux 9 were provisioned:
+For this series of worked examples, three hosts with Red Hat Enterprise Linux 9 were provisioned:
 
-* host-one
-* host-two
-* host-three
+* host-1
+* host-2
+* host-3
 
-Each is configured with an admin user named admin.
+These hosts were configured in the cloud. As such, each host has both a public and private IP address. We will use the private IP addresses for the cluster.
 
-These hosts were configured in the cloud. As such, each host has both a public and private IP address. 
+The private IP addresses are:
+
+- host-1: 192.168.254.166
+- host-2: 192.168.254.247
+- host-3: 192.168.254.135
 
- Name | Public IP | Private IP
-------|-----------|----------------------
- host-one | 172.24.117.204 | 192.168.254.166
- host-two | 172.24.113.247 | 192.168.254.247
- host-three | 172.24.117.23 | 192.168.254.135
 
 For the example cluster, `/etc/hosts` was also edited to use those private IP addresses:
 
-```
-192.168.254.166 host-one
-192.168.254.247 host-two
-192.168.254.135 host-three
+```text
+192.168.254.166 host-1
+192.168.254.247 host-2
+192.168.254.135 host-3
 ```
diff --git a/product_docs/docs/pgd/6/expanded-how-to/install/02-configure-repositories.mdx b/product_docs/docs/pgd/6/expanded-how-to/install/02-configure-repositories.mdx
new file mode 100644
index 00000000000..46a6045a3bb
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/install/02-configure-repositories.mdx
@@ -0,0 +1,75 @@
+---
+title: Step 2 - Configure repositories
+navTitle: Configure repositories
+description: Configuring the repositories for the database and pgd software on each host.
+deepToC: true
+---
+
+On each host which you want to use as a PGD data node, you need to install the database and the PGD software.
+
+## Configure repositories
+
+Set the following environment variables:
+
+### `EDB_SUBSCRIPTION_TOKEN`
+
+This is the token you received when you registered for the EDB subscription. It is used to authenticate your access to the EDB repository.
+
+```bash
+export EDB_SUBSCRIPTION_TOKEN=
+```
+
+### `EDB_SUBSCRIPTION_PLAN`
+
+This is the type of subscription you have with EDB. It can be `standard`, `enterprise`, or `community`.
+
+```bash
+export EDB_SUBSCRIPTION_PLAN=
+```
+
+### `EDB_REPO_TYPE`
+
+This is the type of package manager you use, which informs the installer which type of package you need. This can be `deb` for Ubuntu/Debian or `rpm` for CentOS/RHEL.
+
+```bash
+export EDB_REPO_TYPE=
+```
+
+## Install the repositories
+
+There are two repositories you need to configure: one for the database software and one for the PGD software.
+
+The following command will download and run a script that configures your package manager to use the EDB repository for databases.
+
+```bash
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_PLAN/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+```
+
+The following command will download and run a script that configures your package manager to use the EDB repository for PGD.
+
+```bash
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+```
+
+## Worked example
+
+In this example, we will configure the repositories on a CentOS/RHEL system that will allow us to install EDB Postgres Extended Server 17 with PGD Expanded using an enterprise subscription.
+
+### Set the environment variables
+
+```bash
+export EDB_SUBSCRIPTION_TOKEN=XXXXXXXXXXXXXX
+export EDB_SUBSCRIPTION_PLAN=enterprise
+export EDB_REPO_TYPE=rpm
+```
+
+### Install the repositories
+
+```bash
+# For PGD Expanded, there are two repositories to install.
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_PLAN/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.$EDB_REPO_TYPE.sh" | sudo -E bash
+```
+
+The next step is to [install the database and PGD software](03-installing-database-and-pgd/).
\ No newline at end of file diff --git a/product_docs/docs/pgd/6/expanded-how-to/install/03-installing-database-and-pgd.mdx b/product_docs/docs/pgd/6/expanded-how-to/install/03-installing-database-and-pgd.mdx new file mode 100644 index 00000000000..941fa253b6b --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/install/03-installing-database-and-pgd.mdx @@ -0,0 +1,143 @@ +--- +title: Step 3 - Installing the database and pgd +navTitle: Installing +description: Installing the database and pgd software on each host. +deepToC: true +--- + +On each host which you want to use as a PGD data node, you need to install the database and the PGD software. + +After you have [configured the EDB repository](02-configure-repositories), you can install the database and PGD software using your package manager. + +## Install the database and PGD software + +### Set the Postgres version + +Set an environment variable to specify the version of Postgres you want to install. This is typically `17` for Postgres 17. + +```bash +export PG_VERSION=17 +``` + +### Set the package names + +Set an environment variable to specify the package names for the database and PGD software. The package names will vary depending on the database you are using and the platform you are on. + + + + +


+ +#### EDB Postgres Advanced Server + + + + + +```shell +export EDB_PACKAGES="edb-as$PG_VERSION-server edb-pgd6-expanded-epas$PG_VERSION" +``` + + + + + +```shell +export EDB_PACKAGES="edb-as$PG_VERSION-server edb-pgd6-expanded-epas$PG_VERSION" +``` + + + + +
+ + +


+ +#### EDB Postgres Extended + + + + + + ```bash + export EDB_PACKAGES="edb-postgresextended-$PG_VERSION edb-pgd6-expanded-pgextended$PG_VERSION" + ``` + + + + + + ```bash + export EDB_PACKAGES="edb-postgresextended$PG_VERSION-server edb-postgresextended$PG_VERSION-contrib edb-pgd6-expanded-pgextended$PG_VERSION" + ``` + + + +
+ + +


+ +#### Community PostgreSQL + + + + + + ```bash + export EDB_PACKAGES="postgresql-$PG_VERSION edb-pgd6-expanded-pg$PG_VERSION" + ``` + + + + + ```bash + export EDB_PACKAGES="postgresql$PG_VERSION-server postgresql$PG_VERSION-contrib edb-pgd6-expanded-pg$PG_VERSION" + ``` + + + + + +
+ +
+ +


+
+### Run the installation command
+
+Run the installation command appropriate for your platform.
+
+
+```shell
+sudo apt install -y $EDB_PACKAGES
+```
+
+
+```shell
+sudo dnf install -y $EDB_PACKAGES
+```
+
+
+This command will install the specified packages and any dependencies they require. Once the installation is complete, you will have the database and PGD software installed on your system.
+
+## Worked example
+
+In this example, we will install EDB Postgres Extended Server 17 with PGD Expanded on a CentOS/RHEL system using the repository configuration we set up in the [previous step's worked example](02-configure-repositories#worked-example).
+
+```bash
+export PG_VERSION=17
+export EDB_PACKAGES="edb-postgresextended$PG_VERSION-server edb-postgresextended$PG_VERSION-contrib edb-pgd6-expanded-pgextended$PG_VERSION"
+sudo dnf install -y $EDB_PACKAGES
+```
diff --git a/product_docs/docs/pgd/6/expanded-how-to/install/04-configuring-cluster.mdx b/product_docs/docs/pgd/6/expanded-how-to/install/04-configuring-cluster.mdx
new file mode 100644
index 00000000000..8753ca03c80
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/install/04-configuring-cluster.mdx
@@ -0,0 +1,287 @@
+---
+title: Step 4 - Configuring the cluster
+navTitle: Configuring
+deepToC: true
+---
+
+## Configuring the cluster
+
+The next step in the process is to configure the database and the cluster.
+
+This involves logging in to each host and, as the database user, running the `pgd` command to create the cluster.
+
+These steps will vary according to which platform and which version of Postgres you are using.
+
+## Cluster name
+
+You will need to choose a name for your cluster. This is the name that will be used to identify the cluster in the PGD CLI and in the database. It will be referred to as `` in the examples. If not specified, the default name is `pgd`.
+
+## Group names
+
+You will also need to choose a name for the group. This is the name that will be used to identify the group in the PGD CLI and in the database. It will be referred to as `` in the examples.
+
+The group name must be unique within the cluster.
+
+## Node names
+
+You will also need to choose a name for each node. This is the name that will be used to identify the node in the PGD CLI and in the database. It will be referred to as `` in the examples. This is separate from the host name, which is the name of the machine on which the node is running.
+
+The node name must be unique within the group and within the cluster.
+
+## Paths and users
+
+The paths and users used in the examples will vary according to which version of Postgres and which platform you are using.
+
+Select your Postgres version:
+
+
Then select your platform: + + + + +| | | +|---------------------------|-------------------------------------| +| Postgres User | `enterprisedb` | +| Postgres Port | `5444` | +| Postgres Executable files | `/usr/lib/edb-as/$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb-as/$PG_VERSION/main/` | + +```shell +sudo -iu enterprisedb +export PG_VERSION= +export PATH=$PATH:/usr/lib/edb-as/$PG_VERSION/bin/ +export PGDATA=/var/lib/edb-as/$PG_VERSION/main/ +export PGPORT=5444 +``` + + + + +| | | +|---------------------------|------------------------------------| +| Postgres User | `enterprisedb` | +| Postgres Port | `5444` | +| Postgres Executable files | `/usr/edb/as$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb/as$PG_VERSION/data/` | + +```shell +sudo -iu enterprisedb +export PG_VERSION= +export PATH=$PATH:/usr/edb/as$PG_VERSION/bin/ +export PGDATA=/var/lib/edb/as$PG_VERSION/data/ +export PGPORT=5444 +``` + + + +
+ + +
Then select your platform: + + + + +| | | +|---------------------------|--------------------------------------| +| Postgres User | `postgres` | +| Postgres Port | `5432` | +| Postgres Executable files | `/usr/lib/edb-pge/$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb-pge/$PG_VERSION/main/` | + +```shell +sudo -iu postgres +export PG_VERSION= +export PATH=$PATH:/usr/lib/edb-pge/$PG_VERSION/bin/ +export PGDATA=/var/lib/edb-pge/$PG_VERSION/main/ +export PGPORT=5432 +``` + + + + +| | | +|---------------------------|--------------------------------------| +| Postgres User | `postgres` | +| Postgres Port | `5432` | +| Postgres Executable files | `/usr/edb/pge$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/edb-pge/$PG_VERSION/data/` | + +```shell +sudo -iu postgres +export PG_VERSION= +export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/ +export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/ +export PGPORT=5432 +``` + + + +
+ + +
Then select your platform: + + + + +| | | +|---------------------------|-----------------------------------------| +| Postgres User | `postgres` | +| Postgres Port | `5432` | +| Postgres Executable files | `/usr/lib/postgresql/$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/postgresql/$PG_VERSION/main/` | + +```shell +sudo -iu postgres +export PG_VERSION= +export PATH=$PATH:/usr/lib/postgresql/$PG_VERSION/bin/ +export PGDATA=/var/lib/postgresql/$PG_VERSION/main/ +export PGPORT=5432 +``` + + + + +| | | +|---------------------------|------------------------------------| +| Postgres User | `postgres` | +| Postgres Port | `5432` | +| Postgres Executable files | `/usr/pgsql-$PG_VERSION/bin/` | +| Postgres Data Directory | `/var/lib/pgsql/$PG_VERSION/data/` | + +```shell +sudo -iu postgres +export PG_VERSION= +export PATH=$PATH:/usr/pgsql-$PG_VERSION/bin/ +export PGDATA=/var/lib/pgsql/$PG_VERSION/data/ +export PGPORT=5432 +``` + + + +
+
+
+## On each host
+
+Run the commands from the script/settings above to set the environment variables and paths for the Postgres user on each host.
+This ensures that the `pgd` command can find the Postgres executable files and data directory.
+
+1. Log in as the database user (`enterprisedb` for EDB Postgres Advanced Server, otherwise `postgres`).
+
+```bash
+sudo -iu 
+```
+
+1. Set the Postgres version environment variable. Don't forget to replace `` with the actual version number you are using, such as `17`.
+
+```bash
+export PG_VERSION=
+```
+
+1. Add the Postgres executable files to your path.
+
+```bash
+export PATH=$PATH:
+```
+
+1. Set the Postgres data directory environment variable.
+
+```bash
+export PGDATA=
+```
+
+1. Set the Postgres password environment variable. Don't forget to replace `` with the actual password you want for the database user.
+
+```bash
+export PGPASSWORD=
+```
+
+### On the first host
+
+The first host in the cluster is also the first node and is where we begin the cluster creation.
+On the first host, run the following command to create the cluster:
+
+```bash
+pgd node setup --dsn "host= user= port= dbname=" --group-name 
+```
+
+This command will create the data directory and initialize the database, then create the cluster and the group on the first node.
+
+### On the second host
+
+On the second host, run the following command to create a node and join it to the cluster:
+
+```bash
+pgd node setup --dsn "host= user= port= dbname=" --cluster-dsn "host= user= port= dbname="
+```
+
+This command will create the node on the second host and then join it to the cluster, using the `--cluster-dsn` setting to connect to the first host.
+
+### On the third host
+
+On the third host, run the following command to create a node and join it to the cluster:
+
+```bash
+pgd node setup --dsn "host= user= port= dbname=" --cluster-dsn "host= user= port= dbname="
+```
+
+This command will create the node on the third host and then join it to the cluster, using the `--cluster-dsn` setting to connect to the first host.
+
+## Worked example
+
+In this example, we will configure the PGD Expanded cluster with EDB Postgres Extended Server 17 on a CentOS/RHEL system that we [configured](02-configure-repositories) and [installed](03-installing-database-and-pgd) in the previous steps.
+
+We will now create a cluster called `pgd` with three nodes called `node-1`, `node-2`, and `node-3`.
+
+* The group name will be `group-1`. The hosts are `host-1`, `host-2`, and `host-3`.
+* The Postgres version is 17.
+* The database user is `postgres`.
+* The database port is 5432.
+* The database name is `pgddb`.
+* The Postgres executable files are in `/usr/edb/pge17/bin/`.
+* The Postgres data directory is in `/var/lib/edb-pge/17/data/`.
+* The Postgres password is `secret`.
+
+(Note that the worked example exports PG_VERSION again after switching users, because `sudo -iu` starts a fresh login environment for the database user.)
+
+#### On the first host
+
+```bash
+sudo -iu postgres
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPASSWORD=secret
+pgd node node-1 setup --dsn "host=host-1 user=postgres port=5432 dbname=pgddb" --group-name group-1
+```
+
+#### On the second host
+
+```bash
+sudo -iu postgres
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPASSWORD=secret
+pgd node node-2 setup --dsn "host=host-2 user=postgres port=5432 dbname=pgddb" --cluster-dsn "host=host-1 user=postgres port=5432 dbname=pgddb"
+```
+
+#### On the third host
+
+```bash
+sudo -iu postgres
+export PG_VERSION=17
+export PATH=$PATH:/usr/edb/pge$PG_VERSION/bin/
+export PGDATA=/var/lib/edb-pge/$PG_VERSION/data/
+export PGPASSWORD=secret
+pgd node node-3 setup --dsn "host=host-3 user=postgres port=5432 dbname=pgddb" --cluster-dsn "host=host-1 user=postgres port=5432 dbname=pgddb"
+```
+
+The next step is to [check the cluster](05-check-cluster).
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/install/05-check-cluster.mdx b/product_docs/docs/pgd/6/expanded-how-to/install/05-check-cluster.mdx
new file mode 100644
index 00000000000..a8ccbefb5f7
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/install/05-check-cluster.mdx
@@ -0,0 +1,222 @@
+---
+title: Step 5 - Checking the cluster
+navTitle: Checking the cluster
+deepToC: true
+---
+
+## Checking the cluster
+
+With the cluster up and running, it's worthwhile to run some basic checks to see how effectively it's replicating.
+
+The following example shows one quick way to do this, but you must ensure that any testing you perform is appropriate for your use case.
+
+On any of the installed and configured nodes, log in and run `psql` to connect to the database. If you are using EDB Postgres Advanced Server, use the `enterprisedb` user, otherwise use `postgres`:
+
+```bash
+sudo -iu postgres psql pgddb
+```
+
+This command connects you *directly* to the database on the local node.
+
+### Quick test
+
+* **Preparation**
+  * Ensure the cluster is ready:
+    * Log in to the database on host-1/node-1.
+    * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`.
+    * When the query returns, the cluster is ready.
+
+* **Create data**
+  The simplest way to test that the cluster is replicating is to log in to one node, create a table, and populate it.
+  * On node-1, create a table:
+    ```sql
+    CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
+    ```
+  * On node-1, populate the table:
+    ```sql
+    INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
+    ```
+  * On node-1, monitor performance:
+    ```sql
+    select * from bdr.node_replication_rates;
+    ```
+  * On node-1, get a sum of the value column (for checking):
+    ```sql
+    select COUNT(*),SUM(value) from quicktest;
+    ```
+* **Check data**
+  * Log in to node-2.
+    Log in to the database on host-2/node-2.
+  * On node-2, get a sum of the value column (for checking):
+    ```sql
+    select COUNT(*),SUM(value) from quicktest;
+    ```
+  * Compare with the result from node-1.
+  * Log in to node-3.
+    Log in to the database on host-3/node-3.
+  * On node-3, get a sum of the value column (for checking):
+    ```sql
+    select COUNT(*),SUM(value) from quicktest;
+    ```
+  * Compare with the result from node-1 and node-2.
+
+## Worked example
+
+### Preparation
+
+Log in to host-1's Postgres server.
+
+```shell
+ssh admin@host-1
+sudo -iu postgres psql "host=host-1 port=5432 user=postgres dbname=pgddb"
+```
+
+This is your connection to PGD's node-1.
+
+#### Ensure the cluster is ready
+
+To ensure that the cluster is ready to go, run:
+
+```sql
+select bdr.wait_slot_confirm_lsn(NULL, NULL)
+```
+
+This query blocks while the cluster is busy initializing and returns when the cluster is ready.
+
+In another window, log in to host-2's Postgres server:
+
+```shell
+ssh admin@host-2
+sudo -iu postgres psql "host=host-2 port=5432 user=postgres dbname=pgddb"
+```
+
+### Create data
+
+#### On node-1, create a table
+
+Run:
+
+```sql
+CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
+```
+
+#### On node-1, populate the table
+
+```sql
+INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
+```
+
+This command generates a table of 10000 rows of random values.
+
+#### On node-1, monitor performance
+
+As soon as possible, run:
+
+```sql
+select * from bdr.node_replication_rates;
+```
+
+The command shows statistics about how quickly that data was replicated to the other two nodes:
+
+```console
+pgddb=# select * from bdr.node_replication_rates;
+__OUTPUT__
+ peer_node_id | target_name | sent_lsn | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interv
+al
+--------------+-------------+-----------+------------+------------+------------------+-----------------+------------+---------------
+---
+ 1954860017 | node-3 | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13682 | 00:00:00
+ 2299992455 | node-2 | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13763 | 00:00:00
+(2 rows)
+```
+
+And it's already replicated.
+
+#### On node-1 get a checksum
+
+Run:
+
+```sql
+select COUNT(*),SUM(value) from quicktest;
+```
+
+This command gets some values from the generated data:
+
+```sql
+pgddb=# select COUNT(*),SUM(value) from quicktest;
+__OUTPUT__
+ count  |    sum
+--------+-----------
+ 100000 | 498884606
+(1 row)
+```
+
+### Check data
+
+#### Log in to host-2's Postgres server
+
+```shell
+ssh admin@host-2
+sudo -iu postgres psql "host=host-2 port=5432 user=postgres dbname=pgddb"
+```
+
+This is your connection to PGD's node-2.
+
+#### On node-2, get a checksum
+
+Run:
+
+```sql
+select COUNT(*),SUM(value) from quicktest;
+```
+
+This command gets node-2's values for the generated data:
+
+```sql
+pgddb=# select COUNT(*),SUM(value) from quicktest;
+__OUTPUT__
+ count  |    sum
+--------+-----------
+ 100000 | 498884606
+(1 row)
+```
+
+#### Compare with the result from node-1
+
+The values are identical.
+
+You can repeat the process with node-3 or generate new data on any node and see it replicate to the other nodes.
+
+#### Log in to host-3's Postgres server
+
+```shell
+ssh admin@host-3
+sudo -iu postgres psql "host=host-3 port=5432 user=postgres dbname=pgddb"
+```
+
+This is your connection to PGD's node-3.
+
+#### On node-3, get a checksum
+
+Run:
+
+```sql
+select COUNT(*),SUM(value) from quicktest;
+```
+
+This command gets node-3's values for the generated data:
+
+```sql
+pgddb=# select COUNT(*),SUM(value) from quicktest;
+__OUTPUT__
+ count  |    sum
+--------+-----------
+ 100000 | 498884606
+(1 row)
+```
+
+#### Compare with the result from node-1 and node-2
+
+The values are identical.
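+
+Once you've finished checking replication, you can optionally drop the test table from any node, and the drop replicates to the other nodes (DDL replication is enabled by default in PGD):
+
+```bash
+# Remove the quicktest table from the whole cluster
+sudo -iu postgres psql "host=host-1 port=5432 user=postgres dbname=pgddb" -c "DROP TABLE quicktest;"
+```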
diff --git a/product_docs/docs/pgd/6/expanded-how-to/install/images/edbrepos2.0.png b/product_docs/docs/pgd/6/expanded-how-to/install/images/edbrepos2.0.png new file mode 100644 index 00000000000..e2b61730574 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/install/images/edbrepos2.0.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1900876939036491c2604939cc7173fa347d5ee218656ef4e0f2d984c262231 +size 278800 diff --git a/product_docs/docs/pgd/6/expanded-how-to/install/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/install/index.mdx new file mode 100644 index 00000000000..4e139a07e6e --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/install/index.mdx @@ -0,0 +1,12 @@ +--- +title: Installing and configuring EDB Postgres Distributed 6 +navTitle: Installing and configuring +--- + +This section covers how to manually deploy and configure EDB Postgres Distributed 6. + +* [Provisioning hosts](01-prerequisites) +* [Configuring the EDB repository](02-configure-repositories) +* [Installing the database and PGD software](03-installing-database-and-pgd) +* [Configuring the cluster](04-configuring-cluster) +* [Checking the cluster](05-check-cluster) diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/barman.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/barman.mdx new file mode 100644 index 00000000000..97648a22836 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/barman.mdx @@ -0,0 +1,33 @@ +--- +title: Backup and Restore with Barman +navTitle: Barman +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/index.mdx new file mode 100644 index 00000000000..85af9de8e75 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/index.mdx @@ -0,0 +1,14 @@ +--- +title: Backup and Restore SOPs +navTitle: Backup and Restore +navigation: +- pg_dump +- barman +--- +The SOPs in this section cover the process of backing up and restoring the Postgres database servers running on the nodes in a PGD cluster. It includes best practices for backup and restore, tools to use, and common issues that may arise during the backup and restore process. + +## SOPs + +- [Backup and Restore with pg_dump](/pgd/latest/expanded-how-to/sops/backup-restore/pg_dump) +- [Backup and Restore with Barman](/pgd/latest/expanded-how-to/sops/backup-restore/barman) + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/pg_dump.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/pg_dump.mdx new file mode 100644 index 00000000000..9307ca21037 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/backup-restore/pg_dump.mdx @@ -0,0 +1,33 @@ +--- +title: Backup and Restore with pg_dump +navTitle: pg_dump +--- + +## Overview + +A brief description of the task and its purpose. 
+ +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/index.mdx new file mode 100644 index 00000000000..d8a12d42942 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/index.mdx @@ -0,0 +1,14 @@ +--- +title: Data Movement SOPs +navTitle: Data Movement +navigation: +- move-in +- move-out +--- + +This section covers how to move data in and out of a Postgres Distributed cluster as efficiently as possible. + +## SOPs + +- [Moving Data into a PGD Cluster](/pgd/latest/expanded-how-to/sops/data-movement/move-in) +- [Moving Data out of a PGD Cluster](/pgd/latest/expanded-how-to/sops/data-movement/move-out) diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/move-in.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/move-in.mdx new file mode 100644 index 00000000000..ca81d71a097 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/move-in.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Moving Data into the Cluster +navTitle: Move In +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/move-out.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/move-out.mdx new file mode 100644 index 00000000000..498bce0fc32 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/data-movement/move-out.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Moving Data Out of the Cluster +navTitle: Move Out +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. 
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/how-to-use-sops.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/how-to-use-sops.mdx
new file mode 100644
index 00000000000..53208b36222
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/sops/how-to-use-sops.mdx
@@ -0,0 +1,24 @@
+---
+title: How to use Standard Operating Procedures
+navTitle: How to use
+description: How to use Standard Operating Procedures (SOPs) for EDB Postgres Distributed (PGD).
+---
+
+Standard Operating Procedures, or SOPs, are sets of instructions that cover the tasks needed for the successful operation of EDB Postgres Distributed (PGD).
+
+They are designed to be easy to follow and provide step-by-step guidance for performing various tasks.
+
+To make them easy to follow, each SOP is divided into sections that cover the following:
+
+- **Overview**: A brief description of the task and its purpose.
+- **Prerequisites**: Any requirements or dependencies that must be met before performing the task.
+- **Instructions**: Step-by-step generic instructions for performing the task.
+- **Worked Example**: A specific example of how to perform the task, including any relevant commands or configurations.
+- **Notes**: Additional information or tips that may be helpful.
+- **Troubleshooting**: Common issues that may arise during the task and how to resolve them.
+- **References**: Links to related documentation or resources.
+
+## How to use SOPs
+
+**TODO**: Add a description of how to use SOPs.
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/index.mdx
new file mode 100644
index 00000000000..47ad775cc94
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/sops/index.mdx
@@ -0,0 +1,49 @@
+---
+title: Expanded Standard Operating Procedures
+navTitle: Expanded Standard Operating Procedures
+navigation:
+- how-to-use-sops
+- install
+- data-movement
+- monitoring
+- maintenance
+- backup-restore
+- upgrade
+- troubleshooting
+---
+
+## Overview
+
+Standard Operating Procedures (SOPs) are a set of procedures for the successful operation of EDB Postgres Distributed (PGD). These procedures cover various aspects of the system, including installation, configuration, backup and restore, upgrades, monitoring, and troubleshooting.
+
+SOPs are designed to address the most common tasks around using and maintaining a PGD cluster. They provide a structured approach to performing these tasks, ensuring consistency and reliability in operations. Read more about the structure of SOPs in the [How to Use SOPs](/pgd/latest/expanded-how-to/sops/how-to-use-sops).
+
+This document provides an overview of the SOPs and links to detailed instructions for each procedure.
+
+## [Installation and Configuration](/pgd/latest/expanded-how-to/sops/install)
+
+The SOPs in this section cover the procedures for installing PGD, creating a new PGD cluster, adding a node to an existing cluster, and configuring PGD.
+
+## [Data Movement](/pgd/latest/expanded-how-to/sops/data-movement)
+
+The SOPs in this section cover the procedures for moving data into or out of a PGD cluster. This includes importing and exporting data efficiently.
+
+## [Monitoring](/pgd/latest/expanded-how-to/sops/monitoring)
+
+The SOPs in this section cover the procedures for monitoring a Postgres Distributed (PGD) cluster. Monitoring is crucial for maintaining the health and performance of your database system.
+
+## [Maintenance](/pgd/latest/expanded-how-to/sops/maintenance)
+
+The SOPs in this section cover the procedures for maintaining a Postgres Distributed (PGD) cluster. They cover routine maintenance tasks and how to perform them when working with a PGD cluster.
+
+## [Backup and Restore](/pgd/latest/expanded-how-to/sops/backup-restore)
+
+The SOPs in this section cover the process of backing up and restoring the Postgres database servers running on the nodes in a PGD cluster.
+
+## [Upgrade](/pgd/latest/expanded-how-to/sops/upgrade)
+
+The SOPs in this section cover the process of upgrading the Postgres database servers running on the nodes in a PGD cluster and upgrading PGD itself. This includes minor and major upgrades of Postgres.
+
+## [Troubleshooting](/pgd/latest/expanded-how-to/sops/troubleshooting)
+
+The SOPs in this section cover the procedures for troubleshooting common issues that may arise in a Postgres Distributed (PGD) cluster. They include steps to diagnose and resolve problems effectively.
diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/install/add-node.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/install/add-node.mdx
new file mode 100644
index 00000000000..4f1f19a9d01
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/sops/install/add-node.mdx
@@ -0,0 +1,33 @@
+---
+title: SOP - Adding a Node to an Existing Cluster
+navTitle: Add Node
+---
+
+## Overview
+
+A brief description of the task and its purpose.
+
+## Prerequisites
+
+Any requirements or dependencies that must be met before performing the task.
+
+## Instructions
+
+Step-by-step generic instructions for performing the task.
+
+## Worked Example
+
+A specific example of how to perform the task, including any relevant commands or configurations.
+
+## Notes
+
+Additional information or tips that may be helpful.
+
+## Troubleshooting
+
+Common issues that may arise during the task and how to resolve them.
+
+## References
+
+Links to related documentation or resources.
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/install/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/install/index.mdx
new file mode 100644
index 00000000000..e097484c7ba
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/sops/install/index.mdx
@@ -0,0 +1,15 @@
+---
+title: Installation and Configuration SOPs
+navTitle: Installation
+---
+
+## Overview
+
+This section covers the SOPs for installing PGD, creating a new PGD cluster, adding a node to an existing cluster, and configuring PGD.
+
+## SOPs
+
+- [Installing PGD on a New Node](/pgd/latest/expanded-how-to/sops/install/new-node)
+- [Adding a Node to an Existing Cluster](/pgd/latest/expanded-how-to/sops/install/add-node)
+- [Creating a New Group](/pgd/latest/expanded-how-to/sops/install/new-group)
+
diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/install/new-group.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/install/new-group.mdx
new file mode 100644
index 00000000000..d071e96dce7
--- /dev/null
+++ b/product_docs/docs/pgd/6/expanded-how-to/sops/install/new-group.mdx
@@ -0,0 +1,33 @@
+---
+title: SOP - Creating a New Group
+navTitle: New Group
+---
+
+## Overview
+
+A brief description of the task and its purpose.
+
+## Prerequisites
+
+Any requirements or dependencies that must be met before performing the task.
+
+## Instructions
+
+Step-by-step generic instructions for performing the task.
+
+## Worked Example
+
+A specific example of how to perform the task, including any relevant commands or configurations.
+ +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/install/new-node.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/install/new-node.mdx new file mode 100644 index 00000000000..b68f0090507 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/install/new-node.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Installing PGD on a New Node +navTitle: New Node +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/index.mdx new file mode 100644 index 00000000000..7f4f03ba849 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/index.mdx @@ -0,0 +1,17 @@ +--- +title: Maintenance SOPs +navTitle: Maintenance +navigation: +- routine +- node-failures +- online-vacuum +--- + +This section covers the expanded SOPs for maintaining a Postgres Distributed (PGD) cluster. Regular maintenance is crucial for ensuring the health and performance of your database system. + +## SOPs + +- [Performing Routine Maintenance](/pgd/latest/expanded-how-to/sops/maintenance/routine) +- [Handling Node Failures](/pgd/latest/expanded-how-to/sops/maintenance/node-failures) +- [Online Vacuuming](/pgd/latest/expanded-how-to/sops/maintenance/online-vacuum) + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/node-failures.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/node-failures.mdx new file mode 100644 index 00000000000..d7adaee81b2 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/node-failures.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Handling Node Failures +navTitle: Node Failures +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. 
+ diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/online-vacuum.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/online-vacuum.mdx new file mode 100644 index 00000000000..ed1eb9cd1a5 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/online-vacuum.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Online Vacuuming +navTitle: Online Vacuuming +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/routine.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/routine.mdx new file mode 100644 index 00000000000..86e819bc3de --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/maintenance/routine.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Performing Routine Maintenance +navTitle: Routine Maintenance +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/monitoring/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/monitoring/index.mdx new file mode 100644 index 00000000000..8f809e670d4 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/monitoring/index.mdx @@ -0,0 +1,10 @@ +--- +title: Monitoring SOPs +navTitle: Monitoring +--- + +This section covers the expanded SOPs for monitoring a Postgres Distributed (PGD) cluster. Monitoring is crucial for maintaining the health and performance of your database system. + +## SOPs + +- [Monitoring with SQL](/pgd/latest/expanded-how-to/sops/monitoring/sql) diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/monitoring/sql.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/monitoring/sql.mdx new file mode 100644 index 00000000000..7b38c8a1a9d --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/monitoring/sql.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Monitoring PGD clusters using SQL +navTitle: With SQL +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. 
+ +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/template.txt b/product_docs/docs/pgd/6/expanded-how-to/sops/template.txt new file mode 100644 index 00000000000..b76eca8fdb2 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/template.txt @@ -0,0 +1,33 @@ +--- +title: SOP - +navTitle: +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/troubleshooting/cluster-operations.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/troubleshooting/cluster-operations.mdx new file mode 100644 index 00000000000..8ede35d33e3 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/troubleshooting/cluster-operations.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Troubleshooting Cluster Operations +navTitle: Cluster Operations +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/troubleshooting/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/troubleshooting/index.mdx new file mode 100644 index 00000000000..195511f51d7 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/troubleshooting/index.mdx @@ -0,0 +1,10 @@ +--- +title: Troubleshooting +navTitle: Troubleshooting +--- + +This section provides troubleshooting guidance for common issues encountered in Postgres Distributed (PGD) clusters. It includes solutions for problems related to cluster operations, node management, and performance. + +## SOPs + +- [Troubleshooting Cluster Operations](/pgd/latest/expanded-how-to/sops/troubleshooting/cluster-operations) diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/index.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/index.mdx new file mode 100644 index 00000000000..10a4a06a005 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/index.mdx @@ -0,0 +1,19 @@ +--- +title: Upgrading Postgres +navTitle: Upgrades +redirects: + - /pgd/latest/upgrades +navigation: +- minor +- major +- pgd +--- + +These SOPs cover the process of upgrading the Postgres database servers running on the nodes in a PGD cluster and upgrading PGD itself. This includes minor and major upgrades of Postgres. 
+ +## SOPs + +- [Upgrading Postgres to a Minor Version](/pgd/latest/expanded-how-to/sops/upgrade/minor) +- [Upgrading Postgres to a Major Version](/pgd/latest/expanded-how-to/sops/upgrade/major) +- [Upgrading Postgres Distributed](/pgd/latest/expanded-how-to/sops/upgrade/pgd) + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/major.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/major.mdx new file mode 100644 index 00000000000..5751e409335 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/major.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Major upgrades of Postgres +navTitle: Major Postgres +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/minor.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/minor.mdx new file mode 100644 index 00000000000..5dad6ecfad9 --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/minor.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Minor upgrades of Postgres +navTitle: Minor Postgres +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. + diff --git a/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/pgd.mdx b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/pgd.mdx new file mode 100644 index 00000000000..0be931aacfd --- /dev/null +++ b/product_docs/docs/pgd/6/expanded-how-to/sops/upgrade/pgd.mdx @@ -0,0 +1,33 @@ +--- +title: SOP - Upgrading PGD in PGD clusters +navTitle: Postgres Distributed +--- + +## Overview + +A brief description of the task and its purpose. + +## Prerequisites + +Any requirements or dependencies that must be met before performing the task. + +## Instructions + +Step-by-step generic instructions for performing the task. + +## Worked Example + +A specific example of how to perform the task, including any relevant commands or configurations. + +## Notes + +Additional information or tips that may be helpful. + +## Troubleshooting + +Common issues that may arise during the task and how to resolve them. + +## References + +Links to related documentation or resources. 
+ diff --git a/product_docs/docs/pgd/6/get-started/assets/docker-compose.yml b/product_docs/docs/pgd/6/get-started/assets/docker-compose.yml new file mode 100644 index 00000000000..bded4e6b65f --- /dev/null +++ b/product_docs/docs/pgd/6/get-started/assets/docker-compose.yml @@ -0,0 +1,24 @@ + +services: + host-1: + hostname: host-1 + image: pgd + environment: + PGPASSWORD: secret + PGD_JOIN_NODE_DSN: "port=5432 dbname=pgddb host=host-1 user=postgres" + restart: always + volumes: + - ./host-1-data:/var/lib/postgresql/data + + host-2: + hostname: host-2 + extends: host-1 + volumes: + - ./host-2-data:/var/lib/postgresql/data + + host-3: + hostname: host-3 + extends: host-1 + volumes: + - ./host-3-data:/var/lib/postgresql/data + diff --git a/product_docs/docs/pgd/6/get-started/assets/pgd_quickstart.sh b/product_docs/docs/pgd/6/get-started/assets/pgd_quickstart.sh new file mode 100755 index 00000000000..0707addb026 --- /dev/null +++ b/product_docs/docs/pgd/6/get-started/assets/pgd_quickstart.sh @@ -0,0 +1,86 @@ +#!/bin/bash +# This script will unpack embedded files directly into the current working directory. +# Designed for 'curl ... | bash' execution. + +# Exit immediately if a command exits with a non-zero status. +# Pipefail ensures that a pipeline's return status is the value of the last (rightmost) command +# to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. +set -eo pipefail + +echo "Starting PGD Docker Quickstart unpacker (via curl | bash)..." +echo "" +echo "============================================================================" +echo "WARNING: Files will be extracted directly into your current directory: '$(pwd)'" +echo " This process requires the current directory to be EMPTY." +echo " Affected files: Dockerfile.pge, docker-compose.yml, docker-entrypoint.sh, qs.sh" # <--- UPDATED list +echo "============================================================================" +echo "" + +# --- Check if current directory is empty --- +# 'ls -A .' lists all files and directories except '.' and '..' +if [ -n "$(ls -A .)" ]; then + echo "Error: The current directory '$(pwd)' is NOT empty." >&2 + echo " Please run this script from an empty directory to prevent accidental overwrites." >&2 + echo " Aborting extraction." >&2 + exit 1 +fi + +echo "Current directory is empty. Proceeding with extraction." +echo "" # Newline for readability + +# --- No temporary directory creation or 'cd' operations here --- +# Files will be extracted directly into the current working directory. + +echo "Extracting embedded files..." + +# --- Embedded Data Decoding and Extraction --- +# Each file's content is base64 encoded and embedded here as a here-document. + +echo " Extracting Dockerfile.pge..." 
+base64 -d <<'EOF_DOCKERFILE_PGE_' > "Dockerfile.pge" +RlJPTSBkZWJpYW4KClJVTiBhcHQtZ2V0IHVwZGF0ZSAteSAmJiBhcHQtZ2V0IGluc3RhbGwgLXkgY3VybAoKQVJHIEVEQl9TVUJTQ1JJUFRJT05fVE9LRU49IiIKClJVTiBjdXJsIC0xc0xmICJodHRwczovL2Rvd25sb2Fkcy5lbnRlcnByaXNlZGIuY29tLyR7RURCX1NVQlNDUklQVElPTl9UT0tFTn0vZW50ZXJwcmlzZS9zZXR1cC5kZWIuc2giIHwgYmFzaApSVU4gY3VybCAtMXNMZiAiaHR0cHM6Ly9kb3dubG9hZHMuZW50ZXJwcmlzZWRiLmNvbS8ke0VEQl9TVUJTQ1JJUFRJT05fVE9LRU59L3Bvc3RncmVzX2Rpc3RyaWJ1dGVkL3NldHVwLmRlYi5zaCIgfCBiYXNoCgpSVU4gYXB0LWdldCB1cGRhdGUgLXkgJiYgYXB0LWdldCBpbnN0YWxsIC15IGVkYi1wb3N0Z3Jlc2V4dGVuZGVkLTE3IGVkYi1wZ2Q2LWVzc2VudGlhbC1wZ2V4dGVuZGVkMTcgClJVTiBhcHQtZ2V0IGluc3RhbGwganEgLXkKUlVOIGFwdC1nZXQgaW5zdGFsbCAteSBpcHV0aWxzLXBpbmcKClJVTiBta2RpciAtcCAvdmFyL2xpYi9wb3N0Z3Jlc3FsCgpDT1BZIC4vZG9ja2VyLWVudHJ5cG9pbnQuc2ggL3Zhci9saWIvcG9zdGdyZXNxbC8KClJVTiBjaG93biAtUiBwb3N0Z3Jlczpwb3N0Z3JlcyAvdmFyL2xpYi9wb3N0Z3Jlc3FsCgpSVU4gbWtkaXIgLXAgL2V0Yy9lZGIvcGdkLWNsaQoKUlVOIGNob3duIC1SIHBvc3RncmVzOnBvc3RncmVzIC9ldGMvZWRiL3BnZC1jbGkvCgoKRU5WIFBBVEg9Ii91c3IvbGliL2VkYi1wZ2UvMTcvYmluOiR7UEFUSH0iCkVOViBQR0RBVEE9Ii92YXIvbGliL2VkYi1wZ2UvMTcvZGF0YSIKClJVTiBta2RpciAtcCAvdmFyL2xpYi9lZGItcGdlLzE3L2RhdGEKUlVOIGNob3duIC1SIHBvc3RncmVzOnBvc3RncmVzIC92YXIvbGliL2VkYi1wZ2UvMTcvZGF0YQoKVk9MVU1FIC92YXIvbGliL2VkYi1wZ2UvMTcvZGF0YQoKVVNFUiBwb3N0Z3JlcwpXT1JLRElSIC92YXIvbGliL3Bvc3RncmVzcWwKCkNNRCBbIi92YXIvbGliL3Bvc3RncmVzcWwvZG9ja2VyLWVudHJ5cG9pbnQuc2giXQo= +EOF_DOCKERFILE_PGE_ +echo " Extracting docker-compose.yml..." +base64 -d <<'EOF_DOCKER_COMPOSE_YML_' > "docker-compose.yml" +c2VydmljZXM6CiAgaG9zdC0xOgogICAgaW1hZ2U6IHBnZAogICAgaG9zdG5hbWU6IGhvc3QtMQogICAgZW52aXJvbm1lbnQ6CiAgICAgIFBHUEFTU1dPUkQ6IHNlY3JldAogICAgICBQR0RfSk9JTl9OT0RFX0RTTjogInBvcnQ9NTQzMiBkYm5hbWU9cGdkZGIgaG9zdD1ob3N0LTEgdXNlcj1wb3N0Z3JlcyIKICAgIHJlc3RhcnQ6IGFsd2F5cwogICAgdm9sdW1lczoKICAgICAgLSBwZ2RhdGEtaG9zdC0xOi92YXIvbGliL2VkYi1wZ2UvMTcvZGF0YQoKICBob3N0LTI6CiAgICBpbWFnZTogcGdkCiAgICBob3N0bmFtZTogaG9zdC0yCiAgICBlbnZpcm9ubWVudDoKICAgICAgUEdQQVNTV09SRDogc2VjcmV0CiAgICAgIFBHRF9KT0lOX05PREVfRFNOOiAicG9ydD01NDMyIGRibmFtZT1wZ2RkYiBob3N0PWhvc3QtMSB1c2VyPXBvc3RncmVzIgogICAgcmVzdGFydDogYWx3YXlzCiAgICB2b2x1bWVzOgogICAgICAtIHBnZGF0YS1ob3N0LTI6L3Zhci9saWIvZWRiLXBnZS8xNy9kYXRhCgogIGhvc3QtMzoKICAgIGltYWdlOiBwZ2QKICAgIGhvc3RuYW1lOiBob3N0LTMKICAgIGVudmlyb25tZW50OgogICAgICBQR1BBU1NXT1JEOiBzZWNyZXQKICAgICAgUEdEX0pPSU5fTk9ERV9EU046ICJwb3J0PTU0MzIgZGJuYW1lPXBnZGRiIGhvc3Q9aG9zdC0xIHVzZXI9cG9zdGdyZXMiCiAgICByZXN0YXJ0OiBhbHdheXMKICAgIHBvcnRzOgogICAgICAtICI2NDMyOjY0MzIiCiAgICAgIC0gIjY0MzM6NjQzMyIKICAgICAgLSAiNjQzNDo2NDM0IgogICAgdm9sdW1lczoKICAgICAgLSBwZ2RhdGEtaG9zdC0zOi92YXIvbGliL2VkYi1wZ2UvMTcvZGF0YQoKCnZvbHVtZXM6CiAgcGdkYXRhLWhvc3QtMToKICAgIGRyaXZlcjogbG9jYWwKICAgIGRyaXZlcl9vcHRzOgogICAgICBvOiBiaW5kCiAgICAgIHR5cGU6IG5vbmUKICAgICAgZGV2aWNlOiAuL2hvc3QtMS12b2x1bWUKICBwZ2RhdGEtaG9zdC0yOgogICAgZHJpdmVyOiBsb2NhbAogICAgZHJpdmVyX29wdHM6CiAgICAgIG86IGJpbmQKICAgICAgdHlwZTogbm9uZQogICAgICBkZXZpY2U6IC4vaG9zdC0yLXZvbHVtZQogIHBnZGF0YS1ob3N0LTM6CiAgICBkcml2ZXI6IGxvY2FsCiAgICBkcml2ZXJfb3B0czoKICAgICAgbzogYmluZAogICAgICB0eXBlOiBub25lCiAgICAgIGRldmljZTogLi9ob3N0LTMtdm9sdW1lCiAgIAo= +EOF_DOCKER_COMPOSE_YML_ +echo " Extracting docker-entrypoint.sh..." 
+base64 -d <<'EOF_DOCKER_ENTRYPOINT_SH_' > "docker-entrypoint.sh" +IyEvdXNyL2Jpbi9lbnYgYmFzaAojIFRoaXMgdmVyc2lvbiB3aWxsIGFzc3VtZSB0aGF0IGl0IGlzIFBHRQpzZXQgLWV1Cgpwd2QKClBHRF9JTklUSUFMX05PREVfQ09VTlQ9IiR7UEdEX0lOSVRJQUxfTk9ERV9DT1VOVDotM30iClBHRF9IT1NUX05BTUU9IiR7UEdEX0hPU1RfTkFNRTotJChjYXQgL2V0Yy9ob3N0bmFtZSB8IHhhcmdzKX0iClBHRF9OT0RFX05BTUU9IiR7UEdEX05PREVfTkFNRTotJChjYXQgL2V0Yy9ob3N0bmFtZSB8IHhhcmdzIHwgc2VkIHMvaG9zdC0vbm9kZS0vKX0iClBHRF9OT0RFX0dST1VQPSIke1BHRF9OT0RFX0dST1VQOi1ncm91cC0xfSIKUEdEX0NMVVNURVJfTkFNRT0iJHtQR0RfQ0xVU1RFUl9OQU1FOi1wZ2R9IgoKUE9TVEdSRVNfREI9IiR7UE9TVEdSRVNfREI6LXBnZGRifSIKUE9TVEdSRVNfVVNFUj0iJHtQT1NUR1JFU19VU0VSOi1wb3N0Z3Jlc30iClBHREFUQT0iJHtQR0RBVEE6LS92YXIvbGliL2VkYi1wZ2UvMTcvbWFpbi99IgpQR0xPR0ZJTEU9IiR7UEdMT0dGSUxFOi0vdmFyL2xpYi9lZGItcGdlLzE3L2xvZ2ZpbGV9IgoKZWNobyAiQ29uZmlndXJpbmcgJHtQR0RfTk9ERV9OQU1FfSAoJHtQR0RfSE9TVF9OQU1FfSkiCgpQR19QR0RfQ0xJX0NPTkZfQ09OVEVOVFM9JwpjbHVzdGVyOgogIG5hbWU6IHBnZAogIGVuZHBvaW50czoKICAgIC0gaG9zdD1ob3N0LTEgZGJuYW1lPXBnZGRiIHBvcnQ9NTQzMgogICAgLSBob3N0PWhvc3QtMiBkYm5hbWU9cGdkZGIgcG9ydD01NDMyCiAgICAtIGhvc3Q9aG9zdC0zIGRibmFtZT1wZ2RkYiBwb3J0PTU0MzIKJwoKIyBUaGlzIHdpbGwgYmUgdXNlZCBieSBgcGdkIG5vZGUgc2V0dXBgLgpleHBvcnQgUEdQQVNTV09SRD0iJHtQR1BBU1NXT1JEOi0kUE9TVEdSRVNfUEFTU1dPUkR9IgoKZWNobyAiJFBHX1BHRF9DTElfQ09ORl9DT05URU5UUyIgPiAvZXRjL2VkYi9wZ2QtY2xpL3BnZC1jbGktY29uZmlnLnltbAoKaWYgISBbIC1zICIkUEdEQVRBL1BHX1ZFUlNJT04iIF07IHRoZW4KICAgIGVjaG8gIlByb3Zpc2lvbmluZyBQb3N0Z3Jlcy4iCgogICAgUFJJTUFSWT0nZmFsc2UnCiAgICBjYXNlICIkUEdEX0pPSU5fTk9ERV9EU04iIGluCgkqIiBob3N0PSRQR0RfSE9TVF9OQU1FICIqIHwgImhvc3Q9JFBHRF9IT1NUX05BTUUgIiogfCAqIiBob3N0PSRQR0RfSE9TVF9OQU1FIikKCgkgICAgUFJJTUFSWT0ndHJ1ZScKCSAgICA7OwogICAgZXNhYwoKICAgIGlmIFsgJFBSSU1BUlkgPSAndHJ1ZScgXTsgdGhlbgoJZWNobyAiUHJvdmlzaW9uaW5nIFBHRCBub2RlIGFuZCBuZXcgZ3JvdXAuIgoJcGdkIG5vZGUgIiRQR0RfTk9ERV9OQU1FIiBzZXR1cCAtLXZlcmJvc2UgXAoJICAgIC0tZHNuICIkUEdEX0pPSU5fTk9ERV9EU04iIFwKCSAgICAtLWxpc3Rlbi1hZGRyICIkUEdEX0hPU1RfTkFNRSxsb2NhbGhvc3QiIFwKCSAgICAtLWluaXRpYWwtbm9kZS1jb3VudCAiJFBHRF9JTklUSUFMX05PREVfQ09VTlQiIFwKCSAgICAtLXBnZGF0YSAiJFBHREFUQSIgXAoJICAgIC0tbG9nLWZpbGUgIiRQR0xPR0ZJTEUiIFwKCSAgICAtLWNsdXN0ZXItbmFtZSAiJFBHRF9DTFVTVEVSX05BTUUiIFwKCSAgICAtLWdyb3VwLW5hbWUgIiRQR0RfTk9ERV9HUk9VUCIgCiAgICBlbHNlCgllY2hvICJQcm92aXNpb25pbmcgUEdEIG5vZGUgdG8gam9pbiBleGlzdGluZyBncm91cC4iCgoJIyBJbiBjYXNlIHdlIG5lZWQgdG8gZG8gY2xlYW51cC4KCXBzcWwgIiRQR0RfSk9JTl9OT0RFX0RTTiIgLWMgJ1NFTEVDVCBiZHIucnVuX29uX2FsbF9ub2RlcygkJCBTRUxFQ1QgYmRyLmRyb3Bfbm9kZSgnIickUEdEX05PREVfTkFNRSciJywgZm9yY2UgOj0gdHJ1ZSkgJCQpOycKCglwZ2Qgbm9kZSAiJFBHRF9OT0RFX05BTUUiIHNldHVwIC0tdmVyYm9zZSBcCgkgICAgLS1kc24gImhvc3Q9JFBHRF9IT1NUX05BTUUgcG9ydD01NDMyIGRibmFtZT0kUE9TVEdSRVNfREIgdXNlcj0kUE9TVEdSRVNfVVNFUiIgXAoJICAgIC0tbGlzdGVuLWFkZHIgIiRQR0RfSE9TVF9OQU1FLGxvY2FsaG9zdCIgXAoJICAgIC0tcGdkYXRhICIkUEdEQVRBIiBcCgkgICAgLS1sb2ctZmlsZSAiJFBHTE9HRklMRSIgXAoJICAgIC0tY2x1c3Rlci1kc24gIiRQR0RfSk9JTl9OT0RFX0RTTiIgXAoJICAgIC0tZ3JvdXAtbmFtZSAiJFBHRF9OT0RFX0dST1VQIiBcCgkgICAgLS1jbHVzdGVyLW5hbWUgIiRQR0RfQ0xVU1RFUl9OQU1FIiB8fCAocm0gLXJmICIkUEdEQVRBIiAmJiBleGl0IDEpCiAgICBmaQpmaQoKcGdfY3RsIC1EICIkUEdEQVRBIiBzdG9wIHx8IGVjaG8gIlBvc3RncmVzIG5vdCBydW5uaW5nIgoKZXhlYyBwb3N0Z3JlcyAtRCAiJFBHREFUQSIK +EOF_DOCKER_ENTRYPOINT_SH_ +echo " Extracting qs.sh..." 
+base64 -d <<'EOF_QS_SH_' > "qs.sh" +IyEvYmluL2Jhc2gKIyBxcy5zaCAtIFBHRCBRdWlja3N0YXJ0IFV0aWxpdHkgU2NyaXB0CiMgVGhpcyBzY3JpcHQgcHJvdmlkZXMgY29tbWFuZHMgZm9yIGJ1aWxkaW5nLCBzdGFydGluZywgc3RvcHBpbmcsCiMgYW5kIGludGVyYWN0aW5nIHdpdGggdGhlIFBHRCBEb2NrZXIgZW52aXJvbm1lbnQuCgojIEV4aXQgaW1tZWRpYXRlbHkgaWYgYSBjb21tYW5kIGV4aXRzIHdpdGggYSBub24temVybyBzdGF0dXMuCnNldCAtZW8gcGlwZWZhaWwKCiMgLS0tIFV0aWxpdHkgRnVuY3Rpb25zIC0tLQoKY29tbWFuZF9leGlzdHMgKCkgewogICAgY29tbWFuZCAtdiAiJDEiID4vZGV2L251bGwgMj4mMQp9CgpjaGVja19kb2NrZXIoKSB7CiAgICBpZiAhIGNvbW1hbmRfZXhpc3RzIGRvY2tlcjsgdGhlbgogICAgICAgIGVjaG8gIkVycm9yOiBEb2NrZXIgaXMgbm90IGluc3RhbGxlZCBvciBub3QgaW4gUEFUSC4iID4mMgogICAgICAgIGVjaG8gIlBsZWFzZSBpbnN0YWxsIERvY2tlciB0byBwcm9jZWVkLiIgPiYyCiAgICAgICAgcmV0dXJuIDEKICAgIGZpCiAgICBlY2hvICIgIERvY2tlciBmb3VuZC4iCiAgICByZXR1cm4gMAp9CgpjaGVja19kb2NrZXJfY29tcG9zZSgpIHsKICAgIGlmICEgZG9ja2VyIGNvbXBvc2UgdmVyc2lvbiA+L2Rldi9udWxsIDI+JjE7IHRoZW4KICAgICAgICBlY2hvICJFcnJvcjogRG9ja2VyIENvbXBvc2UgKHYyIG9yIG5ld2VyKSBpcyBub3QgaW5zdGFsbGVkIG9yIG5vdCBpbiBQQVRILiIgPiYyCiAgICAgICAgZWNobyAiUGxlYXNlIGluc3RhbGwgRG9ja2VyIENvbXBvc2UgdG8gcHJvY2VlZC4iID4mMgogICAgICAgIHJldHVybiAxCiAgICBmaQogICAgZWNobyAiICBEb2NrZXIgQ29tcG9zZSBmb3VuZC4iCiAgICByZXR1cm4gMAp9CgpjaGVja19zdWJzY3JpcHRpb25fdG9rZW4oKSB7CiAgICBpZiBbIC16ICIkRURCX1NVQlNDUklQVElPTl9UT0tFTiIgXTsgdGhlbgogICAgICAgIGVjaG8gIkVycm9yOiBFREJfU1VCU0NSSVBUSU9OX1RPS0VOIGVudmlyb25tZW50IHZhcmlhYmxlIGlzIG5vdCBzZXQuIiA+JjIKICAgICAgICBlY2hvICJQbGVhc2Ugc2V0IHRoaXMgdmFyaWFibGUgKGUuZy4sIGV4cG9ydCBFREJfU1VCU0NSSVBUSU9OX1RPS0VOPSd5b3VyX3Rva2VuJykgYmVmb3JlIHJ1bm5pbmcgdGhlIGNvbW1hbmQuIiA+JjIKICAgICAgICByZXR1cm4gMQogICAgZmkKICAgIGVjaG8gIiAgRURCX1NVQlNDUklQVElPTl9UT0tFTiBpcyBzZXQuIgogICAgcmV0dXJuIDAKfQoKIyAtLS0gQ29tbWFuZHMgLS0tCgpjbWRfcHJlcGFyZSgpIHsKICAgIGVjaG8gIlJ1bm5pbmcgJ3ByZXBhcmUnIGNvbW1hbmQ6IENyZWF0aW5nIHZvbHVtZSBkaXJlY3Rvcmllcy4uLiIKICAgIGxvY2FsIHZvbHVtZXM9KCJob3N0LTEtdm9sdW1lIiAiaG9zdC0yLXZvbHVtZSIgImhvc3QtMy12b2x1bWUiKQogICAgbG9jYWwgc3VjY2Vzcz0wCgogICAgZm9yIHZvbCBpbiAiJHt2b2x1bWVzW0BdfSI7IGRvCiAgICAgICAgaWYgWyAhIC1kICIkdm9sIiBdOyB0aGVuCiAgICAgICAgICAgIGVjaG8gIiAgQ3JlYXRpbmcgZGlyZWN0b3J5OiAnJHZvbCcuLi4iCiAgICAgICAgICAgIGlmICEgbWtkaXIgLXAgIiR2b2wiOyB0aGVuCiAgICAgICAgICAgICAgICBlY2hvICJFcnJvcjogRmFpbGVkIHRvIGNyZWF0ZSAnJHZvbCcuIiA+JjIKICAgICAgICAgICAgICAgIGVjaG8gIiAgICAgICBZb3UgbWF5IG5lZWQgZWxldmF0ZWQgcGVybWlzc2lvbnMuIFRyeTogc3VkbyAuL3FzLnNoIHByZXBhcmUiID4mMgogICAgICAgICAgICAgICAgc3VjY2Vzcz0xCiAgICAgICAgICAgIGVsc2UKICAgICAgICAgICAgICAgIGVjaG8gIiAgU3VjY2Vzc2Z1bGx5IGNyZWF0ZWQgJyR2b2wnLiIKICAgICAgICAgICAgZmkKICAgICAgICBlbHNlCiAgICAgICAgICAgIGVjaG8gIiAgRGlyZWN0b3J5ICckdm9sJyBhbHJlYWR5IGV4aXN0cy4gU2tpcHBpbmcuIgogICAgICAgIGZpCiAgICBkb25lCiAgICByZXR1cm4gJHN1Y2Nlc3MKfQoKY21kX2J1aWxkKCkgewogICAgZWNobyAiUnVubmluZyAnYnVpbGQnIGNvbW1hbmQ6IENoZWNraW5nIGVudmlyb25tZW50IGFuZCBidWlsZGluZyBQR0QgRG9ja2VyIGltYWdlLi4uIgogICAgZWNobyAiUGVyZm9ybWluZyBzeXN0ZW0gY2hlY2tzLi4uIgoKICAgIGNoZWNrX2RvY2tlciB8fCByZXR1cm4gMQogICAgY2hlY2tfZG9ja2VyX2NvbXBvc2UgfHwgcmV0dXJuIDEKICAgIGNoZWNrX3N1YnNjcmlwdGlvbl90b2tlbiB8fCByZXR1cm4gMQoKICAgIGVjaG8gIkFsbCBjaGVja3MgcGFzc2VkLiBCdWlsZGluZyBEb2NrZXIgaW1hZ2UgJ3BnZCcgZnJvbSBEb2NrZXJmaWxlLnBnZS4uLiIKCiAgICAjIFRoZSAnLicgY29udGV4dCByZWZlcnMgdG8gdGhlIGN1cnJlbnQgZGlyZWN0b3J5CiAgICBkb2NrZXIgYnVpbGQgLWYgRG9ja2VyZmlsZS5wZ2UgLS1idWlsZC1hcmcgRURCX1NVQlNDUklQVElPTl9UT0tFTj0iJEVEQl9TVUJTQ1JJUFRJT05fVE9LRU4iIC10IHBnZCAuCgogICAgQlVJTERfU1RBVFVTPSQ/CiAgICBpZiBbICRCVUlMRF9TVEFUVVMgLWVxIDAgXTsgdGhlbgogICAgICAgIGVjaG8gIkRvY2tlciBpbWFnZSAncGdkJyBidWlsdCBzdWNjZXNzZnVsbHkhIgogICAgZWxzZQogICAgICAgIGVjaG8gIkVycm9yOiBEb2NrZXIgaW1hZ2UgYnVpbGQgZm
FpbGVkLiBFeGl0IGNvZGU6ICRCVUlMRF9TVEFUVVMiID4mMgogICAgZmkKICAgIHJldHVybiAkQlVJTERfU1RBVFVTCn0KCmNtZF9zdGFydCgpIHsKICAgIGVjaG8gIlJ1bm5pbmcgJ3N0YXJ0JyBjb21tYW5kOiBTdGFydGluZyBEb2NrZXIgQ29tcG9zZSBzZXJ2aWNlcy4uLiIKICAgIGVjaG8gIlBlcmZvcm1pbmcgc3lzdGVtIGNoZWNrcyBmb3Igc3RhcnQuLi4iCgogICAgY2hlY2tfZG9ja2VyIHx8IHJldHVybiAxCiAgICBjaGVja19kb2NrZXJfY29tcG9zZSB8fCByZXR1cm4gMQoKICAgIGlmIFsgISAtZiAiZG9ja2VyLWNvbXBvc2UueW1sIiBdOyB0aGVuCiAgICAgICAgZWNobyAiRXJyb3I6ICdkb2NrZXItY29tcG9zZS55bWwnIG5vdCBmb3VuZCBpbiB0aGUgY3VycmVudCBkaXJlY3RvcnkuIiA+JjIKICAgICAgICBlY2hvICJQbGVhc2UgZW5zdXJlIHlvdSBhcmUgaW4gdGhlIGNvcnJlY3QgZGlyZWN0b3J5IHdoZXJlIGZpbGVzIHdlcmUgZXh0cmFjdGVkLiIgPiYyCiAgICAgICAgcmV0dXJuIDEKICAgIGZpCgogICAgZG9ja2VyIGNvbXBvc2UgdXAgLWQKICAgIFNUQVJUX1NUQVRVUz0kPwogICAgaWYgWyAkU1RBUlRfU1RBVFVTIC1lcSAwIF07IHRoZW4KICAgICAgICBlY2hvICJEb2NrZXIgQ29tcG9zZSBzZXJ2aWNlcyBzdGFydGVkIHN1Y2Nlc3NmdWxseSBpbiBkZXRhY2hlZCBtb2RlLiIKICAgIGVsc2UKICAgICAgICBlY2hvICJFcnJvcjogRG9ja2VyIENvbXBvc2Ugc2VydmljZXMgZmFpbGVkIHRvIHN0YXJ0LiBFeGl0IGNvZGU6ICRTVEFSVF9TVEFUVVMiID4mMgogICAgZmkKICAgIHJldHVybiAkU1RBUlRfU1RBVFVTCn0KCmNtZF9zdG9wKCkgewogICAgZWNobyAiUnVubmluZyAnc3RvcCcgY29tbWFuZDogU3RvcHBpbmcgRG9ja2VyIENvbXBvc2Ugc2VydmljZXMuLi4iCiAgICBlY2hvICJQZXJmb3JtaW5nIHN5c3RlbSBjaGVja3MgZm9yIHN0b3AuLi4iCgogICAgY2hlY2tfZG9ja2VyIHx8IHJldHVybiAxCiAgICBjaGVja19kb2NrZXJfY29tcG9zZSB8fCByZXR1cm4gMQoKICAgIGlmIFsgISAtZiAiZG9ja2VyLWNvbXBvc2UueW1sIiBdOyB0aGVuCiAgICAgICAgZWNobyAiRXJyb3I6ICdkb2NrZXItY29tcG9zZS55bWwnIG5vdCBmb3VuZCBpbiB0aGUgY3VycmVudCBkaXJlY3RvcnkuIiA+JjIKICAgICAgICBlY2hvICJQbGVhc2UgZW5zdXJlIHlvdSBhcmUgaW4gdGhlIGNvcnJlY3QgZGlyZWN0b3J5IHdoZXJlIGZpbGVzIHdlcmUgZXh0cmFjdGVkLiIgPiYyCiAgICAgICAgcmV0dXJuIDEKICAgIGZpCgogICAgZG9ja2VyIGNvbXBvc2UgZG93bgogICAgU1RPUF9TVEFUVVM9JD8KICAgIGlmIFsgJFNUT1BfU1RBVFVTIC1lcSAwIF07IHRoZW4KICAgICAgICBlY2hvICJEb2NrZXIgQ29tcG9zZSBzZXJ2aWNlcyBzdG9wcGVkIGFuZCByZW1vdmVkIHN1Y2Nlc3NmdWxseS4iCiAgICBlbHNlCiAgICAgICAgZWNobyAiRXJyb3I6IERvY2tlciBDb21wb3NlIHNlcnZpY2VzIGZhaWxlZCB0byBzdG9wL3JlbW92ZS4gRXhpdCBjb2RlOiAkU1RPUF9TVEFUVVMiID4mMgogICAgZmkKICAgIHJldHVybiAkU1RPUF9TVEFUVVMKfQoKY21kX3BzcWwoKSB7CiAgICBlY2hvICJSdW5uaW5nICdwc3FsJyBjb21tYW5kOiBDb25uZWN0aW5nIHRvIHBnZGRiIHZpYSBob3N0LTEuLi4iCiAgICBlY2hvICJQZXJmb3JtaW5nIHN5c3RlbSBjaGVja3MgZm9yIHBzcWwuLi4iCgogICAgY2hlY2tfZG9ja2VyIHx8IHJldHVybiAxCiAgICBjaGVja19kb2NrZXJfY29tcG9zZSB8fCByZXR1cm4gMQoKICAgIGlmIFsgISAtZiAiZG9ja2VyLWNvbXBvc2UueW1sIiBdOyB0aGVuCiAgICAgICAgZWNobyAiRXJyb3I6ICdkb2NrZXItY29tcG9zZS55bWwnIG5vdCBmb3VuZCBpbiB0aGUgY3VycmVudCBkaXJlY3RvcnkuIiA+JjIKICAgICAgICBlY2hvICJQbGVhc2UgZW5zdXJlIHlvdSBhcmUgaW4gdGhlIGNvcnJlY3QgZGlyZWN0b3J5IHdoZXJlIGZpbGVzIHdlcmUgZXh0cmFjdGVkLiIgPiYyCiAgICAgICAgcmV0dXJuIDEKICAgIGZpCgogICAgZG9ja2VyIGNvbXBvc2UgZXhlYyBob3N0LTEgcHNxbCBwZ2RkYiAiJEAiCiAgICBQU1FMX1NUQVRVUz0kPwogICAgcmV0dXJuICRQU1FMX1NUQVRVUwp9CgpjbWRfYmFzaCgpIHsKICAgIGVjaG8gIlJ1bm5pbmcgJ2Jhc2gnIGNvbW1hbmQ6IE9wZW5pbmcgYSBiYXNoIHNoZWxsIGluIGhvc3QtMSBjb250YWluZXIuLi4iCiAgICBlY2hvICJQZXJmb3JtaW5nIHN5c3RlbSBjaGVja3MgZm9yIGJhc2guLi4iCgogICAgY2hlY2tfZG9ja2VyIHx8IHJldHVybiAxCiAgICBjaGVja19kb2NrZXJfY29tcG9zZSB8fCByZXR1cm4gMQoKICAgIGlmIFsgISAtZiAiZG9ja2VyLWNvbXBvc2UueW1sIiBdOyB0aGVuCiAgICAgICAgZWNobyAiRXJyb3I6ICdkb2NrZXItY29tcG9zZS55bWwnIG5vdCBmb3VuZCBpbiB0aGUgY3VycmVudCBkaXJlY3RvcnkuIiA+JjIKICAgICAgICBlY2hvICJQbGVhc2UgZW5zdXJlIHlvdSBhcmUgaW4gdGhlIGNvcnJlY3QgZGlyZWN0b3J5IHdoZXJlIGZpbGVzIHdlcmUgZXh0cmFjdGVkLiIgPiYyCiAgICAgICAgcmV0dXJuIDEKICAgIGZpCgogICAgZG9ja2VyIGNvbXBvc2UgZXhlYyBob3N0LTEgYmFzaCAiJEAiCiAgICBCQVNIX1NUQVRVUz0kPwogICAgcmV0dXJuICRCQVNIX1NUQVRVUwp9CgpjbWRfY2xlYW51cCgpIHsKICAgIGVjaG8gIlJ1bm5pbmcgJ2NsZ
WFudXAnIGNvbW1hbmQ6IFJlbW92aW5nIHZvbHVtZSBkaXJlY3Rvcmllcy4uLiIKICAgIGxvY2FsIHZvbHVtZXM9KCJob3N0LTEtdm9sdW1lIiAiaG9zdC0yLXZvbHVtZSIgImhvc3QtMy12b2x1bWUiKQogICAgbG9jYWwgc3VjY2Vzcz0wCgogICAgZm9yIHZvbCBpbiAiJHt2b2x1bWVzW0BdfSI7IGRvCiAgICAgICAgaWYgWyAtZCAiJHZvbCIgXTsgdGhlbgogICAgICAgICAgICBlY2hvICIgIFJlbW92aW5nIGRpcmVjdG9yeTogJyR2b2wnLi4uIgogICAgICAgICAgICBpZiAhIHJtIC1yZiAiJHZvbCI7IHRoZW4KICAgICAgICAgICAgICAgIGVjaG8gIkVycm9yOiBGYWlsZWQgdG8gcmVtb3ZlICckdm9sJy4iID4mMgogICAgICAgICAgICAgICAgZWNobyAiICAgICAgIFlvdSBtYXkgbmVlZCBlbGV2YXRlZCBwZXJtaXNzaW9ucy4gVHJ5OiBzdWRvIC4vcXMuc2ggY2xlYW51cCIgPiYyCiAgICAgICAgICAgICAgICBzdWNjZXNzPTEKICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgICAgZWNobyAiICBTdWNjZXNzZnVsbHkgcmVtb3ZlZCAnJHZvbCcuIgogICAgICAgICAgICBmaQogICAgICAgIGVsc2UKICAgICAgICAgICAgZWNobyAiICBEaXJlY3RvcnkgJyR2b2wnIGRvZXMgbm90IGV4aXN0LiBTa2lwcGluZy4iCiAgICAgICAgZmkKICAgIGRvbmUKICAgIHJldHVybiAkc3VjY2Vzcwp9CgojIC0tLSBNYWluIFNjcmlwdCBMb2dpYyAtLS0KCnNob3dfaGVscCgpIHsKICAgIGVjaG8gIlVzYWdlOiBxcy5zaCA8Y29tbWFuZD4iCiAgICBlY2hvICIiCiAgICBlY2hvICJDb21tYW5kczoiCiAgICBlY2hvICIgIHByZXBhcmUgIC0gQ3JlYXRlcyAnaG9zdC1YLXZvbHVtZScgZGlyZWN0b3JpZXMgZm9yIERvY2tlciB2b2x1bWVzLiIKICAgIGVjaG8gIiAgYnVpbGQgICAgLSBDaGVja3MgZW52aXJvbm1lbnQsIGJ1aWxkcyB0aGUgJ3BnZCcgRG9ja2VyIGltYWdlLiIKICAgIGVjaG8gIiAgc3RhcnQgICAgLSBTdGFydHMgRG9ja2VyIENvbXBvc2Ugc2VydmljZXMgKGRvY2tlciBjb21wb3NlIHVwIC1kKS4iCiAgICBlY2hvICIgIHN0b3AgICAgIC0gU3RvcHMgYW5kIHJlbW92ZXMgRG9ja2VyIENvbXBvc2Ugc2VydmljZXMgKGRvY2tlciBjb21wb3NlIGRvd24pLiIKICAgIGVjaG8gIiAgcHNxbCAgICAgLSBDb25uZWN0cyB0byBwZ2RkYiBvbiBob3N0LTEgdmlhIHBzcWwgKGRvY2tlciBjb21wb3NlIGV4ZWMgaG9zdC0xIHBzcWwgcGdkZGIpLiIKICAgIGVjaG8gIiAgICAgICAgICAgICBBZGRpdGlvbmFsIGFyZ3VtZW50cyBhcmUgcGFzc2VkIGRpcmVjdGx5IHRvIHBzcWwuIgogICAgZWNobyAiICBiYXNoICAgICAtIE9wZW5zIGEgYmFzaCBzaGVsbCBpbiB0aGUgaG9zdC0xIGNvbnRhaW5lciAoZG9ja2VyIGNvbXBvc2UgZXhlYyBob3N0LTEgYmFzaCkuIgogICAgZWNobyAiICAgICAgICAgICAgIEFkZGl0aW9uYWwgYXJndW1lbnRzIGFyZSBwYXNzZWQgZGlyZWN0bHkgdG8gYmFzaC4iCiAgICBlY2hvICIgIGNsZWFudXAgIC0gUmVtb3ZlcyAnaG9zdC1YLXZvbHVtZScgZGlyZWN0b3JpZXMuIgogICAgZWNobyAiIgogICAgZWNobyAiTm90ZTogRW5zdXJlIEVEQl9TVUJTQ1JJUFRJT05fVE9LRU4gaXMgc2V0IGZvciB0aGUgJ2J1aWxkJyBjb21tYW5kLiIKfQoKIyBDaGVjayBmb3IgY29tbWFuZCBhcmd1bWVudAppZiBbIC16ICIkMSIgXTsgdGhlbgogICAgZWNobyAiRXJyb3I6IE5vIGNvbW1hbmQgcHJvdmlkZWQuIiA+JjIKICAgIHNob3dfaGVscAogICAgZXhpdCAxCmZpCgpDT01NQU5EPSIkMSIKc2hpZnQgIyBSZW1vdmUgdGhlIGNvbW1hbmQgZnJvbSBhcmd1bWVudHMsIHBhc3MgcmVtYWluaW5nIHRvIHN1YmNvbW1hbmQKCmNhc2UgIiRDT01NQU5EIiBpbgogICAgcHJlcGFyZSkKICAgICAgICBjbWRfcHJlcGFyZSAiJEAiCiAgICAgICAgOzsKICAgIGJ1aWxkKQogICAgICAgIGNtZF9idWlsZCAiJEAiCiAgICAgICAgOzsKICAgIHN0YXJ0KQogICAgICAgIGNtZF9zdGFydCAiJEAiCiAgICAgICAgOzsKICAgIHN0b3ApCiAgICAgICAgY21kX3N0b3AgIiRAIgogICAgICAgIDs7CiAgICBwc3FsKQogICAgICAgIGNtZF9wc3FsICIkQCIKICAgICAgICA7OwogICAgYmFzaCkKICAgICAgICBjbWRfYmFzaCAiJEAiCiAgICAgICAgOzsKICAgIGNsZWFudXApCiAgICAgICAgY21kX2NsZWFudXAgIiRAIgogICAgICAgIDs7CiAgICAtaHwtLWhlbHB8aGVscCkKICAgICAgICBzaG93X2hlbHAKICAgICAgICA7OwogICAgKikKICAgICAgICBlY2hvICJFcnJvcjogVW5rbm93biBjb21tYW5kICckQ09NTUFORCcuIiA+MgogICAgICAgIHNob3dfaGVscAogICAgICAgIGV4aXQgMQogICAgICAgIDs7CmVzYWMKCmV4aXQgJD8KCg== +EOF_QS_SH_ + +echo "Files unpacked successfully into the current directory: '$(pwd)'." +echo "" + +# Make necessary scripts executable +if [ -f "qs.sh" ]; then # <--- UPDATED to qs.sh + chmod +x qs.sh + echo "Made 'qs.sh' executable." +fi +# Assuming docker-entrypoint.sh might also need executable permissions if present +if [ -f "docker-entrypoint.sh" ]; then + chmod +x docker-entrypoint.sh + echo "Made 'docker-entrypoint.sh' executable." 
+fi
+
+echo ""
+echo "You can now use the 'qs.sh' command to manage your PGD Docker environment."
+echo "For available commands, run: ./qs.sh help"
+echo ""
+echo "Common next steps:"
+echo "1. Create volume directories: ./qs.sh prepare"
+echo "2. Build the PGD Docker image: export EDB_SUBSCRIPTION_TOKEN=\"YOUR_EDB_TOKEN\"; ./qs.sh build"
+echo "3. Start the PGD services: ./qs.sh start"
+echo ""
+echo "Remember to clean up these files and created volumes manually when you are done. Example:"
+echo "  ./qs.sh stop"
+echo "  ./qs.sh cleanup"
+echo "  rm Dockerfile.pge docker-compose.yml docker-entrypoint.sh qs.sh" # <--- UPDATED list
+echo ""
+
+# The script exits here after printing instructions.
+# No automatic cleanup of extracted files as they are in the user's CWD.
diff --git a/product_docs/docs/pgd/6/get-started/essential-standard.mdx b/product_docs/docs/pgd/6/get-started/essential-standard.mdx
new file mode 100644
index 00000000000..e1106f5b8be
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/essential-standard.mdx
@@ -0,0 +1,34 @@
+---
+title: An introduction to PGD Essential
+navTitle: Introducing PGD Essential
+---
+
+EDB Postgres Distributed (PGD) Essential is a simplified version of PGD Expanded, designed to help you get started with distributed databases quickly and easily. It provides the core features of PGD, enabling high availability and disaster recovery without the complexity of advanced configurations.
+
+At the core of PGD are data nodes, Postgres databases that are part of a PGD cluster. PGD enables these databases to replicate data efficiently between nodes, ensuring that your data is always available and up-to-date. PGD Essential simplifies this process by providing a standard architecture that is easy to set up and manage.
+
+The standard architecture is built around a single data group, which is the basic architectural element for EDB Postgres Distributed systems. Within a group, nodes cooperate to select which nodes handle incoming write or read traffic, and identify when nodes are available or out of sync with the rest of the group. Groups are most commonly used in a single location, where the nodes are in the same data center and there is just one group in the cluster. We also call this the one-location architecture.
+
+## Standard/One-location architecture
+
+The one-location architecture consists of a single PGD cluster with three nodes. The nodes are located in the same data center or region. Ideally they are in different availability zones, but that isn't required. The nodes are connected to each other using a high-speed network.
+
+The nodes are configured as a data group, which means that they replicate data to each other within the same group. While PGD can handle multiple writers in a network, this requires more advanced conflict management and is not supported in PGD Essential.
+
+Therefore, in the standard architecture, one node is designated as the write leader node, which handles all write transactions. The other nodes in the group are read-only nodes that replicate data from the write leader.
+
+
+The write leader is selected by the nodes in the group to handle all the writes. It is responsible for accepting write transactions and replicating them to the other nodes in the group. If the write leader fails, the other nodes in the group elect a new write leader.
+
+Applications can connect to any node in the cluster using PGD's Connection Manager, which runs on every data node.
It automatically routes read and write transactions to the write leader. It can also route read-only transactions to the other nodes in the group.
+
+
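+
+To see this routing in action, you can ask whichever node answers a Connection Manager connection for its name. This is a minimal sketch, assuming a cluster like the Docker Compose quickstart later in this guide (Connection Manager published on port 6432, user `postgres`, password `secret`, database `pgddb`); the host name and credentials are illustrative:
+
+```shell
+# Connect through the Connection Manager's read/write port and ask
+# the node that answers for its name; it should be the write leader.
+export PGPASSWORD=secret
+psql -h localhost -p 6432 -U postgres pgddb \
+  -c "SELECT node_name FROM bdr.local_node_summary;"
+```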
+ +![Standard architecture](/pgd/latest/expanded-how-to/architectures/images/1x3-cluster.svg) + +
+
+In this diagram, you can see the applications connecting to the PGD cluster through the Connection Manager ports. The Connection Manager is responsible for routing the read and write transactions to the appropriate nodes in the group. The write leader is responsible for handling all write transactions and is shown at the top, in AZ1, in green.
+
+The other nodes in the group are read-only nodes that replicate data from the write leader. Applications connecting to the read-only nodes' Connection Manager read/write ports will have their queries and changes routed to the write leader. All the while, the nodes are talking to each other, replicating data to ensure they stay in sync.
+
diff --git a/product_docs/docs/pgd/6/get-started/expanded-examples.mdx b/product_docs/docs/pgd/6/get-started/expanded-examples.mdx
new file mode 100644
index 00000000000..7167ed77147
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/expanded-examples.mdx
@@ -0,0 +1,27 @@
+---
+title: Expanded Examples and Use Cases
+navTitle: Expanded Examples
+---
+
+While PGD Essential delivers the core functionality needed to get high availability and/or disaster recovery use cases up and running quickly, there are many advanced use cases that call for PGD Expanded. This section provides examples of some of these advanced use cases.
+
+## Use Cases
+
+### Use Case 1: Multi-Master Replication
+
+By default, PGD Essential uses the PGD Connection Manager to send your requests to the right node. This node is the write leader, and by directing your requests there, PGD allows conflicts to be rapidly resolved.
+
+With PGD Expanded, you can send your requests to any node in the cluster, and PGD will replicate the changes to the other nodes. Configurable conflict management then allows you to choose how to resolve conflicts.
+
+### Use Case 2: Data Distribution
+
+PGD Expanded allows you to distribute your data across multiple nodes in the cluster, including subscriber-only read-only nodes. These nodes can be located in multiple data centers or availability zones. This allows you to scale your database's read capacity horizontally, adding more nodes to the cluster as needed.
+
+### Use Case 3: Geo-Distribution
+
+PGD Expanded allows you to distribute your data across multiple regions, replicating data to all the nodes in the cluster. Multiple data groups can be located in different locations to ensure high availability and resilience in each location.
+
+### Use Case 4: Tiered Tables
+
+An optional element of PGD Expanded is the ability to create tiered tables. These tables can be used to tier data between hot data, replicated within the cluster, and cold data, written to an Iceberg/Delta data lake. The cold data remains queryable because Tiered Tables uses PGAA, which allows you to query the data lake as if it were a table in the database.
+
diff --git a/product_docs/docs/pgd/6/get-started/first-cluster.mdx b/product_docs/docs/pgd/6/get-started/first-cluster.mdx
new file mode 100644
index 00000000000..079944be8ad
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/first-cluster.mdx
@@ -0,0 +1,137 @@
+---
+title: Creating your first cluster (PGD Essential)
+navTitle: First Cluster
+description: "Creating your first cluster with EDB Postgres Distributed Essential."
+---
+
+This part of the Getting Started guide will help you create a local cluster using Docker Compose. This is a great way to get familiar with the EDB Postgres Distributed (PGD) Essential features and functionality.
+
+## Prerequisites
+
+- Docker and Docker Compose installed on your local machine.
+
+
+## Install the PGD Docker Quickstart kit
+
+To create your first PGD cluster, you can use the Docker Compose file provided by EDB. This will set up a local cluster with three nodes, which is perfect for testing and development purposes.
+
+1. Make sure you have Docker and Docker Compose installed on your local machine. You can follow the [Docker installation guide](https://docs.docker.com/get-docker/) if you haven't done so already.
+
+2. Open a terminal on the machine where you have Docker installed and create a new directory for your PGD cluster, for example:
+
+   ```bash
+   mkdir pgd-cluster
+   cd pgd-cluster
+   ```
+
+3. Run the following command to download and unpack the PGD Docker Quickstart kit:
+
+   ```bash
+   curl https://enterprisedb.com/docs/pgd/latest/get-started/assets/pgd_quickstart.sh | bash
+   ```
+
+   The kit includes the Docker Compose file and the other files you need to get started with PGD Essential.
+
+4. Once the download is complete, prepare the environment for the PGD cluster by running the following command:
+
+   ```bash
+   ./qs.sh prepare
+   ```
+
+   This command will create the necessary directories and files for the PGD cluster.
+
+5. Next, build the Docker image for the PGD cluster. Replace `...` with your EDB subscription token and run:
+
+   ```bash
+   export EDB_SUBSCRIPTION_TOKEN=...
+   ./qs.sh build
+   ```
+
+   This command will build the Docker image needed for the PGD Quickstart cluster.
+
+6. After the image is built, you can start the PGD cluster using Docker Compose. Run the following command:
+
+   ```bash
+   ./qs.sh start
+   ```
+
+   This command will start the Docker containers and create a local cluster with the default configuration, running in the background.
+
+
+## Accessing the PGD Cluster
+
+1. Once the containers are up and running, you can access the PGD cluster using the following command:
+
+   ```bash
+   docker compose exec host-1 psql pgddb
+   ```
+
+   This command will connect you directly to the first node of the cluster using the `psql` command-line interface.
+
+   This is how you would connect to the database for maintenance and management tasks.
+
+   For application and user access, you will usually connect using the connection manager, which by default runs on TCP port 6432 on all the hosts in the cluster.
+
+2. You can connect to the write leader node in the cluster using the following command:
+
+   ```bash
+   docker compose exec host-1 psql -h host-1 -p 6432 pgddb
+   ```
+
+   You can replace `-h host-1` with the name of any host in the cluster, as they all run the connection manager.
+
+   If you have the psql client installed on your local machine, you can also connect to the cluster using the following command:
+
+   ```bash
+   export PGPASSWORD=secret
+   psql -h localhost -p 6432 -U postgres pgddb
+   ```
+
+   This connects to the connection manager on port 6432, which the quickstart's Compose file publishes from the host-3 container. The connection is then routed to the write leader node in the cluster.
+
+   ```console
+   pgddb=# select node_name from bdr.local_node_summary;
+    node_name
+   -----------
+    node-1
+   (1 row)
+   ```
+
+
+3.
To use the PGD CLI from outside the containers, you can run the following command:
+
+```bash
+docker compose exec host-1 pgd nodes list
+__OUTPUT__
+ Node Name | Group Name | Node Kind | Join State | Node Status
+-----------+------------+-----------+------------+-------------
+ node-1    | group-1    | data      | ACTIVE     | Up
+ node-2    | group-1    | data      | ACTIVE     | Up
+ node-3    | group-1    | data      | ACTIVE     | Up
+```
+
+This pgd command lists the nodes in the cluster and their status.
+
+You can also get a shell on the host-1 container and run the pgd command directly:
+
+```bash
+docker compose exec host-1 bash
+pgd nodes list
+__OUTPUT__
+ Node Name | Group Name | Node Kind | Join State | Node Status
+-----------+------------+-----------+------------+-------------
+ node-1    | group-1    | data      | ACTIVE     | Up
+ node-2    | group-1    | data      | ACTIVE     | Up
+ node-3    | group-1    | data      | ACTIVE     | Up
+```
+
+This will give you access to the PGD CLI and allow you to run any PGD commands directly on the host-1 container.
+
+## Next Steps
+
+Now that you have created your first PGD cluster, you can explore the following topics:
+
+- [Working with SQL and the cluster](first-steps/working-with-sql) to understand how to connect and interact with the cluster using SQL commands.
+- [Loading data](first-steps/loading-data) into the cluster using the `COPY` command or `pg_dump` and `pg_restore`.
+- [Using PGD CLI](first-steps/using-cli) to monitor and manage the cluster.
+
diff --git a/product_docs/docs/pgd/6/get-started/first-steps/index.mdx b/product_docs/docs/pgd/6/get-started/first-steps/index.mdx
new file mode 100644
index 00000000000..a03e2962f57
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/first-steps/index.mdx
@@ -0,0 +1,16 @@
+---
+title: First steps with your Quickstart PGD Cluster
+navTitle: First Steps
+description: "Learn how to connect to your PGD cluster, load data, and work with SQL."
+navigation:
+- working-with-sql
+- loading-data
+- using-cli
+---
+
+Now that you have created your first PGD cluster, you can start working with it. This guide will help you connect to the cluster, load data, and perform basic SQL operations.
+
+- [Working with SQL and the PGD Cluster](working-with-sql)
+- [Loading Data into your PGD Cluster](loading-data)
+- [Using the PGD CLI](using-cli)
+
diff --git a/product_docs/docs/pgd/6/get-started/first-steps/loading-data.mdx b/product_docs/docs/pgd/6/get-started/first-steps/loading-data.mdx
new file mode 100644
index 00000000000..c437b41bb6f
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/first-steps/loading-data.mdx
@@ -0,0 +1,124 @@
+---
+title: Loading Data into your PGD Cluster
+navTitle: Loading Data
+---
+
+PGD is, at its core, a Postgres database, so you can use the same tools and methods to load data into your PGD cluster as you would with any PostgreSQL database. To get you started, this guide will walk you through the process of loading data into your PGD cluster.
+
+## Online CSV Importing
+
+First, we are going to show how you can import data from an online CSV file into your PGD cluster. In this case, it's some historical baseball data from [Baseball Databank](https://github.com/cbwinslow/baseballdatabank). We are going to use the `\COPY` command in psql to import directly from a URL. One thing `\COPY` doesn't do is create the table for you, so we will need to create the table first.
+
+Connect to your PGD cluster using `psql`, either with `docker compose exec host-1 psql` or, if you have `psql` installed locally, by using it to connect to port 6432 on your host machine.
+
+```sql
+CREATE TABLE batters (
+    id SERIAL,
+    playerid VARCHAR(9),
+    yearid INTEGER,
+    stint INTEGER,
+    teamid VARCHAR(3),
+    lgid VARCHAR(2),
+    g INTEGER,
+    ab INTEGER,
+    r INTEGER,
+    h INTEGER,
+    "2b" INTEGER,
+    "3b" INTEGER,
+    hr INTEGER,
+    rbi INTEGER,
+    sb INTEGER,
+    cs INTEGER,
+    bb INTEGER,
+    so INTEGER,
+    ibb INTEGER,
+    hbp INTEGER,
+    sh INTEGER,
+    sf INTEGER,
+    gidp INTEGER,
+    PRIMARY KEY (id)
+);
+```
+
+Now we can import the CSV data into the `batters` table using the `\COPY` command:
+
+```sql
+\COPY batters(playerid,yearid,stint,teamid,lgid,g,ab,r,h,"2b","3b",hr,rbi,sb,cs,bb,so,ibb,hbp,sh,sf,gidp) FROM PROGRAM 'curl "https://raw.githubusercontent.com/cbwinslow/baseballdatabank/master/core/Batting.csv"' DELIMITER ',' CSV HEADER
+```
+
+This command uses `curl` to fetch the CSV file from the URL and pipes it directly into the `\COPY` command, which imports the data into the `batters` table. The `batters(...)` column list defines which table column each field in a CSV row goes to.
+The `DELIMITER ',' CSV HEADER` options specify that the file is comma-delimited CSV with a header row, which is skipped.
+
+Copy the command and paste it into your `psql` session. If everything is set up correctly, the data is imported without any errors, and you should see output indicating the number of rows copied, like this:
+
+```console
+COPY 110495
+```
+
+To verify that the data has been loaded correctly, you can run a simple query:
+
+```sql
+SELECT COUNT(*) FROM batters;
+```
+
+You should see a result like this:
+
+```console
+ count
+--------
+ 110495
+(1 row)
+```
+
+This confirms that 110,495 rows have been successfully imported into the `batters` table.
+
+Let's quickly use it to work out who 1998's home run leader was:
+
+```sql
+SELECT playerid, yearid, teamid, hr
+FROM batters
+WHERE yearid = 1998
+ORDER BY hr DESC
+LIMIT 1;
+```
+
+You should see output like this:
+
+```console
+ playerid  | yearid | teamid | hr
+-----------+--------+--------+----
+ mcgwima01 |   1998 | SLN    | 70
+(1 row)
+```
+
+And if we want to put that into the context of the top five home run hitters in 1998, we can do:
+
+```sql
+SELECT playerid, yearid, teamid,
+    rank() OVER (PARTITION BY yearid ORDER BY hr desc) hr_rank,
+    hr
+FROM batters
+WHERE yearid = 1998
+ORDER BY hr_rank LIMIT 5;
+```
+
+You should see output like this:
+
+```console
+ playerid  | yearid | teamid | hr_rank | hr
+-----------+--------+--------+---------+----
+ mcgwima01 |   1998 | SLN    |       1 | 70
+ sosasa01  |   1998 | CHN    |       2 | 66
+ griffke02 |   1998 | SEA    |       3 | 56
+ vaughgr01 |   1998 | SDN    |       4 | 50
+ belleal01 |   1998 | CHA    |       5 | 49
+(5 rows)
+```
+
+With PGD, you can enjoy the full power of PostgreSQL, including advanced SQL features like window functions, to analyze your data, with the added benefit that your data is fully replicated across multiple nodes and remains highly available when a node fails. (For a quick way to see that replication in action, see the example at the end of this page.)
+
+## Next Steps
+
+Now that you have loaded some data into your PGD cluster, you can explore the following topics:
+
+- [Using the PGD CLI](using-cli) to manage your PGD cluster from the command line.
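+
+And, as promised, a quick way to see the replication in action: count the rows again, but directly on a different node. This is a sketch that assumes the quickstart's default hosts and the `batters` table created above:
+
+```bash
+docker compose exec host-2 psql pgddb -c 'SELECT COUNT(*) FROM batters;'
+```
+
+Connecting directly to the second node, rather than through the connection manager, shows that the rows you loaded through the write leader are present on that node's local copy of the database too.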
diff --git a/product_docs/docs/pgd/6/get-started/first-steps/using-cli.mdx b/product_docs/docs/pgd/6/get-started/first-steps/using-cli.mdx
new file mode 100644
index 00000000000..f0f745e247a
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/first-steps/using-cli.mdx
@@ -0,0 +1,451 @@
+---
+title: Using PGD CLI
+navTitle: Using PGD CLI
+---
+
+PGD CLI is a command-line interface for managing and monitoring your EDB Postgres Distributed (PGD) clusters. It provides a set of commands to perform various operations on the cluster, such as creating nodes, joining nodes, and managing replication.
+
+It's already installed and configured if you are using the [Quickstart Docker Compose kit](/pgd/latest/get-started/first-cluster).
+
+To verify the installation, log into the first host in your PGD cluster:
+
+```shell
+docker compose exec host-1 bash
+```
+
+and check the version of PGD CLI:
+
+```shell
+pgd --version
+__OUTPUT__
+pgd-cli version 6.0.1
+```
+
+!!! note
+You can also run any of the following commands from outside the containers, using the `docker compose exec` command to run them in the context of the first host in your PGD cluster:
+
+```shell
+docker compose exec host-1 pgd
+```
+
+And you can run the `pgd` command from any host in the cluster, as they all have the PGD CLI installed and configured.
+!!!
+
+## Getting started with PGD CLI
+
+Start by viewing the cluster's overall status with the `pgd cluster show` command:
+
+```shell
+pgd cluster show
+__OUTPUT__
+# Summary
+ Group Name | Parent Group | Group Type | Node Name | Node Kind
+------------+--------------+------------+-----------+-----------
+ group-1    | pgd          | data       | node-1    | data
+ group-1    | pgd          | data       | node-2    | data
+ group-1    | pgd          | data       | node-3    | data
+ pgd        |              | global     |           |
+
+# Health
+ Check             | Status | Details
+-------------------+--------+-------------------------------------------------
+ Connections       | Ok     | All BDR nodes are accessible
+ Raft              | Ok     | Raft Consensus is working correctly
+ Replication Slots | Ok     | All PGD replication slots are working correctly
+ Clock Skew        | Ok     | Clock drift is within permissible limit
+ Versions          | Ok     | All nodes are running the same PGD version
+
+# Clock Drift
+ Reference Node | Node Name | Clock Drift
+----------------+-----------+-------------
+ node-3         | node-2    | *
+ node-3         | node-1    | *
+```
+
+This command provides a summary of the cluster, its nodes, and their health status. It also shows the clock drift between nodes, which is important for replication consistency.
+
+You can also view the status of an individual node using the `pgd node show` command:
+
+```shell
+pgd node node-1 show
+__OUTPUT__
+# Summary
+ Node Property   | Value
+-----------------+------------
+ Node Name       | node-1
+ Group Name      | group-1
+ Node Kind       | data
+ Join State      | ACTIVE
+ Node Status     | Up
+ Node ID         | 4153941939
+ Snowflake SeqID | 1
+ Database        | pgddb
+
+# Options
+ Option Name    | Option Value
+----------------+--------------------------------------------------
+ route_dsn      | port=5432 dbname=pgddb host=host-1 user=postgres
+ route_fence    | false
+ route_priority | -1
+ route_reads    | true
+ route_writes   | true
+```
+
+The structure of the `pgd` CLI commands is hierarchical, with commands grouped by functionality.
+You can view the available commands and their descriptions by running:
+
+```shell
+pgd --help
+__OUTPUT__
+Manages PGD clusters
+
+Usage: pgd [OPTIONS] <COMMAND>
+
+Commands:
+  cluster       Cluster-level commands
+  group         Group related commands
+  groups        Groups listing commands
+  node          Node related commands
+  nodes         Nodes listing commands
+  events        Event log commands
+  replication   Replication related commands
+  raft          Raft related commands
+  commit-scope  Commit scope management commands
+  assess        PGD compatibility assessment of Postgres server
+  completion    Generate the autocompletion script for pgd for the specified shell
+
+Options:
+  -V, --version  Print version
+
+Global Options:
+  -f, --config-file  Sets the configuration file path
+      --dsn          Sets the PostgreSQL connection string e.g. "host=localhost port=6000 user=postgres dbname=postgres" [env: PGD_CLI_DSN=]
+  -o, --output       Sets the output format for tables [env: PGD_CLI_OUTPUT=] [default: psql] [possible values: json, psql, modern, markdown, simple]
+      --debug        Print debug messages, useful while troubleshooting [env: PGD_CLI_DEBUG=]
+  -h, --help         Print help
+```
+
+Commands such as `group` and `node` take a group or node name as their next argument, followed by a specific command. Commands such as `cluster`, `groups`, and `nodes` don't require a group or node name, as they operate at the cluster level or list all groups or nodes.
+
+You can also get help for a specific command by running:
+
+```shell
+pgd <command> --help
+```
+
+## Viewing cluster status
+
+We've already used the `pgd cluster show` command to view the overall status of the PGD cluster, which shows all the cluster information. To see just the health status of the cluster, use the `--health` option:
+
+```shell
+pgd cluster show --health
+__OUTPUT__
+ Check             | Status | Details
+-------------------+--------+-------------------------------------------------
+ Connections       | Ok     | All BDR nodes are accessible
+ Raft              | Ok     | Raft Consensus is working correctly
+ Replication Slots | Ok     | All PGD replication slots are working correctly
+ Clock Skew        | Ok     | Clock drift is within permissible limit
+ Versions          | Ok     | All nodes are running the same PGD version
+```
+
+Or, if you want to see the summary status only, use the `--summary` option:
+
+```shell
+pgd cluster show --summary
+__OUTPUT__
+ Group Name | Parent Group | Group Type | Node Name | Node Kind
+------------+--------------+------------+-----------+-----------
+ group-1    | pgd          | data       | node-1    | data
+ group-1    | pgd          | data       | node-2    | data
+ group-1    | pgd          | data       | node-3    | data
+ pgd        |              | global     |           |
+```
+
+## Viewing groups and group status
+
+To view the status of all groups in the cluster, use the `pgd groups list` command:
+
+```shell
+pgd groups list
+__OUTPUT__
+ Group Name | Parent Group Name | Group Type | Nodes
+------------+-------------------+------------+-------
+ group-1    | pgd               | data       | 3
+ pgd        |                   | global     | 0
+```
+
+Now we can see the top-level group `pgd` and the data group `group-1` with three nodes in it. All nodes are members of the top-level group, which coordinates all activity across the cluster.
+The data group `group-1` is a group of three data nodes that replicate data between themselves and route incoming queries within the group to the group's write leader node.
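+
+All of these commands also honor the global `--output` option shown in the help above. For example, to get the same group list as JSON for use in scripts (a quick sketch; the exact fields in the JSON output can vary between PGD CLI versions):
+
+```shell
+pgd groups list -o json
+```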
+
+We can dig deeper into the group details using the `pgd group show` command:
+
+```shell
+pgd group group-1 show
+__OUTPUT__
+# Summary
+ Group Property    | Value
+-------------------+---------
+ Group Name        | group-1
+ Parent Group Name | pgd
+ Group Type        | data
+ Write Leader      | node-1
+ Commit Scope      |
+
+# Nodes
+ Node Name | Node Kind | Join State | Node Status
+-----------+-----------+------------+-------------
+ node-1    | data      | ACTIVE     | Up
+ node-2    | data      | ACTIVE     | Up
+ node-3    | data      | ACTIVE     | Up
+
+# Options
+ Option Name                        | Option Value
+------------------------------------+----------------------
+ analytics_storage_location         | (inherited)
+ apply_delay                        | 00:00:00 (inherited)
+ check_constraints                  | true (inherited)
+ default_commit_scope               | (inherited)
+ enable_raft                        | true
+ enable_routing                     | true
+ enable_wal_decoder                 | false (inherited)
+ http_port                          | (inherited)
+ location                           |
+ num_writers                        | -1 (inherited)
+ read_only_consensus_timeout        | (inherited)
+ read_only_max_client_connections   | (inherited)
+ read_only_max_server_connections   | (inherited)
+ read_only_port                     | (inherited)
+ read_write_consensus_timeout       | (inherited)
+ read_write_max_client_connections  | (inherited)
+ read_write_max_server_connections  | (inherited)
+ read_write_port                    | (inherited)
+ route_reader_max_lag               | -1
+ route_writer_max_lag               | -1
+ route_writer_wait_flush            | false
+ streaming_mode                     | default (inherited)
+ use_https                          | true
+```
+
+This command provides a summary of the group, its nodes, and their status. It also shows the group options, such as whether routing is enabled, the HTTP port for monitoring, and other configuration settings.
+
+Like the cluster command, you can use the `--summary` option to view just the summary of the group:
+
+```shell
+pgd group group-1 show --summary
+__OUTPUT__
+ Group Property    | Value
+-------------------+---------
+ Group Name        | group-1
+ Parent Group Name | pgd
+ Group Type        | data
+ Write Leader      | node-1
+ Commit Scope      |
+```
+
+Now we can see that the group is a child of the top-level group `pgd`, that it's a data group, and that the write leader node in the group is `node-1`. No commit scope is set for this group, which means it's using the default commit scope.
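+
+The write leader is a group property, so the group commands are also where you change it. For example, you could hand write leadership to `node-2` with the `set-leader` subcommand (a sketch; check `pgd group --help` for the exact syntax in your PGD CLI version):
+
+```shell
+pgd group group-1 set-leader node-2
+```
+
+We'll leave the leader alone for now, though, so the examples that follow continue to show `node-1` as the write leader.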
+
+The `--nodes` option can be used to view the nodes in the group:
+
+```shell
+pgd group group-1 show --nodes
+__OUTPUT__
+ Node Name | Node Kind | Join State | Node Status
+-----------+-----------+------------+-------------
+ node-1    | data      | ACTIVE     | Up
+ node-2    | data      | ACTIVE     | Up
+ node-3    | data      | ACTIVE     | Up
+```
+
+And, similarly, you can use the `--options` option to view the group options:
+
+```shell
+pgd group group-1 show --options
+__OUTPUT__
+ Option Name                        | Option Value
+------------------------------------+----------------------
+ analytics_storage_location         | (inherited)
+ apply_delay                        | 00:00:00 (inherited)
+ check_constraints                  | true (inherited)
+ default_commit_scope               | (inherited)
+ enable_raft                        | true
+ enable_routing                     | true
+ enable_wal_decoder                 | false (inherited)
+ http_port                          | (inherited)
+ location                           |
+ num_writers                        | -1 (inherited)
+ read_only_consensus_timeout        | (inherited)
+ read_only_max_client_connections   | (inherited)
+ read_only_max_server_connections   | (inherited)
+ read_only_port                     | (inherited)
+ read_write_consensus_timeout       | (inherited)
+ read_write_max_client_connections  | (inherited)
+ read_write_max_server_connections  | (inherited)
+ read_write_port                    | (inherited)
+ route_reader_max_lag               | -1
+ route_writer_max_lag               | -1
+ route_writer_wait_flush            | false
+ streaming_mode                     | default (inherited)
+ use_https                          | true
+```
+
+As you can see, many of the options are inherited from the parent group, which is the top-level group `pgd`. The `enable_raft` and `enable_routing` options are set to `true`, which means that the group is using Raft consensus and is routing queries (those made through the connection manager port) to the write leader node.
+
+Let's take a look at the parent group `pgd` using the `pgd group pgd show` command:
+
+```shell
+pgd group pgd show
+__OUTPUT__
+# Summary
+ Group Property    | Value
+-------------------+--------
+ Group Name        | pgd
+ Parent Group Name |
+ Group Type        | global
+ Write Leader      |
+ Commit Scope      |
+```
+
+This shows that the top-level group `pgd` is a global group, which means it isn't a data group and has no data nodes of its own. It's used only to coordinate the activity of the data groups in the cluster, and because it has no data nodes, it has no write leader.
+
+The next part of the output shows the nodes in the group, which is empty:
+
+```console
+# Nodes
+ Node Name | Node Kind | Join State | Node Status
+-----------+-----------+------------+-------------
+```
+
+The options for the `pgd` group are shown next:
+
+```console
+# Options
+ Option Name                        | Option Value
+------------------------------------+--------------
+ analytics_storage_location         |
+ apply_delay                        | 00:00:00
+ check_constraints                  | true
+ default_commit_scope               |
+ enable_raft                        | true
+ enable_routing                     | false
+ enable_wal_decoder                 | false
+ http_port                          |
+ location                           |
+ num_writers                        | -1
+ read_only_consensus_timeout        |
+ read_only_max_client_connections   |
+ read_only_max_server_connections   |
+ read_only_port                     |
+ read_write_consensus_timeout       |
+ read_write_max_client_connections  |
+ read_write_max_server_connections  |
+ read_write_port                    |
+ route_reader_max_lag               | -1
+ route_writer_max_lag               | -1
+ route_writer_wait_flush            | false
+ streaming_mode                     | default
+ use_https                          | true
+```
+
+These are the options for the top-level group `pgd`. This is where `group-1` inherits its options from.
+Here, though, the `enable_routing` option is set to `false`, which means that the top-level group doesn't route queries to any data nodes, because it has no data nodes of its own.
+The `enable_raft` option is set to `true`, which means that the top-level group uses Raft consensus to coordinate management of the cluster.
+
+Where options aren't set, the default values are used. For example, the `apply_delay` option is set to `00:00:00`, meaning there's no delay in applying changes to the cluster.
+
+## Viewing nodes and node status
+
+To view the status of all nodes in the cluster, use the `pgd nodes list` command:
+
+```shell
+pgd nodes list
+__OUTPUT__
+ Node Name  | Group Name | Node Kind | Join State | Node Status
+------------+------------+-----------+------------+-------------
+ node-1     | group-1    | data      | ACTIVE     | Up
+ node-2     | group-1    | data      | ACTIVE     | Up
+ node-3     | group-1    | data      | ACTIVE     | Up
+```
+
+You can also view the status of a specific node using the `pgd node show` command:
+
+```shell
+pgd node node-1 show
+__OUTPUT__
+# Summary
+ Node Property   | Value
+-----------------+------------
+ Node Name       | node-1
+ Group Name      | group-1
+ Node Kind       | data
+ Join State      | ACTIVE
+ Node Status     | Up
+ Node ID         | 4153941939
+ Snowflake SeqID | 1
+ Database        | pgddb
+
+# Options
+ Option Name    | Option Value
+----------------+--------------------------------------------------
+ route_dsn      | port=5432 dbname=pgddb host=host-1 user=postgres
+ route_fence    | false
+ route_priority | -1
+ route_reads    | true
+ route_writes   | true
+```
+
+Here we can see more about the node itself: the node's name and the group it belongs to, that it's a data node, that it's actively joined to the group, and that it's up and running. The node ID is a unique identifier for the node, and the Snowflake SeqID is used for ordering events in the cluster. Finally, we can see that its database is `pgddb`, which is the default database created in the Quickstart Docker Compose kit.
+
+The options for the node are shown next, and these are specific to this particular node:
+
+- `route_dsn` is the connection string for the node, which is used by the connection manager to route queries to this node.
+- `route_fence` is set to `false`, which means that the node isn't fenced off to prevent queries being routed to it.
+- `route_priority` is set to `-1`, which means that the node doesn't have a specific priority for routing queries.
+- `route_reads` and `route_writes` are both set to `true`, which means that the node can handle both read and write queries.
+
+These options are used by the connection manager when routing queries to the node. They're also how you can control which nodes are active without taking them down. Setting `route_fence` to `true` prevents the connection manager from routing queries to a node, while still allowing it to be part of the cluster and replicate data.
+
+## Setting node options
+
+You can set options for a node using the node's `set-option` command. For example, to set the `route_fence` option to `true` for `node-1`, run:
+
+```shell
+pgd node node-1 set-option route_fence true
+```
+
+If we now try to connect to `node-1`'s connection manager:
+
+```shell
+psql -h host-1 -p 6432
+```
+
+We get a connection, but it isn't routed to `node-1`, because that node is now fenced off from routing.
+Instead, it's routed to the current write leader in the group, which is `node-2`:
+
+```sql
+select node_name from bdr.local_node_summary;
+ node_name
+-----------
+ node-2
+(1 row)
+```
+
+If we exit `psql` and undo the fencing by running:
+
+```shell
+pgd node node-1 set-option route_fence false
+```
+
+We can connect to `node-1`'s connection manager again:
+
+```shell
+psql -h host-1 -p 6432
+```
+
+And we can see that we are now connected to the `node-1` node:
+
+```sql
+select node_name from bdr.local_node_summary;
+ node_name
+-----------
+ node-1
+(1 row)
+```
+
diff --git a/product_docs/docs/pgd/6/get-started/first-steps/working-with-sql.mdx b/product_docs/docs/pgd/6/get-started/first-steps/working-with-sql.mdx
new file mode 100644
index 00000000000..4efc3e4a345
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/first-steps/working-with-sql.mdx
@@ -0,0 +1,108 @@
+---
+title: Working with SQL and the PGD Cluster
+navTitle: Working with SQL
+description: "Working with SQL and the PGD Cluster"
+---
+
+The first step in working with your PGD cluster is to connect to it using SQL. You can do this using the `psql` command-line interface or any other SQL client that supports PostgreSQL.
+
+## Connecting to the PGD Cluster
+
+With PGD Essential, unless you are performing maintenance tasks, you will usually connect to the cluster using the connection manager, which runs on TCP port 6432 on all the hosts in the cluster.
+
+You can connect to the write leader node in the cluster using the following command:
+
+```bash
+psql -h <host> -p 6432 -U <user>
+```
+
+As we have a new cluster running with no users (apart from the `postgres` superuser) and one replicated database (`pgddb`), you can connect to the cluster using the following command:
+
+```bash
+psql -h host-1 -p 6432 -U postgres pgddb
+```
+
+This connects to the connection manager running on the `host-1` container on port 6432, which then routes the connection to the write leader node in the cluster. You can replace `host-1` with the name of any host in the cluster, as they all run the connection manager.
+
+If we run the following command, we can see which node we are connected to in the cluster:
+
+```sql
+select node_name from bdr.local_node_summary;
+ node_name
+-----------
+ node-1
+```
+
+That doesn't surprise us, as we connected to the `host-1` container, which is running the `node-1` node in the cluster.
+
+If we exit `psql` and reconnect with:
+
+```bash
+psql -h host-2 -p 6432 -U postgres pgddb
+```
+
+We can see that we are still connected to the `node-1` node in the cluster:
+
+```sql
+select node_name from bdr.local_node_summary;
+ node_name
+-----------
+ node-1
+```
+
+That's the connection manager routing us to the write leader node in the cluster, which is `node-1`. To confirm this, we can run:
+
+```sql
+\! pgd group group-1 show --summary
+__OUTPUT__
+ Group Property    | Value
+-------------------+---------
+ Group Name        | group-1
+ Parent Group Name | pgd
+ Group Type        | data
+ Write Leader      | node-1
+ Commit Scope      |
+```
+
+(You can use the `\!` command in `psql` to run shell commands directly from within the `psql` session.)
+
+## Working with SQL
+
+Now that you are connected to the cluster, you can start working with SQL commands. You can create tables, insert data, and run queries just like you would in a regular PostgreSQL database.
+
+For example, you can create a table and insert some data:
+
+```sql
+CREATE TABLE users (
+    id SERIAL PRIMARY KEY,
+    name VARCHAR(100),
+    email VARCHAR(100) UNIQUE
+);
+INSERT INTO users (name, email) VALUES
+('Alice', 'alice@example.com'),
+('Bob', 'bob@example.com');
+```
+
+You can then query the data:
+
+```sql
+SELECT * FROM users;
+ id | name   | email
+----+--------+---------------------
+  2 | Alice  | alice@example.com
+  3 | Bob    | bob@example.com
+(2 rows)
+```
+
+You can also run more complex queries, join tables, and use all the features of PostgreSQL. It's not within the scope of this guide to cover all SQL commands, but you can refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql.html) for more information on SQL syntax and commands.
+
+## Differences with PGD
+
+What's important is that those SQL commands are replicated across the cluster. PGD has taken care of the replication for you. For example, the `serial` key has automatically been converted to a globally unique key across the cluster, so you can insert data on any node in the cluster and it's replicated to all other nodes. For PGD Essential, this matters less, as you're required to connect to the write leader. With PGD Expanded, though, you can connect to any node in the cluster and run SQL commands, and this automatic change enables you to do that without worrying about conflicts or duplicates. With PGD Essential, you're future-proofed and can easily move to PGD Expanded later, with no changes to your SQL commands or application code.
+
+## Next Steps
+
+Now that you have connected to your PGD cluster and run some SQL commands, you can explore the following topics:
+
+- [Loading Data into your PGD Cluster](loading-data) to learn how to import data from external sources.
+- [Using the PGD CLI](using-cli) to manage your PGD cluster from the command line.
diff --git a/product_docs/docs/pgd/6/get-started/index.mdx b/product_docs/docs/pgd/6/get-started/index.mdx
new file mode 100644
index 00000000000..0f7b982028e
--- /dev/null
+++ b/product_docs/docs/pgd/6/get-started/index.mdx
@@ -0,0 +1,32 @@
+---
+title: Get started with PGD
+navTitle: Get started
+description: "Get started with EDB Postgres Distributed, installing and configuring the software, and creating your first cluster."
+navigation:
+- essential-standard
+- essential-near-far
+- first-cluster
+- first-steps
+- expanded-examples
+---
+
+To begin using any edition of EDB Postgres Distributed, we recommend you first try our local installation and configuration guide.
+
+This guide will help you install and configure the software and create your first cluster.
+
+## What is EDB Postgres Distributed?
+
+EDB Postgres Distributed (PGD) is a distributed database solution that provides high availability, scalability, and fault tolerance for PostgreSQL databases. It allows you to create clusters of PostgreSQL instances that can work together to provide a single, unified database system.
+
+## What is EDB Postgres Distributed Essential?
+
+EDB Postgres Distributed Essential is a streamlined version of PGD that focuses on delivering core distributed database functionality with minimal complexity. It is designed for users who need basic high availability and disaster recovery features without the advanced capabilities offered by PGD Expanded, the full version.
+
+## What is the PGD Essential Standard architecture?
+
+Get to know what EDB Postgres Distributed Essential is all about in [Essential Standard](essential-standard).
+
+## Create your first PGD Essential cluster with Docker Compose
+
+Use the [Docker Compose](https://docs.docker.com/compose/) file to [create your first PGD Essential cluster](first-cluster) with three nodes. This is a great way to get started with PGD Essential and see how it works in a real-world scenario, and it's a stepping stone to deploying a production cluster with PGD Essential or PGD Expanded.
+
diff --git a/product_docs/docs/pgd/6/index.mdx b/product_docs/docs/pgd/6/index.mdx
new file mode 100644
index 00000000000..8ed78998154
--- /dev/null
+++ b/product_docs/docs/pgd/6/index.mdx
@@ -0,0 +1,50 @@
+---
+title: "EDB Postgres Distributed (PGD)"
+navTitle: EDB Postgres Distributed
+description: EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication.
+indexCards: simple
+redirects:
+  - /edb-postgres-ai/migration-etl/pgd/
+navigation:
+  - "#Getting Started"
+  - get-started
+  - "#How To..."
+  - essential-how-to
+  - expanded-how-to
+  - "#In Depth"
+  - concepts
+  - "#Reference"
+  - reference
+  - terminology
+  - "#Appendix"
+  - compatibility
+  - rel_notes
+  - known_issues
+navRootedTo: /edb-postgres-ai/databases
+categories:
+  - /edb-postgres-ai/platforms-and-tools/high-availability/
+pdf: true
+directoryDefaults:
+  version: "6.0.1"
+---
+
+Welcome to the PGD 6.0 documentation. PGD 6.0 is now available in two editions, Essential and Expanded.
+
+## Why PGD?
+
+Modern data architectures require an extensible approach to data management, whether the requirement is for high availability, disaster recovery, or multi-region data distribution. PGD is designed to meet these needs, and in PGD 6.0 we have made it easier to get started with PGD, while also providing a pathway to using advanced features as your use case becomes more complex.
+
+## What does PGD enable?
+
+PGD enables you to build a distributed database architecture that can span multiple regions, data centers, or cloud providers. It provides multi-master replication and data distribution. Postgres databases can be deployed into data groups within the cluster, and the data in each group can be distributed across multiple nodes.
+
+## What are the differences between PGD Essential and PGD Expanded?
+
+PGD Expanded is the full-featured version of PGD. It includes all the features of PGD Essential, as well as additional features such as advanced conflict management, data distribution, and support for large-scale deployments. PGD Expanded is designed for users who need the most advanced features and capabilities of PGD.
+
+PGD Essential is a simplified version of PGD Expanded. It is designed for users who want to get started with PGD quickly and easily, without the need for advanced features or complex configurations. PGD Essential includes the core features of PGD but enables them in a way that makes replication and availability simple. It therefore does not include some of the more advanced features available in PGD Expanded.
+
+PGD Essential limits the number of data nodes in a cluster to 4 and the number of groups to 2. It also limits the number of nodes in a group to 4. PGD Expanded does not have these limitations.
+
+Learn more about PGD in [Get Started with PGD](/pgd/latest/get-started/).
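+
+If you want to dive straight in, the Docker-based quickstart described there boils down to a handful of commands, covered step by step in the guide (replace `...` with your EDB subscription token):
+
+```bash
+curl https://enterprisedb.com/docs/pgd/latest/get-started/assets/pgd_quickstart.sh | bash
+./qs.sh prepare
+export EDB_SUBSCRIPTION_TOKEN=...
+./qs.sh build
+./qs.sh start
+```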
+
diff --git a/product_docs/docs/pgd/6/known_issues.mdx b/product_docs/docs/pgd/6/known_issues.mdx
new file mode 100644
index 00000000000..753dbcb7472
--- /dev/null
+++ b/product_docs/docs/pgd/6/known_issues.mdx
@@ -0,0 +1,246 @@
+---
+title: 'Known issues and limitations'
+navTitle: 'Known issues and limitations'
+description: 'Known issues and limitations in EDB Postgres Distributed 6'
+---
+
+## Known issues
+
+These are the currently known issues in EDB Postgres Distributed 6.
+They're tracked in PGD's ticketing system and are expected to be resolved in a future release.
+
+- If the resolver for the `update_origin_change` conflict
+  is set to `skip`, `synchronous_commit=remote_apply` is used, and
+  concurrent updates of the same row are repeatedly applied on two
+  different nodes, then one of the update statements might hang due
+  to a deadlock with the PGD writer. As mentioned in
+  [Conflicts](/pgd/latest/reference/conflict-management/conflicts/), `skip` isn't the default
+  resolver for the `update_origin_change` conflict, and this
+  combination isn't intended to be used in production. It discards
+  one of the two conflicting updates based on the order of arrival
+  on that node, which is likely to cause a divergent cluster.
+  In the rare situation that you do choose to use the `skip`
+  conflict resolver, note the issue with the use of the
+  `remote_apply` mode.
+
+- The Decoding Worker feature doesn't work with CAMO/Eager/Group Commit.
+  Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` disabled.
+
+- Lag Control doesn't adjust the commit delay on a fully isolated node, that is, when all other nodes are unreachable or not operational.
+  As soon as at least one node reconnects, Lag Control resumes its work and adjusts the PGD commit delay again.
+
+- For time-based Lag Control, PGD currently uses the lag time, measured by commit timestamps, rather than the estimated catch-up time that's based on historic apply rates.
+
+- Changing the CAMO partners in a CAMO pair isn't currently possible.
+  It's possible only to add or remove a pair.
+  Adding or removing a pair doesn't require a restart of Postgres or even a reload of the configuration.
+
+- Group Commit can't be combined with [CAMO](/pgd/latest/reference/commit-scopes/camo/).
+
+- Transactions using Eager Replication can't yet execute DDL. The TRUNCATE command is allowed.
+
+- Parallel Apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either (a) setting `num_writers` to 1 for the node group using [`bdr.alter_node_group_option`](/pgd/latest/reference/tables-views-functions/nodes-management-interfaces/#bdralter_node_group_option) or (b) using the GUC [`bdr.writers_per_subscription`](/pgd/latest/reference/tables-views-functions/pgd-settings#bdrwriters_per_subscription). See [Configuration of generic replication](/pgd/latest/reference/tables-views-functions/pgd-settings#generic-replication).
+
+- There currently is no protection against altering or removing a commit scope.
+  Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction.
+  Make sure that any transactions using a specific commit scope have finished before altering or removing it.
+  To modify a commit scope safely, use [`bdr.alter_commit_scope`](/pgd/latest/reference/tables-views-functions/functions#bdralter_commit_scope).
+
+- The [PGD CLI](/pgd/latest/reference/cli) can return stale data on the state of the cluster if it's still connecting to nodes that were previously parted from the cluster.
+  Edit the [`pgd-cli-config.yml`](/pgd/latest/reference/cli/configuring_cli/#using-a-configuration-file) file, or change your [`--dsn`](/pgd/latest/reference/cli/configuring_cli/#using-database-connection-strings-in-the-command-line) settings, to ensure only active nodes in the cluster are listed for connection.
+
+- DDL run in serializable transactions can face the error: `ERROR: could not serialize access due to read/write dependencies among transactions`. A workaround is to run the DDL outside serializable transactions.
+
+- The EDB Postgres Advanced Server 17 data type [`BFILE`](/epas/latest/reference/sql_reference/02_data_types/03a_bfiles/) isn't currently supported. This is because a `BFILE` is a reference, stored in the database, to a file that's stored outside the database, and that file isn't replicated.
+
+- EDB Postgres Advanced Server's native autopartitioning isn't supported in PGD. See [Restrictions on EDB Postgres Advanced Server-native automatic partitioning](/pgd/latest/reference/autopartition#restrictions-on-edb-postgres-advanced-server-native-automatic-partitioning) for more information.
+
+## Limitations
+
+Take these EDB Postgres Distributed (PGD) design limitations into account when planning your deployment.
+
+### Nodes
+
+- PGD can run hundreds of nodes, assuming adequate hardware and network. However,
+  for mesh-based deployments, we generally don’t recommend running more than 48
+  nodes in one cluster. If you need extra read scalability beyond the 48-node
+  limit, you can add subscriber-only nodes without adding connections to the
+  mesh network.
+
+- The minimum recommended number of nodes in a group is three to provide fault
+  tolerance for PGD's consensus mechanism. With just two nodes, consensus would
+  fail if one of the nodes were unresponsive. Consensus is required for some PGD
+  operations, such as distributed sequence generation. For more information about
+  the consensus mechanism used by EDB Postgres Distributed, see [Architectural
+  details](/pgd/latest/reference/overview/basic-architecture/).
+
+### Multiple databases on single instances
+
+Support for using PGD for multiple databases on the same Postgres instance is
+**deprecated** beginning with PGD 5 and will no longer be supported with PGD 6. As
+we extend the capabilities of the product, the added complexity introduced
+operationally and functionally is no longer viable in a multi-database design.
+
+As a best practice, we recommend that you configure only one database per PGD instance.
+
+Tooling such as the CLI and Connection Manager currently codifies that recommendation.
+
+While it's still possible to host up to 10 databases in a single instance,
+doing so incurs many immediate risks and current limitations:
+
+- If PGD configuration changes are needed, you must execute administrative commands
+  for each database. Doing so increases the risk of inconsistencies and errors.
+
+- You must monitor each database separately, adding overhead.
+
+- Connection Manager works at the Postgres instance level, not at the database level,
+  meaning the leader node is the same for all databases.
+
+- Each additional database increases the resource requirements on the server.
+  Each one needs its own set of worker processes maintaining replication, for example,
+  logical workers, WAL senders, and WAL receivers. Each one also needs its own
+  set of connections to other instances in the replication cluster. These needs might
+  severely impact the performance of all databases.
+
+- Synchronous replication methods, for example, CAMO and Group Commit, won’t work as
+  expected. Since the Postgres WAL is shared between the databases, a
+  synchronous commit confirmation can come from any database, not necessarily in
+  the right order of commits.
+
+- CLI integration assumes one database.
+
+### Durability options (Group Commit/CAMO)
+
+There are various limits on how the PGD durability options work.
+These limitations are a product of how Group Commit and CAMO interact with each other and with PGD features such as the [WAL decoder](/pgd/latest/reference/decoding_worker/) and [transaction streaming](/pgd/latest/reference/transaction-streaming/).
+
+Also, there are limitations on interoperability with legacy synchronous replication,
+interoperability with explicit two-phase commit, and unsupported combinations
+within commit scope rules.
+
+The following limitations apply to the use of commit scopes and the various durability options they enable.
+
+#### General durability limitations
+
+- [Legacy synchronous replication](/pgd/latest/reference/commit-scopes/legacy-sync) uses a mechanism for transaction confirmation
+  different from the one used by CAMO, Eager, and Group Commit. The two aren't
+  compatible, so don't use them together. Whenever you use Group Commit, CAMO,
+  or Eager, make sure none of the PGD nodes are configured in
+  `synchronous_standby_names`.
+
+- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE
+  TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html))
+  can't be used with CAMO, Group Commit, or Eager because those
+  features use two-phase commit underneath.
+
+#### Group Commit
+
+[Group Commit](/pgd/latest/reference/commit-scopes/group-commit) enables configurable synchronous commits over
+nodes in a group. If you use this feature, take the following limitations into account:
+
+- Not all DDL can run when you use Group Commit. If you use unsupported DDL, a warning is logged, and the transaction's commit scope is set to local. The only supported DDL operations are:
+  - Nonconcurrent `CREATE INDEX`
+  - Nonconcurrent `DROP INDEX`
+  - Nonconcurrent `REINDEX` of an individual table or index
+  - `CLUSTER` (of a single relation or index only)
+  - `ANALYZE`
+  - `TRUNCATE`
+
+
+- Explicit two-phase commit isn't supported by Group Commit, as it already uses two-phase commit.
+
+- Combining different commit decision options in the same transaction or
+  combining different conflict resolution options in the same transaction isn't
+  supported.
+
+- Currently, Raft commit decisions are extremely slow, producing very low TPS.
+  We recommend using them only with the `eager` conflict resolution setting
+  to get the Eager All-Node Replication behavior of PGD 4 and older.
+
+#### Eager
+
+[Eager](/pgd/latest/reference/commit-scopes/group-commit/#eager-conflict-resolution) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit.
+
+Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It also doesn't allow `LISTEN` or `UNLISTEN`.
+
+#### CAMO
+
+[Commit At Most Once](/pgd/latest/reference/commit-scopes/camo) (CAMO) is a feature that aims to prevent
+applications from committing more than once. If you use this feature, take
+these limitations into account when planning:
+
+- CAMO is designed to query the results of a recently failed COMMIT on the
+origin node. In case of disconnection, the application must request the
+transaction status from the CAMO partner. Ensure that you have as little delay
+as possible after the failure before requesting the status. Applications must
+not rely on CAMO decisions being stored for longer than 15 minutes.
+
+- If the application forgets the global identifier assigned, for example,
+as a result of a restart, there's no easy way to recover
+it. Therefore, we recommend that applications wait for outstanding
+transactions to end before shutting down.
+
+- For the client to apply proper checks, a transaction protected by CAMO
+can't be a single statement with implicit transaction control. You also can't
+use CAMO with a transaction-controlling procedure or
+in a `DO` block that tries to start or end transactions.
+
+- CAMO resolves commit status but doesn't resolve pending
+notifications on commit. CAMO doesn't
+allow the `NOTIFY` SQL command or the `pg_notify()` function.
+It also doesn't allow `LISTEN` or `UNLISTEN`.
+
+- When replaying changes, CAMO transactions might detect conflicts just
+the same as other transactions. If timestamp-conflict detection is used,
+the CAMO transaction uses the timestamp of the prepare on the origin
+node, which is before the transaction becomes visible on the origin
+node itself.
+
+- CAMO isn't currently compatible with transaction streaming.
+Be sure to disable transaction streaming when planning to use
+CAMO. You can configure this option globally or in the PGD node group. See
+[Transaction streaming configuration](/pgd/latest/reference/transaction-streaming#configuration).
+
+- CAMO isn't currently compatible with the decoding worker.
+Be sure not to enable the decoding worker when planning to use
+CAMO. You can configure this option in the PGD node group. See
+[Decoding worker disabling](/pgd/latest/reference/decoding_worker#enabling).
+
+- Not all DDL can run when you use CAMO. If you use unsupported DDL, a warning is logged and the transaction's commit scope is set to local only. The only supported DDL operations are:
+  - Nonconcurrent `CREATE INDEX`
+  - Nonconcurrent `DROP INDEX`
+  - Nonconcurrent `REINDEX` of an individual table or index
+  - `CLUSTER` (of a single relation or index only)
+  - `ANALYZE`
+  - `TRUNCATE`
+
+
+- Explicit two-phase commit isn't supported by CAMO, as it already uses two-phase commit.
+
+- You can combine only CAMO transactions with the `DEGRADE TO` clause for
+switching to asynchronous operation in case of lowered availability.
+
+
+### Mixed PGD versions
+
+PGD was developed to [enable rolling upgrades of PGD](/pgd/latest/reference/upgrades) by allowing mixed versions of PGD to operate during the upgrade process.
+We expect users to run mixed versions only during upgrades and, once an upgrade starts, that they complete that upgrade.
+We don't support running mixed versions of PGD except during an upgrade.
+
+### Other limitations
+
+This noncomprehensive list includes other limitations that are expected and
+are by design. We don't expect to resolve them in the future.
+Consider these limitations when planning your deployment: + +- A `galloc` sequence might skip some chunks if you create the sequence in a + rolled back transaction and then create it again with the same name. Skipping chunks can + also occur if you create and drop the sequence when DDL replication isn't active + and then you create it again when DDL replication is active. The impact of + the problem is mild because the sequence guarantees aren't violated. The + sequence skips only some initial chunks. Also, as a workaround, you can + specify the starting value for the sequence as an argument to the + `bdr.alter_sequence_set_kind()` function. diff --git a/product_docs/docs/pgd/5.8/appusage/behavior.mdx b/product_docs/docs/pgd/6/reference/appusage/behavior.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/appusage/behavior.mdx rename to product_docs/docs/pgd/6/reference/appusage/behavior.mdx index 451df72c85f..c51b3eb4fd2 100644 --- a/product_docs/docs/pgd/5.8/appusage/behavior.mdx +++ b/product_docs/docs/pgd/6/reference/appusage/behavior.mdx @@ -36,7 +36,7 @@ TRUNCATE commands is supported, but take care when truncating groups of tables connected by foreign keys. When replicating a truncate action, the subscriber truncates the same group of tables that was truncated on the origin, either explicitly specified or implicitly collected by CASCADE, except in cases where -replication sets are defined. See [Replication sets](../repsets) for +replication sets are defined. See [Replication sets](/pgd/latest/reference/repsets) for details and examples. This works correctly if all affected tables are part of the same subscription. But if some tables to truncate on the subscriber have foreign-key links to tables that aren't part of the same (or any) replication @@ -56,11 +56,11 @@ nodes in the presence of concurrent transactions on multiple nodes. If DML is executed on multiple nodes concurrently, then potential conflicts might occur if executing with asynchronous replication. You must either handle these or avoid them. Various avoidance mechanisms are possible, discussed in -[Conflicts](../conflict-management/conflicts). +[Conflicts](/pgd/latest/reference/conflict-management/conflicts). ### Sequences -Sequences need special handling, described in [Sequences](../sequences). This is +Sequences need special handling, described in [Sequences](/pgd/latest/reference/sequences). This is because in a cluster, sequences must be global to avoid nodes creating conflicting values. Global sequences are available with global locking to ensure integrity. diff --git a/product_docs/docs/pgd/5.8/appusage/dml-ddl.mdx b/product_docs/docs/pgd/6/reference/appusage/dml-ddl.mdx similarity index 100% rename from product_docs/docs/pgd/5.8/appusage/dml-ddl.mdx rename to product_docs/docs/pgd/6/reference/appusage/dml-ddl.mdx diff --git a/product_docs/docs/pgd/5.8/appusage/extensions.mdx b/product_docs/docs/pgd/6/reference/appusage/extensions.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/appusage/extensions.mdx rename to product_docs/docs/pgd/6/reference/appusage/extensions.mdx index d98894a9253..d97cab4d933 100644 --- a/product_docs/docs/pgd/5.8/appusage/extensions.mdx +++ b/product_docs/docs/pgd/6/reference/appusage/extensions.mdx @@ -45,7 +45,7 @@ PostgreSQL extensions provide SQL objects, such as functions, datatypes, and, op The relevant extension packages must be available on all nodes in the cluster. Otherwise extension installation can fail and impact cluster stability. 
-If PGD is deployed using [Trusted Postgres Architect](/tpa/latest/), configure extensions using that tool. +If PGD is deployed using [Trusted Postgres Architect](/tpa/latest/), configure extensions using that tool. For details, see [Adding Postgres extensions](/tpa/latest/reference/postgres_extension_configuration). The following is relevant for manually configured PGD installations. @@ -61,7 +61,7 @@ The order in which you specify other extensions generally doesn't matter. Howeve Configure `shared_preload_libraries` on all nodes in the cluster before installing the extension with `CREATE EXTENSION`. You must restart PostgreSQL to activate the new configuration. -See also [Postgres settings](../postgres-configuration/#postgres-settings). +See also [Postgres settings](/pgd/latest/reference/postgres-configuration/#postgres-settings). ### Installing the extension diff --git a/product_docs/docs/pgd/5.8/appusage/feature-compatibility.mdx b/product_docs/docs/pgd/6/reference/appusage/feature-compatibility.mdx similarity index 62% rename from product_docs/docs/pgd/5.8/appusage/feature-compatibility.mdx rename to product_docs/docs/pgd/6/reference/appusage/feature-compatibility.mdx index 9eaa673854f..9f4504ab875 100644 --- a/product_docs/docs/pgd/5.8/appusage/feature-compatibility.mdx +++ b/product_docs/docs/pgd/6/reference/appusage/feature-compatibility.mdx @@ -12,35 +12,35 @@ Not all server features work with all commit scopes. This table shows the ones t Async
(default)
-Parallel
Apply
-Transaction
Streaming
-Single
Decoding
Worker
+Parallel
Apply
+Transaction
Streaming
+Single
Decoding
Worker
- Group Commit + Group Commit ⛔︎ ❌ ❌❗️ ✅ - CAMO + CAMO ⛔︎ ✅ ❌ ❌ - Lag Control + Lag Control ✅ ✅ ✅ ✅ - Synchronous Commit + Synchronous Commit ⛔︎ ✅ ✅ @@ -62,42 +62,42 @@ Not all server features work with all commit scopes. This table shows the ones t ## Commit scope/commit scope interoperability -Although you can't mix commit scopes, you can [combine rules](../commit-scopes/commit-scope-rules/#combining-rules) with an `AND` operator. This table shows where commit scopes can be combined. +Although you can't mix commit scopes, you can [combine rules](/pgd/latest/reference/commit-scopes/commit-scope-rules/#combining-rules) with an `AND` operator. This table shows where commit scopes can be combined. - - - - + + + + - + - + - + - + @@ -110,5 +110,4 @@ Although you can't mix commit scopes, you can [combine rules](../commit-scopes/c #### Notes -Each commit scope implicitly works with itself. - +Each commit scope implicitly works with itself. diff --git a/product_docs/docs/pgd/5.8/appusage/index.mdx b/product_docs/docs/pgd/6/reference/appusage/index.mdx similarity index 100% rename from product_docs/docs/pgd/5.8/appusage/index.mdx rename to product_docs/docs/pgd/6/reference/appusage/index.mdx diff --git a/product_docs/docs/pgd/5.8/appusage/nodes-with-differences.mdx b/product_docs/docs/pgd/6/reference/appusage/nodes-with-differences.mdx similarity index 95% rename from product_docs/docs/pgd/5.8/appusage/nodes-with-differences.mdx rename to product_docs/docs/pgd/6/reference/appusage/nodes-with-differences.mdx index 2cbfe9f6a12..695e102bb6d 100644 --- a/product_docs/docs/pgd/5.8/appusage/nodes-with-differences.mdx +++ b/product_docs/docs/pgd/6/reference/appusage/nodes-with-differences.mdx @@ -20,7 +20,7 @@ definitions, such as a source that's a normal table replicating to a partitioned table, including support for updates that change partitions on the target. It can be faster if the partitioning definition is the same on the source and target since dynamic partition routing doesn't need to execute at apply time. -For details, see [Replication sets](../repsets). +For details, see [Replication sets](/pgd/latest/reference/repsets). By default, all columns are replicated. @@ -69,11 +69,11 @@ value of a table's storage parameter `user_catalog_table` must be identical on all nodes. A table being replicated must be owned by the same user/role on each node. See -[Security and roles](../security) for details. +[Security and roles](/pgd/latest/reference/security) for details. Roles can have different passwords for connection on each node, although by default changes to roles are replicated to each node. See [DDL -replication](../ddl) to specify how to alter a role password on only a subset of +replication](/pgd/latest/reference/ddl) to specify how to alter a role password on only a subset of nodes or locally. ## Comparison between nodes with differences @@ -119,4 +119,4 @@ you can't add a node with a minor version if the cluster uses a newer protocol version. Doing so returns an error. Both of these features might be affected by specific restrictions. See [Release -notes](../rel_notes/) for any known incompatibilities. \ No newline at end of file +notes](/pgd/latest/rel_notes/) for any known incompatibilities. 
\ No newline at end of file diff --git a/product_docs/docs/pgd/5.8/appusage/rules.mdx b/product_docs/docs/pgd/6/reference/appusage/rules.mdx similarity index 100% rename from product_docs/docs/pgd/5.8/appusage/rules.mdx rename to product_docs/docs/pgd/6/reference/appusage/rules.mdx diff --git a/product_docs/docs/pgd/5.8/appusage/table-access-methods.mdx b/product_docs/docs/pgd/6/reference/appusage/table-access-methods.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/appusage/table-access-methods.mdx rename to product_docs/docs/pgd/6/reference/appusage/table-access-methods.mdx index ac7f124647b..ae7c0156844 100644 --- a/product_docs/docs/pgd/5.8/appusage/table-access-methods.mdx +++ b/product_docs/docs/pgd/6/reference/appusage/table-access-methods.mdx @@ -6,7 +6,7 @@ navTitle: Table access methods The [EDB Advanced Storage Pack](/pg_extensions/advanced_storage_pack/) provides a selection of table access methods (TAMs), available from EDB Postgres 15.0. -The following TAMs were certified for use with PGD 5.0: +The following TAMs were certified for use with PGD 6.0: * [Autocluster](/pg_extensions/advanced_storage_pack/#autocluster) * [Refdata](/pg_extensions/advanced_storage_pack/#refdata) diff --git a/product_docs/docs/pgd/5.8/appusage/timing.mdx b/product_docs/docs/pgd/6/reference/appusage/timing.mdx similarity index 85% rename from product_docs/docs/pgd/5.8/appusage/timing.mdx rename to product_docs/docs/pgd/6/reference/appusage/timing.mdx index 62300ef3c8f..eb84cb10354 100644 --- a/product_docs/docs/pgd/5.8/appusage/timing.mdx +++ b/product_docs/docs/pgd/6/reference/appusage/timing.mdx @@ -8,7 +8,7 @@ possible for a client connected to multiple PGD nodes or switching between them to read stale data. A [queue wait -function](/pgd/latest/reference/functions/#bdrwait_for_apply_queue) is provided +function](/pgd/latest/reference/tables-views-functions/functions/#bdrwait_for_apply_queue) is provided for clients or proxies to prevent such stale reads. The synchronous replication features of Postgres are available to PGD as well. diff --git a/product_docs/docs/pgd/5.8/scaling.mdx b/product_docs/docs/pgd/6/reference/autopartition.mdx similarity index 84% rename from product_docs/docs/pgd/5.8/scaling.mdx rename to product_docs/docs/pgd/6/reference/autopartition.mdx index 49bb625b580..5e4dd94928a 100644 --- a/product_docs/docs/pgd/5.8/scaling.mdx +++ b/product_docs/docs/pgd/6/reference/autopartition.mdx @@ -1,8 +1,7 @@ --- -title: PGD AutoPartition +title: AutoPartition in PGD +navTitle: AutoPartition description: How to use autopartitioning in PGD to split tables into several partitions. -redirects: - - ../bdr/scaling --- PGD AutoPartition allows you to split tables into several partitions. It lets @@ -19,7 +18,7 @@ your search_path, you need to schema qualify the name of each function. ## Auto creation of partitions -PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) +PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrautopartition) function to create or alter the definition of automatic range partitioning for a table. If no definition exists, it's created. Otherwise, later executions will alter the definition. @@ -31,34 +30,22 @@ table. Versions of PGD earlier than 5.5 don't support this feature and lock the An error is raised if the table isn't RANGE partitioned or a multi-column partition key is used. -By default, AutoPartition manages partitions globally. 
In other words, when a -partition is created on one node, the same partition is created on all other -nodes in the cluster. Using the default makes all partitions consistent and -guaranteed to be available. For this capability, AutoPartition makes use of -Raft. - -You can change this behavior by setting `managed_locally` to `true`. In that -case, all partitions are managed locally on each node. Managing partitions +By default, AutoPartition manages partitions locally. Managing partitions locally is useful when the partitioned table isn't a replicated table. In that case, you might not need or want to have all partitions on all nodes. For -example, the built-in -[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) +example, the built-in [`bdr.conflict_history`](/pgd/latest/reference/tables-views-functions/catalogs-visible#bdrconflict_history) table isn't a replicated table. It's managed by AutoPartition locally. Each node creates partitions for this table locally and drops them once they're old enough. Also consider: -- You can't later change tables marked as `managed_locally` to be managed -globally and vice versa. - -- Activities are performed only when the entry is marked `enabled = on`. +- Activities are performed only when the entry is marked `enabled = on`. -- We recommend that you don't manually create or drop partitions for tables +- We recommend that you don't manually create or drop partitions for tables managed by AutoPartition. Doing so can make the AutoPartition metadata inconsistent and might cause it to fail. - ## AutoPartition examples Daily partitions, keep data for one month: @@ -145,7 +132,7 @@ upper bound. ## Stopping automatic creation of partitions Use -[`bdr.drop_autopartition()`](/pgd/latest/reference/autopartition#bdrdrop_autopartition) +[`bdr.drop_autopartition()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrdrop_autopartition) to drop the autopartitioning rule for the given relation. All pending work items for the relation are deleted, and no new work items are created. @@ -155,7 +142,7 @@ Partition creation is an asynchronous process. AutoPartition provides a set of functions to wait for the partition to be created, locally or on all nodes. Use -[`bdr.autopartition_wait_for_partitions()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions) +[`bdr.autopartition_wait_for_partitions()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrautopartition_wait_for_partitions) to wait for the creation of partitions on the local node. The function takes the partitioned table name and a partition key column value and waits until the partition that holds that value is created. @@ -164,14 +151,14 @@ The function waits only for the partitions to be created locally. It doesn't guarantee that the partitions also exist on the remote nodes. To wait for the partition to be created on all PGD nodes, use the -[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes) +[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes) function. This function internally checks local as well as all remote nodes and waits until the partition is created everywhere. 
## Finding a partition Use the -[`bdr.autopartition_find_partition()`](/pgd/latest/reference/autopartition#bdrautopartition_find_partition) +[`bdr.autopartition_find_partition()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrautopartition_find_partition) function to find the partition for the given partition key value. If a partition to hold that value doesn't exist, then the function returns NULL. Otherwise it returns the Oid of the partition. @@ -179,10 +166,10 @@ of the partition. ## Enabling or disabling autopartitioning Use -[`bdr.autopartition_enable()`](/pgd/latest/reference/autopartition#bdrautopartition_enable) +[`bdr.autopartition_enable()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrautopartition_enable) to enable autopartitioning on the given table. If autopartitioning is already enabled, then no action occurs. Similarly, use -[`bdr.autopartition_disable()`](/pgd/latest/reference/autopartition#bdrautopartition_disable) +[`bdr.autopartition_disable()`](/pgd/latest/reference/tables-views-functions/autopartition#bdrautopartition_disable) to disable autopartitioning on the given table. ## Restrictions on EDB Postgres Advanced Server-native automatic partitioning diff --git a/product_docs/docs/pgd/5.8/backup.mdx b/product_docs/docs/pgd/6/reference/backup-restore.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/backup.mdx rename to product_docs/docs/pgd/6/reference/backup-restore.mdx index 6d5fdc5c381..7b28fa13769 100644 --- a/product_docs/docs/pgd/5.8/backup.mdx +++ b/product_docs/docs/pgd/6/reference/backup-restore.mdx @@ -1,13 +1,13 @@ --- title: Backup and recovery -description: Backup and recovery in PGD +description: Backup and recovery originalFilePath: backup.md redirects: - /bdr/latest/backup/ - /bdr/latest/monitoring/ - --- + PGD is designed to be a distributed, highly available system. If one or more nodes of a cluster are lost, the best way to replace them is to clone new nodes directly from the remaining nodes. @@ -15,8 +15,8 @@ is to clone new nodes directly from the remaining nodes. The role of backup and recovery in PGD is to provide for disaster recovery (DR), such as in the following situations: -- Loss of all nodes in the cluster -- Significant, uncorrectable data corruption across multiple nodes +- Loss of all nodes in the cluster +- Significant, uncorrectable data corruption across multiple nodes as a result of data corruption, application error, or security breach @@ -63,18 +63,18 @@ PostgreSQL node running the BDR extension. Consider these specific points when applying PostgreSQL backup techniques to PGD: -- PGD operates at the level of a single database, while a physical +- PGD operates at the level of a single database, while a physical backup includes all the databases in the instance. Plan your databases to allow them to be easily backed up and restored. -- Backups make a copy of just one node. In the simplest case, +- Backups make a copy of just one node. In the simplest case, every node has a copy of all data, so you need to back up only one node to capture all data. However, the goal of PGD isn't met if the site containing that single copy goes down, so the minimum is at least one node backup per site (with many copies, and so on). -- However, each node might have unreplicated local data, or the +- However, each node might have unreplicated local data, or the definition of replication sets might be complex so that all nodes don't subscribe to all replication sets. 
In these cases, backup planning must also include plans for how to back up any unreplicated @@ -129,7 +129,7 @@ replication origin. With PostgreSQL PITR, you can use the standard syntax: -``` +```text recovery_target_time = T1 ``` @@ -168,7 +168,7 @@ by `T1`, even though they weren't applied on `N1` until later. To request multi-origin PITR, use the standard syntax in the `postgresql.conf` file: -``` +```text recovery_target_time = T1 ``` @@ -176,13 +176,13 @@ You need to specify the list of replication origins that are restored to `T1` in You can use a separate `multi_recovery.conf` file by way of a new parameter, `recovery_target_origins`: -``` +```text recovery_target_origins = '*' ``` Or you can specify the origin subset as a list in `recovery_target_origins`: -``` +```text recovery_target_origins = '1,3' ``` @@ -237,7 +237,7 @@ of a single PGD node, optionally plus WAL archives: To clean up leftover PGD metadata: -1. Drop the PGD node using [`bdr.drop_node`](/pgd/latest/reference/functions-internal#bdrdrop_node). +1. Drop the PGD node using [`bdr.drop_node`](/pgd/latest/reference/tables-views-functions/functions-internal#bdrdrop_node). 2. Fully stop and restart PostgreSQL (important!). #### Cleanup of replication origins diff --git a/product_docs/docs/pgd/5.8/cdc-failover.mdx b/product_docs/docs/pgd/6/reference/cdc-failover.mdx similarity index 93% rename from product_docs/docs/pgd/5.8/cdc-failover.mdx rename to product_docs/docs/pgd/6/reference/cdc-failover.mdx index 67d55dd5afe..b83085b630e 100644 --- a/product_docs/docs/pgd/5.8/cdc-failover.mdx +++ b/product_docs/docs/pgd/6/reference/cdc-failover.mdx @@ -1,14 +1,10 @@ --- title: CDC Failover support navTitle: CDC Failover support -description: CDC Failover support (PGD Logical Slot Failover) with EDB Postgres Advanced Server and EDB Postgres Extended Server (PGD 5.7 and later only). +description: CDC Failover support (PGD Logical Slot Failover) with EDB Postgres Advanced Server and EDB Postgres Extended Server deepToC: true --- -!!!warning Availability -This is a PGD 5.7 and later feature. It is not supported on earlier versions of PGD. -!!! - ## Background Earlier versions of PGD have allowed the creation of logical replication slots on nodes that can provide a feed of the logical changes happening to the data in the database. These logical replication slots have been local to the node and not replicated. Apart from only replicating changes on the particular node, this behavior has presented challenges when faced with node failover in the cluster. In that scenario, a consumer of the logical replication off a node that fails has no replica of the slot on another node to continue consuming from. @@ -17,7 +13,7 @@ While solutions to this can be engineered using a subscriber-only node as an int ## CDC Failover support -To address this need, PGD 5.7 introduces CDC Failover support. This is an optionally enabled feature that activates automatic logical slot replication across the cluster. This, in turn, allows a consumer of a logical slot’s replication to receive change data from any node when a failure occurs. +To address this need, PGD introduced CDC Failover support. This is an optionally enabled feature that activates automatic logical slot replication across the cluster. This, in turn, allows a consumer of a logical slot’s replication to receive change data from any node when a failure occurs. 
### How CDC Failover works @@ -43,7 +39,7 @@ Currently, there's no way to ensure exactly-once delivery, and we expect consumi ## Enabling CDC Failover support -To enable CDC Failover support run the SQL command and call the [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option) function with the following parameters: +To enable CDC Failover support run the SQL command and call the [`bdr.alter_node_group_option`](/pgd/latest/reference/tables-views-functions/nodes-management-interfaces#bdralter_node_group_option) function with the following parameters: ```sql select bdr.alter_node_group_option(, @@ -52,9 +48,9 @@ select bdr.alter_node_group_option(, ``` -Replace `` with the name of your cluster’s top-level group. If you don't know the name, it's the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group). +Replace `` with the name of your cluster’s top-level group. If you don't know the name, it's the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/latest/reference/tables-views-functions/catalogs-visible#bdrnode_group). -If you do not know the name, it is the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/latest/reference/catalogs-visible#bdrnode_group). You can also use: +If you do not know the name, it is the group with a node_group_parent_id equal to 0 in [`bdr.node_group`](/pgd/latest/reference/tables-views-functions/catalogs-visible#bdrnode_group). You can also use: ```sql SELECT bdr.alter_node_group_option( @@ -78,7 +74,7 @@ Logical replication slots created before the option was set to `global` aren't r Failover slots can also be created with the `CREATE_REPLICATION_SLOT` command on a replication connection. -The status of failover slots is tracked in the [`bdr.failover_replication_slots`](/pgd/latest/reference/catalogs-visible#bdrfailover_replication_slots) table. +The status of failover slots is tracked in the [`bdr.failover_replication_slots`](/pgd/latest/reference/tables-views-functions/catalogs-visible#bdrfailover_replication_slots) table. ## CDC Failover support with Postgres 17+ diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/assess/index.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/assess/index.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/cli/command_ref/assess/index.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/assess/index.mdx index 26c5dd16e85..a2089b2d556 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/assess/index.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/assess/index.mdx @@ -20,7 +20,7 @@ pgd assess [OPTIONS] The assess command has no command specific options. -See also [Global Options](/pgd/latest/cli/command_ref/#global-options). +See also [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). 
## Example diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/cluster/index.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/cluster/index.mdx similarity index 100% rename from product_docs/docs/pgd/5.8/cli/command_ref/cluster/index.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/cluster/index.mdx diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/cluster/show.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/cluster/show.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/cli/command_ref/cluster/show.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/cluster/show.mdx index 9bed9861c08..78d6ce28697 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/cluster/show.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/cluster/show.mdx @@ -26,7 +26,7 @@ The following table lists the options available for the `pgd cluster show` comma Only one of the above options can be specified at a time. -See also [Global Options](/pgd/latest/cli/command_ref/#global-options). +See also [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Clock Drift diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/cluster/verify.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/cluster/verify.mdx similarity index 96% rename from product_docs/docs/pgd/5.8/cli/command_ref/cluster/verify.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/cluster/verify.mdx index a260592fd72..de204fa84bf 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/cluster/verify.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/cluster/verify.mdx @@ -51,8 +51,6 @@ bdr.max_writers_per_subscription Ok bdr.raft_group_max_connections Ok bdr.replay_progress_frequency Ok bdr.role_replication Ok -bdr.standby_slot_names Ok -bdr.standby_slots_min_confirmed Ok bdr.start_workers Ok bdr.writers_per_subscription Ok bdr.xact_replication Ok diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/create.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/create.mdx similarity index 87% rename from product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/create.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/create.mdx index 4b89bcc32d2..8f0fcca7b56 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/create.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/create.mdx @@ -16,13 +16,13 @@ pgd commit-scope create [OPTIONS] [GROUP_NAME] Where `` is the name of the commit scope to create. -The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/latest/commit-scopes) and [Commit Scope Rules](/pgd/latest/commit-scopes/commit-scope-rules/) for more information on the rule syntax. +The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/latest/reference/commit-scopes) and [Commit Scope Rules](/pgd/latest/reference/commit-scopes/commit-scope-rules/) for more information on the rule syntax. The optional `[GROUP_NAME]` is the name of the group to which the commit scope belongs. If omitted, it defaults to the top-level group. ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. 
See [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/drop.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/drop.mdx similarity index 90% rename from product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/drop.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/drop.mdx index 96af78d4045..985f1e13f31 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/drop.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/drop.mdx @@ -20,7 +20,7 @@ The optional `[GROUP_NAME]` is the name of the group to which the commit scope b ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/index.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/index.mdx similarity index 100% rename from product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/index.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/index.mdx diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/show.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/show.mdx similarity index 92% rename from product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/show.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/show.mdx index 685df3d934d..37118920558 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/show.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/show.mdx @@ -18,7 +18,7 @@ Where `` is the name of the commit scope for which you want to dis ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Example diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/update.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/update.mdx similarity index 83% rename from product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/update.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/update.mdx index f7c3262c2ac..48b41a40ed8 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/commit-scope/update.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/commit-scope/update.mdx @@ -16,13 +16,13 @@ pgd commit-scope update [OPTIONS] [GROUP_NAME] Where `` is the name of the commit scope to update. -The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/latest/commit-scopes) and [Commit Scope Rules](/pgd/latest/commit-scopes/commit-scope-rules/) for more information on the rule syntax. +The `` is the rule that defines the commit scope. The rule specifies the conditions that must be met for a transaction to be considered committed. See [Commit Scopes](/pgd/latest/reference/commit-scopes) and [Commit Scope Rules](/pgd/latest/reference/commit-scopes/commit-scope-rules/) for more information on the rule syntax. The optional `[GROUP_NAME]` is the name of the group to which the commit scope belongs. If omitted, it defaults to the top-level group. ## Options -No command specific options. 
See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Examples diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/completion/index.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/completion/index.mdx similarity index 86% rename from product_docs/docs/pgd/5.8/cli/command_ref/completion/index.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/completion/index.mdx index 4e31162616a..a9cd841f854 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/completion/index.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/completion/index.mdx @@ -19,7 +19,7 @@ Possible values for shell are `bash`, `fish`, `zsh` and `powershell`. ## Options -No command specific options. See [Global Options](/pgd/latest/cli/command_ref/#global-options). +No command specific options. See [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Example diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/events/index.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/events/index.mdx similarity index 100% rename from product_docs/docs/pgd/5.8/cli/command_ref/events/index.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/events/index.mdx diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/events/show.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/events/show.mdx similarity index 98% rename from product_docs/docs/pgd/5.8/cli/command_ref/events/show.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/events/show.mdx index 4792221be77..c3253b6cde3 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/events/show.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/events/show.mdx @@ -24,7 +24,7 @@ The following table lists the options available for the `pgd events show` comman | | `--group ` | Only show events for the group with the specified name. | | `-n` |`--limit ` | Limit the number of events to show. Defaults to 20. | -See also [Global Options](/pgd/latest/cli/command_ref/#global-options). +See also [Global Options](/pgd/latest/reference/cli/command_ref/#global-options). ## Node States diff --git a/product_docs/docs/pgd/5.8/cli/command_ref/group/get-option.mdx b/product_docs/docs/pgd/6/reference/cli/command_ref/group/get-option.mdx similarity index 72% rename from product_docs/docs/pgd/5.8/cli/command_ref/group/get-option.mdx rename to product_docs/docs/pgd/6/reference/cli/command_ref/group/get-option.mdx index 0ba52568cf6..d4e1e1cce36 100644 --- a/product_docs/docs/pgd/5.8/cli/command_ref/group/get-option.mdx +++ b/product_docs/docs/pgd/6/reference/cli/command_ref/group/get-option.mdx @@ -25,7 +25,7 @@ And `
[Flattened table: a compatibility matrix of the commit scope kinds (Group Commit, CAMO, Lag Control, Synchronous Commit), with disallowed combinations marked ⛔︎.]
+ + + + + + + + + + + + + +
Description | Addresses
Built-in connection manager

A new built-in connection manager handles routing of connections automatically and allows enforcing read-only connections to non-leader nodes.

+
CLI cluster setup

The PGD CLI now allows initial cluster setup, as well as adding nodes, from the command line using the pgd node setup command.

+
Set sequence kind on group create/join

Sequences are now transformed to the distributed kind, based on the bdr.default_sequence_kind GUC, when creating or joining a BDR group, instead of when creating the node as in older versions.

+
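A minimal sketch of how this behaves, assuming a standard PGD SQL session; the DSN and group name are placeholders:

```sql
-- Choose the distributed sequence kind before creating or joining a
-- group; sequences are transformed at group create/join time rather
-- than at node-creation time.
SET bdr.default_sequence_kind = 'galloc';
SELECT bdr.join_node_group('host=node-1 dbname=appdb', 'dc1_group');
```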
Set startvalue for distributed sequences automatically

Set the startvalue for galloc sequences to the next valid number after the last one used by the local sequence. With this change, when creating distributed sequences, and specifically galloc sequences, there's no need to adjust the startvalue based on what might already be in use.

+
Enabling of automatic sync and reconciliation

Automatic sync and reconciliation of nodes is now available and enabled by default. See the related entry on bdr.enable_auto_sync_reconcile in the Enhancements section for details.

+
Add node_uuid column to bdr.node and bdr.local_node

The node_uuid uniquely identifies an instance of a node of a given name. A random node_uuid is generated when the node is created and remains constant for the lifetime of the node. The node_id column is now derived from node_uuid instead of the node name.

+

For the time being, a node needs to be fully parted before a node of the same name can be rejoined. This may be relaxed in future releases to permit a rejoin as soon as the part_node process for the old instance has commenced, before it has completed.

+

For the time being, upgrades from older PGD versions and mixed-version operation in clusters with older PGD nodes aren't supported. This limitation will be addressed in future releases.

+
Change replication origin and slot naming scheme

Replication origin and slot names now use the node UUID and thus correspond to a particular incarnation of a node of a given name. Similarly, the node group UUID is used instead of the group name, and a hash of the database name is used in lieu of the database name.

+

Note that origin and node names should be treated as opaque identifiers from the user's perspective. Don't rely on the structure of these names or expect them to be particularly meaningful to a human operator.

+

The new naming scheme is as follows:

+

Slots naming convention:

* normal slot to a node => bdr_node_<targetuuid>_<dbhash>
* join slot for node => bdr_node_<targetuuid>_<dbhash>_tmp
* group slot for a topgroup => bdr_group_<topgroupuuid>_<dbhash>
* slot for any forwarding + lead to lead => bdr_node_<targetuuid>_<originidhex>_<dbhash>
* analytics slot => bdr_analytics_<groupuuid>_<dbhash>
* decoding slot => bdr_decoder_<topgroupuuid>_<dbhash>

Origins naming convention:

* normal origin to a node => bdr_<originuuid>_<dbhash>
* fwd origin to a source node => bdr_<originuuid>_<sourceoidhex>_<dbhash>
Limit on the number of node groups allowed in the system for PGD Essential.

Ensure that no more than three node groups (one top group and two subgroups) can exist at any given time. If the limit is exceeded, an error is raised.

+
Enforced PGD Essential limits - data node count

Don't allow PGD Essential clusters to join more than 4 data nodes.

+
Added the bdr.wait_node_confirm_lsn() function, which waits until a given node reaches a given LSN

bdr.wait_node_confirm_lsn() looks at the confirmed_flush_lsn of the given node when available. Otherwise, it queries pg_replication_origin_progress() for that node and waits for the specified LSN to be reached by that node.

+
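A hypothetical usage sketch; the argument names and exact types are assumptions, not confirmed signatures:

```sql
-- Wait until node 'node_b' has confirmed replay up to the given LSN.
SELECT bdr.wait_node_confirm_lsn('node_b', '0/6000028');
```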
Subscriber-only nodes can now be added to data node groups

In previous versions, subscriber-only nodes could only be added to node groups of type "subscriber-only". In PGD 6, a subscriber-only node can also be added to a data node group by specifying node_kind='subscriber_only' when using create_node. The join_node_group call can then be made using a data node group.

+
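A sketch of the flow this entry describes, with placeholder node names and DSNs:

```sql
-- On the new node, create it as subscriber-only...
SELECT bdr.create_node(
    node_name := 'so_node_1',
    local_dsn := 'host=so-node-1 dbname=appdb',
    node_kind := 'subscriber_only'
);
-- ...then join it to an existing data node group.
SELECT bdr.join_node_group(
    join_target_dsn := 'host=data-node-1 dbname=appdb',
    node_group_name := 'dc1_group'
);
```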
Add bdr.local_analytics_slot_name() SQL function.

Returns the name of the analytics slot. This merely produces the correct name, irrespective of whether the analytics feature is in use.

+
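Usage is a plain function call, for example:

```sql
-- Returns the computed analytics slot name for the local node,
-- regardless of whether the analytics feature is in use.
SELECT bdr.local_analytics_slot_name();
```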
Add node_uuid column to bdr.node_summary view.

Added to complement the addition of the node_uuid column to bdr.node and bdr.local_node.

+
+ + +## Enhancements + + + + + + + + + + + + + + + + + + + + + + + +
Description | Addresses
Multiple conflicting rows resolution

Both the pk_exists and multiple_unique_conflicts conflict types can now resolve more than one conflicting row by removing any old rows that are part of the conflict. The multiple_unique_conflicts type now defaults to the update_if_newer resolver, so it no longer throws an error by default.

+
Improved bdr.stat_activity view

The backend_type column now shows a consistent worker type for PGD workers, without the extra process identification. The wait_event_type and wait_event columns now include more wait events, instead of showing "extension" for some events. Also, connection-management-related columns were added to show the real client address/port and whether the session is read-only.

+
The PARTED node is removed automatically from all nodes in the cluster.

From PGD 6.0.0, bdr.part_node functionality is enhanced to remove the parted node’s metadata automatically from all nodes in the cluster.

+
* For the local node, it removes all the node metadata, including information about remote nodes.
* For a remote node, it removes only the metadata for that specific node.

Hence, with this release:

* A node remains in the PART_CLEANUP state until the group slots of all nodes have caught up with all the transactions originating from the PARTED node.
* A node doesn't stay in the PARTED state, as the node is removed as soon as it moves to the PARTED state.
The --summary and --options flags for pgd node show CLI command.

Add the --summary and --options flags to the pgd node show command to filter its output. This also maintains symmetry with other show commands.

+
More GUCs verified in the pgd cluster verify CLI command.

Add the bdr.lock_table_locking and bdr.truncate_locking GUCs to the list of GUCs verified by the pgd cluster verify command.

+
Table rewriting ALTER TABLE... ALTER COLUMN calls are now supported.

Changing a column's type with a command that rewrites the whole table, where the change isn't binary coercible, is now supported:

+
CREATE TABLE foo (c1 int, c2 int, c3 int, c4 box, UNIQUE(c1, c2) INCLUDE(c3, c4));
ALTER TABLE foo ALTER c1 TYPE bigint; -- results in a table rewrite
+
+

This also includes support for ALTER TYPE when using the USING clause:

+
CREATE TABLE foo (id serial primary key, data text);
ALTER TABLE foo ALTER data TYPE BYTEA USING data::bytea;
+
+

Table rewrites can hold an AccessExclusiveLock for extended periods on larger tables.

+
Restrictions on non-immutable ALTER TABLE... ADD COLUMN calls have been removed.

The restrictions on non-immutable ALTER TABLE... ADD COLUMN calls have been removed.

+
Synchronize roles and tablespaces during logical join

Roles and tablespaces are now synchronized before the schema is restored from the join source node. If roles, tablespaces, or EPAS profiles already exist, they're updated to have the same settings, passwords, and so on as the ones from the join source node. System roles (that is, the ones created by initdb) aren't synchronized.

+
Introduce bdr.node_group_config_summary view

The new bdr.node_group_config_summary view contains detailed information about group options, including the effective value, the source of the effective value, the default value, whether the value can be inherited, and so on. This is in a similar spirit to pg_settings.

+
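A minimal way to inspect the view; selecting all columns avoids assuming names beyond those described above:

```sql
-- Show effective group option values and where each value comes from.
SELECT * FROM bdr.node_group_config_summary;
```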
Leader DML lock

A new lock type, the leader DML lock, is used by default for locking DDL statements that need to block DML. This lock locks on write leaders only, not requiring all nodes to participate in the locking operation. The old behavior can be restored by adjusting the bdr.ddl_locking configuration parameter.

+
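A sketch of restoring the old behavior, assuming the existing all value of bdr.ddl_locking requests cluster-wide locking:

```sql
-- Revert from the leader DML lock to global DDL locking for this session.
SET bdr.ddl_locking = 'all';
```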
Disabling bdr.xact_replication in run_on_* functions

The functions run_on_nodes, run_on_all_nodes, and run_on_group now set bdr.xact_replication to off by default.

+
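One way to observe the new default from SQL; bdr.run_on_all_nodes() takes a query string and returns the per-node results:

```sql
-- Each node is expected to report 'off' for bdr.xact_replication.
SELECT bdr.run_on_all_nodes($$
    SELECT current_setting('bdr.xact_replication')
$$);
```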
Replica Identity full by default

The auto value for bdr.default_replica_identity changed to REPLICA IDENTITY FULL. This setting prevents some edge cases in conflict detection between inserts, updates, and deletes across node crashes and recovery.

+

When the PGD group is created and the database of the initial PGD node isn't empty (that is, it has some tables with data), the REPLICA IDENTITY of all tables is set according to bdr.default_replica_identity.

+
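A sketch for checking the effective default and opting a single table out with standard Postgres syntax; my_table is a placeholder:

```sql
SHOW bdr.default_replica_identity;              -- 'auto' now implies FULL
ALTER TABLE my_table REPLICA IDENTITY DEFAULT;  -- per-table override
```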
Tablespace replication as a DDL operation is supported.

The tablespace operations CREATE/ALTER/DROP TABLESPACE are now replicated as DDL operations. If you're running a configuration with multiple nodes on the same machine, you need to enable the developer option allow_in_place_tablespaces.

+
Improve the CLI debug messages.

Improve the formatting of the log messages to be more readable and symmetrical with Postgres log messages.

+
New column for pgd cluster verify --settings CLI command output.

Add the recommended_value column to the result of the pgd cluster verify --settings command. The column isn't displayed in tabular output but is displayed in JSON output.

+
Display sorted output for CLI.

The output for commands with tabular output is now sorted by the resource name. Commands that display more than one resource sort output by each resource column in order.

+
Subscriber-only nodes replication.

Subscriber-only nodes now receive data only after it has been replicated to a majority of data nodes. This doesn't require any special configuration. Consequently, the bdr.standby_slot_names and bdr.standby_slots_min_confirmed options are removed, as similar physical standby functionality is provided by the pg_failover_slots extension and in PG17+.

+
Automatic node sync and reconciliation is enabled by default.

The GUC bdr.enable_auto_sync_reconcile was off by default but is now on by default in 6.0. This setting ensures that when a node is down for some time, all other nodes automatically catch up equally with respect to that node. It also ensures that any prepared transactions orphaned by the node going down are resolved: either aborted or committed, per the rules of the commit scope that created them.

+
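To confirm the new default on a node, a plain GUC check suffices:

```sql
SHOW bdr.enable_auto_sync_reconcile;  -- expected: on (the 6.0 default)
```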
Remove the deprecated legacy CLI commands.

Remove the old (PGD 5 and below) CLI commands, which were deprecated but supported for backward compatibility.

+
Commit scope logic is now only run on data nodes.

Previously, non-data nodes would attempt to handle, but not process, commit scope logic, which could lead to confusing, albeit harmless, log messages.

+
Explicitly log the start and stop of dump and restore operations.

This provides greater visibility into the node cloning process and assists with debugging possible issues.

+
+ + +## Changes + + + + + + +
Description | Addresses
Routing is now enabled by default on subgroups

Routing (and, by extension, Raft) is now enabled by default on data groups (subgroups with data nodes).

+
Function bdr.join_node_group may no longer be executed in a transaction.

As it isn't possible to roll back a group join, it cannot form part of an idempotent transaction.

+
Deprecated pause_in_standby parameter removed from function bdr.join_node_group().

pause_in_standby has been deprecated since PGD 5.0.0. Logical standby nodes should be specified as such when executing bdr.create_node().

+
BDR global sequences can no longer be created as or set to UNLOGGED

Unlogged BDR sequences may display unexpected behaviour following a server crash. Existing unlogged BDR sequences may be converted to logged ones.

+
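A sketch of converting an existing unlogged sequence, assuming the Postgres 15+ ALTER SEQUENCE ... SET LOGGED syntax; my_seq is a placeholder:

```sql
ALTER SEQUENCE my_seq SET LOGGED;  -- convert an unlogged sequence to logged
```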
+ + +## Bug Fixes + + + + + + + + + + + + + + + +
Description | Addresses
Fix the CLI pgd cluster show command issues on a degraded cluster.

The pgd cluster show command failed with an error for clock drift if only one node was up and running in an N-node cluster. The command now returns valid output for the other components, health and summary, while reporting an appropriate error for clock drift.

+
Fix the CLI pgd node show command issue if a non-existent node is specified.

The pgd node show command crashed if a non-existent node was specified. The command now fails gracefully with an appropriate error message.

+
Fixed the timestamp parsing issue for pgd replication show CLI command.

The pgd replication show command previously crashed when formatting EPAS timestamps.

+
Fixed issue where parting node may belong to a non-existing group

When parting a given node, that node may have subscriptions whose origin was already parted and whose group was dropped. Previously this would break PGD. This has since been fixed.

+
num_writers should be positive or -1

The num_writers option, used in bdr.alter_node_group_option() and bdr.alter_node_group_config(), must be positive or -1.

+
Fix replication breakage with updates to non-unique indexes

Fixes the case where an update to a table with non-unique indexes results in the error "concurrent INSERT when looking for delete rows", which breaks replication.

+
43523,43802,45244,47815
Fix Raft leader election timeout/failure after upgrade

Ensure that any custom value set in the deprecated GUC bdr.raft_election_timeout is applied to the replacement bdr.raft_global_election_timeout.

+
Ensure that disabled subscriptions on subscriber-only nodes are not re-enabled

During subscription reconfiguration, if no change is required to a subscription, it isn't enabled, since it could have been disabled explicitly by the user. Reconfiguring subscriptions is skipped if there are no leadership changes.

+
46519
Subscriber-only nodes will not take a lock when running DDL

Subscriber-only nodes will no longer attempt to take a lock on the cluster when running DDL. The DDL will be executed locally and not replicated to other nodes.

+
47233
Fixed hang in database system shutdown.

Fixed non-transactional WAL message acknowledgment by downstream that could cause a WAL sender to never exit during fast database system shutdown.

+
49022
Fixed deadlock issue in bdr_init_physical.

Fixed deadlock between bdr_init_physical cleaning unwanted node data and concurrent monitoring queries.

+
46952
Fixed new cluster node consistency issue.

Fixed an issue where a new node joining the cluster finished the CATCHUP phase before getting its replication progress against all data nodes. This could cause the new node to be out of sync with the cluster.

+
Ensure correct sequence type is displayed in CREATE SEQUENCE warnings

In some cases, warning messages referred to timeshard when the sequence was actually snowflakeid.

+
+ + diff --git a/product_docs/docs/pgd/6/rel_notes/src/meta.yml b/product_docs/docs/pgd/6/rel_notes/src/meta.yml new file mode 100644 index 00000000000..18245a44d0b --- /dev/null +++ b/product_docs/docs/pgd/6/rel_notes/src/meta.yml @@ -0,0 +1,15 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/meta-schema.json + +product: EDB Postgres Distributed +shortname: pgd +title: EDB Postgres Distributed 6 release notes +description: Release notes for EDB Postgres Distributed 6 and later +intro: | + The EDB Postgres Distributed documentation describes the latest version of EDB Postgres Distributed 6, including minor releases and patches. The release notes provide information on what was new in each release. For new functionality introduced in a minor or patch release, the content also indicates the release that introduced the feature. +columns: +- 0: + label: Release Date + key: shortdate +- 1: + label: "EDB Postgres Distributed" + key: version-link diff --git a/product_docs/docs/pgd/6/rel_notes/src/relnote_6.0.1.yml b/product_docs/docs/pgd/6/rel_notes/src/relnote_6.0.1.yml new file mode 100644 index 00000000000..c79219c8547 --- /dev/null +++ b/product_docs/docs/pgd/6/rel_notes/src/relnote_6.0.1.yml @@ -0,0 +1,447 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/EnterpriseDB/docs/refs/heads/develop/tools/automation/generators/relgen/relnote-schema.json +product: EDB Postgres Distributed +version: 6.0.1 +date: 9 June 2025 +intro: | + PGD 6 delivers simpler, more resilient high availability for Postgres. Traditional streaming replication often requires downtime for upgrades and routine maintenance—and depends on complex tooling. PGD solves these challenges with a built-in, logical replication-based architecture that enables online upgrades and maintenance without disrupting applications, helping teams keep services running smoothly even during operational changes. It also provides seamless failover and eliminates the need for external proxies, load balancers, or consensus systems. +highlights: | + - **New built-in Connection Manager**: Automatically routes client connections to the correct node, simplifies application architecture, supports dynamic topology changes, and includes a built-in session pooler and dedicated read/write and read-only ports, all without external software or complex configuration. This new component replaces PGD Proxy, which is no longer available starting with PGD 6. + - **Predefined Commit Scopes**: Simplify consistency choices with built-in transaction durability profiles—no complicated setup needed. Choose the right balance of performance and protection, with scopes defined in system catalogs and ready to use out of the box. + - **New CLI command for Cluster Setup**: The [pgd node setup](/pgd/latest/reference/cli/command_ref/node/setup/) command now enables initial cluster creation and node addition directly from the command line. This gives users more flexibility in how they deploy PGD and allows deployment tools to standardize on a consistent method. +relnotes: +- relnote: Table rewriting `ALTER TABLE... ALTER COLUMN` calls are now supported. 
+ details: | + Changing a column's type command which causes the whole table to be rewritten and the change isn't binary coercible is now supported: + ```sql + CREATE TABLE foo (c1 int,c2 int, c3 int, c4 box, UNIQUE(c1, c2) INCLUDE(c3,c4)); + ALTER TABLE foo ALTER c1 TYPE bigint; – results into table rewrite + ``` + This also includes support for `ALTER TYPE` when using the `USING` clause: + ```sql + CREATE TABLE foo (id serial primary key,data text); + ALTER TABLE foo ALTER data TYPE BYTEA USING data::bytea; + ``` + Table rewrites can hold an AccessExclusiveLock for extended periods on larger tables. + jira: BDR-5724 + addresses: "" + type: Enhancement + impact: Medium + +- relnote: Restrictions on non-immutable `ALTER TABLE... ADD COLUMN` calls have been removed. + details: | + The restrictions on non-immutable `ALTER TABLE... ADD COLUMN` calls have been removed. + jira: BDR-5395 + addresses: "" + type: Enhancement + impact: Medium + +- relnote: Set sequence kind on group create/join + details: | + Transform the sequences in distributed based on the `bdr.default_sequence_kind` GUC when creating/joining a bdr group instead of when creating the node as done in older versions. + jira: BDR-5972 + type: Feature + impact: High +- relnote: Set startvalue for distributed sequences automatically + details: | + Set the startvalue for galloc sequences to the following valid number after the last used by the local sequence. With this change, when creating distributed sequences and specifically galloc, there is no need to adjust the startvalue based on what might be already used. + jira: BDR-5972 + type: Feature + impact: High + +- relnote: Synchronize roles and tablespaces during logical join + details: | + Roles and tablespaces are now synchronized before the schema is restored from + the join source node. If there are already existing roles or tablespaces (or EPAS + profiles, they will be updated to have the same settings, passwords etc. as the + ones from the join source node. + System roles (i.e. the ones created by initdb) are not synchronized. + jira: BDR-5976 + type: Enhancement + impact: Medium + +- relnote: Limit on the number of node groups allowed in the system for PGD Essential. + details: | + Ensure that no more than three node groups (one top group and two subgroups) can exist at any given time. If the limit is exceeded, an error is raised. + jira: BDR-6215 + type: Feature + impact: Medium + +- relnote: Enforced PGD Essential limits - data node count + details: | + Don't allow PGD Essential clusters to join more than 4 data nodes. + jira: BDR-6213 + type: Feature + impact: Medium + +- relnote: Routing is now enabled by default on subgroups + details: | + Routing (and by extension raft) is now enabled by default on data-groups (subgroups with data nodes). + jira: BDR-4956 + type: Change + impact: Medium + +- relnote: Fixed issue where parting node may belong to a non-existing group + details: | + When parting a given node, that same node may have subscriptions whose + origin was already parted and the group dropped. Previously this would break PGD, and has since been fixed. + jira: BDR-5461 + type: Bug fix + impact: Medium + +- relnote: Multiple conflicting rows resolution + details: | + Both `pk_exists` and `multiple_unique_conflicts` conflict types can now resolve more than one conflicting row by removing any old rows that are part of the conflict. The `multiple_unique_conflicts` now defaults to `update_if_newer` resolver, so it does not throw error by default anymore. 
+ jira: BDR-6336 + type: Enhancement + impact: Highest + +- relnote: num_writers should be positive or -1 + details: | + The num_writers option, used in bdr.alter_node_group_option() and bdr.alter_node_group_config() should be positive or -1. + jira: BDR-6294 + type: Bug fix + impact: Medium + +- relnote: Introduce `bdr.node_group_config_summary` view + details: | + The new `bdr.node_group_config_summary` view contains detailed information about group options, including effective value, source of the effective value, default value, whether the value can be inherited, etc. This is in similar spirit to `pg_settings` + jira: BDR-4696 + type: Enhancement + impact: Medium + +- relnote: Added `bdr.wait_node_confirm_lsn()` function which waits until a given reaches a given LSN + details: | + `bdr.wait_node_confirm_lsn(`) will look at the confirmed_flush_lsn of the given node when available, otherwise it will query `pg_replication_origin_progress()` of that node, and wait for the specified LSN to be reached by said node. + jira: BDR-5200 + type: Feature + impact: Medium + +- relnote: Improved `bdr.stat_activity` view + details: | + The `backend_type` now shows consistent worker type for PGD workers without the extra process identification. The `wait_event_type` and `wait_event` include more wait events now, instead of showing "extension" for some events. Also, connection management related columns are added to show real client address/port and whether the session is read-only. + jira: BDR-4833, BDR-743 + type: Enhancement + impact: Highest + +- relnote: Leader DML lock + details: | + New lock type leader DML lock is used by default for locking DDL statements that need to block DML. This lock locks on write-leaders only, no requiring all nodes to participate in the locking operation. Old behavior can be restored by adjusting `bdr.ddl_locking` configuration parameter. + jira: BDR-6216 + type: Enhancement + impact: Medium + +- relnote: Built-in connection manager + details: | + New built-in connection manager which handles routing of connections automatically and allows enforcing of read-only connections to non-leader. + jira: BDR-6260 + type: Feature + impact: Highest + +- relnote: CLI cluster setup + details: | + The PGD CLI now allows initial cluster setup as well as adding nodes from command-line using `pgd node setup` command. + jira: BDR-5727 + type: Feature + impact: Highest + +- relnote: Disabling bdr.xact_replication in run_on_* functions + details: | + Functions `run_on_nodes`, `run_on_all_nodes` and `run_on_group` now sets `bdr.xact_replication` to `off` by default. + jira: BDR-1331 + type: Enhancement + impact: Medium + +- relnote: Replica Identity full by default + details: | + The `auto` value for `bdr.default_replica_identity` changed to + REPLICA IDENTITY FULL. This setting prevents some edge cases in + conflict detection between inserts, updates and deletes across node + crashes and recovery. + + When the PGD group is created and the database of the initial PGD node is not empty (i.e. has some tables with data) the REPLICA IDENTITY of all tables will be set according to `bdr.default_replica_identity`. + jira: BDR-5977 + type: Enhancement + impact: Medium + +- relnote: The PARTED node is removed automatically from all nodes in the cluster. + details: | + From PGD 6.0.0, bdr.part_node functionality is enhanced to remove the parted node’s metadata automatically from all nodes in the cluster. 
+ - For local node, it will remove all the node metadata, including information about remote nodes. + - For remote node, it removes only metadata for that specific node. + Hence with this release + - A node will remain in PART_CLEANUP state till group slots of all nodes are caught up to all the transactions originating from the PARTED node + - A node will not remain in PARTED state as the node is removed as soon as it moves to PARTED state. + + jira: BDR-5975 + type: Enhancement + impact: High + +- relnote: Enabling of automatic sync and reconciliation + details: | + Link to a detailed google doc is provided below + jira: BDR-4798 + type: Feature + impact: High + +- relnote: Subscriber-only nodes can now be added to data node groups + details: | + In previous versions, subscriber-only nodes could only be added to node groups of type "subscriber-only". In PGD 6, a subscriber-only node can be also be added to a data node group by specifying node_kind='subscriber_only' when using create_node. The join_node_group can then be done using a data node group. + jira: BDR-6106 + type: Feature + impact: Medium + +- relnote: Add node_uuid column to bdr.node and bdr.local_node + details: | + The node_uuid uniquely identifies instance of a node of a given name. Random node_uuid is generated when node is created and remains constant for the lifetime of the node. The node_id column is now derived from node_uuid instead of node name. + + For the time being a node needs to be fully parted before before node of the same name can be rejoined, this may be relaxed in future releases to permit rejoin as soon as part_node process for the old instance has commenced and before it completed. + + For the time being upgrades from older PGD versions and mixed-version operation in clusters with older PGD nodes are not supported. This limitation will be addressed in future releases. + jira: BDR-6222 + type: Feature + impact: High + +- relnote: Change replication origin and slot naming scheme + details: | + Replication origin and slot names now use node uuid and thus correspond to particular incarnation of a node of a given name. Similarly node group uuid is used instead of group name. Hash of database name is used in lieu of database name. + + Please note that origin and node names should be treated as opaque identifiers from user's perspective, one shouldn't rely on the structure of these names nor expect these to be particularly meaningful to a human operator. + + The new naming scheme is as follows: + + #### Slots Naming Convention + + * normal slot to a node => `bdr_node__` + * join slot for node => `bdr_node___tmp` + * group slot for a topgroup => `bdr_group__` + * slot for any forwarding + lead to lead => `bdr_node___` + * analytics slot => `bdr_analytics__` + * decoding slot => `bdr_decoder__` + + #### Origins Naming Convention: + + * normal origin to a node => `bdr__` + * fwd origin to a source node => `bdr___` + jira: BDR-6157 + type: Feature + impact: High + +- relnote: Add `bdr.local_analytics_slot_name()` SQL function. + details: | + Returns name of analytics slot. This merely produces the correct name irrespective of whether analytics feature is in use. + jira: BDR-6469 + type: Feature + impact: Low + +- relnote: Add node_uuid column to `bdr.node_summary` view. + details: | + Added to complement the addition of the node_uuid column to bdr.node and bdr.local_node + jira: BDR-6478 + type: Feature + impact: Low + +- relnote: Tablespace replication as a DDL operation is supported. 
+ details: | + Tablespace operations `CREATE/ALTER/DROP TABLESPACE` are now replicated as a DDL operation. Where users are + running a configuration with multiple nodes on the same machine, you will need to enable the developer option [`allow_in_place_tablespace`](https://www.postgresql.org/docs/current/runtime-config-developer.html#GUC-ALLOW-IN-PLACE-TABLESPACES). + jira: BDR-5401 + type: Enhancement + impact: Medium + +- relnote: Remove the deprecated legacy CLI commands. + details: | + Remove the old (PGD 5 and below) CLI commands, which were deprecated but supported for backward compatibility. + jira: BDR-6333 + type: Enhancement + impact: Low + +- relnote: Improve the CLI debug messages. + details: | + Improve the formating of the log messages to be more readable and symmetrical with Postgres log messages. + jira: BDR-6101 + type: Enhancement + impact: Medium + +- relnote: The `--summary` and `--options` flags for `pgd node show` CLI command. + details: | + Add the `--summary` and `--options` flags to `pgd node show` command to filter the output of the `pgd node show` command. + This also maintains symmetry with other `show` commands. + jira: BDR-6145 + type: Enhancement + impact: High + +- relnote: More GUCs verfied in `pgd cluster verify` CLI command. + details: | + Add the `bdr.lock_table_locking` and `bdr.truncate_locking` GUCs to list of GUCs verfied in `pgd cluster verify` command. + jira: BDR-5308 + type: Enhancement + impact: High + +- relnote: New column for `pgd cluster verify --settings` CLI command output. + details: | + Add the `recommended_value` column to the result of the `pgd cluster verify --settings` command. + The column will not be displayed in tabular output but will be displayed in JSON output. + jira: BDR-5308 + type: Enhancement + impact: Medium + +- relnote: Display sorted output for CLI. + details: | + The output for the commands with tabular output are now sorted by the resource name. + Commands that display more than one resource will sort output by each resource column in order. + jira: BDR-6094 + type: Enhancement + impact: Medium + +- relnote: Fix the CLI `pgd cluster show` command issues on a degraded cluster. + details: | + The `pgd cluster show` command failed with an error for clock drift if only one node was up and running in a N node cluster. + The command now returns valid output for the other components, `health` and `summary`, while reporting an appropriate error for `clock-drift`. + jira: BDR-6135 + type: Bug Fix + impact: High + +- relnote: Fix the CLI `pgd node show` command issue if a non-existent node is specified. + details: | + The `pgd node show` command crashed if a non-existent node is specified to the command. + The command is fixed to fail gracefully with appropriate error message. + jira: BDR-6292 + type: Bug Fix + impact: High + +- relnote: Commit scope logic is now only run on data nodes. + details: | + Previously, non-data nodes would attempt to handle, but not process commit scope logic, which could lead to confusing, albeit harmless log messages. + jira: BDR-6325 + type: Enhancement + impact: Low + +- relnote: Explicitly log the start and stop of dump and restore operations. + details: | + This provides greater visibility into the node cloning process and assists with debugging possible issues. + jira: BDR-4501 + type: Enhancement + impact: Low + +- relnote: Function `bdr.join_node_group` may no longer be executed in a transaction. 
+ details: | + As it is not possible to roll back a group join, it can not form part of an idempotent transaction. + jira: BDR-6337 + type: Change + impact: Low + +- relnote: Deprecated `pause_in_standby` parameter removed from function `bdr.join_node_group()`. + details: | + `pause_in_standby` has been deprecated since PGD 5.0.0. Logical standby nodes should be specified as such when executing `bdr.create_node()` + jira: BDR-6385 + type: Change + impact: Low + +- relnote: BDR global sequences can no longer created as or set to `UNLOGGED` + details: | + Unlogged BDR sequences may display unexpected behaviour following a server crash. Existing unlogged BDR sequences may be converted to logged ones. + jira: BDR-6103 + type: Change + impact: Low + +- relnote: Subscriber-only nodes replication. + component: BDR + details: | + Subscriber-only nodes now receive data only after it has been replicated to majority of data nodes. This does not require any special configuration. Subsequently bdr.standby_slot_names and bdr.standby_slots_min_confirmed options are removed as similar physical standby functionality is provided in pg_failover_slots extension and in PG17+. + jira: BDR-5961 + addresses: "" + type: Enhancement + impact: Medium + +- relnote: Fixed deadlock issue in bdr_init_physical. + component: BDR + details: | + Fixed deadlock between bdr_init_physical cleaning unwanted node data and concurrent monitoring queries. + jira: BDR-6313 + addresses: 46952 + type: Bug Fix + impact: Low + +- relnote: Fixed new cluster node consistency issue. + component: BDR + details: | + Fixed an issue when new node joining the cluster finishes CATCHUP phase before getting its replication progress against all data nodes. This may cause new node being out of sync with the cluster. + jira: BDR-5961 + addresses: "" + type: Bug Fix + impact: Low + +- relnote: Fixed the timestamp parsing issue for `pgd replication show` CLI command. + details: | + The `pgd replication show` command previously crashed when formatting EPAS timestamps. + jira: BDR-6347 + type: Bug Fix + impact: High + +- relnote: Fix replication breakage with updates to non-unique indexes + component: BDR + details: | + Fixes the case where an update to a table with non-unique indexes results in the ERROR + `concurrent INSERT when looking for delete rows`, which breaks replication. + jira: BDR-5811 + addresses: "43523,43802,45244,47815" + type: Bug Fix + impact: Medium + +- relnote: Ensure correct sequence type is displayed in CREATE SEQUENCE warnings + component: BDR + details: | + In some cases, warning messages referred to `timeshard` when the sequence + was actually `snowflakeid`. + jira: BDR-6266 + addresses: "" + type: Bug Fix + impact: Low + +- relnote: Fix Raft leader election timeout/failure after upgrade + component: BDR + details: | + Ensure that any custom value set in the deprecated GUC `bdr.raft_election_timeout` + is applied to the replacement `bdr.raft_global_election_timeout` + jira: BDR-6068 + addresses: "" + type: Bug Fix + impact: Medium + +- relnote: Ensure that disables subscriptions on subscriber-only nodes are not re-enabled + component: BDR + details: | + During subscription reconfiguration, if there is no change required to a subscription, + do not enable it since it could have been disabled explicitly by the user. + Skip reconfiguring subscriptions if there are no leadership changes. 
+ jira: BDR-6270 + addresses: "46519" + type: Bug Fix + impact: Medium + +- relnote: Subscriber-only nodes will not take a lock when running DDL + details: | + Subscriber-only nodes will no longer attempt to take a lock on the cluster when running DDL. The DDL will be executed locally and not replicated to other nodes. + component: BDR + jira: BDR-3767 + addresses: "47233" + type: Bug Fix + impact: Medium + +- relnote: automatic node sync and reconciliation is enabled by default. + details: | + The GUC [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/tables-views-functions/pgd-settings#bdrenable_auto_sync_reconcile) was off by default, but is made on by default in 6.0. This GUC setting ensures that when a node is down for some time, all other nodes get caught up equally with respect to this node automatically. It also ensures that if there are any prepared transactions that are orphaned by the node going down, they are resolved, either aborted or committed as per the rules of the commit scope that created them. + component: BDR + jira: BDR-6115 + type: Enhancement + impact: Medium + +- relnote: Fixed hang in database system shutdown. + component: BDR + details: | + Fixed non-transactional WAL message acknowledgment by downstream that could cause a WAL sender to never exit during fast database system shutdown. + jira: BDR-6484 + addresses: 49022 + type: Bug Fix + impact: Medium + diff --git a/product_docs/docs/pgd/5.8/terminology.mdx b/product_docs/docs/pgd/6/terminology.mdx similarity index 90% rename from product_docs/docs/pgd/5.8/terminology.mdx rename to product_docs/docs/pgd/6/terminology.mdx index 12953ae6ca4..8d58021d7b4 100644 --- a/product_docs/docs/pgd/5.8/terminology.mdx +++ b/product_docs/docs/pgd/6/terminology.mdx @@ -29,7 +29,7 @@ How [Raft](#replicated-available-fault-tolerance-raft) makes group-wide decision Generically, a cluster is a group of multiple systems arranged to appear to end users as one system. See also [PGD cluster](#pgd-cluster) and [Postgres cluster](#postgres-cluster). -#### DDL (data definition language) +#### DDL (data definition language) The subset of SQL commands that deal with defining and managing the structure of a database. DDL statements can create, modify, and delete objects (that is, schemas, tables, and indexes) in the database. Common DDL commands are CREATE, ALTER, and DROP. @@ -63,9 +63,9 @@ A more efficient method of replicating changes in the database. While physical s #### Node -A general term for an element of a distributed system. A node can play host to any service. In PGD, [PGD nodes](#pgd-node) run a Postgres database, the BDR extension, and optionally a PGD Proxy service. +A general term for an element of a distributed system. A node can play host to any service. In PGD, [PGD nodes](#pgd-node) run a Postgres database, the BDR extension and the Connection Manager. -Typically, for high availability, each node runs on separate physical hardware, but that's not always the case. For example, a proxy might share a hardware node with a database. +Typically, for high availability, each node runs on separate physical hardware, but that's not always the case. #### Node groups @@ -73,11 +73,11 @@ PGD nodes in PGD clusters can be organized into groups to reflect the logical op #### PGD cluster -A group of multiple redundant database systems and proxies arranged to avoid single points of failure while appearing to end users as one system. 
+- relnote: Fixed hang in database system shutdown.
+  component: BDR
+  details: |
+    Fixed non-transactional WAL message acknowledgment by the downstream that could cause a WAL sender to never exit during fast database system shutdown.
+  jira: BDR-6484
+  addresses: 49022
+  type: Bug Fix
+  impact: Medium
+
diff --git a/product_docs/docs/pgd/5.8/terminology.mdx b/product_docs/docs/pgd/6/terminology.mdx
similarity index 90%
rename from product_docs/docs/pgd/5.8/terminology.mdx
rename to product_docs/docs/pgd/6/terminology.mdx
index 12953ae6ca4..8d58021d7b4 100644
--- a/product_docs/docs/pgd/5.8/terminology.mdx
+++ b/product_docs/docs/pgd/6/terminology.mdx
@@ -29,7 +29,7 @@ How [Raft](#replicated-available-fault-tolerance-raft) makes group-wide decision
 
 Generically, a cluster is a group of multiple systems arranged to appear to end users as one system. See also [PGD cluster](#pgd-cluster) and [Postgres cluster](#postgres-cluster).
 
-#### DDL (data definition language) 
+#### DDL (data definition language)
 
 The subset of SQL commands that deal with defining and managing the structure of a database. DDL statements can create, modify, and delete objects (that is, schemas, tables, and indexes) in the database. Common DDL commands are CREATE, ALTER, and DROP.
@@ -63,9 +63,9 @@ A more efficient method of replicating changes in the database. While physical s
 
 #### Node
 
-A general term for an element of a distributed system. A node can play host to any service. In PGD, [PGD nodes](#pgd-node) run a Postgres database, the BDR extension, and optionally a PGD Proxy service.
+A general term for an element of a distributed system. A node can play host to any service. In PGD, [PGD nodes](#pgd-node) run a Postgres database, the BDR extension, and the Connection Manager.
 
-Typically, for high availability, each node runs on separate physical hardware, but that's not always the case. For example, a proxy might share a hardware node with a database.
+Typically, for high availability, each node runs on separate physical hardware, but that's not always the case.
 
 #### Node groups
 
@@ -73,11 +73,11 @@ PGD nodes in PGD clusters can be organized into groups to reflect the logical op
 
 #### PGD cluster
 
-A group of multiple redundant database systems and proxies arranged to avoid single points of failure while appearing to end users as one system. PGD clusters can be run on Docker instances, cloud instances or “bare” Linux hosts, or a combination of those platforms. A PGD cluster can also include backup and proxy nodes. The data nodes in a cluster are grouped together in a top-level group and into various local [node groups](#node-groups).
+A group of multiple redundant database systems arranged to avoid single points of failure while appearing to end users as one system. PGD clusters can be run on Docker instances, cloud instances, or “bare” Linux hosts, or on a combination of those platforms. A PGD cluster can also include backup nodes. The data nodes in a cluster are grouped together in a top-level group and into various local [node groups](#node-groups).
 
 #### PGD node
 
-In a PGD cluster are nodes that run databases and participate in the PGD cluster. A typical PGD node runs a Postgres database, the BDR extension, and optionally a PGD Proxy service. PGD modes are also referred to as *data nodes*, which suggests they store data. However, some PGD nodes, specifically [witness nodes](#witness-nodes), don't do that.
+A PGD node is a node that runs a database and participates in the PGD cluster. A typical PGD node runs a Postgres database, the BDR extension, and the Connection Manager. PGD nodes are also referred to as *data nodes*, which suggests they store data. However, some PGD nodes, specifically [witness nodes](#witness-nodes), don't do that.
 
 #### Physical replication
 
@@ -90,7 +90,7 @@ Traditionally, in PostgreSQL, a number of databases running on a single server i
 
 #### Quorum
 
 A quorum is the minimum number of voting nodes needed to participate in a distributed vote. It ensures that the decision made has validity. For example,
-when a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote are needed. With a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes, 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. Quorums are required in PGD for [global locks](ddl/ddl-locking/) and Raft decisions.
+when a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote is needed. With a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes: 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. Quorums are required in PGD for [global locks](/pgd/latest/reference/ddl/ddl-locking/) and Raft decisions.
 
 #### Replicated available fault tolerance (Raft)
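The quorum arithmetic in this definition is plain integer division, which a one-line query can illustrate. A minimal sketch, purely for illustration:

```sql
-- Majority quorum for n voting nodes is n/2 + 1 (integer division).
SELECT n AS voting_nodes, n / 2 + 1 AS quorum
FROM (VALUES (2), (3), (4), (5)) AS t(n);
-- Yields 2 -> 2, 3 -> 2, 4 -> 3, 5 -> 3.
-- With only 2 voting nodes, the quorum is both nodes, so a single
-- failure blocks consensus.
```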
@@ -138,7 +138,7 @@ Witness nodes primarily serve to help the cluster establish a consensus. An odd
 
 #### Write leader
 
-In an Always-on architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader. Once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*.
+In an Always-on architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader. Once selected, the PGD Connection Manager routes queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*.
 
 #### Writer
diff --git a/product_docs/docs/pge/15/deploy_options.mdx b/product_docs/docs/pge/15/deploy_options.mdx
index 98e36bf3818..0dfe22e222d 100644
--- a/product_docs/docs/pge/15/deploy_options.mdx
+++ b/product_docs/docs/pge/15/deploy_options.mdx
@@ -9,6 +9,6 @@ The deployment options include:
 
 - [Installing](installing) on a virtual machine or physical server using native packages
 
-- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/)
+- Deploying it with [EDB Postgres Distributed](/pgd/latest/)
 
 - Deploying it on [EDB Postgres AI Cloud Service](/edb-postgres-ai/cloud-service/) with extreme high availability cluster types
diff --git a/product_docs/docs/pge/16/deploy_options.mdx b/product_docs/docs/pge/16/deploy_options.mdx
index 7499961f6ff..0f7d0c04ace 100644
--- a/product_docs/docs/pge/16/deploy_options.mdx
+++ b/product_docs/docs/pge/16/deploy_options.mdx
@@ -9,6 +9,6 @@ The deployment options include:
 
 - [Installing](installing) on a virtual machine or physical server using native packages
 
-- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/)
+- Deploying it with [EDB Postgres Distributed](/pgd/latest/)
 
 - Deploying it on [EDB Postgres AI Cloud Service](/edb-postgres-ai/cloud-service/) with extreme-high-availability cluster types
diff --git a/product_docs/docs/pge/17/deploy_options.mdx b/product_docs/docs/pge/17/deploy_options.mdx
index 7499961f6ff..0f7d0c04ace 100644
--- a/product_docs/docs/pge/17/deploy_options.mdx
+++ b/product_docs/docs/pge/17/deploy_options.mdx
@@ -9,6 +9,6 @@ The deployment options include:
 
 - [Installing](installing) on a virtual machine or physical server using native packages
 
-- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/)
+- Deploying it with [EDB Postgres Distributed](/pgd/latest/)
 
 - Deploying it on [EDB Postgres AI Cloud Service](/edb-postgres-ai/cloud-service/) with extreme-high-availability cluster types
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/node_joins.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/node_joins.mdx
index f1e97d4b5f1..622f42ce71c 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/node_joins.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/node_joins.mdx
@@ -9,14 +9,14 @@ joining:
 
 - Logical join
 
-  This method uses the [bdr.join_node_group](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function to integrate the new node into the PGD group.
+  This method uses the [bdr.join_node_group](/pgd/latest/reference/tables-views-functions/nodes-management-interfaces#bdrjoin_node_group) function to integrate the new node into the PGD group.
   It's important that the joining node doesn't contain any schemas or data present in the PGD group. We recommend that the new database contain only the BDR extension, as data synchronization occurs during the join.
 
 - Physical join
 
-  This method uses the [bdr_init_physical](/pgd/latest/reference/nodes/#bdr_init_physical) command to speed up the joining process. You can prepare data in advance before executing `bdr_init_physical`.
+  This method uses the [bdr_init_physical](/pgd/latest/reference/tables-views-functions/nodes/#bdr_init_physical) command to speed up the joining process. You can prepare data in advance before executing `bdr_init_physical`.
 
 For more information about join methods, see [Creating and joining PGD groups](/pgd/latest/node_management/creating_and_joining/#creating-and-joining-pgd-groups).
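To make the logical join described in the hunk above concrete, here's a minimal sketch of the two calls involved, run on the joining node. The node names, DSNs, and group name are placeholders, optional arguments are omitted, and the authoritative signatures are in the PGD reference pages linked in the diff.

```sql
-- Register the local node. 'node-c' and the DSN are placeholders.
SELECT bdr.create_node(
  node_name => 'node-c',
  local_dsn => 'host=node-c port=5432 dbname=pgddb'
);

-- Join an existing group through a node that's already a member.
SELECT bdr.join_node_group(
  join_target_dsn => 'host=node-a port=5432 dbname=pgddb',
  node_group_name => 'pgdgroup'
);
```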
diff --git a/product_docs/docs/tpa/23/reference/bdr.mdx b/product_docs/docs/tpa/23/reference/bdr.mdx
index dc8f3c0bf8b..95f4f3daf1c 100644
--- a/product_docs/docs/tpa/23/reference/bdr.mdx
+++ b/product_docs/docs/tpa/23/reference/bdr.mdx
@@ -168,7 +168,7 @@ is mentioned in `bdr_node_groups`), it will join that group instead of
 ### bdr_commit_scopes
 
 This is an optional list of
-[commit scopes](https://www.enterprisedb.com/docs/pgd/latest/reference/commit-scopes/)
+[commit scopes](https://www.enterprisedb.com/docs/pgd/latest/reference/tables-views-functions/commit-scopes/)
 that must exist in the PGD database (available for PGD 4.1 and above).
 
 ```yaml
diff --git a/src/styles/_docs.scss b/src/styles/_docs.scss
index 081759b2cf1..98b7910f321 100644
--- a/src/styles/_docs.scss
+++ b/src/styles/_docs.scss
@@ -134,7 +134,7 @@ h1, h2, h3, h4, h5, h6 {
   cursor: pointer;
 }
 
-.tabs__tab>label:has(input:checked) { @extend .active;}
+.tabs__tab>label:has(input:checked) { @extend .active; font-weight: bold; }
 
 .tabs__tab:focus-within {
   outline: 2px solid #333;