System functions v5
Perform PGD management primarily by using functions you call from SQL.
All functions in PGD are exposed in the `bdr` schema. Schema qualify any calls to these functions instead of putting `bdr` in the `search_path`.
Version information functions
bdr.bdr_version
This function retrieves the textual representation of the version of the BDR extension currently in use.
bdr.bdr_version_num
This function retrieves the version number of the BDR extension that is currently in use. Version numbers are monotonically increasing, allowing this value to be used for less-than and greater-than comparisons.
The following formula combines the major version, minor version, and patch release into a single numerical value:
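The encoding is consistent with values used elsewhere on this page (for example, `bdr.backwards_compatibility` set to 30618 corresponds to version 3.6.18):

```
MAJOR_VERSION * 10000 + MINOR_VERSION * 100 + PATCH_RELEASE
```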
System information functions
bdr.get_relation_stats
Returns the relation information.
bdr.get_subscription_stats
Returns the current subscription statistics.
System and progress information parameters
PGD exposes some parameters that you can query using `SHOW` in psql or using `PQparameterStatus` (or equivalent) from a client application.
bdr.local_node_id
When a session is initialized, this parameter is set to the node ID the client is connected to. This allows an application to figure out the node it's connected to, even behind a transparent proxy.
It's also used with Connection pools and proxies.
bdr.last_committed_lsn
After every `COMMIT` of an asynchronous transaction, this parameter is updated to point to the end of the commit record on the origin node. Combining it with `bdr.wait_for_apply_queue` allows applications to perform causal reads across multiple nodes, that is, to wait until a transaction becomes remotely visible.
transaction_id
As soon as Postgres assigns a transaction ID, if CAMO is enabled, this parameter is updated to show the transaction ID just assigned.
bdr.is_node_connected
Synopsis
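A hedged sketch of the signature; the argument name and type are assumptions inferred from the description:

```sql
bdr.is_node_connected(node_name name)
```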
Returns a Boolean result by checking whether the WAL sender for the given peer is active on this node.
bdr.is_node_ready
Synopsis
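A hedged sketch of the signature; the argument names and the optional `span` argument are assumptions inferred from the description:

```sql
bdr.is_node_ready(node_name name, span interval DEFAULT NULL)
```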
Returns a Boolean result by checking whether the lag is lower than the given span, or lower than the `timeout` for `TO ASYNC` otherwise.
Consensus function
bdr.consensus_disable
Disables the consensus worker on the local node until server restart or until it's reenabled using `bdr.consensus_enable` (whichever happens first).
Warning
Disabling consensus disables some features of PGD and affects availability of the EDB Postgres Distributed cluster if left disabled for a long time. Use this function only when working with Technical Support.
bdr.consensus_enable
Reenables the disabled consensus worker on the local node.
bdr.consensus_proto_version
Returns the consensus protocol version currently used by the local node. Needed by the PGD group reconfiguration internal mechanisms.
bdr.consensus_snapshot_export
Synopsis
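A hedged sketch of the signature; the optional version argument is an assumption based on the override behavior described below:

```sql
bdr.consensus_snapshot_export(version integer DEFAULT NULL)
```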
Generate a new PGD consensus snapshot from the currently committed-and-applied state of the local node and return it as bytea.
By default, a snapshot for the highest supported Raft version is exported, but you can override that by passing an explicit version number.
The exporting node doesn't have to be the current Raft leader, and it doesn't need to be completely up to date with the latest state on the leader. However, `bdr.consensus_snapshot_import()` might not accept such a snapshot.
The new snapshot isn't automatically stored to the local node's `bdr.local_consensus_snapshot` table. It's only returned to the caller.
The generated snapshot might be passed to `bdr.consensus_snapshot_import()` on any other nodes in the same PGD node group that are behind the exporting node's Raft log position.
The local PGD consensus worker must be disabled for this function to work. Typical usage is:
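A minimal sketch of that usage, assuming the snapshot is saved to a file with `\copy` (the file name is illustrative):

```sql
SELECT bdr.consensus_disable();
\copy (SELECT * FROM bdr.consensus_snapshot_export()) TO 'snapshot.data'
SELECT bdr.consensus_enable();
```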
While the PGD consensus worker is disabled:
- DDL locking attempts on the node fail or time out.
- galloc sequences don't get new values.
- Eager and CAMO transactions pause or error.
- Other functionality that needs the distributed consensus system is disrupted. The required downtime is generally very brief.
Depending on the use case, it might be practical to extract a snapshot that already exists from the `snapshot` field of the `bdr.local_consensus_snapshot` table and use that instead. Doing so doesn't require you to stop the consensus worker.
bdr.consensus_snapshot_import
Synopsis
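A hedged sketch of the signature; the parameter name is an assumption:

```sql
bdr.consensus_snapshot_import(snapshot bytea)
```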
Import a consensus snapshot that was exported by `bdr.consensus_snapshot_export()`, usually from another node in the same PGD node group.
It's also possible to use a snapshot extracted directly from the `snapshot` field of the `bdr.local_consensus_snapshot` table on another node.
This function is useful for resetting a PGD node's catalog state to a known good state in case of corruption or user error.
You can import the snapshot if the importing node's `apply_index` is less than or equal to the snapshot-exporting node's `commit_index` when the snapshot was generated. (See `bdr.get_raft_status()`.) A node that can't accept the snapshot because its log is already too far ahead raises an error and makes no changes. The imported snapshot doesn't have to be completely up to date, as once the snapshot is imported the node fetches the remaining changes from the current leader.
The PGD consensus worker must be disabled on the importing node for this function to work. See the notes on `bdr.consensus_snapshot_export()` for details.
It's possible to use this function to force the local node to generate a new Raft snapshot by running:
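A sketch of that call, combining the export and import functions described on this page (the consensus worker must be disabled first, as noted above):

```sql
SELECT bdr.consensus_snapshot_import(bdr.consensus_snapshot_export());
```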
This approach might also truncate the Raft logs up to the current applied log position.
bdr.consensus_snapshot_verify
Synopsis
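A hedged sketch of the signature; the parameter name is an assumption:

```sql
bdr.consensus_snapshot_verify(snapshot bytea)
```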
Verify the given consensus snapshot that was exported by `bdr.consensus_snapshot_export()`. The snapshot header contains the version with which it was generated, and the node tries to verify it against the same version.
The snapshot might have been exported on the same node or any other node in the cluster. If the node verifying the snapshot doesn't support the version of the exported snapshot, then an error is raised.
bdr.get_consensus_status
Returns status information about the current consensus (Raft) worker.
bdr.get_raft_status
Returns status information about the current consensus (Raft) worker. Alias for `bdr.get_consensus_status`.
bdr.raft_leadership_transfer
Synopsis
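A hedged sketch of the signature; argument names and types are assumptions inferred from the parameter descriptions below:

```sql
bdr.raft_leadership_transfer(node_name text, wait_for_completion boolean)
```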
Request the node identified by `node_name` to be the Raft leader. The request can be initiated from any of the PGD nodes and is internally forwarded to the current leader to transfer the leadership to the designated node. The designated node must be an ACTIVE PGD node with full voting rights.
If `wait_for_completion` is `false`, the request is served on a best-effort basis. If the node can't become a leader in the `bdr.raft_election_timeout` period, then some other capable node becomes the leader again. Also, the leadership can change over time per the Raft protocol. A `true` return result indicates only that the request was submitted successfully.
If `wait_for_completion` is `true`, then the function waits until the given node becomes the new leader, possibly waiting indefinitely if the requested node fails to become Raft leader (for example, due to network issues). We therefore recommend that you always set a `statement_timeout` with `wait_for_completion` to prevent an indefinite wait.
Utility functions
bdr.wait_slot_confirm_lsn
Allows you to wait until the last write on this session was replayed to one or all nodes.
Waits until a slot passes a certain LSN. If no position is supplied, the current write position is used on the local node.
If no slot name is passed, it waits until all PGD slots pass the LSN.
The function polls every 1000 ms for changes from other nodes.
If a slot is dropped concurrently, the wait ends for that slot.
If a node is currently down and isn't updating its slot, then the wait continues.
You might want to set `statement_timeout` to complete earlier in that case.
Synopsis
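A hedged sketch of the signature; names, types, and defaults are assumptions inferred from the parameter descriptions below:

```sql
bdr.wait_slot_confirm_lsn(slot_name name DEFAULT NULL, target_lsn pg_lsn DEFAULT NULL)
```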
Parameters
- `slot_name` — Name of the replication slot or, if NULL, all PGD slots (only).
- `target_lsn` — LSN to wait for or, if NULL, use the current write LSN on the local node.
bdr.wait_for_apply_queue
The function `bdr.wait_for_apply_queue` allows a PGD node to wait for the local application of certain transactions originating from a given PGD node. It returns only after all transactions from that peer node are applied locally. An application or a proxy can use this function to prevent stale reads.
For convenience, PGD provides a variant of this function for CAMO and the CAMO partner node. See `bdr.wait_for_camo_partner_queue`.
In case a specific LSN is given, that's the point in the recovery stream from which the peer waits. You can use this with `bdr.last_committed_lsn` retrieved from that peer node on a previous or concurrent connection.
If the given `target_lsn` is NULL, this function checks the local receive buffer and uses the LSN of the last transaction received from the given peer node, effectively waiting for all transactions already received to be applied. This is especially useful in case the peer node has failed and it's not known which transactions were sent. In this case, transactions that are still in transit or buffered on the sender side aren't waited for.
Synopsis
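A hedged sketch of the signature; names, types, and defaults are assumptions inferred from the parameter descriptions below:

```sql
bdr.wait_for_apply_queue(peer_node_name text DEFAULT NULL, target_lsn pg_lsn DEFAULT NULL)
```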
Parameters
- `peer_node_name` — The name of the peer node from which incoming transactions are expected to be queued and to wait for. If NULL, waits for all peer nodes' apply queues to be consumed.
- `target_lsn` — The LSN in the replication stream from the peer node to wait for, usually learned by way of `bdr.last_committed_lsn` from the peer node.
bdr.get_node_sub_receive_lsn
You can use this function on a subscriber to get the last LSN that was received from the given origin. It can be either unfiltered or filtered to take into account only relevant LSN increments for transactions to be applied.
The difference between the output of this function and the output of `bdr.get_node_sub_apply_lsn()` measures the size of the corresponding apply queue.
Synopsis
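A hedged sketch of the signature; names, types, and the default are assumptions inferred from the parameter descriptions below:

```sql
bdr.get_node_sub_receive_lsn(node_name name, committed boolean DEFAULT true)
```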
Parameters
- `node_name` — The name of the node that's the source of the replication stream whose LSN is being retrieved.
- `committed` — The default (true) makes this function take into account only commits of transactions received rather than the last LSN overall. This includes actions that have no effect on the subscriber node.
bdr.get_node_sub_apply_lsn
You can use this function on a subscriber to get the last LSN that was received and applied from the given origin.
Synopsis
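A hedged sketch of the signature; the argument name and type are assumptions:

```sql
bdr.get_node_sub_apply_lsn(node_name name)
```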
Parameters
- `node_name` — The name of the node that's the source of the replication stream whose LSN is being retrieved.
bdr.run_on_all_nodes
Function to run a query on all nodes.
Warning
This function runs an arbitrary query on a remote node with the privileges of the user used for the internode connections as specified in the node's DSN. Use caution when granting privileges to this function.
Synopsis
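A hedged sketch of the signature; the return type is assumed to be JSON, per the notes below:

```sql
bdr.run_on_all_nodes(query text)
```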
Parameters
- `query` — Arbitrary query to execute.
Notes
This function connects to other nodes and executes the query, returning a result from each of them in JSON format. Multiple rows might be returned from each node, encoded as a JSON array. Any errors, such as being unable to connect because a node is down, are shown in the response field. No explicit statement_timeout or other runtime parameters are set, so defaults are used.
This function doesn't go through normal replication. It uses direct client connection to all known nodes. By default, the connection is created with `bdr.ddl_replication = off`, since the commands are already being sent to all of the nodes in the cluster.
Be careful when using this function since you risk breaking replication and causing inconsistencies between nodes. Use either transparent DDL replication or `bdr.replicate_ddl_command()` to replicate DDL. DDL might be blocked in a future release.
Example
This function is useful in monitoring, for example, as in the following query:
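A hedged example; the inner query is illustrative, and any read-only SQL works:

```sql
SELECT bdr.run_on_all_nodes($$ SELECT pg_is_in_recovery() $$);
```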
This query returns something like this on a two-node cluster:
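An illustrative sketch of the response shape only; the field values here are invented for this example and vary per cluster:

```json
[
    {
        "dsn": "host=node1 port=5432 dbname=bdrdb ...",
        "node_id": "2232128708",
        "node_name": "node1",
        "response": {
            "command_status": "SELECT 1",
            "command_tuples": [ { "pg_is_in_recovery": "f" } ]
        }
    },
    ...
]
```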
bdr.run_on_nodes
Function to run a query on a specified list of nodes.
Warning
This function runs an arbitrary query on remote nodes with the privileges of the user used for the internode connections as specified in the node's DSN. Use caution when granting privileges to this function.
Synopsis
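A hedged sketch of the signature; names and types are assumptions inferred from the parameter descriptions below:

```sql
bdr.run_on_nodes(node_names text[], query text)
```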
Parameters
- `node_names` — Text ARRAY of node names where the query is executed.
- `query` — Arbitrary query to execute.
Notes
This function connects to other nodes and executes the query, returning a result from each of them in JSON format. Multiple rows can be returned from each node, encoded as a JSON array. Any errors, such as being unable to connect because a node is down, are shown in the response field. No explicit statement_timeout or other runtime parameters are set, so defaults are used.
This function doesn't go through normal replication. It uses direct client connection to all known nodes. By default, the connection is created with `bdr.ddl_replication = off`, since the commands are already being sent to all of the nodes in the cluster.
Be careful when using this function since you risk breaking replication and causing inconsistencies between nodes. Use either transparent DDL replication or `bdr.replicate_ddl_command()` to replicate DDL. DDL might be blocked in a future release.
bdr.run_on_group
Function to run a query on a group of nodes.
Warning
This function runs an arbitrary query on remote nodes with the privileges of the user used for the internode connections as specified in the node's DSN. Use caution when granting privileges to this function.
Synopsis
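A hedged sketch of the signature; names and types are assumptions inferred from the parameter descriptions below:

```sql
bdr.run_on_group(node_group_name text, query text)
```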
Parameters
- `node_group_name` — Name of the node group where the query is executed.
- `query` — Arbitrary query to execute.
Notes
This function connects to other nodes and executes the query, returning a result from each of them in JSON format. Multiple rows can be returned from each node, encoded as a JSON array. Any errors, such as being unable to connect because a node is down, are shown in the response field. No explicit statement_timeout or other runtime parameters are set, so defaults are used.
This function doesn't go through normal replication. It uses direct client connection to all known nodes. By default, the connection is created with `bdr.ddl_replication = off`, since the commands are already being sent to all of the nodes in the cluster.
Be careful when using this function since you risk breaking replication and causing inconsistencies between nodes. Use either transparent DDL replication or `bdr.replicate_ddl_command()` to replicate DDL. DDL might be blocked in a future release.
bdr.global_lock_table
This function acquires a global DML lock on a given table. See DDL locking details for information about the global DML lock.
Synopsis
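A hedged sketch of the signature; the `regclass` type is an assumption, since the parameter accepts a name or oid:

```sql
bdr.global_lock_table(relation regclass)
```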
Parameters
- `relation` — Name or oid of the relation to lock.
Notes
This function acquires the global DML lock independently of the `ddl_locking` setting.
The `bdr.global_lock_table` function requires `UPDATE`, `DELETE`, or `TRUNCATE` privilege on the locked relation unless `bdr.backwards_compatibility` is set to 30618 or lower.
bdr.wait_for_xid_progress
You can use this function to wait for the given transaction (identified by its XID), originated at the given node (identified by its node ID), to make enough progress on the cluster. Progress is defined as the transaction being applied on a node and that node having seen all other replication changes done before the transaction was applied.
Synopsis
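A hedged sketch of the signature; names, types, and the default are assumptions inferred from the parameter descriptions below:

```sql
bdr.wait_for_xid_progress(origin_node_id oid, origin_topxid int4, allnodes boolean DEFAULT true)
```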
Parameters
- `origin_node_id` — Node ID of the node where the transaction originated.
- `origin_topxid` — XID of the transaction.
- `allnodes` — If `true`, wait for the transaction to progress on all nodes. Otherwise wait only for the current node.
Notes
You can use the function only for those transactions that replicated a DDL command, because only those transactions are tracked currently. If a wrong `origin_node_id` or `origin_topxid` is supplied, the function might wait forever or until `statement_timeout` occurs.
bdr.local_group_slot_name
Returns the name of the group slot on the local node.
Example
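A minimal example; the returned slot name depends on your group and database names:

```sql
SELECT bdr.local_group_slot_name();
```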
bdr.node_group_type
Returns the type of the given node group. The returned value is the same as what was passed to `bdr.create_node_group()` when the node group was created, except `normal` is returned if the `node_group_type` was passed as `NULL` when the group was created.
Example
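A minimal example; the group name `mygroup` is hypothetical:

```sql
SELECT bdr.node_group_type('mygroup');
```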
bdr.alter_node_kind
PGD 5 introduced the concept of a Task Manager Leader node. The node is selected automatically by PGD, but for upgraded clusters, it's important to set the `node_kind` properly for all nodes in the cluster. Do this manually after upgrading to the latest PGD version by calling the `bdr.alter_node_kind()` SQL function for each node.
Synopsis
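A hedged sketch of the signature; argument types are assumptions inferred from the parameter descriptions below:

```sql
bdr.alter_node_kind(node_name text, node_kind text)
```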
Parameters
- `node_name` — Name of the node whose kind to change.
- `node_kind` — Kind of the node, which can be one of: `data`, `standby`, `witness`, or `subscriber-only`.
bdr.alter_subscription_skip_changes_upto
Because logical replication can replicate across versions, doesn't replicate global changes like roles, and can replicate selectively, sometimes the logical replication apply process can encounter an error and stop applying changes.
Wherever possible, fix such problems by making changes to the target side. `CREATE` any missing table that's blocking replication, `CREATE` a needed role, `GRANT` a necessary permission, and so on. But occasionally a problem can't be fixed that way, and it might be necessary to skip entirely over a transaction.
Changes are skipped as entire transactions, all or nothing. To decide where to skip to, use log output to find the commit LSN, per the example that follows, or peek the change stream with the logical decoding functions.
Unless a transaction made only one change, you often need to manually apply the transaction's effects on the target side, so it's important to save the problem transaction whenever possible, as shown in the examples that follow.
It's possible to skip over changes without `bdr.alter_subscription_skip_changes_upto` by using `pg_catalog.pg_logical_slot_get_binary_changes` to skip to the LSN of interest, so this is a convenience function. It does a faster skip, although it might bypass some kinds of errors in logical decoding.
This function works only on disabled subscriptions.
The usual sequence of steps is:
- Identify the problem subscription and the LSN of the problem commit.
- Disable the subscription.
- Save a copy of the transaction using `pg_catalog.pg_logical_slot_peek_changes` on the source node, if possible.
- Run `bdr.alter_subscription_skip_changes_upto` on the target node.
- Apply repaired or equivalent changes on the target manually, if necessary.
- Reenable the subscription.
Warning
It's easy to make problems worse when using this function. Don't do anything unless you're certain it's the only option.
Synopsis
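A hedged sketch of the signature; the parameter names are assumptions:

```sql
bdr.alter_subscription_skip_changes_upto(subname text, skip_upto_lsn pg_lsn)
```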
Example
Apply of a transaction is failing with an error, and you've determined that lower-impact fixes such as changes on the target side can't resolve this issue. You determine that you must skip the transaction.
In the error logs, find the commit record LSN to skip to. In this example, that portion of the log gives you the information you need:
- the_target_lsn: `0/300AC18`
- the_subscription: `bdr_regression_bdrgroup_node1_node2`
Next, disable the subscription so the apply worker doesn't try to connect to the replication slot:
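A sketch of that step, assuming a `bdr.alter_subscription_disable` function and the subscription name from this example:

```sql
SELECT bdr.alter_subscription_disable('bdr_regression_bdrgroup_node1_node2');
```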
You can't skip only parts of the transaction: it's all or nothing. So we strongly recommend that you save a record of it by copying it out on the provider side first, using the subscription's slot name. The example that follows is broken into multiple lines for readability, but issue it in a single line, because `\copy` doesn't support multi-line commands.
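A hedged sketch of that copy step, shown on multiple lines for readability only. The slot name here reuses the subscription name from this example (your slot name may differ), and any output plugin options the PGD plugin may require are omitted:

```sql
\copy (SELECT * FROM pg_catalog.pg_logical_slot_peek_changes(
    'bdr_regression_bdrgroup_node1_node2', '0/300AC18', NULL))
  TO 'transaction_to_drop.csv' WITH (FORMAT csv)
```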
You can skip the change by changing `peek` to `get`, but `bdr.alter_subscription_skip_changes_upto` does a faster skip that avoids decoding and outputting all the data:
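Using the subscription and LSN from this example:

```sql
SELECT bdr.alter_subscription_skip_changes_upto(
    'bdr_regression_bdrgroup_node1_node2', '0/300AC18');
```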
You can apply the same changes (or repaired versions of them) manually to the target node, using the dumped transaction contents as a guide.
Finally, reenable the subscription:
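A sketch of that step, assuming a `bdr.alter_subscription_enable` counterpart to the disable function used earlier:

```sql
SELECT bdr.alter_subscription_enable('bdr_regression_bdrgroup_node1_node2');
```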
Global advisory locks
PGD supports global advisory locks. These locks are similar to the advisory locks available in PostgreSQL except that the advisory locks supported by PGD are global. They follow semantics similar to DDL locks. So an advisory lock is obtained by majority consensus and can be used even if one or more nodes are down or lagging behind, as long as a majority of all nodes can work together.
Currently only EXCLUSIVE locks are supported. So if another node or another backend on the same node has already acquired the advisory lock on the object, then other nodes or backends must wait for the lock to be released.
Advisory locks are transactional in nature. The lock is automatically released when the transaction ends unless it's explicitly released before the end of the transaction, in which case it becomes available to other sessions as soon as it's released. Session-level advisory locks aren't currently supported.
Global advisory locks are reentrant. So if the same resource is locked three times, you must then unlock it three times for it to be released for use in other sessions.
bdr.global_advisory_lock
This function acquires an EXCLUSIVE lock on the provided object. If the lock isn't available, then it waits until the lock becomes available or the `bdr.global_lock_timeout` is reached.
Synopsis
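A hedged sketch of the single-key form; the `bigint` type is an assumption:

```sql
bdr.global_advisory_lock(key bigint)
```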
Parameters
- `key` — The object on which an advisory lock is acquired.
Synopsis
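A hedged sketch of the composite-key form; the `integer` types are assumptions:

```sql
bdr.global_advisory_lock(key1 integer, key2 integer)
```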
Parameters
- `key1` — First part of the composite key.
- `key2` — Second part of the composite key.
bdr.global_advisory_unlock
This function releases a previously acquired lock on the application-defined source. The lock must have been obtained in the same transaction by the application. Otherwise, an error is raised.
Synopsis
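A hedged sketch of the single-key form; the `bigint` type is an assumption:

```sql
bdr.global_advisory_unlock(key bigint)
```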
Parameters
- `key` — The object on which an advisory lock is acquired.
Synopsis
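A hedged sketch of the composite-key form; the `integer` types are assumptions:

```sql
bdr.global_advisory_unlock(key1 integer, key2 integer)
```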
Parameters
- `key1` — First part of the composite key.
- `key2` — Second part of the composite key.
Monitoring functions
bdr.monitor_group_versions
To provide a cluster-wide version check, this function uses PGD version information returned from the view `bdr.group_version_details`.
Synopsis
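A minimal usage sketch; the no-argument call is an assumption based on the description:

```sql
SELECT * FROM bdr.monitor_group_versions();
```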
Notes
This function returns a record with fields `status` and `message`, as explained in Monitoring.
This function calls `bdr.run_on_all_nodes()`.
bdr.monitor_group_raft
To provide a cluster-wide Raft check, this function uses PGD Raft information returned from the view `bdr.group_raft_details`.
Synopsis
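A minimal usage sketch; the group name `mygroup` is hypothetical:

```sql
SELECT * FROM bdr.monitor_group_raft('mygroup');
```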
Parameters
- `node_group_name` — Name of the node group to check.
Notes
This function returns a record with fields `status` and `message`, as explained in Monitoring.
This function calls `bdr.run_on_all_nodes()`.
bdr.monitor_local_replslots
This function uses replication slot status information returned from the view `pg_replication_slots` (slot active or inactive) to provide a local check considering all replication slots except the PGD group slots.
Synopsis
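A minimal usage sketch; the no-argument call is an assumption based on the description:

```sql
SELECT * FROM bdr.monitor_local_replslots();
```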
Notes
This function returns a record with fields `status` and `message`, as explained in Monitoring replication slots.
bdr.wal_sender_stats
If the decoding worker is enabled, this function shows information about the decoder slot and current LCR (logical change record) segment file being read by each WAL sender.
Synopsis
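A minimal usage sketch; the no-argument call is an assumption based on the description, with one row per WAL sender:

```sql
SELECT * FROM bdr.wal_sender_stats();
```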
Output columns
- `pid` — PID of the WAL sender (corresponds to `pg_stat_replication`'s `pid` column).
- `is_using_lcr` — Whether the WAL sender is sending LCR files. The next columns are `NULL` if `is_using_lcr` is `FALSE`.
- `decoder_slot_name` — The name of the decoder replication slot.
- `lcr_file_name` — The name of the current LCR file.
bdr.get_decoding_worker_stat
If the decoding worker is enabled, this function shows information about the state of the decoding worker associated with the current database. This also provides more granular information about decoding worker progress than is available via `pg_replication_slots`.
Synopsis
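A minimal usage sketch; the no-argument call is an assumption based on the description:

```sql
SELECT * FROM bdr.get_decoding_worker_stat();
```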
Output columns
- `pid` — The PID of the decoding worker (corresponds to the column `active_pid` in `pg_replication_slots`).
- `decoded_upto_lsn` — LSN up to which the decoding worker read transactional logs.
- `waiting` — Whether the decoding worker is waiting for new WAL.
- `waiting_for_lsn` — The LSN of the next expected WAL.
Notes
For further details, see Monitoring WAL senders using LCR.
bdr.lag_control
If Lag Control is enabled, this function shows information about the commit delay and number of nodes conforming to their configured lag measure for the local node and current database.
Synopsis
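A minimal usage sketch; the no-argument call is an assumption based on the description:

```sql
SELECT * FROM bdr.lag_control();
```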
Output columns
- `commit_delay` — Current runtime commit delay, in fractional milliseconds.
- `commit_delay_maximum` — Configured maximum commit delay, in fractional milliseconds.
- `commit_delay_adjustment` — Change to runtime commit delay possible during a sample interval, in fractional milliseconds.
- `conforming_nodes` — Current runtime number of nodes conforming to lag measures.
- `conforming_nodes_minimum` — Configured minimum number of nodes required to conform to lag measures, below which a commit delay adjustment is applied.
- `lag_bytes_threshold` — Lag size at which a commit delay is applied, in kilobytes.
- `lag_bytes_maximum` — Configured maximum lag size, in kilobytes.
- `lag_time_threshold` — Lag time at which a commit delay is applied, in milliseconds.
- `lag_time_maximum` — Configured maximum lag time, in milliseconds.
- `sample_interval` — Configured minimum time between lag samples and possible commit delay adjustments, in milliseconds.