DDL replication v5
DDL stands for data definition language, the subset of the SQL language that creates, alters, and drops database objects.
For operational convenience and correctness, PGD replicates most DDL actions, with these exceptions:

- Temporary or unlogged relations
- Certain DDL statements (mostly long running)
- Locking commands (`LOCK`)
- Table maintenance commands (`VACUUM`, `ANALYZE`, `CLUSTER`, `REINDEX`)
- Actions of autovacuum
- Operational commands (`CHECKPOINT`, `ALTER SYSTEM`)
- Actions related to databases or tablespaces
Automatic DDL replication makes DDL changes easier: you don't have to distribute the DDL change manually to all nodes and ensure that they're consistent.
In the default replication set, DDL is replicated to all nodes by default. To replicate DDL in a nondefault replication set, you must add a DDL replication filter to that replication set. See DDL replication filtering.
PGD is significantly different from standalone PostgreSQL when it comes to DDL replication. Treating it the same is the most common issue with PGD.
The main difference from table replication is that DDL replication doesn't replicate the result of the DDL but the statement itself. This works very well in most cases, although it introduces the requirement that the DDL must execute similarly on all nodes. A more subtle point is that the DDL must be immutable with respect to all datatype-specific parameter settings, including any datatypes introduced by extensions (not built in). For example, the DDL statement must execute correctly in the default encoding used on each node.
DDL replication options
The `bdr.ddl_replication` parameter specifies replication behavior. `bdr.ddl_replication = on` is the default and replicates DDL to the default replication set, which by default means all nodes. Nondefault replication sets don't replicate DDL unless they have a DDL filter defined for them.
You can also replicate DDL to specific replication sets using the function `bdr.replicate_ddl_command()`. This can be helpful if you want to run DDL commands when a node is down or if you want to have indexes or partitions that exist on a subset of nodes or replication sets, for example, all nodes at site1.
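As a sketch, restricting an index to one replication set might look like the following. The replication set name `site1_set` and the index and table names are hypothetical:

```sql
-- Replicate this DDL only to nodes subscribed to the hypothetical
-- 'site1_set' replication set, rather than to all nodes.
SELECT bdr.replicate_ddl_command(
    'CREATE INDEX idx_orders_date ON public.orders (order_date);',
    ARRAY['site1_set']
);
```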
While we don't recommend it, you can skip automatic DDL replication and execute DDL manually on each node by setting the `bdr.ddl_replication` configuration parameter to `off`. When `bdr.ddl_replication` is `off`, PGD skips both the global locking and the replication of executed DDL commands. You must then run the DDL manually on all nodes.
Warning
Executing DDL manually on each node without global locking can cause the whole PGD group to stop replicating if conflicting DDL or DML executes concurrently.
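A minimal sketch of this manual approach, to be repeated verbatim on every node (the table and column names are hypothetical):

```sql
-- Skip global locking and DDL replication for this session only.
SET bdr.ddl_replication = off;
ALTER TABLE public.orders ADD COLUMN note text;
RESET bdr.ddl_replication;
```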
The `bdr.ddl_replication` parameter can be set only by the bdr_superuser, by a superuser, or in the config file.
Executing DDL on PGD systems
A PGD group isn't the same as a standalone PostgreSQL server. It's based on asynchronous multi-master replication without central locking and without a transaction coordinator. This has important implications when executing DDL.
DDL that executes in parallel continues to do so with PGD. DDL execution respects the parameters that affect parallel operation on each node as it executes, so you might notice differences in the settings between nodes.
Prevent the execution of conflicting DDL. Otherwise, DDL replication causes errors, and replication stops.
PGD offers three levels of protection against those problems:
- `ddl_locking = 'dml'` is the best option for operations, usable when you execute DDL from only one node at a time. This isn't the default, but we recommend that you use this setting if you can control where DDL is executed from. Doing so ensures that there are no inter-node conflicts. Intra-node conflicts are already handled by PostgreSQL.
- `ddl_locking = on` is the strictest option and is best when DDL might execute from any node concurrently and you want to ensure correctness.
- `ddl_locking = off` is the least strict option and is dangerous in general use. This option skips locks altogether, avoiding any performance overhead, which makes it a useful option when creating a new and empty database schema.
These options can be set only by the bdr_superuser, by a superuser, or in the config file.
When using `bdr.replicate_ddl_command()`, you can set this parameter directly with the third argument, using the specified `bdr.ddl_locking` setting only for the DDL commands passed to that function.
DDL locking details
Two kinds of locks enforce correctness of replicated DDL with PGD.
The first kind is known as a global DDL lock and is used only when `ddl_locking = on`.
A global DDL lock prevents any other DDL from executing on the cluster while
each DDL statement runs. This ensures full correctness in the general case but
is too strict for many simple cases. PGD acquires a global lock on
DDL operations the first time in a transaction where schema changes are made.
This effectively serializes the DDL-executing transactions in the cluster. In
other words, while DDL is running, no other connection on any node can run
another DDL command, even if it affects different tables.
To acquire a lock on DDL operations, the PGD node executing DDL contacts the other nodes in a PGD group and asks them to grant it the exclusive right to execute DDL. The lock request is sent by the regular replication stream, and the nodes respond via the replication stream as well. So it's important that nodes (or at least a majority of the nodes) run without much replication delay. Otherwise, it might take a long time for the node to acquire the DDL lock. Once the majority of nodes agree, the DDL execution is carried out.
The ordering of DDL locking is decided using the Raft protocol. DDL statements executed on one node are executed in the same sequence on all other nodes.
To ensure that the node running a DDL has seen effects of all prior DDLs run in the cluster, it waits until it catches up with the node that ran the previous DDL. If the node running the current DDL is lagging behind in replication with respect to the node that ran the previous DDL, then it might take a long time to acquire the lock. Hence it's preferable to run DDLs from a single node or the nodes that have nearly caught up with replication changes originating at other nodes.
The second kind is known as a global DML lock or relation DML lock. This kind of lock is used when either `ddl_locking = on` or `ddl_locking = dml`, and the DDL statement might cause in-flight DML statements to fail. These failures can occur when you add or modify a constraint, such as a unique constraint, check constraint, or NOT NULL constraint.
Relation DML locks affect only one relation at a time. Relation DML
locks ensure that no DDL executes while there are changes in the queue that
might cause replication to halt with an error.
To acquire the global DML lock on a table, the PGD node executing the DDL contacts all other nodes in a PGD group, asking them to lock the table against writes and waiting while all pending changes to that table are drained. Once all nodes are fully caught up, the originator of the DML lock is free to perform schema changes to the table and replicate them to the other nodes.
The global DML lock holds an EXCLUSIVE LOCK on the table on each node, so it blocks DML, other DDL, VACUUM, and index commands against that table while it runs. This is true even if the global DML lock is held for a command that normally doesn't take an EXCLUSIVE LOCK or higher.
Waiting for pending DML operations to drain can take a long time and even longer if replication is currently lagging. This means that schema changes affecting row representation and constraints, unlike with data changes, can be performed only while all configured nodes can be reached and are keeping up reasonably well with the current write rate. If such DDL commands must be performed while a node is down, first remove the down node from the configuration.
If a DDL statement isn't replicated, no global locks are acquired.
Locking behavior is specified by the `bdr.ddl_locking` parameter, as explained in Executing DDL on PGD systems:

- `ddl_locking = on` takes the global DDL lock and, if needed, takes the relation DML lock.
- `ddl_locking = dml` skips the global DDL lock and, if needed, takes the relation DML lock.
- `ddl_locking = off` skips both the global DDL lock and the relation DML lock.
Some PGD functions make DDL changes. For those functions, DDL locking behavior applies, which is noted in the documentation for each function.
Thus, `ddl_locking = dml` is safe only when you can guarantee that no conflicting DDL is executed from other nodes. With this setting, the statements that require only the global DDL lock don't use global locking at all.
`ddl_locking = off` is safe only when you can guarantee that there are no conflicting DDL and no conflicting DML operations on the database objects the DDL executes on. If you turn locking off and then experience difficulties, you might lose in-flight changes to data. The user application team needs to resolve any issues caused.
In some cases, concurrently executing DDL can properly be serialized. If these serialization failures occur, the DDL might reexecute.
DDL replication isn't active on logical standby nodes until they are promoted.
Some PGD management functions act like DDL, meaning that they attempt to take global locks, and their actions are replicated if DDL replication is active. The full list of replicated functions is listed in PGD functions that behave like DDL.
DDL executed on temporary tables never needs global locks.
ALTER or DROP of an object created in the current transaction doesn't require a global DML lock.
Monitoring of global DDL locks and global DML locks is shown in Monitoring.
Minimizing the impact of DDL
These points are good operational advice for any database, but they become even more important with PGD:
- To minimize the impact of DDL, make transactions performing DDL short, don't combine them with lots of row changes, and avoid long-running foreign key or other constraint rechecks.
- For `ALTER TABLE`, use `ADD CONSTRAINT NOT VALID` followed by another transaction with `VALIDATE CONSTRAINT` rather than using `ADD CONSTRAINT` alone. `VALIDATE CONSTRAINT` waits until replayed on all nodes, which gives a noticeable delay to receive confirmations.
- When indexing, use the `CONCURRENTLY` option whenever possible.
An alternative way of executing long-running DDL is to disable DDL replication and then to execute the DDL statement separately on each node. You can still do this using a single SQL statement, as shown in the following example. Global locking rules still apply, so be careful not to lock yourself out with this type of usage, which is more of a workaround.
We recommend using the `bdr.run_on_all_nodes()` technique with `CREATE INDEX CONCURRENTLY`, noting that DDL replication must be disabled for the whole session because `CREATE INDEX CONCURRENTLY` is a multi-transaction command.
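Such a statement might look like the following sketch (the index and table names are hypothetical):

```sql
-- Run CREATE INDEX CONCURRENTLY separately on every node with a
-- single SQL statement. DDL replication must be disabled for the
-- whole session before running this.
SELECT bdr.run_on_all_nodes($ddl$
    CREATE INDEX CONCURRENTLY idx_ytable_x ON ytable (x);
$ddl$);
```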
Avoid `CREATE INDEX` on production systems, since it prevents writes while it executes.
`REINDEX` is replicated in versions up to 3.6 but not with PGD 3.7 or later. Avoid using `REINDEX` because of the AccessExclusiveLocks it holds. Instead, use `REINDEX CONCURRENTLY` (or `reindexdb --concurrently`), which is available in PG12+ or 2QPG11+.
You can disable DDL replication when using command-line utilities like this:
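For example, a sketch using the `PGOPTIONS` environment variable, assuming the utility connects through libpq (the dump file name is hypothetical):

```shell
# Disable DDL replication for every statement the utility runs.
PGOPTIONS="-c bdr.ddl_replication=off" pg_restore --section=post-data mydump.dump
```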
Multiple DDL statements might benefit from being bunched into a single transaction rather than being fired as individual statements, so that the DDL lock is taken only once. This might not be desirable if the table-level locks interfere with normal operations.
If DDL is holding up the system for too long, you can safely cancel the DDL on the originating node with Control-C in psql or with `pg_cancel_backend()`. You can't cancel a DDL lock from any other node.
You can control how long the global lock takes with optional global locking timeout settings. `bdr.global_lock_timeout` limits how long the wait for acquiring the global lock can take before it's canceled. `bdr.global_lock_statement_timeout` limits the runtime length of any statement in a transaction that holds global locks, and `bdr.global_lock_idle_timeout` sets the maximum allowed idle time (time between statements) for a transaction holding any global locks. You can disable all of these timeouts by setting their values to zero.
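As a sketch, the timeouts might be configured like this (the values are illustrative only):

```sql
-- Give up acquiring the global lock after 10 seconds.
SET bdr.global_lock_timeout = '10s';
-- Cancel any statement running longer than 60 seconds
-- while its transaction holds global locks.
SET bdr.global_lock_statement_timeout = '60s';
-- Cancel a transaction idle longer than 30 seconds
-- while holding global locks.
SET bdr.global_lock_idle_timeout = '30s';
```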
Once the DDL operation has committed on the originating node, you can't cancel or abort it. The PGD group must wait for it to apply successfully on other nodes that confirmed the global lock and for them to acknowledge replay. For this reason, keep DDL transactions short and fast.
Handling DDL with down nodes
If the node initiating the global DDL lock goes down after it acquired the global lock (either DDL or DML), the lock stays active. The global locks don't time out, even if timeouts were set. If the node comes back up, it releases all the global locks that it holds.
If it stays down for a long time or indefinitely, remove the node from the PGD group to release the global locks. This is one reason for executing emergency DDL using the `SET` command as the bdr_superuser to update the `bdr.ddl_locking` value.
If one of the other nodes goes down after it confirmed the global lock but before the command acquiring it executed, the execution of that command requesting the lock continues as if the node were up.
As mentioned earlier, the global DDL lock requires only a majority of the nodes to respond, and so it works if part of the cluster is down, as long as a majority is running and reachable. But the DML lock can't be acquired unless the whole cluster is available.
With global DDL or global DML lock, if another node goes down, the command continues normally, and the lock is released.
Statement-specific DDL replication concerns
Not all commands can be replicated automatically. Such commands are generally disallowed, unless DDL replication is turned off by setting `bdr.ddl_replication` to `off`.
PGD prevents some DDL statements from running when it's active on a database. This protects the consistency of the system by disallowing statements that can't be replicated correctly or for which replication isn't yet supported.
If a statement isn't permitted under PGD, you can often find another way to do the same thing. For example, you can't do an `ALTER TABLE` that adds a column with a volatile default value. But generally you can rephrase that as a series of independent `ALTER TABLE` and `UPDATE` statements that work.
Generally, unsupported statements are prevented from executing, raising a `feature_not_supported` (SQLSTATE `0A000`) error.
Any DDL that references or relies on a temporary object can't be replicated by PGD and throws an error if executed with DDL replication enabled.
PGD DDL command handling matrix
The following table describes the utility or DDL commands that are allowed, the ones that are replicated, and the type of global lock they take when they're replicated.
For some more complex statements like `ALTER TABLE`, these can differ depending on the subcommands executed. Every such command has a detailed explanation under the following table.
Command | Allowed | Replicated | Lock |
---|---|---|---|
ALTER AGGREGATE | Y | Y | DDL |
ALTER CAST | Y | Y | DDL |
ALTER COLLATION | Y | Y | DDL |
ALTER CONVERSION | Y | Y | DDL |
ALTER DATABASE | Y | N | N |
ALTER DATABASE LINK | Y | Y | DDL |
ALTER DEFAULT PRIVILEGES | Y | Y | DDL |
ALTER DIRECTORY | Y | Y | DDL |
ALTER DOMAIN | Y | Y | DDL |
ALTER EVENT TRIGGER | Y | Y | DDL |
ALTER EXTENSION | Y | Y | DDL |
ALTER FOREIGN DATA WRAPPER | Y | Y | DDL |
ALTER FOREIGN TABLE | Y | Y | DDL |
ALTER FUNCTION | Y | Y | DDL |
ALTER INDEX | Y | Y | DDL |
ALTER LANGUAGE | Y | Y | DDL |
ALTER LARGE OBJECT | N | N | N |
ALTER MATERIALIZED VIEW | Y | N | N |
ALTER OPERATOR | Y | Y | DDL |
ALTER OPERATOR CLASS | Y | Y | DDL |
ALTER OPERATOR FAMILY | Y | Y | DDL |
ALTER PACKAGE | Y | Y | DDL |
ALTER POLICY | Y | Y | DDL |
ALTER PROCEDURE | Y | Y | DDL |
ALTER PROFILE | Y | Y | Details |
ALTER PUBLICATION | Y | Y | DDL |
ALTER QUEUE | Y | Y | DDL |
ALTER QUEUE TABLE | Y | Y | DDL |
ALTER REDACTION POLICY | Y | Y | DDL |
ALTER RESOURCE GROUP | Y | N | N |
ALTER ROLE | Y | Y | DDL |
ALTER ROUTINE | Y | Y | DDL |
ALTER RULE | Y | Y | DDL |
ALTER SCHEMA | Y | Y | DDL |
ALTER SEQUENCE | Details | Y | DML |
ALTER SERVER | Y | Y | DDL |
ALTER SESSION | Y | N | N |
ALTER STATISTICS | Y | Y | DDL |
ALTER SUBSCRIPTION | Y | Y | DDL |
ALTER SYNONYM | Y | Y | DDL |
ALTER SYSTEM | Y | N | N |
ALTER TABLE | Details | Y | Details |
ALTER TABLESPACE | Y | N | N |
ALTER TEXT SEARCH CONFIGURATION | Y | Y | DDL |
ALTER TEXT SEARCH DICTIONARY | Y | Y | DDL |
ALTER TEXT SEARCH PARSER | Y | Y | DDL |
ALTER TEXT SEARCH TEMPLATE | Y | Y | DDL |
ALTER TRIGGER | Y | Y | DDL |
ALTER TYPE | Y | Y | DDL |
ALTER USER MAPPING | Y | Y | DDL |
ALTER VIEW | Y | Y | DDL |
ANALYZE | Y | N | N |
BEGIN | Y | N | N |
CHECKPOINT | Y | N | N |
CLOSE | Y | N | N |
CLOSE CURSOR | Y | N | N |
CLOSE CURSOR ALL | Y | N | N |
CLUSTER | Y | N | N |
COMMENT | Y | Details | DDL |
COMMIT | Y | N | N |
COMMIT PREPARED | Y | N | N |
COPY | Y | N | N |
COPY FROM | Y | N | N |
CREATE ACCESS METHOD | Y | Y | DDL |
CREATE AGGREGATE | Y | Y | DDL |
CREATE CAST | Y | Y | DDL |
CREATE COLLATION | Y | Y | DDL |
CREATE CONSTRAINT | Y | Y | DDL |
CREATE CONVERSION | Y | Y | DDL |
CREATE DATABASE | Y | N | N |
CREATE DATABASE LINK | Y | Y | DDL |
CREATE DIRECTORY | Y | Y | DDL |
CREATE DOMAIN | Y | Y | DDL |
CREATE EVENT TRIGGER | Y | Y | DDL |
CREATE EXTENSION | Y | Y | DDL |
CREATE FOREIGN DATA WRAPPER | Y | Y | DDL |
CREATE FOREIGN TABLE | Y | Y | DDL |
CREATE FUNCTION | Y | Y | DDL |
CREATE INDEX | Y | Y | DML |
CREATE LANGUAGE | Y | Y | DDL |
CREATE MATERIALIZED VIEW | Y | N | N |
CREATE OPERATOR | Y | Y | DDL |
CREATE OPERATOR CLASS | Y | Y | DDL |
CREATE OPERATOR FAMILY | Y | Y | DDL |
CREATE PACKAGE | Y | Y | DDL |
CREATE PACKAGE BODY | Y | Y | DDL |
CREATE POLICY | Y | Y | DML |
CREATE PROCEDURE | Y | Y | DDL |
CREATE PROFILE | Y | Y | Details |
CREATE PUBLICATION | Y | Y | DDL |
CREATE QUEUE | Y | Y | DDL |
CREATE QUEUE TABLE | Y | Y | DDL |
CREATE REDACTION POLICY | Y | Y | DDL |
CREATE RESOURCE GROUP | Y | N | N |
CREATE ROLE | Y | Y | DDL |
CREATE ROUTINE | Y | Y | DDL |
CREATE RULE | Y | Y | DDL |
CREATE SCHEMA | Y | Y | DDL |
CREATE SEQUENCE | Details | Y | DDL |
CREATE SERVER | Y | Y | DDL |
CREATE STATISTICS | Y | Y | DDL |
CREATE SUBSCRIPTION | Y | Y | DDL |
CREATE SYNONYM | Y | Y | DDL |
CREATE TABLE | Details | Y | DDL |
CREATE TABLE AS | Details | Y | DDL |
CREATE TABLESPACE | Y | N | N |
CREATE TEXT SEARCH CONFIGURATION | Y | Y | DDL |
CREATE TEXT SEARCH DICTIONARY | Y | Y | DDL |
CREATE TEXT SEARCH PARSER | Y | Y | DDL |
CREATE TEXT SEARCH TEMPLATE | Y | Y | DDL |
CREATE TRANSFORM | Y | Y | DDL |
CREATE TRIGGER | Y | Y | DDL |
CREATE TYPE | Y | Y | DDL |
CREATE TYPE BODY | Y | Y | DDL |
CREATE USER MAPPING | Y | Y | DDL |
CREATE VIEW | Y | Y | DDL |
DEALLOCATE | Y | N | N |
DEALLOCATE ALL | Y | N | N |
DECLARE CURSOR | Y | N | N |
DISCARD | Y | N | N |
DISCARD ALL | Y | N | N |
DISCARD PLANS | Y | N | N |
DISCARD SEQUENCES | Y | N | N |
DISCARD TEMP | Y | N | N |
DO | Y | N | N |
DROP ACCESS METHOD | Y | Y | DDL |
DROP AGGREGATE | Y | Y | DDL |
DROP CAST | Y | Y | DDL |
DROP COLLATION | Y | Y | DDL |
DROP CONSTRAINT | Y | Y | DDL |
DROP CONVERSION | Y | Y | DDL |
DROP DATABASE | Y | N | N |
DROP DATABASE LINK | Y | Y | DDL |
DROP DIRECTORY | Y | Y | DDL |
DROP DOMAIN | Y | Y | DDL |
DROP EVENT TRIGGER | Y | Y | DDL |
DROP EXTENSION | Y | Y | DDL |
DROP FOREIGN DATA WRAPPER | Y | Y | DDL |
DROP FOREIGN TABLE | Y | Y | DDL |
DROP FUNCTION | Y | Y | DDL |
DROP INDEX | Y | Y | DDL |
DROP LANGUAGE | Y | Y | DDL |
DROP MATERIALIZED VIEW | Y | N | N |
DROP OPERATOR | Y | Y | DDL |
DROP OPERATOR CLASS | Y | Y | DDL |
DROP OPERATOR FAMILY | Y | Y | DDL |
DROP OWNED | Y | Y | DDL |
DROP PACKAGE | Y | Y | DDL |
DROP PACKAGE BODY | Y | Y | DDL |
DROP POLICY | Y | Y | DDL |
DROP PROCEDURE | Y | Y | DDL |
DROP PROFILE | Y | Y | DDL |
DROP PUBLICATION | Y | Y | DDL |
DROP QUEUE | Y | Y | DDL |
DROP QUEUE TABLE | Y | Y | DDL |
DROP REDACTION POLICY | Y | Y | DDL |
DROP RESOURCE GROUP | Y | N | N |
DROP ROLE | Y | Y | DDL |
DROP ROUTINE | Y | Y | DDL |
DROP RULE | Y | Y | DDL |
DROP SCHEMA | Y | Y | DDL |
DROP SEQUENCE | Y | Y | DDL |
DROP SERVER | Y | Y | DDL |
DROP STATISTICS | Y | Y | DDL |
DROP SUBSCRIPTION | Y | Y | DDL |
DROP SYNONYM | Y | Y | DDL |
DROP TABLE | Y | Y | DML |
DROP TABLESPACE | Y | N | N |
DROP TEXT SEARCH CONFIGURATION | Y | Y | DDL |
DROP TEXT SEARCH DICTIONARY | Y | Y | DDL |
DROP TEXT SEARCH PARSER | Y | Y | DDL |
DROP TEXT SEARCH TEMPLATE | Y | Y | DDL |
DROP TRANSFORM | Y | Y | DDL |
DROP TRIGGER | Y | Y | DDL |
DROP TYPE | Y | Y | DDL |
DROP TYPE BODY | Y | Y | DDL |
DROP USER MAPPING | Y | Y | DDL |
DROP VIEW | Y | Y | DDL |
EXECUTE | Y | N | N |
EXPLAIN | Y | Details | Details |
FETCH | Y | N | N |
GRANT | Y | Details | DDL |
GRANT ROLE | Y | Y | DDL |
IMPORT FOREIGN SCHEMA | Y | Y | DDL |
LISTEN | Y | N | N |
LOAD | Y | N | N |
LOAD ROW DATA | Y | Y | DDL |
LOCK TABLE | Y | N | Details |
MOVE | Y | N | N |
NOTIFY | Y | N | N |
PREPARE | Y | N | N |
PREPARE TRANSACTION | Y | N | N |
REASSIGN OWNED | Y | Y | DDL |
REFRESH MATERIALIZED VIEW | Y | N | N |
REINDEX | Y | N | N |
RELEASE | Y | N | N |
RESET | Y | N | N |
REVOKE | Y | Details | DDL |
REVOKE ROLE | Y | Y | DDL |
ROLLBACK | Y | N | N |
ROLLBACK PREPARED | Y | N | N |
SAVEPOINT | Y | N | N |
SECURITY LABEL | Y | Details | DDL |
SELECT INTO | Details | Y | DDL |
SET | Y | N | N |
SET CONSTRAINTS | Y | N | N |
SHOW | Y | N | N |
START TRANSACTION | Y | N | N |
TRUNCATE TABLE | Y | Details | Details |
UNLISTEN | Y | N | N |
VACUUM | Y | N | N |
ALTER SEQUENCE
Generally `ALTER SEQUENCE` is supported, but when using global sequences, some options have no effect.

- `ALTER SEQUENCE ... RENAME` isn't supported on galloc sequences (other sequence kinds aren't affected).
- `ALTER SEQUENCE ... SET SCHEMA` isn't supported on galloc sequences (other sequence kinds aren't affected).
ALTER TABLE
Generally, `ALTER TABLE` commands are allowed. However, several subcommands aren't supported.
ALTER TABLE disallowed commands
Some variants of `ALTER TABLE` currently aren't allowed on a PGD node:

- `ADD COLUMN ... DEFAULT (non-immutable expression)` — This isn't allowed because it currently results in different data on different nodes. See Adding a column for a suggested workaround.
- `ADD CONSTRAINT ... EXCLUDE` — Exclusion constraints aren't supported for now. Exclusion constraints don't make much sense in an asynchronous system and lead to changes that can't be replayed.
- `ALTER TABLE ... SET WITH[OUT] OIDS` — Isn't supported for the same reasons as in `CREATE TABLE`.
- `ALTER COLUMN ... SET STORAGE external` — Is rejected if the column is one of the columns of the replica identity for the table.
- `RENAME` — Can't rename an Autopartitioned table.
- `SET SCHEMA` — Can't set the schema of an Autopartitioned table.
- `ALTER COLUMN ... TYPE` — Changing a column's type isn't supported if the command causes the whole table to be rewritten, which occurs when the change isn't binary coercible. Binary coercible changes might be allowed only one way. For example, the change from `VARCHAR(128)` to `VARCHAR(256)` is binary coercible and therefore allowed, whereas the change from `VARCHAR(256)` to `VARCHAR(128)` isn't binary coercible and therefore normally disallowed. Nonreplicated `ALTER COLUMN ... TYPE` can be allowed if the column is automatically castable to the new type (it doesn't contain the `USING` clause). An example follows. Table rewrites hold an AccessExclusiveLock for extended periods on larger tables, so such commands are likely to be infeasible on highly available databases in any case. See Changing a column's type for a suggested workaround.
- `ALTER TABLE ... ADD FOREIGN KEY` — Isn't supported if the current user doesn't have permission to read the referenced table or if the referenced table has RLS restrictions enabled that the current user can't bypass.
The following example fails because it tries to add a constant value of type `timestamp` onto a column of type `timestamptz`. The cast between `timestamp` and `timestamptz` relies on the time zone of the session and so isn't immutable.
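A sketch of such a failing statement (the table and column names are hypothetical):

```sql
-- Fails: the timestamp -> timestamptz cast depends on the session
-- time zone, so the default expression isn't immutable.
ALTER TABLE foo
    ADD COLUMN expiry_date timestamptz
    DEFAULT timestamp '2100-01-01 00:00:00' NOT NULL;
```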
Starting in PGD 3.7.4, you can add certain types of constraints, such as `CHECK` and `FOREIGN KEY` constraints, without taking a DML lock. But this requires a two-step process of first creating a `NOT VALID` constraint and then validating the constraint in a separate transaction with the `ALTER TABLE ... VALIDATE CONSTRAINT` command. See Adding a CONSTRAINT for more details.
ALTER TABLE locking
The following variants of `ALTER TABLE` take only a DDL lock and not a DML lock:

- `ALTER TABLE ... ADD COLUMN ... (immutable) DEFAULT`
- `ALTER TABLE ... ALTER COLUMN ... SET DEFAULT expression`
- `ALTER TABLE ... ALTER COLUMN ... DROP DEFAULT`
- `ALTER TABLE ... ALTER COLUMN ... TYPE` if it doesn't require rewrite
- `ALTER TABLE ... ALTER COLUMN ... SET STATISTICS`
- `ALTER TABLE ... VALIDATE CONSTRAINT`
- `ALTER TABLE ... ATTACH PARTITION`
- `ALTER TABLE ... DETACH PARTITION`
- `ALTER TABLE ... ENABLE TRIGGER` (`ENABLE REPLICA TRIGGER` still takes a DML lock)
- `ALTER TABLE ... CLUSTER ON`
- `ALTER TABLE ... SET WITHOUT CLUSTER`
- `ALTER TABLE ... SET ( storage_parameter = value [, ... ] )`
- `ALTER TABLE ... RESET ( storage_parameter [, ... ] )`
- `ALTER TABLE ... OWNER TO`
All other variants of `ALTER TABLE` take a DML lock on the table being modified. Some variants of `ALTER TABLE` have restrictions, noted below.
ALTER TABLE examples
This next example works because the type change is binary coercible and so doesn't cause a table rewrite. It executes as a catalog-only change.
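For instance (assuming a hypothetical table `foo` with a `description VARCHAR(20)` column):

```sql
-- Widening the column is binary coercible:
-- a catalog-only change with no table rewrite.
ALTER TABLE foo ALTER COLUMN description TYPE VARCHAR(128);
```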
However, making this change in reverse isn't possible because the change from `VARCHAR(128)` to `VARCHAR(20)` isn't binary coercible. For workarounds, see Restricted DDL workarounds.
It's useful to provide context for the different types of `ALTER TABLE ... ALTER COLUMN TYPE` (ATCT) operations that are possible in general and in nonreplicated environments.
Some ATCT operations update only the metadata of the underlying column type and don't require a rewrite of the underlying table data. This is typically the case when the existing column type and the target type are binary coercible. For example:
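A sketch with a hypothetical table `sample` (its `col2` and `col3` columns match those discussed below):

```sql
CREATE TABLE sample (
    col1 BIGINT PRIMARY KEY,
    col2 VARCHAR(128),
    col3 INTEGER
);

-- Widening col2 is binary coercible: a metadata-only
-- change with no rewrite of the table data.
ALTER TABLE sample ALTER COLUMN col2 TYPE VARCHAR(256);
```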
You can also change the column type to `VARCHAR` or `TEXT` data types because of binary coercibility. Again, this is just a metadata update of the underlying column type.
However, if you want to reduce the size of `col2`, that leads to a rewrite of the underlying table data. Rewriting a table is normally restricted.
To give an example with nontext types, consider `col3` above with type `INTEGER`. An ATCT operation that tries to convert it to `SMALLINT` or `BIGINT` fails in a similar manner as above.
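Sketches of both failing cases (the exact error text varies by PGD version):

```sql
-- Narrowing col2 isn't binary coercible and would rewrite the
-- table, so PGD rejects it while DDL replication is on.
ALTER TABLE sample ALTER COLUMN col2 TYPE VARCHAR(64);

-- Likewise, INTEGER -> BIGINT requires a table rewrite and fails.
ALTER TABLE sample ALTER COLUMN col3 TYPE BIGINT;
```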
In both of these failing cases, there's an automatic assignment cast from the current types to the target types. However, there's no binary coercibility, which ends up causing a rewrite of the underlying table data.
In such cases, in controlled DBA environments, you can change the type of a column to an automatically castable one by adopting a rolling upgrade for the type of this column in a nonreplicated environment on all the nodes, one by one. Suppose the DDL isn't replicated and the change of the column type is to an automatically castable one. You can then allow the rewrite locally on the node performing the alter, along with concurrent activity on other nodes on this same table. You can then repeat this nonreplicated ATCT operation on all the nodes one by one to bring about the desired change of the column type across the entire EDB Postgres Distributed cluster. Because this involves a rewrite, the activity still takes the DML lock for a brief period and thus requires that the whole cluster is available. With these specifics in place, you can carry out the rolling upgrade of the nonreplicated alter activity like this:
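On each node in turn, the nonreplicated step might look like this sketch:

```sql
-- Locally disable DDL replication, perform the rewriting ALTER,
-- then restore the setting. Repeat on every node, one by one.
SET bdr.ddl_replication = off;
ALTER TABLE sample ALTER COLUMN col2 TYPE VARCHAR(64);
RESET bdr.ddl_replication;
```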
Because automatic assignment casts are available for many data types, this local nonreplicated ATCT operation supports a wide variety of conversions. However, ATCT operations that use a `USING` clause are likely to fail because of the lack of automatic assignment casts.
This example shows a few common conversions with automatic assignment casts:
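A few illustrative conversions that rely on automatic assignment casts, run with DDL replication disabled as above (exact behavior depends on your data):

```sql
-- Numeric widening uses an automatic assignment cast.
ALTER TABLE sample ALTER COLUMN col3 TYPE NUMERIC(10,0);

-- Conversion from a numeric type to a text type also works.
ALTER TABLE sample ALTER COLUMN col3 TYPE TEXT;
```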
This example isn't an exhaustive list of possibly allowable ATCT operations in a nonreplicated environment. Not all ATCT operations work. The cases where no automatic assignment is possible fail even if you disable DDL replication. So, while conversion from numeric types to text types works in a nonreplicated environment, conversion back from text type to numeric types fails.
While the ATCT operations in nonreplicated environments support a variety of type conversions, the rewrite can still fail if the underlying table data contains values that you can't assign to the new data type. For example, suppose the current type for a column is `VARCHAR(256)` and you try a nonreplicated ATCT operation to convert it into `VARCHAR(128)`. If there's any existing data in the table that's wider than 128 bytes, then the rewrite operation fails locally.
If underlying table data meets the characteristics of the new type, then the rewrite succeeds. However, replication might fail if other nodes that haven't yet performed the nonreplicated rolling data type upgrade introduce new data that is wider than 128 bytes concurrently to this local ATCT operation. This brings replication to a halt in the cluster. So be aware of the data type restrictions and characteristics at the database and application levels while performing these nonreplicated rolling data type upgrade operations. We strongly recommend that you perform and test such ATCT operations in controlled and fully aware DBA environments. These ATCT operations are asymmetric, and backing out certain changes that fail can lead to table rewrites that take a long time.
Also, you can't perform the implicit castable ALTER activity in transaction blocks.
ALTER TYPE
`ALTER TYPE` is replicated, but a global DML lock isn't applied to all tables that use that data type, since PostgreSQL doesn't record those dependencies. See Restricted DDL workarounds.
COMMENT ON
All variants of `COMMENT ON` are allowed, but `COMMENT ON TABLESPACE/DATABASE/LARGE OBJECT` isn't replicated.
CREATE PROFILE or ALTER PROFILE
The `PASSWORD_VERIFY_FUNCTION` associated with the profile should be `IMMUTABLE` if the function is `SECURITY DEFINER`. Such a `CREATE PROFILE` or `ALTER PROFILE` command is replicated, but subsequent `CREATE USER` or `ALTER USER` commands using this profile break replication due to the writer worker throwing the error `cannot change current role within security-restricted operation`.
CREATE SEQUENCE
Generally `CREATE SEQUENCE` is supported, but when using global sequences, some options have no effect.
CREATE TABLE
Generally `CREATE TABLE` is supported, but `CREATE TABLE WITH OIDS` isn't allowed on a PGD node.
CREATE TABLE AS and SELECT INTO
`CREATE TABLE AS` and `SELECT INTO` are allowed only if all subcommands are also allowed.
EXPLAIN
Generally `EXPLAIN` is allowed, but because `EXPLAIN ANALYZE` can have side effects on the database, there are some restrictions on it.
EXPLAIN ANALYZE Replication
`EXPLAIN ANALYZE` follows the replication rules of the analyzed statement.
EXPLAIN ANALYZE Locking
`EXPLAIN ANALYZE` follows the locking rules of the analyzed statement.
GRANT and REVOKE
Generally `GRANT` and `REVOKE` statements are supported. However, `GRANT/REVOKE ON TABLESPACE/LARGE OBJECT` isn't replicated.
LOCK TABLE
`LOCK TABLE` isn't replicated, but it might acquire the global DML lock when `bdr.lock_table_locking` is set to `on`. You can also use the `bdr.global_lock_table()` function to explicitly request a global DML lock.
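For example, a sketch (the table name is hypothetical, and the relation-argument form is an assumption):

```sql
-- Explicitly take the global DML lock on one table.
SELECT bdr.global_lock_table('public.orders'::regclass);
```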
SECURITY LABEL
All variants of `SECURITY LABEL` are allowed, but `SECURITY LABEL ON TABLESPACE/DATABASE/LARGE OBJECT` isn't replicated.
TRUNCATE Replication
The `TRUNCATE` command is replicated as DML, not as a DDL statement. Whether the `TRUNCATE` on a table is replicated depends on the replication settings for each affected table.
TRUNCATE Locking
Even though `TRUNCATE` isn't replicated the same way as other DDL, it can acquire the global DML lock when `bdr.truncate_locking` is set to `on`.
Role manipulation statements
Users are global objects in a PostgreSQL instance, which means they span multiple databases while PGD operates on an individual database level. This means that role manipulation statement handling needs extra thought.
PGD requires that any roles that are referenced by any replicated DDL must exist on all nodes. The roles don't have to have the same grants, password, and so on, but they must exist.
PGD replicates role manipulation statements if bdr.role_replication is enabled (the default) and the role manipulation statements are run in a PGD-enabled database.
The role manipulation statements include the following:
- CREATE ROLE
- ALTER ROLE
- DROP ROLE
- GRANT ROLE
- CREATE USER
- ALTER USER
- DROP USER
- CREATE GROUP
- ALTER GROUP
- DROP GROUP
In general, do one of the following:
- Configure the system with bdr.role_replication = off and deploy all role changes (user and group) by external orchestration tools like Ansible, Puppet, and Chef, or replicate them explicitly with bdr.replicate_ddl_command(...).
- Configure the system so that exactly one PGD-enabled database on the PostgreSQL instance has bdr.role_replication = on, and run all role management DDL on that database.
We recommend that you run all role management commands in one database.
If role replication is turned off, then the administrator must ensure that any roles used by DDL on one node also exist on the other nodes. Otherwise PGD apply stalls with an error until the role is created on the other nodes.
PGD doesn't capture and replicate role management statements when they run on a non-PGD-enabled database in a PGD-enabled PostgreSQL instance. For example, if you have DBs 'bdrdb' (bdr group member) and 'postgres' (bare db), and bdr.role_replication = on, then a CREATE USER run in bdrdb is replicated, but a CREATE USER run in postgres isn't.
Restricted DDL workarounds
You can work around some of the limitations of PGD DDL operation handling. Splitting an operation into smaller changes can often produce the desired result when the operation either isn't allowed as a single statement or requires excessive locking.
Adding a CONSTRAINT
You can add CHECK and FOREIGN KEY constraints without requiring a DML lock. This involves a two-step process:
- ALTER TABLE ... ADD CONSTRAINT ... NOT VALID
- ALTER TABLE ... VALIDATE CONSTRAINT
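For instance, the two steps for a foreign key might look like this (table and constraint names are illustrative):

```sql
-- Transaction 1: add the constraint without validating existing rows
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

-- Transaction 2 (run separately): validate the existing rows
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;
```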
Execute these steps in two different transactions. Both steps take a DDL lock only on the table, so they can run even when one or more nodes are down. But to validate a constraint, PGD must ensure that:
- All nodes in the cluster see the ADD CONSTRAINT command.
- The node validating the constraint has applied replication changes from all other nodes prior to creating the NOT VALID constraint on those nodes.
So even though this mechanism doesn't need all nodes to be up while validating the constraint, it still requires that all nodes applied the ALTER TABLE ... ADD CONSTRAINT ... NOT VALID command and made enough progress. PGD waits for a consistent state to be reached before validating the constraint.
The new facility requires the cluster to run Raft protocol version 24 or later. If the Raft protocol isn't yet upgraded, the old mechanism is used, resulting in a DML lock request.
Adding a column
To add a column with a volatile default, run these commands in separate transactions:
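A sketch of what those commands might look like, assuming a hypothetical table mytable and a timestamp column whose default is volatile:

```sql
-- Transaction 1: add the column with no default (no table rewrite)
ALTER TABLE mytable ADD COLUMN newcolumn timestamptz;

-- Transaction 2: set the volatile default for future rows
ALTER TABLE mytable ALTER COLUMN newcolumn SET DEFAULT clock_timestamp();

-- Transaction 3: backfill existing rows
UPDATE mytable SET newcolumn = clock_timestamp();
```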
This approach splits schema changes and row changes into separate transactions that PGD can execute and results in consistent data across all nodes in a PGD group.
For best results, batch the update into chunks so that you don't update more than a few tens or hundreds of thousands of rows at once. You can do this using a PROCEDURE with embedded transactions.
The last batch of changes must run in a transaction that takes a global DML lock on the table. Otherwise you can miss rows that are inserted concurrently into the table on other nodes.
If required, you can run ALTER TABLE mytable ALTER COLUMN newcolumn SET NOT NULL; after the UPDATE has finished.
Changing a column's type
Changing a column's type can cause PostgreSQL to rewrite a table. In some cases, though, you can avoid the rewrite. For example, when restricting a column's allowed values (such as shortening a VARCHAR), you can express the restriction as a table constraint rather than a datatype change, which avoids the table rewrite. The constraint can then be validated in a subsequent command to avoid long locks, if you want.
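For instance, with illustrative names:

```sql
-- A type change like this rewrites the whole table:
--   ALTER TABLE mytable ALTER COLUMN description TYPE varchar(20);

-- Instead, express the restriction as a constraint, then validate it separately:
ALTER TABLE mytable
  ADD CONSTRAINT description_length_check
  CHECK (length(description) <= 20) NOT VALID;

ALTER TABLE mytable VALIDATE CONSTRAINT description_length_check;
```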
If the validation fails, then you can UPDATE
just the failing rows.
You can use this technique for TEXT and VARCHAR using length(), or with the NUMERIC datatype using scale().
In the general case for changing column type, first add a column of the desired type:
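For example, if a hypothetical oldcolumn of type text needs to become bigint:

```sql
ALTER TABLE mytable ADD COLUMN newcolumn bigint;
```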
Create a trigger defined as BEFORE INSERT OR UPDATE ON mytable FOR EACH ROW .., which assigns NEW.oldcolumn to NEW.newcolumn so that new writes to the table update the new column automatically.
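A sketch of such a trigger, using illustrative names and assuming the old value converts with a simple cast:

```sql
-- Keep newcolumn in sync with writes to oldcolumn
CREATE FUNCTION mytable_sync_newcolumn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  NEW.newcolumn := NEW.oldcolumn::bigint;
  RETURN NEW;
END;
$$;

CREATE TRIGGER sync_newcolumn
  BEFORE INSERT OR UPDATE ON mytable
  FOR EACH ROW EXECUTE FUNCTION mytable_sync_newcolumn();
```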
UPDATE the table in batches to copy the value of oldcolumn to newcolumn using a PROCEDURE with embedded transactions. Batching the work helps reduce replication lag if it's a big table. Updating by range of IDs or whatever method you prefer is fine. Alternatively, you can update the whole table in one pass for smaller tables.
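One way to sketch such a procedure, assuming an id primary key and illustrative names. The COMMIT inside the loop creates the embedded transactions, so each batch replicates separately:

```sql
CREATE PROCEDURE copy_oldcolumn_in_batches(batch_size int DEFAULT 10000)
LANGUAGE plpgsql AS $$
DECLARE
  rows_done bigint;
BEGIN
  LOOP
    UPDATE mytable
       SET newcolumn = oldcolumn::bigint
     WHERE id IN (SELECT id FROM mytable
                   WHERE newcolumn IS NULL
                   LIMIT batch_size);
    GET DIAGNOSTICS rows_done = ROW_COUNT;
    COMMIT;  -- end the embedded transaction, limiting replication lag
    EXIT WHEN rows_done = 0;
  END LOOP;
END;
$$;

CALL copy_oldcolumn_in_batches();
```

Note that CALL must run outside an explicit transaction block for the embedded COMMIT to work.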
CREATE INDEX any required indexes on the new column. It's safe to run CREATE INDEX ... CONCURRENTLY individually without DDL replication on each node to reduce lock durations.
ALTER the column to add NOT NULL and CHECK constraints, if required.
Finally, in a single transaction:
- BEGIN a transaction.
- DROP the trigger you added.
- ALTER TABLE to add any DEFAULT required on the column.
- DROP the old column.
- ALTER TABLE mytable RENAME COLUMN newcolumn TO oldcolumn.
- COMMIT.
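That final transaction might look like this (names are illustrative, and the DEFAULT is only needed if the column requires one):

```sql
BEGIN;
DROP TRIGGER sync_newcolumn ON mytable;
ALTER TABLE mytable ALTER COLUMN newcolumn SET DEFAULT 0;  -- if required
ALTER TABLE mytable DROP COLUMN oldcolumn;
ALTER TABLE mytable RENAME COLUMN newcolumn TO oldcolumn;
COMMIT;
```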
Because you're dropping a column, you might have to re-create views, procedures,
and so on that depend on the table. Be careful if you CASCADE
drop the column,
as you must be sure you re-create everything that referred to it.
Changing other types
The ALTER TYPE statement is replicated, but affected tables aren't locked. When you use this DDL, ensure that the statement has successfully executed on all nodes before using the new type. You can achieve this using the bdr.wait_slot_confirm_lsn() function.
This example ensures that the DDL is written to all nodes before using the new value in DML statements:
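A sketch of that pattern, assuming a hypothetical enum type and table:

```sql
-- Replicated, but doesn't lock tables that use the type
ALTER TYPE my_status ADD VALUE 'archived';

-- Wait until all nodes confirm they've applied the DDL
SELECT bdr.wait_slot_confirm_lsn(NULL, NULL);

-- Now the new value is safe to use in DML on any node
UPDATE tickets SET status = 'archived'
 WHERE closed_at < now() - interval '1 year';
```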
PGD functions that behave like DDL
The following PGD management functions act like DDL. This means that, if DDL replication is active and DDL filter settings allow it, they attempt to take global locks, and their actions are replicated. For detailed information, see the documentation for the individual functions.
Replication set management
- bdr.create_replication_set
- bdr.alter_replication_set
- bdr.drop_replication_set
- bdr.replication_set_add_table
- bdr.replication_set_remove_table
- bdr.replication_set_add_ddl_filter
- bdr.replication_set_remove_ddl_filter
Conflict management
- bdr.alter_table_conflict_detection
- bdr.column_timestamps_enable
- bdr.column_timestamps_disable
Sequence management
- bdr.alter_sequence_set_kind
Stream triggers
- bdr.create_conflict_trigger
- bdr.create_transform_trigger
- bdr.drop_trigger