User-visible catalogs and views v5
Catalogs and views are presented here in alphabetical order.
bdr.camo_decision_journal
A persistent journal of decisions resolved by a CAMO partner node after a failover, in case bdr.logical_transaction_status was invoked. Unlike bdr.node_pre_commit, this doesn't cover transactions processed under normal operational conditions (i.e., both nodes of a CAMO pair are running and connected). Entries in this journal aren't ever cleaned up automatically. This is a diagnostic tool that the system doesn't depend on.
bdr.camo_decision_journal columns
Name | Type | Description |
---|---|---|
origin_node_id | oid | OID of the node where the transaction executed |
origin_xid | oid | Transaction ID on the remote origin node |
decision | char | 'c' for commit, 'a' for abort |
decision_ts | timestamptz | Decision time |
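As a diagnostic sketch (not part of the catalog definition), recent decisions can be listed with the origin node resolved to its name. The join on bdr.node_summary.node_id = origin_node_id is an assumption based on the column descriptions in this section:

```sql
-- Most recent CAMO failover decisions, newest first,
-- with the origin node resolved to its name.
SELECT ns.node_name,
       dj.origin_xid,
       dj.decision,
       dj.decision_ts
FROM bdr.camo_decision_journal dj
LEFT JOIN bdr.node_summary ns ON ns.node_id = dj.origin_node_id
ORDER BY dj.decision_ts DESC
LIMIT 20;
```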
bdr.commit_scopes
Catalog storing all possible commit scopes that you can use with the bdr.commit_scope setting to enable Group Commit.
bdr.commit_scopes columns
Name | Type | Description |
---|---|---|
commit_scope_id | oid | ID of the scope to be referenced |
commit_scope_name | name | Name of the scope to be referenced |
commit_scope_origin_node_group | oid | Node group for which the rule applies, referenced by ID |
sync_scope_rule | text | Definition of the scope |
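For example, the defined commit scopes can be listed per node group. The join of commit_scope_origin_node_group to bdr.node_group.node_group_id (described later in this section) is an assumption based on the column descriptions; this is a sketch, not a prescribed query:

```sql
-- Commit scopes and the node groups they apply to.
SELECT cs.commit_scope_name,
       ng.node_group_name,
       cs.sync_scope_rule AS scope_definition
FROM bdr.commit_scopes cs
LEFT JOIN bdr.node_group ng
       ON ng.node_group_id = cs.commit_scope_origin_node_group
ORDER BY cs.commit_scope_name;
```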
bdr.conflict_history
This is the default table where conflicts are logged. The table is RANGE partitioned on the column local_time and is managed by Autopartition. The default data retention period is 30 days.
Any table owner can access this table and see all conflicts for the tables they own; row-level security restricts access accordingly.
For details, see Logging conflicts to a table.
bdr.conflict_history columns
Name | Type | Description |
---|---|---|
sub_id | oid | Which subscription produced this conflict; can be joined to bdr.subscription table |
local_xid | xid | Local transaction of the replication process at the time of conflict |
local_lsn | pg_lsn | Local LSN at the time of conflict |
local_time | timestamp with time zone | Local time of the conflict |
remote_xid | xid | Transaction that produced the conflicting change on the remote node (an origin) |
remote_commit_lsn | pg_lsn | Commit LSN of the transaction which produced the conflicting change on the remote node (an origin) |
remote_commit_time | timestamp with time zone | Commit timestamp of the transaction that produced the conflicting change on the remote node (an origin) |
conflict_type | text | Detected type of the conflict |
conflict_resolution | text | Conflict resolution chosen |
conflict_index | regclass | Conflicting index (valid only if the index wasn't dropped since) |
reloid | oid | Conflicting relation (valid only if the relation wasn't dropped since) |
nspname | text | Name of the schema for the relation on which the conflict occurred at the time of conflict (doesn't follow renames) |
relname | text | Name of the relation on which the conflict occurred at the time of conflict (doesn't follow renames) |
key_tuple | json | Json representation of the key used for matching the row |
remote_tuple | json | Json representation of an incoming conflicting row |
local_tuple | json | Json representation of the local conflicting row |
apply_tuple | json | Json representation of the resulting (the one that has been applied) row |
local_tuple_xmin | xid | Transaction that produced the local conflicting row (if local_tuple is set and the row isn't frozen) |
local_tuple_node_id | oid | Node that produced the local conflicting row (if local_tuple is set and the row isn't frozen) |
local_tuple_commit_time | timestamp with time zone | Last known change timestamp of the local conflicting row (if local_tuple is set and the row isn't frozen) |
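As a sketch, recent conflicts can be summarized per table and conflict type using only the columns listed above:

```sql
-- Conflicts recorded in the last 24 hours, grouped by table and type.
SELECT nspname,
       relname,
       conflict_type,
       conflict_resolution,
       count(*) AS conflicts
FROM bdr.conflict_history
WHERE local_time > now() - interval '24 hours'
GROUP BY nspname, relname, conflict_type, conflict_resolution
ORDER BY conflicts DESC;
```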
bdr.conflict_history_summary
A view containing user-readable details of row conflicts.
bdr.conflict_history_summary columns
Name | Type | Description |
---|---|---|
nspname | text | Name of the schema |
relname | text | Name of the table |
local_time | timestamp with time zone | Local time of the conflict |
local_tuple_commit_time | timestamp with time zone | Time of local commit |
remote_commit_time | timestamp with time zone | Time of remote commit |
conflict_type | text | Type of conflict |
conflict_resolution | text | Resolution adopted |
bdr.consensus_kv_data
Persistent storage for the internal Raft-based KV store used by the bdr.consensus_kv_store() and bdr.consensus_kv_fetch() interfaces.
bdr.consensus_kv_data columns
Name | Type | Description |
---|---|---|
kv_key | text | Unique key |
kv_val | json | Arbitrary value in json format |
kv_create_ts | timestamptz | Last write timestamp |
kv_ttl | int | Time to live for the value in milliseconds |
kv_expire_ts | timestamptz | Expiration timestamp (kv_create_ts + kv_ttl) |
bdr.crdt_handlers
This table lists the merge functions ("handlers") for all CRDT data types.
bdr.crdt_handlers columns
Name | Type | Description |
---|---|---|
crdt_type_id | regtype | CRDT data type ID |
crdt_merge_id | regproc | Merge function for this data type |
bdr.ddl_replication
This view lists DDL replication configuration as set up by current DDL filters.
bdr.ddl_replication columns
Name | Type | Description |
---|---|---|
set_ddl_name | name | Name of DDL filter |
set_ddl_tag | text | The command tags it applies to (regular expression) |
set_ddl_role | text | The roles it applies to (regular expression) |
set_name | name | Name of the replication set for which this filter is defined |
bdr.depend
This table tracks internal object dependencies inside PGD catalogs.
bdr.global_consensus_journal
This catalog table logs all the Raft messages that were sent while managing global consensus.
As with the bdr.global_consensus_response_journal catalog, the payload is stored in a binary-encoded format, which can be decoded with the bdr.decode_message_payload() function. See the bdr.global_consensus_journal_details view for more details.
bdr.global_consensus_journal columns
Name | Type | Description |
---|---|---|
log_index | int8 | ID of the journal entry |
term | int8 | Raft term |
origin | oid | ID of node where the request originated |
req_id | int8 | ID for the request |
req_payload | bytea | Payload for the request |
trace_context | bytea | Trace context for the request |
bdr.global_consensus_journal_details
This view presents Raft messages that were sent and the corresponding responses, using the bdr.decode_message_payload() function to decode their payloads.
bdr.global_consensus_journal_details columns
Name | Type | Description |
---|---|---|
node_group_name | name | Name of the node group |
log_index | int8 | ID of the journal entry |
term | int8 | Raft term |
request_id | int8 | ID of the request |
origin_id | oid | ID of the node where the request originated |
req_payload | bytea | Payload of the request |
origin_node_name | name | Name of the node where the request originated |
message_type_no | oid | ID of the PGD message type for the request |
message_type | text | Name of the PGD message type for the request |
message_payload | text | PGD message payload for the request |
response_message_type_no | oid | ID of the PGD message type for the response |
response_message_type | text | Name of the PGD message type for the response |
response_payload | text | PGD message payload for the response |
response_errcode_no | text | SQLSTATE for the response |
response_errcode | text | Error code for the response |
response_message | text | Error message for the response |
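A minimal diagnostic query over this view, using only the columns listed above, might show the latest decoded messages and their response status:

```sql
-- Latest Raft messages with decoded payload types and response status.
SELECT log_index,
       origin_node_name,
       message_type,
       response_message_type,
       response_errcode
FROM bdr.global_consensus_journal_details
ORDER BY log_index DESC
LIMIT 50;
```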
bdr.global_consensus_response_journal
This catalog table collects all the responses to the Raft messages that were received while managing global consensus.
As with the bdr.global_consensus_journal catalog, the payload is stored in a binary-encoded format, which can be decoded with the bdr.decode_message_payload() function. See the bdr.global_consensus_journal_details view for more details.
bdr.global_consensus_response_journal columns
Name | Type | Description |
---|---|---|
log_index | int8 | ID of the journal entry |
res_status | oid | Status code for the response |
res_payload | bytea | Payload for the response |
trace_context | bytea | Trace context for the response |
bdr.global_lock
This catalog table stores the information needed for recovering the global lock state on server restart.
For monitoring purposes, the bdr.global_locks view is preferable, because the visible rows in bdr.global_lock don't necessarily reflect all global locking activity.
Don't modify the contents of this table. It's an important PGD catalog.
bdr.global_lock columns
Name | Type | Description |
---|---|---|
ddl_epoch | int8 | DDL epoch for the lock |
origin_node_id | oid | OID of the node where the global lock has originated |
lock_type | oid | Type of the lock (DDL or DML) |
nspname | name | Schema name for the locked relation |
relname | name | Relation name for the locked relation |
groupid | oid | OID of the top-level group (for advisory locks) |
key1 | integer | First 32-bit key or lower order 32-bits of 64-bit key (for advisory locks) |
key2 | integer | Second 32-bit key or higher order 32-bits of 64-bit key (for advisory locks) |
key_is_bigint | boolean | True if 64-bit integer key is used (for advisory locks) |
bdr.global_locks
A view containing active global locks on this node. The bdr.global_locks view exposes PGD's shared-memory lock state tracking, giving administrators greater insight into PGD's global locking activity and progress.
See Monitoring global locks for more information about global locking.
bdr.global_locks columns
Name | Type | Description |
---|---|---|
origin_node_id | oid | The OID of the node where the global lock has originated |
origin_node_name | name | Name of the node where the global lock has originated |
lock_type | text | Type of the lock (DDL or DML) |
relation | text | Locked relation name (for DML locks) or keys (for advisory locks) |
pid | int4 | PID of the process holding the lock |
acquire_stage | text | Internal state of the lock acquisition process |
waiters | int4 | List of backends waiting for the same global lock |
global_lock_request_time | timestamptz | Time this global lock acquire was initiated by origin node |
local_lock_request_time | timestamptz | Time the local node started trying to acquire the local lock |
last_state_change_time | timestamptz | Time acquire_stage last changed |
Column details:
- relation: For DML locks, relation shows the relation on which the DML lock is acquired. For global advisory locks, relation shows the two 32-bit integers or one 64-bit integer on which the lock is acquired.
- origin_node_id and origin_node_name: If these are the same as the local node's ID and name, then the local node is the initiator of the global DDL lock, i.e., it's the node running the acquiring transaction. If these fields specify a different node, then the local node is instead trying to acquire its local DDL lock to satisfy a global DDL lock request from a remote node.
- pid: The process ID of the process that requested the global DDL lock, if the local node is the requesting node. Null on other nodes. Query the origin node to determine the locker pid.
- global_lock_request_time: The timestamp at which the global-lock request initiator started the process of acquiring a global lock. Can be null if unknown on the current node. This time is stamped at the beginning of the DDL lock request and includes the time taken for DDL epoch management and any required flushes of pending-replication queues. Currently known only on the origin node.
- local_lock_request_time: The timestamp at which the local node started trying to acquire the local lock for this global lock. This includes the time taken for the heavyweight session lock acquire but doesn't include any time taken on DDL epochs or queue flushing. If the lock is reacquired after a local node restart, it becomes the node restart time.
- last_state_change_time: The timestamp at which the bdr.global_locks.acquire_stage field last changed for this global lock entry.
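For monitoring, a query along these lines (a sketch using only the columns listed above) shows any global locks currently in progress and how long each has been in its current stage:

```sql
-- Active global locks and time spent in the current acquire stage.
SELECT origin_node_name,
       lock_type,
       relation,
       acquire_stage,
       now() - last_state_change_time AS time_in_stage,
       waiters
FROM bdr.global_locks
ORDER BY last_state_change_time;
```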
bdr.group_camo_details
Uses bdr.run_on_all_nodes to gather CAMO-related information from all nodes.
bdr.group_camo_details columns
Name | Type | Description |
---|---|---|
node_id | text | Internal node ID |
node_name | text | Name of the node |
camo_partner | text | Node name of the camo partner |
is_camo_partner_connected | text | Connection status |
is_camo_partner_ready | text | Readiness status |
camo_transactions_resolved | text | Are there any pending and unresolved CAMO transactions |
apply_lsn | text | Latest position reported as replayed (visible) |
receive_lsn | text | Latest LSN of any change or message received (can go backwards in case of restarts) |
apply_queue_size | text | Bytes difference between apply_lsn and receive_lsn |
bdr.group_raft_details
Uses bdr.run_on_all_nodes to gather Raft Consensus status from all nodes.
bdr.group_raft_details columns
Name | Type | Description |
---|---|---|
node_id | oid | Internal node ID |
node_name | name | Name of the node |
node_group_name | name | Name of the group the node is part of |
state | text | Raft worker state on the node |
leader_id | oid | Node id of the RAFT_LEADER |
current_term | int | Raft election internal ID |
commit_index | int | Raft snapshot internal ID |
nodes | int | Number of nodes accessible |
voting_nodes | int | Number of nodes voting |
protocol_version | int | Protocol version for this node |
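A quick health check is to confirm that every node reports the same Raft leader and term; for example:

```sql
-- Raft status per node; all rows should agree on leader_id and current_term.
SELECT node_name,
       state,
       leader_id,
       current_term,
       voting_nodes
FROM bdr.group_raft_details
ORDER BY node_name;
```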
bdr.group_replslots_details
Uses bdr.run_on_all_nodes to gather PGD slot information from all nodes.
bdr.group_replslots_details columns
Name | Type | Description |
---|---|---|
node_group_name | text | Name of the PGD group |
origin_name | text | Name of the origin node |
target_name | text | Name of the target node |
slot_name | text | Slot name on the origin node used by this subscription |
active | text | Is the slot active (does it have a connection attached to it) |
state | text | State of the replication (catchup, streaming, ...) or 'disconnected' if offline |
write_lag | interval | Approximate lag time for reported write |
flush_lag | interval | Approximate lag time for reported flush |
replay_lag | interval | Approximate lag time for reported replay |
sent_lag_bytes | int8 | Bytes difference between sent_lsn and current WAL write position |
write_lag_bytes | int8 | Bytes difference between write_lsn and current WAL write position |
flush_lag_bytes | int8 | Bytes difference between flush_lsn and current WAL write position |
replay_lag_bytes | int8 | Bytes difference between replay_lsn and current WAL write position |
bdr.group_subscription_summary
Uses bdr.run_on_all_nodes to gather subscription status from all nodes.
bdr.group_subscription_summary columns
Name | Type | Description |
---|---|---|
origin_node_name | text | Name of the origin of the subscription |
target_node_name | text | Name of the target of the subscription |
last_xact_replay_timestamp | text | Timestamp of the last replayed transaction |
sub_lag_seconds | text | Lag between now and last_xact_replay_timestamp |
bdr.group_versions_details
Uses bdr.run_on_all_nodes to gather PGD version information from all nodes.
bdr.group_versions_details columns
Name | Type | Description |
---|---|---|
node_id | oid | Internal node ID |
node_name | name | Name of the node |
postgres_version | text | PostgreSQL version on the node |
bdr_version | text | PGD version on the node |
bdr.local_consensus_snapshot
This catalog table contains consensus snapshots created or received by the local node.
bdr.local_consensus_snapshot columns
Name | Type | Description |
---|---|---|
log_index | int8 | ID of the journal entry |
log_term | int8 | Raft term |
snapshot | bytea | Raft snapshot data |
bdr.local_consensus_state
This catalog table stores the current state of Raft on the local node.
bdr.local_consensus_state columns
Name | Type | Description |
---|---|---|
node_id | oid | ID of the node |
current_term | int8 | Raft term |
apply_index | int8 | Raft apply index |
voted_for | oid | Vote cast by this node in this term |
last_known_leader | oid | node_id of last known Raft leader |
bdr.local_node
This table identifies the local node in the current database of the current Postgres instance.
bdr.local_node columns
Name | Type | Description |
---|---|---|
node_id | oid | ID of the node |
pub_repsets | text[] | Published replication sets |
sub_repsets | text[] | Subscribed replication sets |
bdr.local_node_summary
A view containing the same information as bdr.node_summary (plus pub_repsets and sub_repsets), but only for the local node.
bdr.local_sync_status
Information about the status of either the subscription or the table synchronization process.
bdr.local_sync_status columns
Name | Type | Description |
---|---|---|
sync_kind | char | The kind of synchronization done |
sync_subid | oid | ID of subscription doing the synchronization |
sync_nspname | name | Schema name of the synchronized table (if any) |
sync_relname | name | Name of the synchronized table (if any) |
sync_status | char | Current state of the synchronization |
sync_remote_relid | oid | ID of the synchronized table (if any) on the upstream |
sync_end_lsn | pg_lsn | Position at which the synchronization state last changed |
bdr.node
This table lists all the PGD nodes in the cluster.
bdr.node columns
Name | Type | Description |
---|---|---|
node_id | oid | ID of the node |
node_name | name | Name of the node |
node_group_id | oid | ID of the node group |
source_node_id | oid | ID of the source node |
synchronize_structure | "char" | Schema synchronization done during the join |
node_state | oid | Consistent state of the node |
target_state | oid | State that the node is trying to reach (during join or promotion) |
seq_id | int4 | Sequence identifier of the node used for generating unique sequence numbers |
dbname | name | Database name of the node |
node_dsn | char | Connection string for the node |
proto_version_ranges | int[] | Supported protocol version ranges by the node |
node_join_finished | boolean | Whether the node join has finished |
bdr.node_catchup_info
This catalog table records relevant catchup information for each node, whether it relates to the join or the part procedure.
bdr.node_catchup_info columns
Name | Type | Description |
---|---|---|
node_id | oid | ID of the node |
node_source_id | oid | ID of the node used as source for the data |
slot_name | name | Slot used for this source |
min_node_lsn | pg_lsn | Minimum LSN at which the node can switch to direct replay from a peer node |
catchup_state | oid | Status code of the catchup state |
origin_node_id | oid | ID of the node from which we want transactions |
If a node (node_id) needs missing data from a parting node (origin_node_id), it can get it by forwarding from a node that already has it (node_source_id). The records in this table persist for as long as the node (node_id) is a member of the EDB Postgres Distributed cluster.
bdr.node_catchup_info_details
A view of the bdr.node_catchup_info catalog that presents the information in a more readable way.
bdr.node_conflict_resolvers
Currently configured conflict resolution for all known conflict types.
bdr.node_conflict_resolvers columns
Name | Type | Description |
---|---|---|
conflict_type | text | Type of the conflict |
conflict_resolver | text | Resolver used for this conflict type |
bdr.node_group
This catalog table lists all the PGD node groups. See also bdr.node_group_summary for a view containing user-readable details.
bdr.node_group columns
Name | Type | Description |
---|---|---|
node_group_id | oid | ID of the node group |
node_group_name | name | Name of the node group |
node_group_default_repset | oid | Default replication set for this node group |
node_group_default_repset_ext | oid | Default "external" replication set for this node group |
node_group_parent_id | oid | ID of parent group (0 if this is a root group) |
node_group_flags | int | The group flags |
node_group_uuid | uuid | The uuid of the group |
node_group_apply_delay | interval | How long a subscriber waits before applying changes from the provider |
node_group_check_constraints | bool | Whether the apply process checks constraints when applying data |
node_group_num_writers | int | Number of writers to use for subscriptions backing this node group |
node_group_enable_wal_decoder | bool | Whether the group has enable_wal_decoder set |
node_group_streaming_mode | char | Transaction streaming setting: 'O' - off, 'F' - file, 'W' - writer, 'A' - auto, 'D' - default |
node_group_location | char | Name of the location associated with the node group |
node_group_enable_proxy_routing | char | Whether the node group allows routing from pgd-proxy |
node_group_enable_raft | bool | Whether the node group allows Raft Consensus |
bdr.node_group_replication_sets
A view showing default replication sets created for PGD groups. See also bdr.replication_sets.
bdr.node_group_replication_sets columns
Name | Type | Description |
---|---|---|
node_group_name | name | Name of the PGD group |
def_repset | name | Name of the default repset |
def_repset_ops | text[] | Actions replicated by the default repset |
def_repset_ext | name | Name of the default "external" repset (usually same as def_repset) |
def_repset_ext_ops | text[] | Actions replicated by the default "external" repset (usually same as def_repset_ops) |
bdr.node_group_summary
A view containing user-readable details about node groups. See also bdr.node_group.
bdr.node_group_summary columns
Name | Type | Description |
---|---|---|
node_group_name | name | Name of the node group |
default_repset | name | Default replication set for this node group |
parent_group_name | name | Name of parent group (NULL if this is a root group) |
node_group_type | text | Type of the node group (one of "global", "data", "shard" or "subscriber-only") |
apply_delay | interval | How long a subscriber waits before applying changes from the provider |
check_constraints | boolean | Whether the apply process checks constraints when applying data |
num_writers | integer | Number of writers to use for subscriptions backing this node group |
enable_wal_decoder | boolean | Whether the group has enable_wal_decoder set |
streaming_mode | text | Transaction streaming setting: "off", "file", "writer", "auto" or "default" |
default_commit_scope | name | Name of the node group's default commit scope |
location | name | Name of the location associated with the node group |
enable_proxy_routing | boolean | Whether the node group allows routing from pgd-proxy |
enable_raft | boolean | Whether the node group allows Raft Consensus |
route_writer_max_lag | bigint | Maximum write lag accepted |
route_reader_max_lag | bigint | Maximum read lag accepted |
route_writer_wait_flush | boolean | Switch if we need to wait for the flush |
bdr.node_local_info
A catalog table used to store per-node configuration that's specific to the local node (as opposed to the global view of per-node configuration).
bdr.node_local_info columns
Name | Type | Description |
---|---|---|
node_id | oid | The OID of the node (including the local node) |
applied_state | oid | Internal ID of the node state |
ddl_epoch | int8 | Last epoch number processed by the node |
slot_name | name | Name of the slot used to connect to that node (NULL for the local node) |
bdr.node_log_config
A catalog view containing information about conflict logging configurations.
bdr.node_log_config columns
Name | Description |
---|---|
log_name | Name of the logging configuration |
log_to_file | Whether it logs to the server log file |
log_to_table | Whether it logs to a table, and which table is the target |
log_conflict_type | Which conflict types it logs (NULL means all) |
log_conflict_res | Which conflict resolutions it logs (NULL means all) |
bdr.node_peer_progress
Catalog used to keep track of every node's progress in the replication stream.
Every node in the cluster regularly broadcasts its progress every bdr.replay_progress_frequency milliseconds to all other nodes (default is 60000 ms, i.e., 1 minute). Expect N * (N-1) rows in this relation.
You might be more interested in the bdr.node_slots view for monitoring purposes. See also Monitoring.
bdr.node_peer_progress columns
Name | Type | Description |
---|---|---|
node_id | oid | The OID of the originating node that reported this position info |
peer_node_id | oid | The OID of the node's peer (remote node) for which this position info was reported |
last_update_sent_time | timestamptz | The time at which the report was sent by the originating node |
last_update_recv_time | timestamptz | The time at which the report was received by the local server |
last_update_node_lsn | pg_lsn | LSN on the originating node at the time of the report |
peer_position | pg_lsn | Latest LSN of the node's peer seen by the originating node |
peer_replay_time | timestamptz | Latest replay time of peer seen by the reporting node |
last_update_horizon_xid | oid | Internal resolution horizon: all lower xids are known resolved on the reporting node |
last_update_horizon_lsn | pg_lsn | Internal resolution horizon: same in terms of an LSN of the reporting node |
bdr.node_replication_rates
This view contains information about outgoing replication activity from a given node.
bdr.node_replication_rates columns
Column | Type | Description |
---|---|---|
peer_node_id | oid | The OID of the node's peer (remote node) for which this info was reported |
target_name | name | Name of the target peer node |
sent_lsn | pg_lsn | Latest sent position |
replay_lsn | pg_lsn | Latest position reported as replayed (visible) |
replay_lag | interval | Approximate lag time for reported replay |
replay_lag_bytes | int8 | Bytes difference between replay_lsn and current WAL write position on origin |
replay_lag_size | text | Human-readable bytes difference between replay_lsn and current WAL write position |
apply_rate | bigint | LSNs being applied per second at the peer node |
catchup_interval | interval | Approximate time required for the peer node to catch up to all the changes that are yet to be applied |
Note
The replay_lag is reset to zero immediately after a reconnect. As a workaround, use replay_lag_bytes, replay_lag_size, or catchup_interval.
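Following that advice, a lag check based on the byte-level columns might look like this (a sketch using only the columns listed above):

```sql
-- Outgoing replication lag per peer, using byte-based columns
-- rather than replay_lag (which resets to zero on reconnect).
SELECT target_name,
       replay_lag_bytes,
       replay_lag_size,
       apply_rate,
       catchup_interval
FROM bdr.node_replication_rates
ORDER BY replay_lag_bytes DESC NULLS LAST;
```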
bdr.node_slots
This view contains information about replication slots used in the current database by PGD.
See Monitoring outgoing replication for guidance on the use and interpretation of this view's fields.
bdr.node_slots columns
Name | Type | Description |
---|---|---|
target_dbname | name | Database name on the target node |
node_group_name | name | Name of the PGD group |
node_group_id | oid | The OID of the PGD group |
origin_name | name | Name of the origin node |
target_name | name | Name of the target node |
origin_id | oid | The OID of the origin node |
target_id | oid | The OID of the target node |
local_slot_name | name | Name of the replication slot according to PGD |
slot_name | name | Name of the slot according to Postgres (same as above) |
is_group_slot | boolean | True if the slot is the node-group crash recovery slot for this node (see Group Replication Slot) |
is_decoder_slot | boolean | Whether this slot is used by the decoding worker feature |
plugin | name | Logical decoding plugin using this slot (should be pglogical_output or bdr) |
slot_type | text | Type of the slot (should be logical) |
datoid | oid | The OID of the current database |
database | name | Name of the current database |
temporary | bool | Is the slot temporary |
active | bool | Is the slot active (does it have a connection attached to it) |
active_pid | int4 | The PID of the process attached to the slot |
xmin | xid | The XID needed by the slot |
catalog_xmin | xid | The catalog XID needed by the slot |
restart_lsn | pg_lsn | LSN at which the slot can restart decoding |
confirmed_flush_lsn | pg_lsn | Latest confirmed replicated position |
usesysid | oid | sysid of the user the replication session is running as |
usename | name | username of the user the replication session is running as |
application_name | text | Application name of the client connection (used by synchronous_standby_names ) |
client_addr | inet | IP address of the client connection |
client_hostname | text | Hostname of the client connection |
client_port | int4 | Port of the client connection |
backend_start | timestamptz | When the connection started |
state | text | State of the replication (catchup, streaming, ...) or 'disconnected' if offline |
sent_lsn | pg_lsn | Latest sent position |
write_lsn | pg_lsn | Latest position reported as written |
flush_lsn | pg_lsn | Latest position reported as flushed to disk |
replay_lsn | pg_lsn | Latest position reported as replayed (visible) |
write_lag | interval | Approximate lag time for reported write |
flush_lag | interval | Approximate lag time for reported flush |
replay_lag | interval | Approximate lag time for reported replay |
sent_lag_bytes | int8 | Bytes difference between sent_lsn and current WAL write position |
write_lag_bytes | int8 | Bytes difference between write_lsn and current WAL write position |
flush_lag_bytes | int8 | Bytes difference between flush_lsn and current WAL write position |
replay_lag_bytes | int8 | Bytes difference between replay_lsn and current WAL write position |
sent_lag_size | text | Human-readable bytes difference between sent_lsn and current WAL write position |
write_lag_size | text | Human-readable bytes difference between write_lsn and current WAL write position |
flush_lag_size | text | Human-readable bytes difference between flush_lsn and current WAL write position |
replay_lag_size | text | Human-readable bytes difference between replay_lsn and current WAL write position |
Note
The replay_lag is reset to zero immediately after a reconnect. As a workaround, use replay_lag_bytes or replay_lag_size.
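For example, to watch per-slot state and lag while avoiding replay_lag for the reason given in the note:

```sql
-- Replication slot status and byte-based lag per target node.
SELECT origin_name,
       target_name,
       active,
       state,
       replay_lag_bytes,
       replay_lag_size
FROM bdr.node_slots
ORDER BY replay_lag_bytes DESC NULLS LAST;
```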
bdr.node_summary
This view contains summary information about all PGD nodes known to the local node.
bdr.node_summary columns
Name | Type | Description |
---|---|---|
node_name | name | Name of the node |
node_group_name | name | Name of the PGD group the node is part of |
interface_connstr | text | Connection string to the node |
peer_state_name | text | Consistent state of the node in human readable form |
peer_target_state_name | text | State that the node is trying to reach (during join or promotion) |
node_seq_id | int4 | Sequence identifier of the node used for generating unique sequence numbers |
node_local_dbname | name | Database name of the node |
node_id | oid | The OID of the node |
node_group_id | oid | The OID of the PGD node group |
node_kind_name | oid | Node kind name |
bdr.queue
This table stores the historical record of replicated DDL statements.
bdr.queue columns
Name | Type | Description |
---|---|---|
queued_at | timestamptz | When the statement was queued |
role | name | Role that executed the statement |
replication_sets | text[] | Replication sets the statement was published to |
message_type | char | Type of the message. Possible values: 'A' (table sync), 'D' (DDL), 'S' (sequence), 'T' (truncate), 'Q' (SQL statement) |
message | json | Payload of the message needed for replication of the statement |
bdr.replication_set
A table that stores replication set configuration. For user queries, we recommend instead checking the bdr.replication_sets view.
bdr.replication_set columns
Name | Type | Description |
---|---|---|
set_id | oid | The OID of the replication set |
set_nodeid | oid | OID of the node (always local node oid currently) |
set_name | name | Name of the replication set |
replicate_insert | boolean | Indicates if the replication set replicates INSERTs |
replicate_update | boolean | Indicates if the replication set replicates UPDATEs |
replicate_delete | boolean | Indicates if the replication set replicates DELETEs |
replicate_truncate | boolean | Indicates if the replication set replicates TRUNCATEs |
set_isinternal | boolean | Reserved |
set_autoadd_tables | boolean | Indicates if new tables are automatically added to this replication set |
set_autoadd_seqs | boolean | Indicates if new sequences are automatically added to this replication set |
bdr.replication_set_table
A table that stores replication set table membership. For user queries, we recommend instead checking the bdr.tables view.
bdr.replication_set_table columns
Name | Type | Description |
---|---|---|
set_id | oid | The OID of the replication set |
set_reloid | regclass | Local ID of the table |
set_att_list | text[] | Reserved |
set_row_filter | pg_node_tree | Compiled row filtering expression |
bdr.replication_set_ddl
A table that stores replication set DDL replication filters. For user queries, we recommend instead checking the bdr.ddl_replication view.
bdr.replication_set_ddl columns
Name | Type | Description |
---|---|---|
set_id | oid | The OID of the replication set |
set_ddl_name | name | Name of the DDL filter |
set_ddl_tag | text | Command tag for the DDL filter |
set_ddl_role | text | Role executing the DDL |
bdr.replication_sets
A view showing replication sets defined in the PGD group, even if they aren't currently used by any node.
bdr.replication_sets columns
Name | Type | Description |
---|---|---|
set_id | oid | The OID of the replication set |
set_name | name | Name of the replication set |
replicate_insert | boolean | Indicates if the replication set replicates INSERTs |
replicate_update | boolean | Indicates if the replication set replicates UPDATEs |
replicate_delete | boolean | Indicates if the replication set replicates DELETEs |
replicate_truncate | boolean | Indicates if the replication set replicates TRUNCATEs |
set_autoadd_tables | boolean | Indicates if new tables are automatically added to this replication set |
set_autoadd_seqs | boolean | Indicates if new sequences are automatically added to this replication set |
bdr.schema_changes
A simple view showing all the changes to schemas in PGD.
bdr.schema_changes columns
Name | Type | Description |
---|---|---|
schema_changes_ts | timestamptz | Timestamp of the schema change |
schema_changes_change | char | A flag of change type |
schema_changes_classid | oid | Class ID |
schema_changes_objectid | oid | Object ID |
schema_changes_subid | smallint | The subscription |
schema_changes_descr | text | The object changed |
schema_changes_addrnames | text[] | Location of schema change |
bdr.sequence_alloc
A view to see the allocation details for galloc sequences.
bdr.sequence_alloc columns
Name | Type | Description |
---|---|---|
seqid | regclass | The ID of the sequence |
seq_chunk_size | bigint | A sequence number for the chunk within its value |
seq_allocated_up_to | bigint | |
seq_nallocs | bigint | |
seq_last_alloc | timestamptz | Last sequence allocated |
bdr.sequences
This view lists all sequences with their kind, excluding sequences for internal PGD bookkeeping.
bdr.sequences columns
Name | Type | Description |
---|---|---|
nspname | name | Namespace containing the sequence |
relname | name | Name of the sequence |
seqkind | text | Type of the sequence ('local', 'timeshard', 'galloc') |
bdr.stat_activity
Dynamic activity for each backend or worker process.
This contains the same information as pg_stat_activity, except wait_event is set correctly when the wait relates to PGD.
bdr.stat_relation
Apply statistics for each relation. Contains data only if the tracking is enabled and something was replicated for a given relation.
bdr.stat_relation columns
Column | Type | Description |
---|---|---|
nspname | name | Name of the relation's schema |
relname | name | Name of the relation |
relid | oid | OID of the relation |
total_time | double precision | Total time spent processing replication for the relation |
ninsert | bigint | Number of inserts replicated for the relation |
nupdate | bigint | Number of updates replicated for the relation |
ndelete | bigint | Number of deletes replicated for the relation |
ntruncate | bigint | Number of truncates replicated for the relation |
shared_blks_hit | bigint | Total number of shared block cache hits for the relation |
shared_blks_read | bigint | Total number of shared blocks read for the relation |
shared_blks_dirtied | bigint | Total number of shared blocks dirtied for the relation |
shared_blks_written | bigint | Total number of shared blocks written for the relation |
blk_read_time | double precision | Total time spent reading blocks for the relation, in milliseconds (if track_io_timing is enabled, otherwise zero) |
blk_write_time | double precision | Total time spent writing blocks for the relation, in milliseconds (if track_io_timing is enabled, otherwise zero) |
lock_acquire_time | double precision | Total time spent acquiring locks on the relation, in milliseconds (if bdr.track_apply_lock_timing is enabled, otherwise zero) |
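Assuming tracking is enabled, the relations that consume the most apply time can be listed with a query like this (columns taken from the table above):

```sql
-- Relations ranked by total apply time (requires relation tracking to be enabled).
SELECT nspname,
       relname,
       total_time,
       ninsert + nupdate + ndelete + ntruncate AS total_changes,
       lock_acquire_time
FROM bdr.stat_relation
ORDER BY total_time DESC
LIMIT 20;
```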
bdr.stat_subscription
Apply statistics for each subscription. Contains data only if the tracking is enabled.
bdr.stat_subscription columns
Column | Type | Description |
---|---|---|
sub_name | name | Name of the subscription |
subid | oid | OID of the subscription |
mean_apply_time | double precision | Average time per apply transaction, in milliseconds |
nconnect | bigint | Number of times this subscription has connected upstream |
ncommit | bigint | Number of commits this subscription did |
nabort | bigint | Number of aborts writer did for this subscription |
nerror | bigint | Number of errors writer has hit for this subscription |
nskippedtx | bigint | Number of transactions skipped by writer for this subscription (due to skip_transaction conflict resolver) |
ninsert | bigint | Number of inserts this subscription did |
nupdate | bigint | Number of updates this subscription did |
ndelete | bigint | Number of deletes this subscription did |
ntruncate | bigint | Number of truncates this subscription did |
nddl | bigint | Number of DDL operations this subscription has executed |
ndeadlocks | bigint | Number of errors that were caused by deadlocks |
nretries | bigint | Number of retries the writer did (without going for full restart/reconnect) |
nstream_writer | bigint | Number of transactions streamed to writer |
nstream_file | bigint | Number of transactions streamed to file |
nstream_commit | bigint | Number of streaming transactions committed |
nstream_abort | bigint | Number of streaming transactions aborted |
nstream_start | bigint | Number of STREAM START messages processed |
nstream_stop | bigint | Number of STREAM STOP messages processed |
nstream_prepare | bigint | Number of streaming transactions prepared |
nstream_insert | bigint | Number of streaming inserts processed |
nstream_update | bigint | Number of streaming updates processed |
nstream_delete | bigint | Number of streaming deletes processed |
nstream_truncate | bigint | Number of streaming truncates processed |
shared_blks_hit | bigint | Total number of shared block cache hits by the subscription |
shared_blks_read | bigint | Total number of shared blocks read by the subscription |
shared_blks_dirtied | bigint | Total number of shared blocks dirtied by the subscription |
shared_blks_written | bigint | Total number of shared blocks written by the subscription |
blk_read_time | double precision | Total time the subscription spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) |
blk_write_time | double precision | Total time the subscription spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) |
connect_time | timestamp with time zone | Time when the current upstream connection was established, NULL if not connected |
last_disconnect_time | timestamp with time zone | Time when the last upstream connection was dropped |
start_lsn | pg_lsn | LSN from which this subscription requested to start replication from the upstream |
retries_at_same_lsn | bigint | Number of attempts the subscription was restarted from the same LSN value |
curr_ncommit | bigint | Number of commits this subscription did after the current connection was established |
npre_commit_confirmations | bigint | Number of precommit confirmations by CAMO partners |
npre_commit | bigint | Number of precommits |
ncommit_prepared | bigint | Number of prepared transaction commits |
nabort_prepared | bigint | Number of prepared transaction aborts |
nprovisional_waits | bigint | Number of update/delete operations on same tuples by concurrent apply transactions. These are provisional waits. See Parallel Apply |
ntuple_waits | bigint | Number of update/delete operations that waited to be safely applied. See Parallel Apply |
ncommit_waits | bigint | Number of fully applied transactions that had to wait before being committed. See Parallel Apply |
stats_reset | timestamp with time zone | Time when these subscription statistics were reset |
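A simple per-subscription overview, again assuming tracking is enabled, might be:

```sql
-- Error and retry counters per subscription, plus connection times.
SELECT sub_name,
       nconnect,
       ncommit,
       nerror,
       ndeadlocks,
       nretries,
       connect_time,
       last_disconnect_time
FROM bdr.stat_subscription
ORDER BY nerror DESC, sub_name;
```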
bdr.subscription
This catalog table lists all the subscriptions owned by the local PGD node and their modes.
bdr.subscription columns
Name | Type | Description |
---|---|---|
sub_id | oid | ID of the subscription |
sub_name | name | Name of the subscription |
nodegroup_id | oid | ID of nodegroup |
origin_node_id | oid | ID of origin node |
source_node_id | oid | ID of source node |
target_node_id | oid | ID of target node |
subscription_mode | char | Mode of subscription |
sub_enabled | bool | Whether the subscription is enabled (should be replication) |
apply_delay | interval | How far behind the apply of changes on this subscription should be (normally 0) |
slot_name | name | Slot on upstream used by this subscription |
origin_name | name | Local origin used by this subscription |
num_writers | int | Number of writer processes this subscription uses |
streaming_mode | char | Streaming configuration for the subscription |
replication_sets | text[] | Replication sets replicated by this subscription (NULL = all) |
forward_origin | text[] | Origins forwarded by this subscription (NULL = all) |
bdr.subscription_summary
This view contains summary information about all PGD subscriptions that the local node has to other nodes.
bdr.subscription_summary columns
Name | Type | Description |
---|---|---|
node_group_name | name | Name of the PGD group the node is part of |
sub_name | name | Name of the subscription |
origin_name | name | Name of the origin node |
target_name | name | Name of the target node (normally local node) |
sub_enabled | bool | Is the subscription enabled |
sub_slot_name | name | Slot name on the origin node used by this subscription |
sub_replication_sets | text[] | Replication sets subscribed |
sub_forward_origins | text[] | Does the subscription accept changes forwarded from other nodes besides the origin |
sub_apply_delay | interval | Delay transactions by this much compared to the origin |
sub_origin_name | name | Replication origin name used by this subscription |
bdr_subscription_mode | char | Subscription mode |
subscription_status | text | Status of the subscription worker |
node_group_id | oid | The OID of the PGD group the node is part of |
sub_id | oid | The OID of the subscription |
origin_id | oid | The OID of the origin node |
target_id | oid | The OID of the target node |
receive_lsn | pg_lsn | Latest LSN of any change or message received (this can go backwards in case of restarts) |
receive_commit_lsn | pg_lsn | Latest LSN of last COMMIT received (this can go backwards in case of restarts) |
last_xact_replay_lsn | pg_lsn | LSN of last transaction replayed on this subscription |
last_xact_flush_lsn | timestamptz | LSN of last transaction replayed on this subscription that's flushed durably to disk |
last_xact_replay_timestamp | timestamptz | Timestamp of last transaction replayed on this subscription |
bdr.tables
This view lists information about table membership in replication sets. If a table exists in multiple replication sets, it appears multiple times in this table.
bdr.tables columns
Name | Type | Description |
---|---|---|
relid | oid | The OID of the relation |
nspname | name | Name of the schema the relation is in |
relname | name | Name of the relation |
set_name | name | Name of the replication set |
set_ops | text[] | List of replicated operations |
rel_columns | text[] | List of replicated columns (NULL = all columns) (*) |
row_filter | text | Row filtering expression |
conflict_detection | text | Conflict detection method used: row_origin (default), row_version or column_level |
(*) These columns are reserved for future use and should currently be NULL
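For instance, to see which replication sets a given table belongs to and which operations are replicated for it; the schema and table names below are placeholders to substitute with your own:

```sql
-- Replication set membership for one table; 'public' and 'mytable'
-- are placeholder names, not part of the catalog.
SELECT set_name,
       set_ops,
       row_filter,
       conflict_detection
FROM bdr.tables
WHERE nspname = 'public'
  AND relname = 'mytable';
```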
bdr.taskmgr_work_queue
Contains work items created and processed by the task manager. The work items are created on only one node and processed on different nodes.
bdr.taskmgr_work_queue columns
Column | Type | Description |
---|---|---|
ap_wq_workid | bigint | The unique ID of the work item |
ap_wq_ruleid | int | ID of the rule listed in autopartition_rules. Rules are specified using bdr.autopartition command |
ap_wq_relname | name | Name of the relation the task belongs to |
ap_wq_relnamespace | name | Namespace (schema) of the relation specified in the rule for this work item |
ap_wq_partname | name | Name of the partition created by the workitem |
ap_wq_work_category | char | The work category can be either 'c' (Create Partition), 'm' (Migrate Partition), 'd' (Drop Partition), 'a' (Alter Partition) |
ap_wq_work_sql | text | SQL query for the work item |
ap_wq_work_depends | Oid[] | OIDs of the nodes on which the work item depends |
bdr.taskmgr_workitem_status
The status of the work items, which is updated locally on each node.
bdr.taskmgr_workitem_status columns
Column | Type | Description |
---|---|---|
ap_wi_workid | bigint | The ID of the work item |
ap_wi_nodeid | Oid | OID of the node on which the work item is being processed |
ap_wi_status | char | The status can be either 'q' (Queued), 'c' (Complete), 'f' (Failed), 'u' (Unknown) |
ap_wi_started_at | timestamptz | Start timestamp of the work item |
ap_wi_finished_at | timestamptz | End timestamp of the work item |
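As an illustration, the queue and the per-node status can be joined on the work item ID to find failed items. The join on ap_wi_workid = ap_wq_workid is an assumption based on the column descriptions above:

```sql
-- Autopartition work items that failed on some node.
SELECT q.ap_wq_workid,
       q.ap_wq_relname,
       q.ap_wq_partname,
       q.ap_wq_work_category,
       s.ap_wi_nodeid,
       s.ap_wi_status,
       s.ap_wi_finished_at
FROM bdr.taskmgr_work_queue q
JOIN bdr.taskmgr_workitem_status s
  ON s.ap_wi_workid = q.ap_wq_workid
WHERE s.ap_wi_status = 'f'
ORDER BY s.ap_wi_finished_at DESC;
```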
bdr.taskmgr_local_work_queue
Contains work items created and processed by the task manager. This is similar to bdr.taskmgr_work_queue, except that these work items are for locally managed tables. Each node creates and processes its own local work items, independent of other nodes in the cluster.
bdr.taskmgr_local_work_queue columns
Column | Type | Description |
---|---|---|
ap_wq_workid | bigint | The unique ID of the work item |
ap_wq_ruleid | int | ID of the rule listed in autopartition_rules. Rules are specified using bdr.autopartition command |
ap_wq_relname | name | Name of the relation the task belongs to |
ap_wq_relnamespace | name | Namespace (schema) of the relation specified in the rule for this work item |
ap_wq_partname | name | Name of the partition created by the workitem |
ap_wq_work_category | char | The work category can be either 'c' (Create Partition), 'm' (Migrate Partition), 'd' (Drop Partition), or 'a' (Alter Partition) |
ap_wq_work_sql | text | SQL query for the work item |
ap_wq_work_depends | Oid[] | Always NULL |
bdr.taskmgr_local_workitem_status
The status of the work items for locally managed tables.
bdr.taskmgr_local_workitem_status columns
Column | Type | Description |
---|---|---|
ap_wi_workid | bigint | The ID of the work item |
ap_wi_nodeid | Oid | OID of the node on which the work item is being processed |
ap_wi_status | char | The status can be either 'q' (Queued), 'c' (Complete), 'f' (Failed), 'u' (Unknown) |
ap_wi_started_at | timestamptz | Start timestamp of the work item |
ap_wi_finished_at | timestamptz | End timestamp of the work item |
bdr.trigger
In this view, you can see all the stream triggers created. Often, triggers here are created by bdr.create_conflict_trigger.
bdr.trigger columns
Name | Type | Description |
---|---|---|
trigger_id | oid | The ID of the trigger |
trigger_reloid | regclass | Relation the trigger is attached to |
trigger_pgtgid | oid | Postgres trigger ID |
trigger_type | char | Type of trigger call |
trigger_name | name | Name of the trigger |
bdr.triggers
An expanded view of bdr.trigger with columns that are easier to read.
Name | Type | Description |
---|---|---|
trigger_name | name | The name of the trigger |
event_manipulation | text | The operation(s) |
trigger_type | bdr.trigger_type | Type of trigger |
trigger_table | bdr.trigger_reloid | The table the trigger applies to |
trigger_function | name | The function used |
bdr.workers
Information about running PGD worker processes.
This can be joined with bdr.stat_activity using pid to get even more insight into the state of PGD workers.
bdr.workers columns
Name | Type | Description |
---|---|---|
worker_pid | int | Process ID of the worker process |
worker_role | int | Numeric representation of worker role |
worker_role_name | text | Name of the worker role |
worker_subid | oid | Subscription ID if the worker is associated with one |
bdr.writers
Specific information about PGD writer processes.
bdr.writers columns
Name | Type | Description |
---|---|---|
sub_name | name | Name of the subscription |
pid | int | Process ID of the worker process |
syncing_rel | int | OID of the relation being synchronized (if any) |
streaming_allowed | text | Whether this writer can be the target of direct-to-writer streaming |
is_streaming | bool | Whether a transaction is being streamed to this writer |
remote_xid | xid | Remote transaction id of the transaction being processed (if any) |
remote_commit_lsn | pg_lsn | LSN of last commit processed |
commit_queue_position | int | Position in the internal commit queue |
nxacts | bigint | Number of transactions processed by this writer |
ncommits | bigint | Number of transactions committed by this writer |
naborts | bigint | Number of transactions aborted by this writer |
nstream_file | bigint | Number of streamed-to-file transactions processed by this writer |
nstream_writer | bigint | Number of streamed-to-writer transactions processed by this writer |
bdr.worker_tasks
The bdr.worker_tasks view shows PGD's current worker launch rate-limiting state as well as some basic statistics on background worker launch and registration activity.
Unlike the other views listed here, it isn't specific to the current database and PGD node. State for all PGD nodes on the current PostgreSQL instance is shown. Join on the current database to filter it.
bdr.worker_tasks doesn't track walsenders and output plugins.
bdr.worker_tasks columns
Column | Type | Description |
---|---|---|
task_key_worker_role | integer | Worker role identifier |
task_key_worker_role_name | text | Worker role name |
task_key_dboid | oid | Database identifier, if available |
datname | name | Name of the database, if available |
task_key_subid | oid | Subscription identifier, if available |
sub_name | name | Name of the subscription, if available |
task_key_ext_libname | name | Name of the library (most likely bdr) |
task_key_ext_funcname | name | Name of the function entry point |
task_key_ext_workername | name | Name assigned to the worker |
task_key_remoterelid | oid | Identifier of the remote syncing relation, if available |
task_pid | integer | Process ID of the worker |
task_registered | timestamp with time zone | Worker registration timestamp |
since_registered | interval | Interval since the worker registered |
task_attached | timestamp with time zone | Worker attach timestamp |
since_attached | interval | Interval since the worker attached |
task_exited | timestamp with time zone | Worker exit timestamp |
since_exited | interval | Interval since the worker exited |
task_success | boolean | Is worker still running? |
task_next_launch_not_before | timestamp with time zone | Timestamp when the worker will be restarted again |
until_launch_allowed | interval | Time remaining for next launch |
task_last_launch_requestor_pid | integer | Process ID that requested launch |
task_last_launch_request_time | timestamp with time zone | Timestamp when the request was made |
since_last_request | interval | Interval since the last request |
task_last_launch_request_approved | boolean | Did the last request succeed? |
task_nrequests | integer | Number of requests |
task_nregistrations | integer | Number of registrations |
task_prev_pid | integer | Process ID of the previous generation |
task_prev_registered | timestamp with time zone | Timestamp of the previous registered task |
since_prev_registered | interval | Interval since the previous registration |
task_prev_launched | timestamp with time zone | Timestamp of the previous launch |
since_prev_launched | interval | Interval since the previous launch |
task_prev_exited | timestamp with time zone | Timestamp when the previous task exited |
since_prev_exited | interval | Interval since the previous task exited |
task_first_registered | timestamp with time zone | Timestamp when the first registration happened |
since_first_registered | interval | Interval since the first registration |
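For example, a sketch of a query that shows which workers are currently being held back by the launch rate limiter, using only the columns listed above:

```sql
-- Workers whose next launch is still rate-limited.
SELECT task_key_worker_role_name,
       datname,
       sub_name,
       task_pid,
       task_next_launch_not_before,
       until_launch_allowed
FROM bdr.worker_tasks
WHERE until_launch_allowed > interval '0'
ORDER BY until_launch_allowed DESC;
```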