By digoal
It is critical for businesses to maintain databases accurately.
We have covered the topic of "How to Avoid Timeline Errors during Database Switchover Based on Asynchronous Streaming Replication" in a previous article. In this article, we will explore how to prevent timeline errors on downstream standbys during a primary/standby database switchover.
One solution is to configure restore_command so that a standby can fetch timeline history files from the WAL archive, avoiding timeline divergence during a primary/standby switchover. Are there any other solutions?
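As a baseline, here is a minimal sketch of that restore_command approach, run on the standby. The archive path is a placeholder, and PostgreSQL 12 or later is assumed, where these are ordinary configuration parameters; restore_command only becomes reloadable in PostgreSQL 15, so a restart may be needed on older versions.

-- on the standby: let recovery fetch archived WAL segments and .history
-- files that streaming alone cannot provide (%f = file name, %p = target path)
ALTER SYSTEM SET restore_command = 'cp /data/wal_archive/%f %p';
-- follow the newest timeline after an upstream promotion (the default
-- since PostgreSQL 12, shown here for clarity; changing it needs a restart)
ALTER SYSTEM SET recovery_target_timeline = 'latest';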
When a standby delays the apply operation and a primary/standby switchover happens upstream, the WAL receiver must reconnect to the WAL sender of the new primary.
The current process is as follows:
The standby tells the upstream the LSN up to which it last received WAL, and the upstream resumes sending logs downstream from that position. Although the standby's apply LSN is very old because of the configured delay, its receive LSN can be very new; it may even be ahead of the LSN at which the new primary was promoted. Under such circumstances, a timeline error occurs, and the standby fails to keep up with the new primary database.
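The receive/apply gap described above is easy to observe; a quick sketch, run on the delayed standby:

-- on the delayed standby: how far WAL has been received vs applied
SELECT pg_last_wal_receive_lsn() AS receive_lsn,
       pg_last_wal_replay_lsn()  AS replay_lsn,
       pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                       pg_last_wal_replay_lsn()) AS bytes_not_yet_applied;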
Improvements:
The upstream and downstream can coordinate when the WAL receiver reconnects to the WAL sender of the new primary database. The new primary sends the timeline created at the switchover to the downstream standby. If the standby's received WAL LSN is beyond the switchover LSN, the two have diverged. What remains is to make the standby re-receive WAL logs and history files from the upstream (new primary), starting from the switch point of the new timeline.
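This coordination rides on the streaming replication protocol; a sketch of inspecting it by hand over a physical replication connection (the host name and user are placeholders, and timeline 2 is assumed to exist on the upstream):

-- open a replication session, e.g.: psql "host=host2 user=repl replication=true"
IDENTIFY_SYSTEM;
-- returns systemid | timeline | xlogpos: the upstream's current timeline
TIMELINE_HISTORY 2;
-- returns the file name and content of 00000002.history, which records
-- the LSN at which timeline 2 forked from its parent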
Parameter:
recovery_min_apply_delay (integer)
Let us reiterate the definition from PostgreSQL's official documentation.
By default, a standby server restores WAL records from the sending server as soon as possible. It may be helpful to have a time-delayed copy of the data, offering opportunities to correct data loss errors. This parameter allows delaying recovery by a fixed period of time, measured in milliseconds if no unit is specified. For example, if you set this parameter to 5 minutes, the standby will replay each transaction commit only when the system time on the standby is at least five minutes past the commit time reported by the primary server.
The replication delay between servers may exceed the value of this parameter, in which case no delay is added. Note that the delay is calculated between the WAL timestamp as written on master and the current time on the standby. Delays in transfer because of network lag or cascading replication configurations may reduce the actual wait time significantly. If the system clocks on primary and standby are not synchronized, this may lead to recovery applying records earlier than expected; but that is not a significant issue because useful settings of this parameter are much larger than typical time deviations between servers.
The delay occurs only on WAL records for transaction commits. Other records are replayed as quickly as possible, which is not a problem because MVCC visibility rules ensure their effects are not visible until the corresponding commit record is applied.
The delay occurs once the database in recovery has reached a consistent state, until the standby is promoted or triggered. After that, the standby will end recovery without further waiting.
This parameter is intended to be used with streaming replication deployments; however, if you specify the parameter, it will be honored in all cases. Using this feature will delay hot_standby_feedback, which could lead to bloat on the master. Use them both together with caution.
Warning:
This setting affects synchronous replication when synchronous_commit is set to remote_apply. Every COMMIT will need to wait to be applied.
This parameter can only be set in the postgresql.conf file or on the server command line.
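A minimal sketch of enabling the delayed apply used in the walkthrough below. The parameter has sighup context, so a reload is enough; ALTER SYSTEM writes it to postgresql.auto.conf, which counts as the configuration file.

-- on host3 standby 2: replay each commit only after a 5-minute delay
ALTER SYSTEM SET recovery_min_apply_delay = '5min';
SELECT pg_reload_conf();  -- takes effect without a restart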
1) HA pair: host1 (primary) and host2 (standby 1), with the VIP on host1; host3 is standby 2 with recovery_min_apply_delay = '5min'. The delay avoids the situation in which the apply LSN of host3 standby 2 gets ahead of the receive LSN of host2 standby 1.
2) WAL LSNs before the switchover (see the pg_stat_replication sketch below):
host1 primary WAL LSN: 100000
host2 standby 1 WAL LSN: 95000
host3 standby 2 WAL LSN: receive 98000, apply 90000 (< 95000)
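These positions can also be read from the primary's side; a sketch (the application_name values depend on each standby's primary_conninfo):

-- on host1 (primary): per-standby progress; standby 2's replay_lsn
-- trails its flush_lsn because of recovery_min_apply_delay
SELECT application_name, sent_lsn, flush_lsn, replay_lsn
FROM pg_stat_replication;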
3) Primary/standby switchover (a promotion sketch follows):
host1 primary (WAL LSN 100000) fails.
host2 standby 1 (WAL LSN 95000) is promoted to the new primary; TL = old TL + 1.
The VIP moves to host2 (new primary).
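A promotion sketch under these assumptions (pg_promote() exists in PostgreSQL 12 and later; older versions use pg_ctl promote instead):

-- on host2 standby 1: promote to the new primary
SELECT pg_promote(wait => true);
-- confirm the timeline bump (TL = old TL + 1)
SELECT timeline_id FROM pg_control_checkpoint();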
4) The host3 standby 2 WAL receiver process reconnects to the host2 new primary WAL sender process. (Upon reconnection, standby 2 pauses the apply process so that it does not apply past the divergence point.)
The host2 new primary WAL sender process sends all timeline history files to the host3 standby 2 WAL receiver process.
The host3 standby 2 WAL receiver process discovers that the new primary's timeline is higher than its own current timeline, and that the LSN at which host2 was promoted (95000) is smaller than standby 2's received WAL LSN (98000).
5) The host3 standby 2 WAL receiver process re-receives WAL logs and the corresponding history file from the promotion LSN of the host2 new primary (95000), overwriting the diverged portion from 95000 to 98000 (a history-file sketch follows).
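The switch point that standby 2 acts on is recorded in the history file it received; a sketch of inspecting it, using the article's simplified LSNs. The file name assumes the new timeline is 2, and pg_read_file needs superuser or the pg_read_server_files role.

-- on host3 standby 2: where timeline 2 forked from timeline 1
SELECT pg_read_file('pg_wal/00000002.history');
-- illustrative content:  1    0/00095000    no recovery target specified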
6) host3 standby 2 resumes the apply process and catches up with the new primary database (a verification sketch follows).
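Finally, a sketch to verify that standby 2 now streams on the new timeline and that replay is advancing (pg_stat_wal_receiver's flushed_lsn column exists in PostgreSQL 13 and later; earlier versions call it received_lsn):

-- on host3 standby 2: current receive timeline and apply position
SELECT received_tli, flushed_lsn FROM pg_stat_wal_receiver;
SELECT pg_last_wal_replay_lsn();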
Related article: How to Avoid Timeline Errors during Database Switchover Based on Asynchronous Streaming Replication