
Create Second-Level Flashback Instances for ApsaraDB RDS for PostgreSQL with Real-Time Disaster Recovery

This article discusses how to build second-level flashback instances on top of the ZFS file system.

By digoal

Background

Let’s use Alibaba Cloud ApsaraDB RDS for PostgreSQL 12 as an example to create a second-level flashback instance that also serves as a real-time disaster recovery database.

Technical Points: ZFS, snapshots, clones, PostgreSQL streaming standby, and compression

Example

1.  Apply for ECS instances:

The instance specification, AZ, VPC, and vSwitch can be the same as those of ApsaraDB RDS for PostgreSQL.

Configure the storage according to the read and write pressure on the primary database, so that the disaster recovery database can replay changes fast enough.

If the disaster recovery instance needs to be deployed in a cross-region IDC, we recommend connecting it through the VPC network.

2.  Prepare the system; we recommend using CentOS 7 (x64).

Please visit this link for information about configuring sysctl, limits, and other items.
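As a rough reference, here is a minimal sketch of commonly used settings; the values below are illustrative assumptions only, so tune them to your instance size:

vi /etc/sysctl.conf
# append; example values only
vm.swappiness = 0
vm.overcommit_memory = 2
vm.overcommit_ratio = 90
fs.aio-max-nr = 1048576

sysctl -p

vi /etc/security/limits.conf
# append; example values only
* soft nofile 655360
* hard nofile 655360
* soft nproc 655360
* hard nproc 655360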

3.  Deploy ZFS and enable compression:

# cat /etc/redhat-release     
CentOS Linux release 7.7.1908 (Core)    

https://github.com/openzfs/zfs/wiki/RHEL-and-CentOS

wget http://download.zfsonlinux.org/epel/zfs-release.el7_7.noarch.rpm    
    
rpm -ivh zfs-release.el7_7.noarch.rpm     
    
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm    
    
yum install -y "kernel-devel-uname-r == $(uname -r)" zfs    

Check the log for any errors. Under normal circumstances, no errors are reported.

4.  Check whether ZFS is available:

Configure it to load ZFS automatically when the system starts:

vi /etc/rc.local    
/sbin/modprobe zfs    
chmod +x /etc/rc.local    
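Alternatively, on a systemd-based system such as CentOS 7, you can have the module loaded at boot through modules-load.d instead of rc.local:

echo zfs > /etc/modules-load.d/zfs.conf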

Test whether ZFS works normally:

# modprobe zfs     
    
# zpool list    
no pools available    
    
    
# reboot    

5.  Install PostgreSQL 12 (the same major version as your ApsaraDB RDS for PostgreSQL instance):
https://www.postgresql.org/download/linux/redhat/

yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm    
    
yum install postgresql12*    
  
su - postgres  
Configure the environment variables:
  
vi ~/.bash_profile  
# append the following
export PS1="$USER@`/bin/hostname -s`-> "      
export LANG=en_US.utf8      
export PGHOME=/usr/pgsql-12      
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH      
export DATE=`date +"%Y%m%d%H%M"`      
export PATH=$PGHOME/bin:$PATH:.      
export MANPATH=$PGHOME/share/man:$MANPATH      
alias rm='rm -i'      
alias ll='ls -lh'      
unalias vi      

6.  Install the plug-ins. You can install the plug-ins you use in ApsaraDB RDS; a quick way to list them is shown below. Some plug-ins are unique to ApsaraDB RDS and cannot be installed on the disaster recovery end, but disaster recovery is not affected.
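To see which plug-ins the primary instance actually uses, you can list its installed extensions first. A quick check, assuming an account and whitelist access as configured in steps 10 and 11:

psql -h pgm-bp1wv687i955y998129390.pg.rds.aliyuncs.com -p 1433 -U rep postgres -c "select extname, extversion from pg_extension;"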

7.  Configure zpool for ZFS:

Let’s say /dev/vdc is a new cloud disk. If your cloud disk is not vdc, please replace it. ZFS also supports managing multiple cloud disks. Please read the ZFS manual for details:

parted -a optimal -s /dev/vdc mklabel gpt mkpart primary 1MiB 100%    
    
zpool create zp1 -f -o ashift=13 vdc1    
    
    
zfs set canmount=off zp1      
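ashift=13 aligns the pool to 8 KiB sectors (2^13 bytes), which matches the PostgreSQL block size. A quick sanity check of the pool:

zpool status zp1
zpool get ashift zp1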

8.  Create a directory to store data files:

zfs create -o mountpoint=/zpdata01 -o recordsize=8K -o atime=off -o primarycache=metadata -o logbias=throughput -o secondarycache=none zp1/zpdata01      

9.  Create a directory to store WAL archive files:

zfs create -o mountpoint=/zpdata02 -o recordsize=8K -o atime=off -o primarycache=metadata -o logbias=throughput -o secondarycache=none -o compression=on zp1/zpdata02      
    
    
# zfs list    
NAME           USED  AVAIL     REFER  MOUNTPOINT    
zp1           1.29M  1.42T      192K  /zp1    
zp1/zpdata01   192K  1.42T      192K  /zpdata01    
zp1/zpdata02   192K  1.42T      192K  /zpdata02    
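Once the standby starts archiving, you can check how effective compression is on the WAL archive file system:

zfs get compression,compressratio zp1/zpdata02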

10.  Create an ApsaraDB RDS account with the rds_superuser permission (here, rep). You can create it in the console:

user: rep    
pwd: xxxxxx    

11.  Configure the ApsaraDB RDS whitelist to allow connections from the ECS instance.

12.  Create a standby database from the ApsaraDB RDS for PostgreSQL 12 instance:

mkdir /zpdata01/pg12_1921_data    
    
mkdir /zpdata02/pg12_1921_wal    
    
chown -R postgres:postgres /zpdata01/pg12_1921_data    
chown -R postgres:postgres /zpdata02/pg12_1921_wal    
su - postgres    
    
export PGPASSWORD=pwd    
nohup pg_basebackup -D /zpdata01/pg12_1921_data -F p -R -c fast -X stream -h pgm-bp1wv687i955y998129390.pg.rds.aliyuncs.com -p 1433 -U rep  >./bak.log 2>&1 &     
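The base backup runs in the background; you can follow its log and confirm that the data directory was populated once it finishes:

tail -f ./bak.log
ls -l /zpdata01/pg12_1921_data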

13.  Configure the standby database, including application_name and archiving:

su - postgres  
cd /zpdata01/pg12_1921_data    

Check the commented-out parameters in postgresql.conf (these defaults are managed by ApsaraDB RDS):

grep "^#" postgresql.conf    
    
    
#    
#Fri Mar 13 09:55:03 CST 2020    
#ssl_key_file='server.key'    
#huge_pages=try    
#auto_explain.sample_rate=1    
#zhparser.multi_zall=off    
#shared_preload_libraries='pg_stat_statements,auth_delay,auto_explain,zhparser,timescaledb,pg_pathman'    
#promote_trigger_file='/data/postgresql.trigger'    
#ssl=off    
#rds_max_log_files=20    
#pg_pathman.enable_auto_partition=on    
#shared_buffers=32768MB    
#zhparser.punctuation_ignore=off    
#pg_pathman.override_copy=on    
#port=1922    
#pg_stat_statements.max=5000    
#auth_delay.milliseconds=3s    
#auto_explain.log_nested_statements=off    
#track_io_timing=on    
#zhparser.multi_zmain=off    
#auto_explain.log_analyze=off    
#archive_mode=on    
#ssl_cert_file='server.crt'    
#zhparser.multi_short=off    
#zhparser.dict_in_memory=off    
#auto_explain.log_format=text    
#auto_explain.log_min_duration=-1    
#rds.rds_max_non_super_conns=12800    
#pg_pathman.enable=on    
#archive_command='/bin/date'    
#auto_explain.log_verbose=off    
#log_line_prefix='\1\n\t%p\t%r\t%u\t%d\t%t\t%e\t%T\t%S\t%U\t%E\t\t'    
#pg_pathman.enable_runtimemergeappend=on    
#zhparser.extra_dicts='dict_extra.xdb'    
#auto_explain.log_buffers=off    
#pg_stat_statements.track=top    
#jit_provider='llvmjit'    
#pg_pathman.enable_partitionrouter=off    
#pg_stat_statements.track_utility=off    
#pg_stat_statements.save=off    
#zhparser.dicts_type='EXTRA'    
#auto_explain.log_timing=on    
#pg_pathman.enable_runtimeappend=on    
#zhparser.seg_with_duality=off    
#rds.rds_max_super_conns=100    
#pg_pathman.enable_partitionfilter=on    
#log_destination='stderr,csvlog'    
#zhparser.multi_duality=off    
#pg_pathman.insert_into_fdw='postgres'    
#pg_pathman.enable_bounds_cache=on    
#rds.rds_max_non_super_wal_snd=32    
#auto_explain.log_triggers=off    
#rds_sync_replication_timeout=0    

Modify the configuration parameters below (create the file if it does not exist):

su - postgres  
cd /zpdata01/pg12_1921_data    
  
vi postgresql.auto.conf    
    
primary_conninfo = 'user=rep password=''pwd'' host=''pgm-bp1wv687i955y998129390.pg.rds.aliyuncs.com'' port=1433 application_name=hello_rds_pg12'    
port=1922    
shared_buffers=32GB    
log_destination='csvlog'    
archive_mode=always    
archive_command='test ! -f /zpdata02/pg12_1921_wal/%f && cp %p /zpdata02/pg12_1921_wal/%f'    

14.  Change directory permissions:

chmod 700 /zpdata02/pg12_1921_wal    
chmod 700 /zpdata01/pg12_1921_data    

15.  Start the standby database:

pg_ctl start -D /zpdata01/pg12_1921_data    

16.  Configure automatic startup for the standby database:

vi /etc/rc.local    
su - postgres -c "pg_ctl start -D /zpdata01/pg12_1921_data"    

17.  Configure automatic snapshots for the data file directory. You do not need to create snapshots for the directory that stores WAL archive files.

Create scripts and configure execution permissions:

vi /etc/snap.sh    
#!/bin/bash    
STIME=`date +%F%T`    
/usr/sbin/zfs snapshot zp1/zpdata01@$STIME     
    
chmod 500 /etc/snap.sh    

Test whether snapshots work normally:

/etc/snap.sh    
    
# zfs list -t snapshot    
NAME                              USED  AVAIL     REFER  MOUNTPOINT    
zp1/zpdata01@2020-03-2117:06:47   144K      -      770M  -    

Configure automatic startup for crond:

# systemctl start crond    
    
# systemctl enable crond    
    
# systemctl status crond    
● crond.service - Command Scheduler    
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)    
   Active: active (running) since Sat 2020-03-21 16:16:08 CST; 53min ago    
 Main PID: 2526 (crond)    
   CGroup: /system.slice/crond.service    
           └─2526 /usr/sbin/crond -n    
    
Mar 21 16:16:08 iZbp135pwcjjoxqgfpw9k1Z systemd[1]: Started Command Scheduler.    
Mar 21 16:16:09 iZbp135pwcjjoxqgfpw9k1Z crond[2526]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 85% if used.)    
Mar 21 16:16:09 iZbp135pwcjjoxqgfpw9k1Z crond[2526]: (CRON) INFO (running with inotify support)    

Configure automatic tasks for crontab based on your needs, such as creating a snapshot every day at 01:01:

# crontab -e    
1 1 * * * /etc/snap.sh    
    
# crontab -l -u root    
1 1 * * * /etc/snap.sh    

18.  Configure automatic cleaning for snapshots:

The retention policy depends on your needs and is not described in detail here. An example of manual cleanup is listed below, followed by a sketch of an automated version:

[root@iZbp135pwcjjoxqgfpw9k1Z pg12_1921_data]# zfs list -t snapshot    
NAME                              USED  AVAIL     REFER  MOUNTPOINT    
zp1/zpdata01@2020-03-2117:06:47   144K      -      770M  -    
zp1/zpdata01@2020-03-2117:17:01     0B      -      786M  -    
  
[root@iZbp135pwcjjoxqgfpw9k1Z pg12_1921_data]# zfs destroy zp1/zpdata01@2020-03-2117:06:47    
  
[root@iZbp135pwcjjoxqgfpw9k1Z pg12_1921_data]# zfs list -t snapshot    
NAME                              USED  AVAIL     REFER  MOUNTPOINT    
zp1/zpdata01@2020-03-2117:17:01     0B      -      786M  -    
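A minimal sketch of an automated version, assuming a 7-day retention period and reading each snapshot's age from the ZFS creation property (the script name /etc/snap_clean.sh is an example); schedule it with crontab like /etc/snap.sh:

vi /etc/snap_clean.sh
#!/bin/bash
# destroy snapshots of zp1/zpdata01 older than 7 days
NOW=`date +%s`
KEEP=$((7*24*3600))
for snap in `/usr/sbin/zfs list -H -t snapshot -o name -r zp1/zpdata01`; do
  # creation time in epoch seconds (-p prints parsable values)
  CTIME=`/usr/sbin/zfs get -Hp -o value creation $snap`
  if [ $((NOW - CTIME)) -gt $KEEP ]; then
    /usr/sbin/zfs destroy $snap
  fi
done

chmod 500 /etc/snap_clean.sh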

19.  Configure automatic cleanup for archive files:

It is not described in detail here. First, check the current contents of the archive directory:

[root@iZbp135pwcjjoxqgfpw9k1Z pg12_1921_data]# cd /zpdata02/pg12_1921_wal/    
[root@iZbp135pwcjjoxqgfpw9k1Z pg12_1921_wal]# ll    
total 49    
-rw------- 1 postgres postgres 16777216 Mar 21 17:06 000000010000011D00000097

In the following example, WAL archive files generated more than seven days ago are deleted:

find /zpdata02/pg12_1921_wal/ -type f -mtime +7 -exec rm -f {} \;    

Automatic cleaning: create a script, give it execution permissions, and schedule it with crontab, as in the sketch below.
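A minimal sketch, wrapping the find command above in a script (the name /etc/wal_clean.sh is an example) with a daily crontab entry:

vi /etc/wal_clean.sh
#!/bin/bash
# delete WAL archive files older than 7 days
find /zpdata02/pg12_1921_wal/ -type f -mtime +7 -exec rm -f {} \;

chmod 500 /etc/wal_clean.sh

crontab -e
30 1 * * * /etc/wal_clean.sh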

20.  Check the availability of backup sets regularly; one approach is sketched below.
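One practical approach is to periodically run the flashback procedure from the next section against the latest snapshot and execute a test query. A condensed sketch (the snapshot name is a placeholder, and the clone must get its own port in postgresql.auto.conf, as in the flashback example):

zfs clone -o mountpoint=/verify zp1/zpdata01@<latest_snapshot> zp1/zpdata_verify
# first edit /verify/pg12_1921_data/postgresql.auto.conf, e.g. set port=1923
rm -f /verify/pg12_1921_data/postmaster.pid /verify/pg12_1921_data/.s.*
su - postgres -c "pg_ctl start -D /verify/pg12_1921_data"
psql -h /verify/pg12_1921_data -p 1923 -U rep postgres -c "select 1;"
su - postgres -c "pg_ctl stop -m fast -D /verify/pg12_1921_data"
zfs destroy zp1/zpdata_verify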

21.  Detect primary/standby latency:

Query the primary database:

postgres=> select pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_flush_lsn(),sent_lsn)) as sent_delay,    
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_flush_lsn(),replay_lsn)) as replay_delay,*     
from pg_stat_replication ;    
  
-[ RECORD 1 ]----+------------------------------    
sent_delay       | 0 bytes    
replay_delay     | 0 bytes    
pid              | 84098    
usesysid         | 886185    
usename          | rep    
application_name | hello_rds_pg12    
client_addr      | 192.168.0.173    
client_hostname  |     
client_port      | 60402    
backend_start    | 2020-03-21 16:59:01.890775+08    
backend_xmin     |     
state            | streaming    
sent_lsn         | 11D/97002068    
write_lsn        | 11D/97002068    
flush_lsn        | 11D/97002068    
replay_lsn       | 11D/97002068    
write_lag        |     
flush_lag        |     
replay_lag       |     
sync_priority    | 0    
sync_state       | async    
reply_time       | 2020-03-21 17:01:17.198139+08    
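You can also check replay progress from the standby side (connecting to the standby on port 1922; this assumes the socket directory is the data directory, as in the flashback example below):

psql -h /zpdata01/pg12_1921_data -p 1922 -U rep postgres

postgres=> select pg_is_in_recovery(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();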

Flashback Example

1.  List the snapshots and pay attention to the time of the snapshot:

# zfs list -t snapshot    
NAME                              USED  AVAIL     REFER  MOUNTPOINT    
zp1/zpdata01@2020-03-2117:17:01   312K      -      786M  -    

2.  Select a snapshot taken before the target point in time that you want to restore to. It is best to pick the snapshot closest to, but still earlier than, the target time, since a closer snapshot leaves less WAL to replay; otherwise, pick an earlier one.

Clone the ZFS file system based on the snapshot:

# zfs clone -o mountpoint=/test_recovery zp1/zpdata01@2020-03-2117:17:01 zp1/zpdata_test    
  
[root@iZbp135pwcjjoxqgfpw9k1Z pg12_1921_wal]# cd /test_recovery    
[root@iZbp135pwcjjoxqgfpw9k1Z test_recovery]# ll    
total 17    
drwx------ 20 postgres postgres 35 Mar 21 16:59 pg12_1921_data    

3.  Configure the recovery parameters, including port and recovery_target_time:

su - postgres  
cd /test_recovery/pg12_1921_data    
  
vi postgresql.auto.conf    
    
port=1923    
shared_buffers=32GB    
log_destination='csvlog'    
restore_command = 'cp /zpdata02/pg12_1921_wal/%f %p'    
recovery_target_time = '2020-03-21 17:28:37.670338+08'    
recovery_target_timeline = 'latest'    
recovery_target_action = 'pause'     

If your ECS instance cannot provide sufficient memory, you can configure a smaller shared_buffers.

4.  Delete the socket and pid files from the cloned ZFS file system:

rm -f /test_recovery/pg12_1921_data/.s.*    
rm /test_recovery/pg12_1921_data/postmaster.pid    

5.  Start recovery:

pg_ctl start -D /test_recovery/pg12_1921_data    

6.  Connect to the recovery database:

psql -h /test_recovery/pg12_1921_data -p 1923 -U rep postgres    
psql (12.1)    
Type "help" for help.    
    
postgres=> \dt    
             List of relations    
 Schema |       Name       | Type  | Owner      
--------+------------------+-------+--------    
 public | pgbench_branches | table | digoal    
 public | pgbench_history  | table | digoal    
 public | pgbench_tellers  | table | digoal    
(3 rows)    
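Because recovery_target_action is set to 'pause', replay should pause at the target time and the clone stays read-only. A quick check, and an optional promotion if you want to open the clone read/write (pg_promote() is available since PostgreSQL 12):

postgres=> select pg_is_in_recovery();
postgres=> select pg_promote();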

After the restoration is completed, stop the instance and delete the cloned test_recovery file system:

su - postgres  
  
pg_ctl stop -m fast -D /test_recovery/pg12_1921_data  

Then, as root:
  
zfs destroy zp1/zpdata_test  

Summary

With this method, a real-time standby database is created on a ZFS file system. Snapshots are created periodically by crond and complete within seconds, regardless of the data volume. During recovery, the most recent snapshot before the target time is cloned and rolled forward to the target.

ZFS is a capable file system that supports snapshots, compression, and other useful features.
