1. Introduction
Starting with the 9.x series, PostgreSQL's hot standby supports streaming replication, which allows a read/write split between the primary and one or more read-only standbys and is a typical high-availability setup for PostgreSQL. The comparable feature in Oracle is called Active Data Guard; in PostgreSQL it is known as hot standby.
2. System Environment
OS platform: CentOS 6.2
PostgreSQL version: 9.5.0
master : 192.168.1.202
slave : 192.168.1.201
3. Setup Steps
Note: there is no need to run initdb on the slave; its data directory will be cloned from the master.
- Steps on the master
1. Create the streaming-replication user repuser:
CREATE USER repuser REPLICATION LOGIN CONNECTION LIMIT 3 ENCRYPTED PASSWORD 'li0924';
-- REPLICATION is the role attribute needed to initiate streaming replication; it is usually granted to a dedicated user.
-- LOGIN lets the role log in; CREATE USER implies it by default, so the LOGIN keyword can be omitted.
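As an optional sanity check (not in the original post), you can confirm the role attributes in pg_roles; rolreplication should be t for repuser:

select rolname, rolcanlogin, rolreplication, rolconnlimit from pg_roles where rolname = 'repuser';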
2. In the master's /data/pgdata/postgresql.conf, set the following parameters:
wal_level = hot_standby
max_wal_senders = 1
wal_keep_segments = 32
# max_wal_senders must be at least the number of standby nodes; set it to however many slaves you have.
# wal_keep_segments controls how many WAL segment files are retained in pg_xlog for the standbys (the default is 0).
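Once the master has been restarted (step 4 below), the values can be verified from psql, for example:

show wal_level;          -- expect hot_standby
show max_wal_senders;    -- expect 1
show wal_keep_segments;  -- expect 32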
3. In the master's pg_hba.conf (here /data/pgdata/pg_hba.conf), add the following entry:
host    replication    repuser    192.168.1.201/16    md5
# The database field must be the keyword "replication" for streaming-replication connections.
4. Restart the master so the configuration takes effect:
pg_stop
pg_start
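pg_stop and pg_start here appear to be the author's own shell aliases. With a stock installation, the equivalent pg_ctl invocation (assuming the master's PGDATA is /data/pgdata) would be roughly:

pg_ctl -D /data/pgdata stop -m fast
pg_ctl -D /data/pgdata start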
5. Using a hot backup, copy everything under the master's PGDATA directory to the slave, along with any additional data directories you created. This step is essentially cloning the database; one way to do it is sketched below.
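A convenient way to perform this clone (an assumption here; the author may instead have used pg_start_backup() plus rsync) is pg_basebackup, run as the postgres user on the slave and connecting as the repuser created above:

pg_basebackup -h 192.168.1.202 -p 5432 -U repuser -D /data/pgdata -P -X stream
# -D must point at an empty directory on the slave
# -X stream also copies the WAL generated during the backup, but it opens a
#   second WAL-streaming connection, so max_wal_senders may need to be >= 2
# you will be prompted for the repuser password unless it is stored in ~/.pgpass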
- Steps on the slave
1. In the slave's /data/pgdata/postgresql.conf, set:
hot_standby = on
2. In the PGDATA directory (here, /data/pgdata), create a new file recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.202 port=5432 user=repuser password=li0924'
trigger_file = '/data/pgdata/trigger_standby'
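trigger_file is optional and only matters for failover: if that file ever appears, the standby stops recovery and promotes itself to a read-write master. A later manual failover could therefore be triggered with either of:

touch /data/pgdata/trigger_standby
pg_ctl -D /data/pgdata promote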
3. Delete the /data/pgdata/postmaster.pid file that was copied over from the master:
rm /data/pgdata/postmaster.pid
4. Adjust the postgres user's environment variables on the slave to match the master's environment file, then start the standby:
pg_start
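pg_start is the same kind of alias as before. For reference, a minimal environment for the postgres user plus the stock start command might look like this (the /opt/pgsql path is taken from the process listing below; the rest is an assumption):

# ~/.bash_profile of the postgres user on the slave
export PGHOME=/opt/pgsql
export PGDATA=/data/pgdata
export PATH=$PGHOME/bin:$PATH

pg_ctl -D $PGDATA start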
4. Verification
- Check the processes
On the master:
[postgres@sdserver40_210 ~]$ ps -ef | grep postgre
root      2021   556  0 15:18 pts/1    00:00:00 su - postgres
postgres  2022  2021  0 15:18 pts/1    00:00:00 -bash
postgres  2239     1  0 15:24 pts/1    00:00:00 /opt/pgsql95/bin/postgres
postgres  2249  2239  0 15:24 ?        00:00:00 postgres: checkpointer process
postgres  2250  2239  0 15:24 ?        00:00:00 postgres: writer process
postgres  2251  2239  0 15:24 ?        00:00:00 postgres: wal writer process
postgres  2252  2239  0 15:24 ?        00:00:00 postgres: autovacuum launcher process
postgres  2253  2239  0 15:24 ?        00:00:00 postgres: archiver process   last was 00000006000000000000001E.00000028.backup
postgres  2254  2239  0 15:24 ?        00:00:00 postgres: stats collector process
postgres  3235  2239  0 15:54 ?        00:00:00 postgres: wal sender process repuser 183.60.192.229(40399) streaming 0/1F000D80
On the slave:
[postgres@sdserver40_222 pgdata]$ ps -ef | grep postgres
postgres  6856     1  0 15:54 pts/0    00:00:00 /opt/pgsql/bin/postgres
postgres  6863  6856  0 15:54 ?        00:00:00 postgres: startup process   recovering 00000006000000000000001F
postgres  6864  6856  0 15:54 ?        00:00:00 postgres: checkpointer process
postgres  6865  6856  0 15:54 ?        00:00:00 postgres: writer process
postgres  6866  6856  0 15:54 ?        00:00:00 postgres: wal receiver process   streaming 0/1F000CA0
postgres  6867  6856  0 15:54 ?        00:00:00 postgres: stats collector process
root      6922 30527  0 16:08 pts/0    00:00:00 su - postgres
postgres  6923  6922  0 16:08 pts/0    00:00:00 -bash
postgres  6974  6923  0 16:49 pts/0    00:00:00 ps -ef
postgres  6975  6923  0 16:49 pts/0    00:00:00 grep postgres
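Besides looking at the wal sender/receiver processes, you can ask the master directly which standbys are connected by querying pg_stat_replication (not shown in the original output); a healthy standby shows state = 'streaming':

select pid, usename, client_addr, state, sent_location, replay_location from pg_stat_replication;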
- Data verification
On the master:
[postgres@sdserver40_210 ~]$ psql mydb repuser
psql (9.5.0)
Type "help" for help.
mydb=> \d
        List of relations
 Schema | Name  |     Type      |  Owner
--------+-------+---------------+----------
 public | dept  | table         | lottu
 public | emp   | foreign table | postgres
 public | test  | table         | lottu
 public | trade | table         | lottu
(4 rows)
mydb=> create table t_lottu (id int primary key,name varchar(20));
CREATE TABLE
mydb=> \d
         List of relations
 Schema |  Name   |     Type      |  Owner
--------+---------+---------------+----------
 public | dept    | table         | lottu
 public | emp     | foreign table | postgres
 public | t_lottu | table         | repuser
 public | test    | table         | lottu
 public | trade   | table         | lottu
(5 rows)
mydb=> insert into t_lottu values (1001,'lottu');
INSERT 0 1
mydb=> insert into t_lottu values (1002,'vincent');
INSERT 0 1
mydb=> insert into t_lottu values (1003,'rax');
INSERT 0 1
mydb=> select * from t_lottu;
  id  |  name
------+---------
 1001 | lottu
 1002 | vincent
 1003 | rax
(3 rows)
Log in to the slave and check the data:
[postgres@sdserver40_222 pgdata]$ psql mydb repuser
psql (9.5.0)
Type "help" for help.
mydb=> select * from t_lottu;
  id  |  name
------+---------
 1001 | lottu
 1002 | vincent
 1003 | rax
(3 rows)
mydb=> delete from t_lottu where id = 1003;
ERROR: cannot execute DELETE in a read-only transaction
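The DELETE is rejected because the hot standby only serves read-only queries. If you need a programmatic check of which role a node currently plays, pg_is_in_recovery() returns t on the standby and f on the master:

select pg_is_in_recovery();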
Reference blog: http://my.oschina.net/Kenyon/blog/54967