TiDB Distributed High-Availability Architecture

Date: 2021-06-10 19:43:00

 

Architecture diagram:

[Image: TiDB distributed high-availability architecture]

 

 

I. Environment planning:

PD nodes:      192.168.9.42, 192.168.15.57

TiKV nodes:    192.168.15.2, 192.168.15.13, 192.168.15.23

TiDB nodes:    192.168.15.57, 192.168.15.104

VIP:           192.168.15.219

HAProxy (haproxy -v: 1.4.20):        192.168.15.57, 192.168.9.42

Keepalived (keepalived -v: 1.2.19):  same two nodes

Pre-installation checklist:

A. Synchronize system time across all nodes.

B. To keep the scrolling log out of the terminal, append a redirect to the startup command, e.g.:

>> /tmp/sh-error1.log 2>&1 &

C. Different OS versions need the matching glibc package.

D. To keep a service running as a daemon after startup, wrap it in nohup:

bash -c 'nohup command args... &'

E. The OS here is CentOS 6.

F. Package downloads:

http://down.51cto.com/data/2258987  ## CentOS 7

http://down.51cto.com/data/2259295  ## CentOS 6

http://down.51cto.com/data/2259848  ## pd/tikv config files

Tip: any other packages you need can be found in other posts on this blog.

G. IPTABLES: open the ports the cluster uses, or simply turn the firewall off.
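Points B and D combine into one startup pattern: nohup so the process survives the session, plus an appended redirect to capture the log. A minimal demonstration of that pattern, using /tmp/demo.log and a placeholder echo in place of a real server binary:

```shell
#!/bin/sh
# Daemonize a command with nohup and append its output to a log file.
# "sh -c 'echo service started'" stands in for the real service binary.
LOG=/tmp/demo.log
rm -f "$LOG"
bash -c "nohup sh -c 'echo service started' >> $LOG 2>&1 &"
sleep 1                    # give the background job a moment to finish
grep 'service started' "$LOG"
```

The same shape is used verbatim for pd-server and tikv-server in the steps below.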

1. Download the package, extract it, and put the binaries under /usr/bin/:

# ln -s ……..
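The extract-and-link step can be sketched end to end. The binary names (pd-server, tikv-server, tidb-server) come from the later steps; scratch paths under /tmp stand in for /usr/local/tidb/bin and /usr/bin:

```shell
#!/bin/sh
# Link each extracted binary into a directory on PATH (here a /tmp
# scratch layout; in production LINKDIR would be /usr/bin).
BINDIR=/tmp/tidb-demo/bin
LINKDIR=/tmp/tidb-demo/usr-bin
mkdir -p "$BINDIR" "$LINKDIR"
for bin in pd-server tikv-server tidb-server; do
  touch "$BINDIR/$bin" && chmod +x "$BINDIR/$bin"   # stand-in binary
  ln -sf "$BINDIR/$bin" "$LINKDIR/$bin"             # the ln -s from step 1
done
ls "$LINKDIR"
```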

2. Edit the PD config file on .57 and start it: ## default parameters not shown

name = "tidb_pd"
data-dir = "/home/tidb_pd"
client-urls = "http://192.168.15.57:2379"          # this node
advertise-client-urls = ""
peer-urls = "http://192.168.15.57:2380"            # this node
advertise-peer-urls = "http://192.168.15.57:2380"  # this node
initial-cluster = "tidb57=http://192.168.15.57:2380,tidb104=http://192.168.15.104:2380"  # PD node list
initial-cluster-state = "new"
lease = 1
log-level = "debug"
tso-save-interval = "3s"
max-peer-count = 3

[balance]
min-capacity-used-ratio = 0.1
max-capacity-used-ratio = 0.9
address = ""

3. Start pd-server:

# bash -c 'nohup /usr/local/tidb/bin/pd-server --config=/usr/local/tidb/conf/pd.toml >> /tmp/aa.tx 2>&1 &'

Check that it is up:

[root@nod2 conf]# netstat -nletp | grep pd-server

tcp 0 0 192.168.15.57:2379 0.0.0.0:* ……

tcp 0 0 192.168.15.57:2380 0.0.0.0:* ……

4. Edit the tikv config file on all three nodes and start them  ##### default parameters not shown

addr = "192.168.15.13:20160"    # each node's own IP
advertise-addr = ""
store = "/home/tikv13"
log-level = "debug"
job = "tikv_13"
endpoints = "192.168.15.57:2379,192.168.15.104:2379"    # PD IPs

# /usr/local/tidb/bin/tikv-server --config=/usr/local/tidb/conf/tikv.toml &

Started this way the process is not daemonized and keeps scrolling logs into the current session; prepend nohup instead:

# bash -c 'nohup /usr/local/tidb/bin/tikv-server --config=/usr/local/tidb/conf/tikv.toml >> /tmp/aa.tx 2>&1 &'
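Each TiKV node needs its own addr, store, and job values. A small sketch that renders those per-node fields from the node list, following the tikv_<last-octet> naming used in the example above:

```shell
#!/bin/sh
# Emit the per-node tikv.toml fields for each TiKV host.
# Naming (store /home/tikv<N>, job tikv_<N>) follows the sample config.
for ip in 192.168.15.2 192.168.15.13 192.168.15.23; do
  n=${ip##*.}                          # last octet, e.g. 13
  printf 'addr = "%s:20160"\n' "$ip"
  printf 'store = "/home/tikv%s"\n' "$n"
  printf 'job = "tikv_%s"\n\n' "$n"
done
```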

5. Start tidb:

# nohup /usr/local/tidb/bin/tidb-server --store=tikv --path="192.168.15.57:2379,192.168.15.104:2379" &

## --store=tikv selects the distributed (TiKV) storage engine

[root@nod2 conf]# netstat -nltp | grep tidb

tcp        0      0 :::10080                   :::*                        LISTEN

tcp        0      0 :::4000                    :::*                        LISTEN

### Basics

4000: the service (MySQL protocol) listen port

10080: the status listen port; it exposes TiDB-internal data, including Prometheus metrics

### Basic tidb operations

http://192.168.15.57:10080/debug/pprof

http://192.168.15.57:10080/metrics

Check tidb status: http://192.168.15.57:10080/status

{"connections":1,"version":"5.7.1-TiDB-1.0","git_hash":"01dde4433a0e5aabb183b3f6d00bd2f43107421a"}

Check cluster status: querying the PD service info also shows the tikv store info:

http://192.168.15.57:2379/pd/api/v1/stores

Or check from the local machine with curl plus the address.
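For scripted checks, the status JSON can be parsed without extra tools. A sketch using a sed one-liner, demonstrated on the sample response above; in production you would pipe `curl -s http://192.168.15.57:10080/status` into it instead:

```shell
#!/bin/sh
# Pull the "version" field out of the /status JSON with plain sed.
parse_version() {
  sed 's/.*"version":"\([^"]*\)".*/\1/'
}
sample='{"connections":1,"version":"5.7.1-TiDB-1.0","git_hash":"01dde4433a0e5aabb183b3f6d00bd2f43107421a"}'
echo "$sample" | parse_version     # prints 5.7.1-TiDB-1.0
```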

 

First login:

mysql -h 192.168.15.57 -P 4000 -u root

mysql -h 192.168.9.42 -P 4000 -u root

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
| tidb               |
+--------------------+

Both logins show the same stored metadata, so the cluster is OK.
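The "same metadata on both servers" check can be automated. A sketch where list_dbs is a hypothetical wrapper: in production it would run `mysql -h "$1" -P 4000 -u root -N -e 'show databases'`; here it is stubbed with the list from the session above so the comparison logic is visible:

```shell
#!/bin/sh
# Compare the database list reported by each tidb-server.
list_dbs() {   # $1 = host; stubbed with the output shown above
  printf '%s\n' INFORMATION_SCHEMA PERFORMANCE_SCHEMA mysql test tidb
}
a=$(list_dbs 192.168.15.57 | sort)
b=$(list_dbs 192.168.9.42  | sort)
[ "$a" = "$b" ] && echo "metadata matches: cluster OK"
```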

6. Install the haproxy and keepalived services on both 192.168.15.57 and 192.168.9.42

         ## keepalived provides resource high availability: it removes the single point of failure by floating the VIP between nodes

1. Extract and build haproxy

### If haproxy is started on both nodes at once, the node that does not hold the VIP fails with "Starting proxy admin_stats: cannot bind socket" and the service will not start. Fix it with a kernel parameter in /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind=1

Apply the change:

sysctl -p

# useradd haproxy

# tar -zxvf haproxy-1.4.20.tar.gz

# cd haproxy-1.4.20 && make TARGET=linux26 PREFIX=/usr/local/haproxy ARCH=X86_64 && make install PREFIX=/usr/local/haproxy

If the build fails for lack of a compiler, install gcc:

# yum -y install gcc

# chown -R haproxy.haproxy /usr/local/haproxy

2. Create and edit the config file; some parameter meanings are explained below (see haproxy.cfg for details)

   # cd /usr/local/haproxy && mkdir conf && cd conf && touch haproxy.cfg

### Note: the stats page port 48800 and the front-end service ports must be opened in IPTABLES, and
the ports must not conflict with anything else.
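The no-conflict requirement is easy to sanity-check. A sketch over the four frontend ports this config ends up using (48800 stats, 3306 tidb, 6688 tidb status, 20160 tikv):

```shell
#!/bin/sh
# Flag any duplicate among the haproxy frontend ports.
ports="48800 3306 6688 20160"
dup=$(printf '%s\n' $ports | sort | uniq -d)
if [ -z "$dup" ]; then
  echo "no port conflicts"
else
  echo "conflicting ports: $dup"
fi
```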

3. By default haproxy writes no logs; use the system's rsyslog service to record them.

1. On Linux this is the rsyslogd service; install it first:

# yum -y install rsyslog

Installing rsyslog normally creates the /etc/rsyslog.d directory; create it yourself if it is missing.

# cd /etc/rsyslog.d/ && touch haproxy.conf

# vim /etc/rsyslog.d/haproxy.conf

$ModLoad imudp
$UDPServerRun 514
local0.* /var/log/haproxy.log    ### this must match the log setting in haproxy.cfg

# vim /etc/rsyslog.conf

At line 62 add: local0.*       /var/log/haproxy.log

Restart the service:

# service rsyslog restart

Logs now appear in /var/log/haproxy.log.

Haproxy.cfg:

# this config needs haproxy-1.1.28 or haproxy-1.2.1

 

global
    log 127.0.0.1   local0
    maxconn 4096
    log 127.0.0.1   local1 notice
    #log loghost    local0 info
    #maxconn 4096
    #chroot /usr/local/haproxy
    chroot /usr/local/pxc
    uid 501
    gid 501
    daemon
    nbproc 1
    pidfile /usr/local/haproxy/logs/haproxy.pid
    #debug
    #quiet

defaults
    log global
    #option dontlognull
    retries 3
    option redispatch
    maxconn 4096
    timeout http-keep-alive 10s
    timeout check 10s
    contimeout 600s
    clitimeout 600s
    srvtimeout 50000
    timeout queue 50000
    timeout connect 600s
    timeout client 600s
    timeout server 600s

listen admin_stats 192.168.15.219:48800
    stats enable
    stats hide-version
    stats realm <realm>
    stats refresh 5s
    stats uri /admin-status
    stats auth admin:admin
    stats admin if TRUE
    mode http
    option httplog
    timeout connect 600s
    timeout check 5000
    timeout client 600s
    timeout server 600s

 

listen tidb_server 192.168.15.219:3306
    mode tcp
    balance roundrobin
    option tcpka
    option tcplog
    server tidb_server1 192.168.15.57:4000 weight 1 check inter 2000 rise 2 fall 5
    server tidb_server2 192.168.9.42:4000  weight 1 check inter 2000 rise 2 fall 5 backup
    #timeout connect 50000
    #timeout client  50000
    #timeout check   50000
    #timeout http-keep-alive 5000
    #timeout server  50000

 

listen tidb_status 192.168.15.219:6688
    mode tcp
    balance roundrobin
    option tcpka
    option tcplog
    server tidb_status1 192.168.15.57:10080 weight 1 check inter 2000 rise 2 fall 5
    server tidb_status2 192.168.9.42:10080 weight 1 check inter 2000 rise 2 fall 5
    timeout connect 50000
    timeout client  50000
    timeout check   50000
    timeout http-keep-alive 5000
    timeout server  50000

listen tikv_server
    bind *:20160
    mode tcp
    balance roundrobin
    option tcpka
    option tcplog
    server tikv_server1 192.168.15.2:20160 weight 1 check inter 2000 rise 2 fall 5
    server tikv_server2 192.168.15.13:20160 weight 1 check inter 2000 rise 2 fall 5
    server tikv_server3 192.168.15.23:20160 weight 1 check inter 2000 rise 2 fall 5
    timeout connect 50000
    timeout client  50000
    timeout check   50000
    timeout http-keep-alive 5000
    timeout server  50000

7. Install the keepalived service:

# cd keepalived-1.2.12

# ./configure --prefix=/usr/local/keepalived

If configure errors with:

configure: error:
  !!! OpenSSL is not properly installed on your system. !!!
  !!! Can not include OpenSSL headers files.            !!!

install the missing packages:

yum install openssl* check* -y

# make && make install

# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/

# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

# mkdir /etc/keepalived

# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

 

Keepalived.conf:

cat keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface em1
    virtual_router_id 51
    realserver 192.168.15.57
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.15.219 dev em1 scope global
    }
    #notify_master /etc/keepalived/check_master_haproxy.sh
    #notify_backup /etc/keepalived/check_backup_haproxy.sh
}

### Note: a simple script is needed here that checks haproxy health so keepalived can fail over when it breaks. Script omitted.
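A hypothetical sketch of that omitted check script. The decision logic takes a process list as input so it is easy to test; in production keepalived would run it periodically (for example from a vrrp_script block) and the "dead" branch would execute `service keepalived stop`, releasing the VIP to the BACKUP node:

```shell
#!/bin/sh
# Hypothetical haproxy watchdog for keepalived failover.
check_haproxy() {   # $1 = output of `ps -e -o comm`
  if echo "$1" | grep -qx haproxy; then
    echo "haproxy alive: keep VIP"
  else
    echo "haproxy dead: stop keepalived"   # production: service keepalived stop
  fi
}
check_haproxy "$(ps -e -o comm)"
```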

## tikv, tidb, and pd are all proxied through haproxy here; it is not clear whether that costs efficiency. If it does, stick to the native status pages for monitoring.



## For load balancing, maxscale is a decent alternative: simple to configure, though it has no monitoring UI.

This article comes from the "DBAspace" blog; please keep this attribution: http://dbaspace.blog.51cto.com/6873717/1873964