Must-Know Databases: TiDB (11) Installing a TiDB Cluster
Installing a TiDB Cluster
To install a TiDB cluster, you first set up a control machine, then install and manage the cluster through it.
Installing a cluster on a single machine
A single-machine cluster installs all of the nodes on one server.
Even so, the cluster still needs 3 PD instances and 3 TiKV instances; a single instance of each remaining component is enough.
The installation follows the same pattern: set up the control machine first, then install and manage the cluster through it.
Download and install the TiUP tool
Download and install the TiUP tool with the following command:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
The installation output is:
wux_labs@wux-labs-vm:~$ curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7088k 100 7088k 0 0 1483k 0 0:00:04 0:00:04 --:--:-- 1561k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /home/wux_labs/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile: /home/wux_labs/.bashrc
/home/wux_labs/.bashrc has been modified to add tiup to PATH
open a new terminal or source /home/wux_labs/.bashrc to use it
Installed path: /home/wux_labs/.tiup/bin/tiup
===============================================
Have a try: tiup playground
===============================================
wux_labs@wux-labs-vm:~$
After installation, the output prompts you to run source /home/wux_labs/.bashrc
to make the environment variable take effect. This is needed because install.sh appended the following line to that file:
export PATH=/home/wux_labs/.tiup/bin:$PATH
Run the suggested command to apply the change; the tiup binary is then on the PATH.
Install the TiUP cluster component
Run the following command to install the cluster component:
tiup cluster
The installation output is:
wux_labs@wux-labs-vm:~$ tiup cluster
tiup is checking updates for component cluster ...timeout(2s)!
The component `cluster` version is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.11.3-linux-amd64.tar.gz 8.44 MiB / 8.44 MiB 100.00% 7.00 MiB/s
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster
Deploy a TiDB cluster for production
Usage:
tiup cluster [command]
Available Commands:
check Perform preflight checks for the cluster.
deploy Deploy a cluster for production
start Start a TiDB cluster
stop Stop a TiDB cluster
restart Restart a TiDB cluster
scale-in Scale in a TiDB cluster
scale-out Scale out a TiDB cluster
destroy Destroy a specified cluster
clean (EXPERIMENTAL) Cleanup a specified cluster
upgrade Upgrade a specified TiDB cluster
display Display information of a TiDB cluster
prune Destroy and remove instances that is in tombstone state
list List all clusters
audit Show audit log of cluster operation
import Import an exist TiDB cluster from TiDB-Ansible
edit-config Edit TiDB cluster config
show-config Show TiDB cluster config
reload Reload a TiDB cluster's config and restart if needed
patch Replace the remote package with a specified package and restart the service
rename Rename the cluster
enable Enable a TiDB cluster automatically at boot
disable Disable automatic enabling of TiDB clusters at boot
replay Replay previous operation and skip successed steps
template Print topology template
tls Enable/Disable TLS between TiDB components
meta backup/restore meta information
help Help about any command
completion Generate the autocompletion script for the specified shell
Flags:
-c, --concurrency int max number of parallel tasks allowed (default 5)
--format string (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
-h, --help help for tiup
--ssh string (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
--ssh-timeout uint Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
-v, --version version for tiup
--wait-timeout uint Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
-y, --yes Skip all confirmations and assumes 'yes'
Use "tiup cluster help [command]" for more information about a command.
wux_labs@wux-labs-vm:~$
With that, the TiUP cluster component is installed.
Create the topology file
To avoid configuration mistakes and save time, generate a topology template with the following command, then edit the template:
tiup cluster template > topology.yaml
Because multiple instances are deployed on one server, the PD and TiKV instances must be told apart by different ports. The final, edited topology file is:
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]

pd_servers:
  - host: wux-labs-vm
    client_port: 23791
    peer_port: 23801
    deploy_dir: "/tidb-deploy/pd-23791"
    data_dir: "/tidb-data/pd-23791"
    log_dir: "/tidb-deploy/pd-23791/log"
    config:
      server.labels: { host: "logic-host-1" }
  - host: wux-labs-vm
    client_port: 23792
    peer_port: 23802
    deploy_dir: "/tidb-deploy/pd-23792"
    data_dir: "/tidb-data/pd-23792"
    log_dir: "/tidb-deploy/pd-23792/log"
  - host: wux-labs-vm
    client_port: 23793
    peer_port: 23803
    deploy_dir: "/tidb-deploy/pd-23793"
    data_dir: "/tidb-data/pd-23793"
    log_dir: "/tidb-deploy/pd-23793/log"

tidb_servers:
  - host: wux-labs-vm

tikv_servers:
  - host: wux-labs-vm
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
    log_dir: "/tidb-deploy/tikv-20161/log"
    config:
      server.labels: { host: "logic-host-1" }
  - host: wux-labs-vm
    port: 20162
    status_port: 20182
    deploy_dir: "/tidb-deploy/tikv-20162"
    data_dir: "/tidb-data/tikv-20162"
    log_dir: "/tidb-deploy/tikv-20162/log"
    config:
      server.labels: { host: "logic-host-2" }
  - host: wux-labs-vm
    port: 20163
    status_port: 20183
    deploy_dir: "/tidb-deploy/tikv-20163"
    data_dir: "/tidb-data/tikv-20163"
    log_dir: "/tidb-deploy/tikv-20163/log"
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: wux-labs-vm

monitoring_servers:
  - host: wux-labs-vm

grafana_servers:
  - host: wux-labs-vm

alertmanager_servers:
  - host: wux-labs-vm
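Since every instance shares one host, a quick sanity check before deploying is to confirm that no port appears twice in the topology. A minimal sketch (the port list below was copied by hand from the PD, TiKV, TiDB, and monitoring entries above, so keep it in sync with your own file; tiup's own preflight check also catches conflicts):

```shell
# One port per line; any line printed by `uniq -d` is a duplicate,
# meaning two instances would collide on the same host port.
printf '%s\n' \
  23791 23801 23792 23802 23793 23803 \
  20161 20181 20162 20182 20163 20183 \
  4000 10080 9090 3000 9093 9094 \
  | sort -n | uniq -d
```

An empty result means the listed port assignments are conflict-free.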
Configure passwordless SSH login
Because the cluster is installed and managed through the control machine, passwordless SSH must be configured even though this is a single-machine cluster.
wux_labs@wux-labs-vm:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/wux_labs/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/wux_labs/.ssh/id_rsa
Your public key has been saved in /home/wux_labs/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:sKT4o0ISwCqLtL0cVm4yFTFQS19FJbp3FsTyXfA/Gmg wux_labs@wux-labs-vm
The key's randomart image is:
+---[RSA 3072]----+
|. .o=. .o+oo.. |
|.. ..+ . ..o. ..|
|o .+. . o.. o|
|+. . = o . .....|
|+o+ = . S. E + ..|
|+o B o o o o .|
|o o O . |
|. + . |
| .. |
+----[SHA256]-----+
wux_labs@wux-labs-vm:~$
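ssh-keygen only creates the key pair; for a passwordless login to this same machine, the public key must also end up in authorized_keys. A sketch of that step (run here against a throwaway directory so it can be tried safely; on the real control machine the files live under ~/.ssh instead):

```shell
# Simulate authorizing the newly generated public key. On the control
# machine itself the equivalent would be:
#   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q    # empty passphrase
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 700 "$KEYDIR" && chmod 600 "$KEYDIR/authorized_keys"
```

After the real ~/.ssh/authorized_keys is set up this way, ssh to the host should no longer prompt for a password.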
Check the installation requirements
Before installing the cluster, check that the server meets the requirements so the installation can succeed. Run the following command:
tiup cluster check ./topology.yaml
The check output is:
wux_labs@wux-labs-vm:~$ tiup cluster check ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster check ./topology.yaml
+ Detect CPU Arch Name
- Detecting node wux-labs-vm Arch info ... Done
+ Detect CPU OS Name
- Detecting node wux-labs-vm OS info ... Done
+ Download necessary tools
- Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
- Getting system info of wux-labs-vm:22 ... Done
+ Check time zone
- Checking node wux-labs-vm ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
- Checking node wux-labs-vm ... Done
+ Cleanup check files
- Cleanup check files on wux-labs-vm:22 ... Done
Node Check Result Message
---- ----- ------ -------
wux-labs-vm sysctl Fail net.ipv4.tcp_syncookies = 1, should be 0
wux-labs-vm sysctl Fail vm.swappiness = 60, should be 0
wux-labs-vm sysctl Fail net.core.somaxconn = 4096, should be greater than 32768
wux-labs-vm thp Fail THP is enabled, please disable it for best performance
wux-labs-vm command Fail numactl not usable, bash: numactl: command not found
wux-labs-vm os-version Warn OS is Ubuntu 20.04.5 LTS 20.04.5 (ubuntu support is not fully tested, be careful)
wux-labs-vm cpu-cores Pass number of CPU cores / threads: 2
wux-labs-vm memory Pass memory size is 8192MB
wux-labs-vm selinux Pass SELinux is disabled
wux-labs-vm service Pass service firewalld not found, ignore
wux-labs-vm cpu-governor Warn Unable to determine current CPU frequency governor policy
wux-labs-vm network Pass network speed of enP58751s1 is 50000MB
wux-labs-vm network Pass network speed of eth0 is 50000MB
wux-labs-vm limits Fail soft limit of 'nofile' for user 'tidb' is not set or too low
wux-labs-vm limits Fail hard limit of 'nofile' for user 'tidb' is not set or too low
wux-labs-vm limits Fail soft limit of 'stack' for user 'tidb' is not set or too low
wux_labs@wux-labs-vm:~$
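With this many result rows, it can help to filter the table down to just the failures. A small sketch (a few sample rows from the output above are inlined through a heredoc so the filter can be tried without re-running tiup; in practice you would pipe the real check output into awk):

```shell
# Column 3 of the result table is Pass/Warn/Fail; keep only the failures.
awk '$3 == "Fail"' <<'EOF'
wux-labs-vm  sysctl  Fail  net.ipv4.tcp_syncookies = 1, should be 0
wux-labs-vm  memory  Pass  memory size is 8192MB
wux-labs-vm  thp     Fail  THP is enabled, please disable it for best performance
EOF
```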
As the output shows, some of the checks failed (Fail).
You can fix the failing items by hand, or have them fixed automatically during the check with the following command:
tiup cluster check ./topology.yaml --apply
This command repeats the checks and then adds one more step that repairs the failing items.
After the repairs, run the check again to confirm that every item meets the requirements.
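For reference, the manual fixes correspond one-to-one to the Fail rows above. A sketch of the relevant settings (the tcp_syncookies and swappiness values come straight from the check messages; the somaxconn, nofile, and stack values follow common TiDB recommendations and should be treated as an assumption, not this check's literal output):

```
# /etc/sysctl.conf
net.ipv4.tcp_syncookies = 0
vm.swappiness = 0
net.core.somaxconn = 32768

# /etc/security/limits.conf
tidb  soft  nofile  1000000
tidb  hard  nofile  1000000
tidb  soft  stack   32768
```

Apply the sysctl part with sudo sysctl -p; the limits take effect on the next login. THP can be disabled with echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled.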
One item still needs manual work: the numactl package. On Ubuntu 20.04, install it with:
sudo apt-get install numactl
After the package is installed, run the check once more; the final result should be all Pass.
Create the installation directories
The topology specifies the deployment directories
deploy_dir: "/tidb-deploy"
data_dir: "/tidb-data"
but the current user is not root, so the installation directories have to be created by hand first:
sudo mkdir /tidb-deploy /tidb-data
sudo chmod 777 /tidb-data /tidb-deploy
Deploy the cluster
Once every check passes, deploy the TiDB cluster with the deploy command; here cluster1 is the name of the cluster being deployed.
tiup cluster deploy cluster1 v6.1.0 ./topology.yaml
When prompted for confirmation, enter y to continue the installation.
The full installation output is:
wux_labs@wux-labs-vm:~$ tiup cluster deploy cluster1 v6.1.0 ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster deploy cluster1 v6.1.0 ./topology.yaml
+ Detect CPU Arch Name
- Detecting node wux-labs-vm Arch info ... Done
+ Detect CPU OS Name
- Detecting node wux-labs-vm OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: cluster1
Cluster version: v6.1.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd wux-labs-vm 23791/23801 linux/x86_64 /tidb-deploy/pd-23791,/tidb-data/pd-23791
pd wux-labs-vm 23792/23802 linux/x86_64 /tidb-deploy/pd-23792,/tidb-data/pd-23792
pd wux-labs-vm 23793/23803 linux/x86_64 /tidb-deploy/pd-23793,/tidb-data/pd-23793
tikv wux-labs-vm 20161/20181 linux/x86_64 /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv wux-labs-vm 20162/20182 linux/x86_64 /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tikv wux-labs-vm 20163/20183 linux/x86_64 /tidb-deploy/tikv-20163,/tidb-data/tikv-20163
tidb wux-labs-vm 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash wux-labs-vm 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus wux-labs-vm 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana wux-labs-vm 3000 linux/x86_64 /tidb-deploy/grafana-3000
alertmanager wux-labs-vm 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v6.1.0 (linux/amd64) ... Done
- Download tikv:v6.1.0 (linux/amd64) ... Done
- Download tidb:v6.1.0 (linux/amd64) ... Done
- Download tiflash:v6.1.0 (linux/amd64) ... Done
- Download prometheus:v6.1.0 (linux/amd64) ... Done
- Download grafana:v6.1.0 (linux/amd64) ... Done
- Download alertmanager: (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
- Prepare wux-labs-vm:22 ... Done
+ Deploy TiDB instance
- Copy pd -> wux-labs-vm ... Done
- Copy pd -> wux-labs-vm ... Done
- Copy pd -> wux-labs-vm ... Done
- Copy tikv -> wux-labs-vm ... Done
- Copy tikv -> wux-labs-vm ... Done
- Copy tikv -> wux-labs-vm ... Done
- Copy tidb -> wux-labs-vm ... Done
- Copy tiflash -> wux-labs-vm ... Done
- Copy prometheus -> wux-labs-vm ... Done
- Copy grafana -> wux-labs-vm ... Done
- Copy alertmanager -> wux-labs-vm ... Done
- Deploy node_exporter -> wux-labs-vm ... Done
- Deploy blackbox_exporter -> wux-labs-vm ... Done
+ Copy certificate to remote host
+ Init instance configs
- Generate config pd -> wux-labs-vm:23791 ... Done
- Generate config pd -> wux-labs-vm:23792 ... Done
- Generate config pd -> wux-labs-vm:23793 ... Done
- Generate config tikv -> wux-labs-vm:20161 ... Done
- Generate config tikv -> wux-labs-vm:20162 ... Done
- Generate config tikv -> wux-labs-vm:20163 ... Done
- Generate config tidb -> wux-labs-vm:4000 ... Done
- Generate config tiflash -> wux-labs-vm:9000 ... Done
- Generate config prometheus -> wux-labs-vm:9090 ... Done
- Generate config grafana -> wux-labs-vm:3000 ... Done
- Generate config alertmanager -> wux-labs-vm:9093 ... Done
+ Init monitor configs
- Generate config node_exporter -> wux-labs-vm ... Done
- Generate config blackbox_exporter -> wux-labs-vm ... Done
Enabling component pd
Enabling instance wux-labs-vm:23793
Enabling instance wux-labs-vm:23792
Enabling instance wux-labs-vm:23791
Enable instance wux-labs-vm:23791 success
Enable instance wux-labs-vm:23792 success
Enable instance wux-labs-vm:23793 success
Enabling component tikv
Enabling instance wux-labs-vm:20163
Enabling instance wux-labs-vm:20161
Enabling instance wux-labs-vm:20162
Enable instance wux-labs-vm:20163 success
Enable instance wux-labs-vm:20162 success
Enable instance wux-labs-vm:20161 success
Enabling component tidb
Enabling instance wux-labs-vm:4000
Enable instance wux-labs-vm:4000 success
Enabling component tiflash
Enabling instance wux-labs-vm:9000
Enable instance wux-labs-vm:9000 success
Enabling component prometheus
Enabling instance wux-labs-vm:9090
Enable instance wux-labs-vm:9090 success
Enabling component grafana
Enabling instance wux-labs-vm:3000
Enable instance wux-labs-vm:3000 success
Enabling component alertmanager
Enabling instance wux-labs-vm:9093
Enable instance wux-labs-vm:9093 success
Enabling component node_exporter
Enabling instance wux-labs-vm
Enable wux-labs-vm success
Enabling component blackbox_exporter
Enabling instance wux-labs-vm
Enable wux-labs-vm success
Cluster `cluster1` deployed successfully, you can start it with command: `tiup cluster start cluster1 --init`
wux_labs@wux-labs-vm:~$
At this point the cluster is installed; the next step is to start it.
Start the cluster
After deployment, you can inspect the installed cluster with the following commands.
- List all clusters
tiup cluster list
- Show the cluster status
tiup cluster display cluster1
The output shows that the cluster has 11 instance nodes, none of which are running yet.
Start the cluster as prompted. The --init flag requests a secure start: on startup, a password is generated for the database root user.
tiup cluster start cluster1 --init
The startup output is:
wux_labs@wux-labs-vm:~$ tiup cluster start cluster1 --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster start cluster1 --init
Starting cluster cluster1...
+ [ Serial ] - SSHKeySet: privateKey=/home/wux_labs/.tiup/storage/cluster/clusters/cluster1/ssh/id_rsa, publicKey=/home/wux_labs/.tiup/storage/cluster/clusters/cluster1/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [ Serial ] - StartCluster
Starting component pd
Starting instance wux-labs-vm:23793
Starting instance wux-labs-vm:23791
Starting instance wux-labs-vm:23792
Start instance wux-labs-vm:23792 success
Start instance wux-labs-vm:23791 success
Start instance wux-labs-vm:23793 success
Starting component tikv
Starting instance wux-labs-vm:20163
Starting instance wux-labs-vm:20161
Starting instance wux-labs-vm:20162
Start instance wux-labs-vm:20162 success
Start instance wux-labs-vm:20163 success
Start instance wux-labs-vm:20161 success
Starting component tidb
Starting instance wux-labs-vm:4000
Start instance wux-labs-vm:4000 success
Starting component tiflash
Starting instance wux-labs-vm:9000
Start instance wux-labs-vm:9000 success
Starting component prometheus
Starting instance wux-labs-vm:9090
Start instance wux-labs-vm:9090 success
Starting component grafana
Starting instance wux-labs-vm:3000
Start instance wux-labs-vm:3000 success
Starting component alertmanager
Starting instance wux-labs-vm:9093
Start instance wux-labs-vm:9093 success
Starting component node_exporter
Starting instance wux-labs-vm
Start wux-labs-vm success
Starting component blackbox_exporter
Starting instance wux-labs-vm
Start wux-labs-vm success
+ [ Serial ] - UpdateTopology: cluster=cluster1
Started cluster `cluster1` successfully
The root password of TiDB database has been changed.
The new password is: '@2XKr^+9&nNZ3U07q6'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
wux_labs@wux-labs-vm:~$
The log shows the order in which the components start: PD, TiKV, TiDB, TiFlash, Prometheus, Grafana, Alertmanager, node_exporter, and finally blackbox_exporter.
Verify the cluster startup
Verify with commands
After startup, check the cluster status again with tiup cluster display cluster1.
This time, all of the TiDB instances are up.
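Besides tiup cluster display, a quick way to confirm that each service is accepting connections is a plain TCP probe against its port. A rough sketch (the ports come from the topology above; bash's /dev/tcp pseudo-device is assumed to be available, and only a few of the services are probed):

```shell
# Probe a few service ports; success means something is listening there.
for svc in tidb:4000 grafana:3000 prometheus:9090; do
  name=${svc%%:*}; port=${svc#*:}
  if timeout 1 bash -c "exec 3<>/dev/tcp/wux-labs-vm/$port" 2>/dev/null; then
    echo "$name: listening on $port"
  else
    echo "$name: not reachable on $port"
  fi
done
```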
View via the Dashboard
From the output above, the TiDB Dashboard address is http://wux-labs-vm:23792/dashboard; open it directly in a browser.
Log in with the username and password to reach the Dashboard.
It provides monitoring for the whole TiDB cluster.
View via Grafana
From the output above, the Grafana address is http://wux-labs-vm:3000; open it directly in a browser.
Log in with the username and password to reach the monitoring dashboards.
View via Prometheus
Besides the options above, you can also inspect the raw Prometheus monitoring data at http://wux-labs-vm:9090/; open it directly in a browser.
Closing remarks
TiDB is a distributed database built on the TiKV key-value store, and it expects a lot of cluster nodes: if every instance ran on its own server, at least a dozen servers would be needed. A single-machine deployment only simulates a distributed cluster and must not be used in production.