The Zabbix server in our test environment runs on fairly low-spec hardware, so we kept hitting performance bottlenecks (mainly the database and disk I/O), which pushed me to find some ways to relieve the pressure.
The main issue was the old backup script, which took a full dump of the zabbix database using MySQL's bundled mysqldump tool. Once the data volume grew, a full dump took long enough that it blocked reads on the database, the Zabbix server decided MySQL had died, and a pile of alerts followed.
It turned out that the bulk of the growth came from a few large data tables in the zabbix database, so the backup can simply skip those tables. This cuts backup time dramatically (a full dump used to take around ten minutes; skipping the big tables, the backup finishes in about a second).
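For comparison, the same effect can be had with a single mysqldump invocation that ignores the large tables explicitly. A minimal sketch (the table list and credentials are illustrative; --single-transaction avoids the read-lock problem for InnoDB tables):

# Dump only the configuration data, skipping the big history/trend tables
mysqldump -uzabbix -pzabbix --single-transaction \
    --ignore-table=zabbix.history \
    --ignore-table=zabbix.history_log \
    --ignore-table=zabbix.history_str \
    --ignore-table=zabbix.history_text \
    --ignore-table=zabbix.history_uint \
    --ignore-table=zabbix.trends \
    --ignore-table=zabbix.trends_uint \
    zabbix > zabbix_config_only.sql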
Below is a script written by a well-known community member (itnihao) specifically for backing up and restoring the zabbix database table by table:
#!/bin/bash
#author: itnihao
red='\e[0;31m'    # red
RED='\e[1;31m'
green='\e[0;32m'  # green
GREEN='\e[1;32m'
blue='\e[0;34m'   # blue
BLUE='\e[1;34m'
purple='\e[0;35m' # purple
PURPLE='\e[1;35m'
NC='\e[0m'        # no color
source /etc/bashrc
source /etc/profile
MySQL_USER=zabbix
MySQL_PASSWORD=zabbix
MySQL_HOST=localhost
MySQL_PORT=3306
MySQL_DUMP_PATH=/opt/backup
MYSQL_BIN_PATH=/opt/software/mysql/bin/mysql
MYSQL_DUMP_BIN_PATH=/opt/software/mysql/bin/mysqldump
MySQL_DATABASE_NAME=zabbix
DATE=$(date '+%Y%m%d')
MySQLDUMP () {
    [ -d ${MySQL_DUMP_PATH} ] || mkdir -p ${MySQL_DUMP_PATH}
    cd ${MySQL_DUMP_PATH}
    [ -d logs ]    || mkdir logs
    [ -d ${DATE} ] || mkdir ${DATE}
    cd ${DATE}
    #TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables" | egrep -v "(Tables_in_zabbix)")
    # List every table except the large history/trend/event tables.
    TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables" | egrep -v "(Tables_in_zabbix|^history|^trends|acknowledges|alerts|auditlog|events|service_alarms)")
    # Track per-table dump failures explicitly; a bare $? after the loop
    # would only reflect the final sleep, not mysqldump.
    STATUS=0
    for TABLE_NAME in ${TABLE_NAME_ALL}
    do
        ${MYSQL_DUMP_BIN_PATH} --opt -u${MySQL_USER} -p${MySQL_PASSWORD} -P${MySQL_PORT} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} ${TABLE_NAME} > ${TABLE_NAME}.sql || STATUS=1
        sleep 0.01
    done
    if [ "${STATUS}" -eq 0 ]; then
        echo "${DATE}: Backup zabbix succeed"     >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
    else
        echo "${DATE}: Backup zabbix not succeed" >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
    fi
    cd ${MySQL_DUMP_PATH}/
    # Drop the backup directory from 5 days ago.
    rm -rf $(date +%Y%m%d --date='5 days ago')
    exit 0
}
MySQLImport () {
    cd ${MySQL_DUMP_PATH}
    DATE=$(ls ${MySQL_DUMP_PATH} | egrep "^[0-9]+$")
    echo -e "${green}${DATE}${NC}"
    echo -e "${blue}which DATE do you want to import? please input a date:${NC}"
    read SELECT_DATE
    if [ -d "${SELECT_DATE}" ]; then
        echo -e "you selected ${green}${SELECT_DATE}${NC}; to continue, input ${red}(yes|y|Y)${NC}, anything else exits"
        read Input
        [[ "${Input}" =~ ^(yes|y|Y)$ ]]
        status="$?"
        if [ "${status}" == "0" ]; then
            echo "now importing SQL....... Please wait......."
        else
            exit 1
        fi
        cd ${SELECT_DATE}
        for PER_TABLE_SQL in $(ls *.sql)
        do
            ${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} < ${PER_TABLE_SQL}
            echo -e "import ${PER_TABLE_SQL} ${PURPLE}........................${NC}"
        done
        echo "Finished importing SQL. Please check the Zabbix database."
    else
        echo "Directory ${SELECT_DATE} does not exist"
    fi
}
case "$1" in
MySQLDUMP|mysqldump)
MySQLDUMP
;;
MySQLImport|mysqlimport)
MySQLImport
;;
*)
echo "Usage: $0 {(MySQLDUMP|mysqldump) (MySQLImport|mysqlimport)}"
;;
esac
The original script is here: https://github.com/itnihao/zabbix-book/blob/master/03-chapter/Zabbix_MySQLdump_per_table_v2.sh
What you see above is my modified version, adapted to my own backup needs; feel free to adapt it to yours. The effect is as described earlier: a full backup used to be around 4 GB of data, while backing up only the configuration tables is now under 10 MB, a huge saving in both time and space.
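For reference, a typical invocation looks like this (assuming the script is saved as /usr/local/scripts/Zabbix_MySQLdump_per_table_v2.sh, the path used in the crontab entry near the end of this post):

# Take today's per-table backup
/usr/local/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqldump

# Restore interactively: the script lists the dated backup directories
# and prompts for the one to import
/usr/local/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqlimport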
Of course, this way the monitoring data itself is not backed up at all. I am considering Percona XtraBackup for incremental backups of the full dataset; that is not set up yet, so I'll leave it for the next few days.
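In case it helps, an incremental setup with xtrabackup might look roughly like the sketch below (untested here; the target directories are placeholders, and older XtraBackup releases use the innobackupex wrapper instead):

# Full base backup
xtrabackup --backup --user=zabbix --password=zabbix \
    --target-dir=/opt/backup/xtrabackup/base

# Later runs: back up only the pages changed since the base
xtrabackup --backup --user=zabbix --password=zabbix \
    --target-dir=/opt/backup/xtrabackup/inc1 \
    --incremental-basedir=/opt/backup/xtrabackup/base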
With backups sorted out, the next task is to partition the large tables. I followed an article on the Zabbix wiki, https://www.zabbix.org/wiki/Docs/howto/mysql_partition (worth a read if you're interested); what follows is my consolidated version, which should be a bit more convenient.
Partitioning physically splits a large table into multiple files while logically it remains a single table, transparent to the application. Splitting one big table into many small partitions makes queries faster, and old partitions can be dropped at any time to purge expired data. This suits tables that hold a lot of data but receive relatively few queries; for a huge table that is also queried heavily, sharding across databases and tables is the better choice.
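To illustrate the purge point: once a table is range-partitioned on clock, expiring a day of data is a near-instant metadata operation instead of a slow DELETE. A sketch (the partition name is hypothetical, following the p%Y%m%d%H00 naming scheme used below):

mysql -uzabbix -pzabbix zabbix -e "ALTER TABLE history DROP PARTITION p201403280000;"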
Enough talk; down to work.
First, log in to the database (not demonstrated here).
Then switch to the zabbix database and alter the structure of two tables. (MySQL requires every unique key of a partitioned table to include the partitioning column, clock in our case, so the primary keys of history_text and history_log have to be replaced with plain indexes first.)
use zabbix;
Alter table history_text drop primary key, add index (id), drop index history_text_2, add index history_text_2 (itemid, id);
Alter table history_log drop primary key, add index (id), drop index history_log_2, add index history_log_2 (itemid, id);
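You can sanity-check the new key layout afterwards, for example:

mysql -uzabbix -pzabbix zabbix -e "SHOW INDEX FROM history_text;"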
Once that is done, create the four stored procedures from the wiki article:
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table to add the partition to
           PARTITIONNAME = The name of the partition to create
           CLOCK = The unix-timestamp upper bound for the new partition (VALUES LESS THAN)
        */
        /*
           Verify that the partition does not already exist
        */
        DECLARE RETROWS INT;
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

        IF RETROWS = 0 THEN
                /*
                   1. Print a message indicating that a partition was created.
                   2. Create the SQL to create the partition.
                   3. Execute the SQL from #2.
                */
                SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
                SET @SQL = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
                PREPARE STMT FROM @SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
        DECLARE done INT DEFAULT FALSE;
        DECLARE drop_part_name VARCHAR(16);

        /*
           Get a list of all the partitions that are older than the date
           in DELETE_BELOW_PARTITION_DATE.  All partitions are prefixed with
           a "p", so use SUBSTRING TO get rid of that character.
        */
        DECLARE myCursor CURSOR FOR
                SELECT partition_name
                FROM information_schema.partitions
                WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

        /*
           Create the basics for when we need to drop the partition.  Also, create
           @drop_partitions to hold a comma-delimited list of all partitions that
           should be deleted.
        */
        SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
        SET @drop_partitions = "";

        /*
           Start looping through all the partitions that are too old.
        */
        OPEN myCursor;
        read_loop: LOOP
                FETCH myCursor INTO drop_part_name;
                IF done THEN
                        LEAVE read_loop;
                END IF;
                SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
        END LOOP;
        IF @drop_partitions != "" THEN
                /*
                   1. Build the SQL to drop all the necessary partitions.
                   2. Run the SQL to drop the partitions.
                   3. Print out the table partitions that were deleted.
                */
                SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
                PREPARE STMT FROM @full_sql;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;

                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
        ELSE
                /*
                   No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
                   that no changes were made.
                */
                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
        DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE LESS_THAN_TIMESTAMP INT;
        DECLARE CUR_TIME INT;

        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
        SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

        SET @__interval = 1;
        create_loop: LOOP
                IF @__interval > CREATE_NEXT_INTERVALS THEN
                        LEAVE create_loop;
                END IF;

                SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
                SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
                CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
                SET @__interval=@__interval+1;
        END LOOP;

        SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
        CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE RETROWS INT(11);
        DECLARE FUTURE_TIMESTAMP TIMESTAMP;

        /*
         * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
         */
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

        /*
         * If partitions do not exist, go ahead and partition the table
         */
        IF RETROWS = 1 THEN
                /*
                 * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
                 * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
                 * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
                 * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
                 */
                SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
                SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

                -- Create the partitioning query
                SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
                SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

                -- Run the partitioning query
                PREPARE STMT FROM @__PARTITION_SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;
With the four stored procedures in place, you can run
CALL partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>);
to partition any table you want. Let me explain the parameters.
Here is an example:
CALL partition_maintenance('zabbix', 'history_uint', 31, 24, 14);
zabbix_db_name: the database name
table_name: the table name
days_to_keep_data: how many days of data to keep
hourly_interval: how many hours each partition covers
num_future_intervals_to_create: how many partitions to create ahead in this run
So this example keeps at most 31 days of data in history_uint, creates one partition per 24 hours, and creates 14 partitions ahead in this run.
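To verify the result, you can list a table's partitions from information_schema, for example:

mysql -uzabbix -pzabbix -e "SELECT partition_name, partition_description, table_rows FROM information_schema.partitions WHERE table_schema='zabbix' AND table_name='history_uint';"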
You can save the four stored procedures above as a single file and import it into the database (I will attach the file later), using the command: mysql -uzabbix -pzabbix zabbix < partition_call.sql
The CALLs can likewise be bundled into one procedure in a second file, with the following contents:
DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
        CALL partition_maintenance(SCHEMA_NAME, 'history', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_log', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_str', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_text', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'trends', 180, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 180, 24, 14);
END$$
DELIMITER ;
Import that file as well, with: mysql -uzabbix -pzabbix zabbix < partition_all.sql
At this point everything is ready, and you can partition the tables with the following command:
mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
+----------------+--------------------+
| table          | partitions_deleted |
+----------------+--------------------+
| zabbix.history | N/A                |
+----------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_log | N/A                |
+--------------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_str | N/A                |
+--------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_text | N/A                |
+---------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_uint | N/A                |
+---------------------+--------------------+
+---------------+--------------------+
| table         | partitions_deleted |
+---------------+--------------------+
| zabbix.trends | N/A                |
+---------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.trends_uint | N/A                |
+--------------------+--------------------+
Output like the above shows that all 7 tables have been partitioned; you can also see the newly created partition files under MySQL's data directory. (Note: it is best to empty the history_uint table before running this command, otherwise the table is so large that the conversion takes a very long time. To empty it: truncate table history_uint;)
That's it; the tables are now partitioned.
Add the command to the crontab like this:
crontab -l | tail -1
01 01 * * * /opt/software/mysql/bin/mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
It runs once a day at 01:01. The backup script from earlier also needs a cron entry, taking a backup every day at 00:01:
crontab -l | tail -2 | head -1
01 00 * * * /usr/local/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqldump
And that's it. Take a look at the Zabbix web UI now; doesn't it feel faster than before?