SaltStack Project in Practice (6)

Date: 2024-10-05 11:33:26

  • System architecture diagram (image omitted)

I. Initialization

1. Salt environment configuration: define the base and production environments (base, prod)

vim /etc/salt/master
Edit file_roots:

file_roots:
  base:
    - /srv/salt/base
  prod:
    - /srv/salt/prod

mkdir -p /srv/salt/base
mkdir -p /srv/salt/prod

Pillar configuration

vim /etc/salt/master
Edit pillar_roots:

pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod

mkdir -p /srv/pillar/base
mkdir -p /srv/pillar/prod

Restart the service: systemctl restart salt-master

2. Salt base environment initialization:

mkdir -p /srv/salt/base/init        # environment initialization directory
mkdir -p /srv/salt/base/init/files  # configuration files directory

1) DNS configuration

Prepare the DNS configuration file and place it in /srv/salt/base/init/files:

cp /etc/resolv.conf /srv/salt/base/init/files/

vi /srv/salt/base/init/dns.sls
/etc/resolv.conf:
  file.managed:
    - source: salt://init/files/resolv.conf
    - user: root
    - group: root
    - mode: 644

2) Add timestamps to shell history

vi /srv/salt/base/init/history.sls
/etc/profile:
  file.append:
    - text:
      - export HISTTIMEFORMAT="%F %T `whoami` "
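As a quick standalone illustration (runnable in any bash shell), HISTTIMEFORMAT makes the `history` builtin prefix each entry with a strftime-formatted timestamp; note the embedded `whoami` is expanded once, at assignment time:

```shell
# %F = YYYY-MM-DD, %T = HH:MM:SS; the trailing user name comes from `whoami`,
# which is substituted when the variable is assigned.
export HISTTIMEFORMAT="%F %T `whoami` "
echo "$HISTTIMEFORMAT"
```

After sourcing /etc/profile, `history` entries then read like `2024-10-05 11:33:26 root ls -l`.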

3) Audit command execution

vi /srv/salt/base/init/audit.sls
/etc/bashrc:
  file.append:
    - text:
      - export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg"; }'
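One piece of that pipeline is worth unpacking: `history 1 | { read x y; echo $y; }` strips the numeric index that `history` prints before each command. A minimal sketch of just that stripping step, using a simulated history line:

```shell
# Simulate one line of `history 1` output: "<index>  <command>".
line="  1024  ls -l /tmp"
# `read x y` splits on whitespace: x receives the index, y the rest of the line.
msg=$(echo "$line" | { read x y; echo $y; })
echo "$msg"   # prints: ls -l /tmp
```

The stripped command is then handed to logger(1), which records it to syslog together with the effective user, tty, and working directory.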

4) Kernel parameter tuning

vi /srv/salt/base/init/sysctl.sls
net.ipv4.ip_local_port_range:
  sysctl.present:
    - value: 10000 65000
fs.file-max:
  sysctl.present:
    - value: 2000000
net.ipv4.ip_forward:
  sysctl.present:
    - value: 1
vm.swappiness:
  sysctl.present:
    - value: 0
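sysctl.present sets each value at runtime and persists it on the minion, so it survives reboots. To spot-check an applied parameter locally you can read procfs directly (the `sysctl -n` equivalent) — a minimal sketch:

```shell
# Every sysctl key maps to a file under /proc/sys, with dots becoming slashes:
#   net.ipv4.ip_forward -> /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_forward   # prints 0 or 1
```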

5) Install the EPEL yum repository

vi /srv/salt/base/init/epel.sls
yum_repo_release:
  pkg.installed:
    - sources:
      - epel-release: http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
    - unless: rpm -qa | grep epel-release-latest-7

6) Install zabbix-agent

Prepare the zabbix-agent configuration file and place it in /srv/salt/base/init/files:

cp /etc/zabbix/zabbix_agentd.conf /srv/salt/base/init/files/

Edit the copied template: vi /srv/salt/base/init/files/zabbix_agentd.conf


vi /srv/salt/base/init/zabbix_agent.sls
zabbix-agent:
  pkg.installed:
    - name: zabbix-agent
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf
    - source: salt://init/files/zabbix_agentd.conf
    - template: jinja
    - backup: minion
    - defaults:
        Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}
        Hostname: {{ grains['fqdn'] }}
    - require:
      - pkg: zabbix-agent
  service.running:
    - enable: True
    - watch:
      - pkg: zabbix-agent
      - file: zabbix-agent

zabbix_agentd.d:
  file.directory:
    - name: /etc/zabbix/zabbix_agentd.d
    - watch_in:
      - service: zabbix-agent
    - require:
      - pkg: zabbix-agent
      - file: zabbix-agent

Note: "- backup: minion" enables file backup — whenever the managed file changes, the previous version is saved under /var/cache/salt/file_backup on the minion.
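For the `- defaults:` keys to take effect, the zabbix_agentd.conf copied into files/ must carry matching Jinja placeholders. The Server line is confirmed by the diff in the run output below; the Hostname line is the analogous assumption:

```jinja
Server={{ Server }}
Hostname={{ Hostname }}
```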


7) Write the aggregate init.sls that includes the other files

vi /srv/salt/base/init/init.sls
include:
  - init.dns
  - init.history
  - init.audit
  - init.sysctl
  - init.epel
  - init.zabbix_agent

Run: salt "*" state.sls init.init

Output:

 linux-node1.example.com:
----------
ID: /etc/resolv.conf
Function: file.managed
Result: True
Comment: File /etc/resolv.conf is in the correct state
Started: ::32.998314
Duration: 181.548 ms
Changes:
----------
ID: /etc/profile
Function: file.append
Result: True
Comment: File /etc/profile is in correct state
Started: ::33.180034
Duration: 6.118 ms
Changes:
----------
ID: /etc/bashrc
Function: file.append
Result: True
Comment: Appended lines
Started: ::33.186266
Duration: 6.608 ms
Changes:
----------
diff:
---
+++
@@ -, +, @@
 unset -f pathmunge
 fi
 # vim:ts=:sw=
+export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg"; }'
----------
ID: net.ipv4.ip_local_port_range
Function: sysctl.present
Result: True
Comment: Updated sysctl value net.ipv4.ip_local_port_range =
Started: ::33.261448
Duration: 212.528 ms
Changes:
----------
net.ipv4.ip_local_port_range:
----------
ID: fs.file-max
Function: sysctl.present
Result: True
Comment: Updated sysctl value fs.file-max =
Started: ::33.474197
Duration: 122.497 ms
Changes:
----------
fs.file-max:
----------
ID: net.ipv4.ip_forward
Function: sysctl.present
Result: True
Comment: Updated sysctl value net.ipv4.ip_forward =
Started: ::33.596905
Duration: 35.061 ms
Changes:
----------
net.ipv4.ip_forward:
----------
ID: vm.swappiness
Function: sysctl.present
Result: True
Comment: Updated sysctl value vm.swappiness =
Started: ::33.632208
Duration: 36.226 ms
Changes:
----------
vm.swappiness:
----------
ID: yum_repo_release
Function: pkg.installed
Result: True
Comment: All specified packages are already installed
Started: ::39.085699
Duration: 12627.626 ms
Changes:
----------
ID: zabbix-agent
Function: pkg.installed
Result: True
Comment: Package zabbix-agent is already installed
Started: ::51.713592
Duration: 6.677 ms
Changes:
----------
ID: zabbix-agent
Function: file.managed
Name: /etc/zabbix/zabbix_agentd.conf
Result: True
Comment: File /etc/zabbix/zabbix_agentd.conf updated
Started: ::51.720994
Duration: 152.077 ms
Changes:
----------
diff:
---
+++
@@ -, +, @@
#
# Mandatory: no
# Default:
-Server={{ Server }}
+Server=192.168.137.11

### Option: ListenPort
# Agent will listen on this port for connections from the server.
----------
ID: zabbix_agentd.d
Function: file.directory
Name: /etc/zabbix/zabbix_agentd.d
Result: True
Comment: Directory /etc/zabbix/zabbix_agentd.d is in the correct state
Started: ::51.875082
Duration: 0.908 ms
Changes:
----------
ID: zabbix-agent
Function: service.running
Result: True
Comment: Service restarted
Started: ::51.932698
Duration: 205.223 ms
Changes:
----------
zabbix-agent:
    True

Summary for linux-node1.example.com
-------------
Succeeded: (changed=)
Failed:
-------------
Total states run:
Total run time: 13.593 s
linux-node2.example.com:
----------
ID: /etc/resolv.conf
Function: file.managed
Result: True
Comment: File /etc/resolv.conf is in the correct state
Started: ::38.639870
Duration: 182.254 ms
Changes:
----------
ID: /etc/profile
Function: file.append
Result: True
Comment: Appended lines
Started: ::38.822236
Duration: 3.047 ms
Changes:
----------
diff:
---
+++
@@ -, +, @@
 unset i
 unset -f pathmunge
+export HISTTIMEFORMAT="%F %T `whoami` "
----------
ID: /etc/bashrc
Function: file.append
Result: True
Comment: Appended lines
Started: ::38.825423
Duration: 3.666 ms
Changes:
----------
diff:
---
+++
@@ -, +, @@
 unset -f pathmunge
 fi
 # vim:ts=:sw=
+export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg"; }'
----------
ID: net.ipv4.ip_local_port_range
Function: sysctl.present
Result: True
Comment: Updated sysctl value net.ipv4.ip_local_port_range =
Started: ::39.011409
Duration: 132.499 ms
Changes:
----------
net.ipv4.ip_local_port_range:
----------
ID: fs.file-max
Function: sysctl.present
Result: True
Comment: Updated sysctl value fs.file-max =
Started: ::39.144117
Duration: 33.556 ms
Changes:
----------
fs.file-max:
----------
ID: net.ipv4.ip_forward
Function: sysctl.present
Result: True
Comment: Updated sysctl value net.ipv4.ip_forward =
Started: ::39.177821
Duration: 43.489 ms
Changes:
----------
net.ipv4.ip_forward:
----------
ID: vm.swappiness
Function: sysctl.present
Result: True
Comment: Updated sysctl value vm.swappiness =
Started: ::39.221788
Duration: 39.882 ms
Changes:
----------
vm.swappiness:
----------
ID: yum_repo_release
Function: pkg.installed
Result: True
Comment: All specified packages are already installed
Started: ::47.608597
Duration: 13989.554 ms
Changes:
----------
ID: zabbix-agent
Function: pkg.installed
Result: True
Comment: Package zabbix-agent is already installed
Started: ::01.598548
Duration: 1.265 ms
Changes:
----------
ID: zabbix-agent
Function: file.managed
Name: /etc/zabbix/zabbix_agentd.conf
Result: True
Comment: File /etc/zabbix/zabbix_agentd.conf updated
Started: ::01.600712
Duration: 82.425 ms
Changes:
----------
diff:
---
+++
@@ -, +, @@
#
# Mandatory: no
# Default:
-# Server=
-
+Server=192.168.137.11

### Option: ListenPort
@@ -, +, @@
# Mandatory: no
# Range: -
# Default:
-StartAgents=
+# StartAgents=

##### Active checks related
@@ -, +, @@
# Default:
# ServerActive=
-#ServerActive=192.168.137.11
+ServerActive=192.168.137.11

### Option: Hostname
# Unique, case sensitive hostname.
@@ -, +, @@
# Default:
# Hostname=
-Hostname=linux-node2
+Hostname=Zabbix server

### Option: HostnameItem
# Item used for generating Hostname if it is undefined. Ignored if Hostname is defined.
@@ -, +, @@
#
# Mandatory: no
# Default:
-HostMetadataItem=system.uname
+# HostMetadataItem=

### Option: RefreshActiveChecks
# How often list of active checks is refreshed, in seconds.
----------
ID: zabbix_agentd.d
Function: file.directory
Name: /etc/zabbix/zabbix_agentd.d
Result: True
Comment: Directory /etc/zabbix/zabbix_agentd.d is in the correct state
Started: ::01.684357
Duration: 0.93 ms
Changes:
----------
ID: zabbix-agent
Function: service.running
Result: True
Comment: Service restarted
Started: ::01.751277
Duration: 275.781 ms
Changes:
----------
zabbix-agent:
    True

Summary for linux-node2.example.com
-------------
Succeeded: (changed=)
Failed:
-------------
Total states run:
Total run time: 14.788 s

8) Create the top file

vi /srv/salt/base/top.sls
base:
  '*':
    - init.init

Dry-run test: salt "*" state.highstate test=True

Apply: salt "*" state.highstate

3. Pillar base initialization

1) zabbix-agent configuration: define the Zabbix server address for the SLS files to reference

mkdir -p /srv/pillar/base/zabbix
vi /srv/pillar/base/zabbix/agent.sls
zabbix-agent:
  Zabbix_Server: 192.168.137.11

Write the top file that pulls in /srv/pillar/base/zabbix/agent.sls:

vi /srv/pillar/base/top.sls
base:
  '*':
    - zabbix.agent

Test: salt '*' pillar.items
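After refreshing pillar data (`salt '*' saltutil.refresh_pillar`), each minion's pillar.items output should include the structure defined in agent.sls — a sketch of the expected shape:

```yaml
zabbix-agent:
  Zabbix_Server: 192.168.137.11
```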


II. haproxy

Official site: http://www.haproxy.com/

mkdir -p /srv/salt/prod/modules/haproxy
mkdir -p /srv/salt/prod/modules/keepalived
mkdir -p /srv/salt/prod/modules/memcached
mkdir -p /srv/salt/prod/modules/nginx
mkdir -p /srv/salt/prod/modules/php
mkdir -p /srv/salt/prod/modules/pkg
mkdir -p /srv/salt/prod/cluster
mkdir -p /srv/salt/prod/modules/haproxy/files/
mkdir -p /srv/salt/prod/cluster/files

1) System build dependencies (gcc and friends)

vi /srv/salt/prod/modules/pkg/make.sls
make-pkg:
  pkg.installed:
    - names:
      - gcc
      - gcc-c++
      - glibc
      - make
      - autoconf
      - openssl
      - openssl-devel
      - pcre
      - pcre-devel

2) Manual installation — the steps that install.sls automates (TARGET=linux2628 selects haproxy's build profile for Linux kernels >= 2.6.28)

cd /usr/local/src
tar xvf haproxy-1.6.3.tar.gz
cd haproxy-1.6.3/
make TARGET=linux2628 PREFIX=/usr/local/haproxy-1.6.3
make install PREFIX=/usr/local/haproxy-1.6.3
ln -s /usr/local/haproxy-1.6.3 /usr/local/haproxy

Edit the bundled init script so BIN points at the installed binary, then copy it into the Salt file tree:

vi /usr/local/src/haproxy-1.6.3/examples/haproxy.init
BIN=/usr/local/haproxy/sbin/$BASENAME

cp /usr/local/src/haproxy-1.6.3/examples/haproxy.init /srv/salt/prod/modules/haproxy/files/

Place the haproxy-1.6.3.tar.gz tarball in /srv/salt/prod/modules/haproxy/files/.

3) Create install.sls to install haproxy

vi /srv/salt/prod/modules/haproxy/install.sls
include:
  - modules.pkg.make

haproxy-install:
  file.managed:
    - name: /usr/local/src/haproxy-1.6.3.tar.gz
    - source: salt://modules/haproxy/files/haproxy-1.6.3.tar.gz
    - mode: 755
    - user: root
    - group: root
  cmd.run:
    - name: cd /usr/local/src && tar zxf haproxy-1.6.3.tar.gz && cd haproxy-1.6.3 && make TARGET=linux2628 PREFIX=/usr/local/haproxy-1.6.3 && make install PREFIX=/usr/local/haproxy-1.6.3 && ln -s /usr/local/haproxy-1.6.3 /usr/local/haproxy
    - unless: test -L /usr/local/haproxy
    - require:
      - pkg: make-pkg
      - file: haproxy-install

haproxy-init:
  file.managed:
    - name: /etc/init.d/haproxy
    - source: salt://modules/haproxy/files/haproxy.init
    - mode: 755
    - user: root
    - group: root
    - require_in:
      - file: haproxy-install
  cmd.run:
    - name: chkconfig --add haproxy
    - unless: chkconfig --list | grep haproxy

net.ipv4.ip_nonlocal_bind:
  sysctl.present:
    - value: 1

haproxy-config-dir:
  file.directory:
    - name: /etc/haproxy
    - mode: 755
    - user: root
    - group: root

Note: "- unless" — if the command after unless succeeds (returns true), the state it guards is skipped; the state runs only when that check fails.
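In plain shell terms, `unless` runs its command and skips the state when the command exits 0. A minimal sketch of the `test -L` guard used above (the path here is hypothetical and assumed not to exist):

```shell
path=/tmp/surely-absent-haproxy-link   # hypothetical path, assumed absent
if test -L "$path"; then
  echo "unless is true: state skipped"
else
  echo "unless is false: state runs"
fi
```

With a real minion, once the install creates the /usr/local/haproxy symlink, the check succeeds and the cmd.run is never re-executed — this is what makes the state idempotent.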

4) Create the haproxy configuration file

vi /srv/salt/prod/cluster/files/haproxy-outside.cfg
global
    maxconn 100000
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    nbproc 1
    pidfile /usr/local/haproxy/logs/haproxy.pid
    log 127.0.0.1 local3 info

defaults
    option http-keep-alive
    maxconn 100000
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen stats
    mode http
    bind 0.0.0.0:8888
    stats enable
    stats uri /haproxy-status
    stats auth haproxy:saltstack

frontend frontend_www_example_com
    bind 192.168.137.21:80
    mode http
    option httplog
    log global
    default_backend backend_www_example_com

backend backend_www_example_com
    option forwardfor header X-REAL-IP
    option httpchk HEAD / HTTP/1.0
    balance source
    server web-node1 192.168.137.11:8080 check inter 2000 rise 30 fall 15
    server web-node2 192.168.137.12:8080 check inter 2000 rise 30 fall 15

Create haproxy-outside.sls to deploy the configuration and manage the service:

vi /srv/salt/prod/cluster/haproxy-outside.sls
include:
  - modules.haproxy.install

haproxy-service:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://cluster/files/haproxy-outside.cfg
    - user: root
    - group: root
    - mode: 644
  service.running:
    - name: haproxy
    - enable: True
    - reload: True
    - require:
      - cmd: haproxy-install
    - watch:
      - file: haproxy-service

5) Configure the top file

vi /srv/salt/base/top.sls
base:
  '*':
    - init.init
prod:
  'linux-node*':
    - cluster.haproxy-outside

Dry-run test: salt "*" state.highstate test=True

Apply: salt "*" state.highstate

Result: (screenshot omitted)

III. keepalived

1) Create the files directory and place the keepalived-1.2.17.tar.gz tarball, keepalived.sysconfig, and keepalived.init in it

mkdir -p /srv/salt/prod/modules/keepalived/files

2) Create install.sls

vi /srv/salt/prod/modules/keepalived/install.sls
{% set keepalived_tar = 'keepalived-1.2.17.tar.gz' %}
{% set keepalived_source = 'salt://modules/keepalived/files/keepalived-1.2.17.tar.gz' %}

keepalived-install:
  file.managed:
    - name: /usr/local/src/{{ keepalived_tar }}
    - source: {{ keepalived_source }}
    - mode: 755
    - user: root
    - group: root
  cmd.run:
    - name: cd /usr/local/src && tar zxf {{ keepalived_tar }} && cd keepalived-1.2.17 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - file: keepalived-install

/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://modules/keepalived/files/keepalived.sysconfig
    - mode: 644
    - user: root
    - group: root

/etc/init.d/keepalived:
  file.managed:
    - source: salt://modules/keepalived/files/keepalived.init
    - mode: 755
    - user: root
    - group: root

keepalived-init:
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list | grep keepalived
    - require:
      - file: /etc/init.d/keepalived

/etc/keepalived:
  file.directory:
    - user: root
    - group: root

Run: salt '*' state.sls modules.keepalived.install saltenv=prod

3) Create the keepalived configuration file /srv/salt/prod/cluster/files/haproxy-outside-keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        saltstack@example.com
    }
    notification_email_from keepalived@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id {{ROUTEID}}
}

vrrp_instance haproxy_ha {
    state {{STATEID}}
    interface eth0
    virtual_router_id 36
    priority {{PRIORITYID}}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.21
    }
}

Create haproxy-outside-keepalived.sls:

vi /srv/salt/prod/cluster/haproxy-outside-keepalived.sls
include:
  - modules.keepalived.install

keepalived-server:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - mode: 644
    - user: root
    - group: root
    - template: jinja
    {% if grains['fqdn'] == 'linux-node1.example.com' %}
    - ROUTEID: haproxy_ha
    - STATEID: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'linux-node2.example.com' %}
    - ROUTEID: haproxy_ha
    - STATEID: BACKUP
    - PRIORITYID: 100
    {% endif %}
  service.running:
    - name: keepalived
    - enable: True
    - watch:
      - file: keepalived-server
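The grains['fqdn'] branch above gives each node a different template context; e.g. on linux-node1.example.com the conf renders with:

```yaml
ROUTEID: haproxy_ha
STATEID: MASTER
PRIORITYID: 100
```

(with PRIORITYID 150 on node1 and 100 on node2, per the branch above) — so node1 normally holds the VIP 192.168.137.21 and node2, as BACKUP with the lower priority, takes over only when node1 fails.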

4) Add keepalived to the top file

vi /srv/salt/base/top.sls
base:
  '*':
    - init.init
prod:
  'linux-node*':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived

Dry-run test: salt "*" state.highstate test=True

Apply: salt "*" state.highstate

Next part: http://www.cnblogs.com/shhnwangjian/p/6044436.html