1. Using heartbeat for cluster high availability
[root@server1 new]# ls
heartbeat-3.0.4-2.el6.x86_64.rpm
heartbeat-devel-3.0.4-2.el6.x86_64.rpm
heartbeat-libs-3.0.4-2.el6.x86_64.rpm
[root@server1 new]# yum install -y heartbeat-* ##install; all three packages are required
[root@server1 ~]# yum install -y httpd ##install Apache (httpd) as the test service
[root@server1 ~]# vim /var/www/html/index.html
server1-www.westos.org
[root@server1 ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@server1 heartbeat-3.0.4]# ls
apphbd.cf AUTHORS COPYING ha.cf README
authkeys ChangeLog COPYING.LGPL haresources
[root@server1 heartbeat-3.0.4]# cp ha.cf authkeys haresources /etc/ha.d/
##These three files do not exist by default; they can be downloaded from the official site or found in the unpacked source tree, so here we simply copy them from the packaged documentation directory.
[root@server1 heartbeat-3.0.4]# cd /etc/ha.d/
[root@server1 ha.d]# ls
authkeys ha.cf harc haresources rc.d README.config resource.d shellfuncs
[root@server1 ha.d]# vim ha.cf ##main configuration file
34 logfacility local0
48 keepalive 2 ##heartbeat interval in seconds; here a heartbeat is sent every 2s
56 deadtime 30 ##declare the peer dead after 30s without contact
61 warntime 10 ##start logging warnings after 10s without contact
71 initdead 60
##a grace period for freshly (re)booted nodes (for example, while the network is still coming up the keepalive checks would fail, but no failover should happen yet); this value must be at least twice deadtime
76 udpport 694 ##UDP port used for the broadcast heartbeat, 694 by default
91 bcast eth0 ##send the heartbeat as an Ethernet broadcast on eth0
157 auto_failback on ##whether to fail back automatically once the primary recovers: on = fail back, off = stay on the standby
211 node server1 ##hostname of the primary node
212 node test2 ##hostname of the standby node; the nodes must be listed in order, primary first
220 ping 172.25.66.250 ##ping node used to test network connectivity; site-specific, usually the gateway address
253 respawn hacluster /usr/lib64/heartbeat/ipfail
259 apiauth ipfail gid=haclient uid=hacluster ##the hacluster user and haclient group are created automatically when heartbeat is installed
[root@server1 ha.d]# vim authkeys ##authentication file
23 auth 1 ##select the authentication method (here method 1)
24 1 crc
[root@server1 ha.d]# ll authkeys
-rw-r--r-- 1 root root 643 Jul 25 09:57 authkeys
[root@server1 ha.d]# chmod 600 authkeys ##this file must have its permissions restricted to 600
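The crc method chosen here performs no real authentication and is only suitable on a trusted, isolated heartbeat link. On a shared network a keyed hash is the usual choice; a minimal sketch of an sha1-based authkeys (the key string below is just an example):
auth 1
1 sha1 SomeSharedSecretKey
Both nodes must carry an identical authkeys file, still with mode 600.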
[root@server1 ha.d]# vim haresources
150 server1 IPaddr::172.25.66.100/24/eth0 httpd
##The haresources file defines the cluster resources: the preferred (primary) node, the cluster IP (VIP), its netmask, the interface it is bound to, and the services to start. This line means that server1 is the primary node, the VIP is 172.25.66.100 with a /24 netmask on interface eth0, and the service to start is httpd.
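Internally, heartbeat turns the IPaddr::172.25.66.100/24/eth0 entry into a call to the IPaddr script under /etc/ha.d/resource.d/. Assuming the stock script is present, it can also be run by hand to check the resource definition before relying on it (a sketch, not part of the original procedure):
/etc/ha.d/resource.d/IPaddr 172.25.66.100/24/eth0 start ##adds the VIP the same way heartbeat would
/etc/ha.d/resource.d/IPaddr 172.25.66.100/24/eth0 status
/etc/ha.d/resource.d/IPaddr 172.25.66.100/24/eth0 stop ##removes it again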
[root@server1 ha.d]# scp ha.cf authkeys haresources test2:/etc/ha.d/
##the standby node needs the same configuration, so the files are simply copied over
[root@server1 ha.d]# /etc/init.d/heartbeat start ##start the heartbeat service
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@server1 ha.d]# tail -f /var/log/messages
Jul 25 10:04:02 server1 heartbeat: [1517]: info: Status update for node 172.25.66.250: status ping
Jul 25 10:04:30 server1 heartbeat: [1517]: info: Link test2:eth0 up.
Jul 25 10:04:30 server1 heartbeat: [1517]: info: Status update for node test2: status up
Jul 25 10:04:30 server1 harc(default)[1527]: info: Running /etc/ha.d//rc.d/status status
Jul 25 10:04:31 server1 heartbeat: [1517]: info: Comm_now_up(): updating status to active
Jul 25 10:04:31 server1 heartbeat: [1517]: info: Local status now set to: 'active'
Jul 25 10:04:31 server1 heartbeat: [1517]: info: Starting child client "/usr/lib64/heartbeat/ipfail" (496,497)
Jul 25 10:04:31 server1 heartbeat: [1543]: info: Starting "/usr/lib64/heartbeat/ipfail" as uid 496 gid 497 (pid 1543)
Jul 25 10:04:31 server1 heartbeat: [1517]: info: Status update for node test2: status active
Jul 25 10:04:31 server1 harc(default)[1546]: info: Running /etc/ha.d//rc.d/status status
Jul 25 10:04:35 server1 ipfail: [1543]: info: Status update: Node test2 now has status active
Jul 25 10:04:37 server1 ipfail: [1543]: info: Asking other side for ping node count.
##Check the log to confirm the service started without errors; if it did, start the heartbeat service on the standby node as well.
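Besides tailing /var/log/messages, the heartbeat package ships a small status tool that can confirm cluster membership from either node (a sketch; subcommand availability may vary with the build):
cl_status hbstatus ##is the local heartbeat daemon running?
cl_status listnodes ##nodes known to the cluster
cl_status nodestatus test2 ##state of the peer node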
[root@test2 ~]# yum install -y httpd
[root@test2 ~]# vim /var/www/html/index.html
<h1>test2-www.westos.org</h1>
[root@test2 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
Tests:
1) When the heartbeats of both the primary and the standby node are healthy, the primary node holds the VIP and serves normally
[root@server1 new]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@server1 new]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c1:37:57 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.1/24 brd 172.25.66.255 scope global eth0
inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth0
inet6 fe80::5054:ff:fec1:3757/64 scope link
valid_lft forever preferred_lft forever
[root@server1 new]# /etc/init.d/heartbeat status
heartbeat OK [pid 14558 et al] is running on server1 [server1]...
[root@test2 ha.d]# /etc/init.d/heartbeat status
heartbeat OK [pid 2405 et al] is running on test2 [test2]...
[root@test2 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:7a:98:49 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.11/24 brd 172.25.66.255 scope global eth0
inet6 fe80::5054:ff:fe7a:9849/64 scope link
valid_lft forever preferred_lft forever
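A failover can also be rehearsed without stopping heartbeat outright (as is done in test 2 below) by putting the primary into standby with the helper scripts shipped with heartbeat; the path varies between builds, commonly /usr/share/heartbeat/ or /usr/lib64/heartbeat/:
/usr/share/heartbeat/hb_standby ##hand the local resources over to the peer
/usr/share/heartbeat/hb_takeover ##pull them back afterwards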
2) When the primary node goes down, the standby node takes over
[root@server1 new]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@server1 new]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c1:37:57 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.1/24 brd 172.25.66.255 scope global eth0
inet6 fe80::5054:ff:fec1:3757/64 scope link
valid_lft forever preferred_lft forever
[root@test2 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:7a:98:49 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.11/24 brd 172.25.66.255 scope global eth0
inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth0
inet6 fe80::5054:ff:fe7a:9849/64 scope link
valid_lft forever preferred_lft forever
3) When the primary node recovers, the resources automatically fail back to it
[root@server1 new]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@server1 new]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c1:37:57 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.1/24 brd 172.25.66.255 scope global eth0
inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth0
inet6 fe80::5054:ff:fec1:3757/64 scope link
valid_lft forever preferred_lft forever
[root@test2 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:7a:98:49 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.11/24 brd 172.25.66.255 scope global eth0
inet6 fe80::5054:ff:fe7a:9849/64 scope link
valid_lft forever preferred_lft forever
Summary: heartbeat has two core parts, heartbeat monitoring and resource takeover. Heartbeats can be exchanged over network links or serial lines, with redundant links supported; the nodes keep sending packets to report their current state, and if nothing is received from the peer within the configured time, the peer is considered dead and the resource-takeover module takes over the resources or services that were running on it. Heartbeat does not, however, monitor the resources or applications it controls, nor can it detect problems inside the operating system itself. If the primary node's OS hangs, the service may be interrupted, and because the primary cannot release its resources while the standby takes them over, both nodes end up contending for the same resource; this easily leads to service failures and can even damage the resource. To monitor whether resources and applications are running properly, a third-party plugin is required; below we use ldirectord as the example.
#######################################################
2. heartbeat + LVS load balancing (DR mode)
ipvsadm alone only maintains a static rule set for the web service and performs no health checks
ipvsadm + ldirectord updates the scheduling rules dynamically (ldirectord is a service that maintains the IPVS table)
Combining high availability with load balancing addresses the single point of failure of the LVS director
heartbeat itself has no health checking; combined with ldirectord, health checking becomes possible
ldirectord is a plugin that monitors the state of the service nodes in the cluster. If it detects that a service on some node has failed, it blocks that node from receiving further connections and redirects subsequent requests to the nodes that are still healthy. This plugin is commonly used in LVS load-balancing clusters.
LVS is a layer-4 load balancer implemented in the kernel; it consists of netfilter + IPVS in kernel space and ipvsadm as the user-space management tool
LVS supports four forwarding modes: DR, TUN (IP tunnelling), NAT and FULLNAT; here we use DR mode
DR mode works at the data-link layer and relies on ARP. Let rs denote a realserver and vs the virtual server (director); suppose the client has IP cip and MAC m1, the director holds the VIP with MAC m2, and a realserver has IP rip and MAC m3. Because DR mode forwards at layer 2 and does not pass through a router, the director and the realservers must be on the same network segment. When the client accesses the VIP, the director picks a realserver according to its scheduling algorithm and rewrites the destination MAC from m2 to m3, so the frame is handed directly to that realserver (the realserver must also carry the VIP, because that is the address the client requested). The realserver unwraps the frame, sees the VIP, concludes the packet is indeed for itself, and replies directly to the client; the response does not travel back through the director. Since both the director and the realservers carry the VIP, there would be ARP conflicts, so arptables rules are added on the realservers to control who answers for the VIP.
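The arptables rules used below are one way to keep the realservers from answering ARP for the VIP; an equally common alternative is to put the VIP on the loopback interface and suppress ARP replies with kernel parameters instead (a sketch of that variant, not used in this lab):
ip addr add 172.25.66.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1 ##only answer ARP when the target IP is configured on the receiving interface
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2 ##never advertise the VIP as a source address in ARP requests
sysctl -w net.ipv4.conf.lo.arp_announce=2
Either way, the goal is the same: only the director answers ARP for 172.25.66.100.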
Here server1 acts as the director and server2/server3 as the realservers
[root@server1 ha.d]# /etc/init.d/heartbeat stop ##stop the heartbeat service before starting this configuration
Stopping High-Availability services: Done.
[root@test2 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
server1
[root@server1 ha.d]# yum install -y ipvsadm
[root@server1 ha.d]# ipvsadm -A -t 172.25.66.100:80 -s rr
##add a virtual service for the VIP 172.25.66.100:80; -t means TCP, -s rr selects round-robin scheduling
[root@server1 ha.d]# ip addr add 172.25.66.100/24 dev eth0 ##add the virtual IP (VIP)
[root@server1 ha.d]# ipvsadm -L ##list the ipvsadm rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.66.100:http rr
[root@server1 ha.d]# ipvsadm -a -t 172.25.66.100:80 -r 172.25.66.2:80 -g
[root@server1 ha.d]# ipvsadm -a -t 172.25.66.100:80 -r 172.25.66.3:80 -g
##map the VIP to the real servers; -r specifies a realserver, -t TCP, -g DR (direct routing) mode
[root@server1 ha.d]# ipvsadm -L ##confirm the rules have been added
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.66.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 1 0 0
[root@server1 ha.d]# /etc/init.d/ipvsadm save ##save the rules; otherwise they are lost at shutdown
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.66.100:80 rr
-> 172.25.66.2:80 Route 1 0 0
-> 172.25.66.3:80 Route 1 0 0
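Saving only writes the current table to /etc/sysconfig/ipvsadm; for the rules to be restored after a reboot the ipvsadm init script must also be enabled (assuming the stock RHEL6 init script, which reloads the saved table on start):
chkconfig ipvsadm on
chkconfig --list ipvsadm ##runlevels 2-5 should be on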
server2
[root@server2 ~]# ip addr add 172.25.66.100/32 dev eth0 ##add the same address as the VIP on the realserver
[root@server2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:64:ed:04 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.2/24 brd 172.25.66.255 scope global eth0
inet 172.25.66.100/32 scope global eth0
inet6 fe80::5054:ff:fe64:ed04/64 scope link
valid_lft forever preferred_lft forever
[root@server2 ~]# yum install -y arptables_jf
To prevent clients from reaching a realserver directly when they access the VIP, ARP rules are added on each realserver (a quick way to verify them is sketched after the server3 configuration below)
[root@server2 ~]# arptables -A IN -d 172.25.66.100 -j DROP
##drop incoming ARP requests for the VIP, so the realserver never answers for 172.25.66.100
[root@server2 ~]# arptables -A OUT -s 172.25.66.100 -j mangle --mangle-ip-s 172.25.66.2
##rewrite the source IP of outgoing ARP traffic from the VIP to the realserver's real address 172.25.66.2
[root@server2 ~]# /etc/init.d/arptables_jf save ##save the rules
Saving current rules to /etc/sysconfig/arptables: [ OK ]
[root@server2 ~]# /etc/init.d/httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.6.2 for ServerName
[ OK ]
server3 (both realservers get the same configuration)
[root@server3 ~]# ip addr add 172.25.66.100/32 dev eth0
[root@server3 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c7:a3:48 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.3/24 brd 172.25.66.255 scope global eth0
inet 172.25.66.100/32 scope global eth0
inet6 fe80::5054:ff:fec7:a348/64 scope link
valid_lft forever preferred_lft forever
[root@server3 ~]# yum install -y arptables_jf
[root@server3 ~]# arptables -A IN -d 172.25.66.100 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.66.100 -j mangle --mangle-ip-s 172.25.66.3
[root@server3 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
[root@server3 ~]# /etc/init.d/httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.6.3 for ServerName
[ OK ]
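Before moving on to the tests it is worth confirming that the ARP rules are actually in place on both realservers (a quick check; the exact listing format depends on the arptables_jf version):
arptables -L ##the IN chain should show the DROP rule for 172.25.66.100 and the OUT chain the mangle rule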
Tests:
[kiosk@foundation6 Desktop]$ arp -an | grep 100
##comparing with the MAC 52:54:00:c1:37:57 shows that the .100 address is answered by the director server1, not by a realserver
? (172.25.66.100) at 52:54:00:c1:37:57 [ether] on br0
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server2-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server2-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server2-www.westos.org</h1>
The corresponding ip addr output on server1 confirms that the VIP is held there and that the MAC matches:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c1:37:57 brd ff:ff:ff:ff:ff:ff
inet 172.25.66.1/24 brd 172.25.66.255 scope global eth0
inet 172.25.66.100/24 scope global secondary eth0
inet6 fe80::5054:ff:fec1:3757/64 scope link
valid_lft forever preferred_lft forever
##################################################################################
3. heartbeat + ldirectord + LVS (adding the health checking that heartbeat lacks)
server1 runs the heartbeat service (already configured in the previous experiment) and ldirectord; server2 and server3 run httpd as realservers (server1 also keeps a local httpd so it can act as the fallback)
LVS itself cannot health-check the backends, so if a backend service goes down, clients will see error pages; this package is therefore installed. It health-checks the backends and is integrated with LVS, so it can update the scheduling rules accordingly; however, it does not notice problems in LVS itself, because it only checks the backends
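A simple way to observe this behaviour during the tests below is to watch the IPVS table on the director while httpd on a realserver is stopped and started; ldirectord should remove the failed realserver from the table (or set its weight to 0, depending on the quiescent setting) and add it back once the check passes again:
watch -n1 ipvsadm -ln ##run on server1 while toggling httpd on server2 or server3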
[root@server1 ~]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# cd /usr/share/doc/ldirectord-3.9.5/
[root@server1 ldirectord-3.9.5]# cp ldirectord.cf /etc/ha.d/
[root@server1 ldirectord-3.9.5]# cd /etc/ha.d/
[root@server1 ha.d]# ls
authkeys harc ldirectord.cf README.config shellfuncs
ha.cf haresources rc.d resource.d
[root@server1 ha.d]# vim ldirectord.cf
virtual=172.25.66.100:80
real=172.25.66.2:80 gate ##backend realserver
real=172.25.66.3:80 gate ##backend realserver
fallback=127.0.0.1:80 gate ##if all realservers are down, the local machine takes over
service=http
scheduler=rr
#persistent=600
#netmask=255.255.255.255
protocol=tcp
checktype=negotiate
checkport=80
request="index.html"
# receive="Test Page"
# virtualhost=www.x.y.z
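With checktype=negotiate, ldirectord periodically issues a real HTTP request for the page named in request= to every realserver and, if receive= is set, also matches the response body. The check is roughly equivalent to running the following by hand from the director:
curl -s http://172.25.66.2/index.html ##a failed or non-matching response marks server2 as down
curl -s http://172.25.66.3/index.html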
[root@server1 ha.d]# vim haresources
server1 IPaddr::172.25.66.100/24/eth0 httpd ldirectord
[root@server1 ha.d]# /etc/init.d/heartbeat start
[root@server1 ha.d]# scp haresources ldirectord.cf 172.25.66.11:/etc/ha.d/
root@172.25.66.11's password:
haresources 100% 5961 5.8KB/s 00:00
ldirectord.cf 100% 8278 8.1KB/s 00:00
[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.66.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 1 0 0
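Because heartbeat now starts httpd and ldirectord itself via haresources, it is usually best not to let init start them independently at boot on the two heartbeat nodes, so the two managers do not fight over the same services (a sketch using the standard RHEL6 tooling):
chkconfig httpd off
chkconfig ldirectord off
chkconfig heartbeat on ##heartbeat alone decides where the resources run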
Tests:
1) When httpd on both server2 and server3 is working, requests to 172.25.66.100 are handled by server2 and server3 in round-robin fashion
[root@server2 ~]# /etc/init.d/httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.66.2 for ServerName
[ OK ]
[root@server3 ~]# /etc/init.d/httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.66.3 for ServerName
[ OK ]
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server2-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server2-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
2) When one of server2/server3 goes down, ldirectord's health check detects it; requests to 172.25.66.100 are served only by the host that is still working, and no error page is returned
[root@server2 ~]# /etc/init.d/httpd stop
Stopping httpd: [ OK ]
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
<h1>server3-www.westos.org</h1>
3) When both server2 and server3 are down, server1 serves the requests itself (usually just a notice page)
[root@server3 ~]# /etc/init.d/httpd stop
Stopping httpd: [ OK ]
[kiosk@foundation6 Desktop]$ curl 172.25.66.100
server1-www.westos.org