一、Directory structure
二、Overall diagram of module and method call relationships
三、The entry file: main
1、What problem does it solve?
1、The client does exactly one thing: collect data and report it to the server.
It does so through a plugin framework, though.
2、A command-line argument is required. What does start do?
It launches the program: first it fetches the monitoring configuration from the server,
then it runs each monitor at the interval configured for that service.
3、The client runs in an endless loop. If the server goes down, the client effectively stops working; once the server is restarted, the client recovers on its own.
2、Implementation

from core import client

class command_handler(object):
    def __init__(self, sys_args):
        self.sys_args = sys_args
        if len(self.sys_args) < 2:
            self.help_msg()
        self.command_allowcator()

    def command_allowcator(self):
        '''dispatch the command the user typed in'''
        print(self.sys_args[1])
        if hasattr(self, self.sys_args[1]):
            func = getattr(self, self.sys_args[1])
            return func()
        else:
            print("command does not exist!")
            self.help_msg()

    def help_msg(self):
        valid_commands = '''
        start    start monitor client
        stop     stop monitor client
        '''
        exit(valid_commands)

    def start(self):
        print("going to start the monitor client")
        Client = client.ClientHandle()
        Client.forever_run()

    def stop(self):
        print("stopping the monitor client")
四、The configuration center: settings
1、What problem does it solve?
1、On startup the client has nothing locally, so it takes its client ID to the server, identifies itself, and asks for the monitoring configuration assigned to it.
2、To fetch the monitoring configuration, request the 'api/client/config' URL with the GET method.
3、To report data, request 'api/client/service/report/' with the POST method.
4、Why keep these in a configuration file? To avoid hard-coding them in the program.
5、A connection to the server times out after 30 seconds.
6、The monitoring configuration is reloaded from the server every 300 seconds. Why?
- Because the client loads the configuration once at startup and then just runs, while the server side may change it in the meantime.
- For example, a new metric is added for this machine; with the 300-second reload, a newly added metric takes effect within 5 minutes at most.
2、Implementation

configs = {
    'HostID': 1,
    "Server": "192.168.16.56",
    "ServerPort": 8000,
    "urls": {
        'get_configs': ['api/client/config', 'get'],  # acquire all the services to be monitored
        'service_report': ['api/client/service/report/', 'post'],
    },
    'RequestTimeout': 30,
    'ConfigUpdateInterval': 300,  # 5 mins as default
}
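Each entry under 'urls' is a [path, method] pair; the client joins the path with Server, ServerPort, and (for config fetches) its own HostID. A sketch of that URL assembly (the helper name `build_config_url` is hypothetical):

```python
configs = {
    'HostID': 1,
    "Server": "192.168.16.56",
    "ServerPort": 8000,
    "urls": {
        'get_configs': ['api/client/config', 'get'],
        'service_report': ['api/client/service/report/', 'post'],
    },
}

def build_config_url(cfg):
    # path plus client id, prefixed with the server address and port
    path = "%s/%s" % (cfg['urls']['get_configs'][0], cfg['HostID'])
    return "http://%s:%s/%s" % (cfg['Server'], cfg['ServerPort'], path)
```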
五、Requesting the monitoring config and reporting data
1、What problem does it solve?
1、The client has to fetch the latest monitoring configuration from the server every 5 minutes, so it must record when the last fetch happened.
2、Why does the last-fetch time start at 0?
1、Each pass compares the current time with the time of the previous fetch.
2、With a default of 0, "current time minus last-fetch time" is always greater than the update interval, so the first pass is guaranteed to fetch.
3、The client then goes and retrieves the latest monitoring configuration.
3、Concatenate the server address with the URL, and branch on the request type.
1、This code targets Python 2 (hence urllib2).
2、For a quick test you can request any public page that allows it (Baidu blocks crawlers, so Zhihu's home page works better); the body you read back is exactly the data returned to the client.
4、The same function is reused later to report data to the server.
5、A POST carries parameters: they are passed in through extra_data ('params'), whereas the earlier GET request only carried the URL.
6、Who calls url_request, and why? load_latest_configs does, to load the configuration: it calls self.url_request(request_type, url) and gets back the latest data in JSON format.
7、On the very first pass the local dictionary is still empty; after fetching, the last-load time is updated to the current time, otherwise the next loop iteration would immediately fetch again.
2、Implementation

import time
import json
import urllib
import urllib2
import threading
from conf import settings
from plugins import plugin_api

class ClientHandle(object):
    def __init__(self):
        self.monitored_services = {}

    def load_latest_configs(self):
        '''
        load the latest monitor configs from the monitor server
        :return:
        '''
        request_type = settings.configs['urls']['get_configs'][1]
        url = "%s/%s" % (settings.configs['urls']['get_configs'][0], settings.configs['HostID'])
        latest_configs = self.url_request(request_type, url)
        latest_configs = json.loads(latest_configs)
        self.monitored_services.update(latest_configs)
六、Designing the monitor counters
1、What the fetched dictionary looks like

{"services": {"LinuxNetwork": ["LinuxNetworkPlugin", 60],
              "LinuxCPU": ["LinuxCpuPlugin", 60],
              "Mysql": ["MysqlPlugin", 60],
              "Linuxload": ["LinuxloadPlugin", 30],
              "LinuxMemory": ["LinuxMemoryPlugin", 90]}}
2、Design discussion
1、There are many services, each with its own interval (60s here, 30s or 90s there). How should that be handled?
2、With a single for loop, how do you know whether CPU or memory is due right now?
1、Maintaining a counter per check by hand would get complicated.
2、A single global counter for everything is just as complicated.
3、Instead, give each service its own counter: record each service's last execution time, loop over all services on every pass, and trigger any service whose elapsed time exceeds its interval.
3、Implementation

for service_name, val in self.monitored_services['services'].items():
    if len(val) == 2:  # means it's the first time to monitor
        self.monitored_services['services'][service_name].append(0)
    monitor_interval = val[1]
    last_invoke_time = val[2]
    if time.time() - last_invoke_time > monitor_interval:  # needs to run the plugin
        print(last_invoke_time, time.time())
        self.monitored_services['services'][service_name][2] = time.time()
        # start a new thread to call each monitor plugin
        t = threading.Thread(target=self.invoke_plugin, args=(service_name, val))
        t.start()
        print("Going to monitor [%s]" % service_name)
    else:
        print("Going to monitor [%s] in [%s] secs" % (service_name,
              monitor_interval - (time.time() - last_invoke_time)))
time.sleep(1)
七、The forever_run function
1、What problem does it solve?
1、If a service has never been executed, its last-invoke time is initialized to 0 on the first pass, which guarantees the first check triggers the monitor.
2、How do you decide whether to trigger a given check?
1、The monitor interval.
2、The last invoke time, i.e. the 0 that was just appended.
3、If the current time minus the service's last monitor time exceeds its interval, trigger the monitor; otherwise don't.
3、The plugin then has to be invoked, which is done via reflection.
4、If one plugin takes 2 minutes to run, calling it inline would block for 2 minutes, while another service that runs every 30 seconds sits stuck behind it.
One service's run must not delay the others,
so each monitored service is invoked in its own thread.
5、Why not spawn one thread per service up front? Ten services would mean ten threads started regardless of whether they have anything to do.
6、A service whose interval has not elapsed needs no thread, so a thread is started only when that service's monitoring time is actually due.
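The scheduling rules above can be condensed into one testable pass over the counters. This is a sketch, not the project's forever_run: the function name `tick` and the injectable `now`/`invoke` parameters are assumptions added for illustration; each service entry is [plugin_name, interval, last_run_time], with last_run_time appended as 0 on the first pass.

```python
import threading
import time

def tick(services, invoke, now=None):
    """One pass over the per-service counters; spawn a thread for each due service."""
    if now is None:
        now = time.time()
    started = []
    for name, val in services.items():
        if len(val) == 2:      # first pass: no last-run time yet
            val.append(0)      # 0 guarantees the first check fires
        plugin_name, interval, last_run = val
        if now - last_run > interval:
            val[2] = now       # record this invocation time
            t = threading.Thread(target=invoke, args=(name, val))
            t.start()
            started.append(t)
    for t in started:          # joined here only so the sketch is deterministic
        t.join()
```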
2、Implementation

def forever_run(self):
    '''
    start the client program forever
    :return:
    '''
    exit_flag = False
    config_last_update_time = 0
    while not exit_flag:
        if time.time() - config_last_update_time > settings.configs['ConfigUpdateInterval']:
            self.load_latest_configs()  # fetch the latest monitoring configuration
            print("Loaded latest config:", self.monitored_services)
            config_last_update_time = time.time()
        # start to monitor services
        for service_name, val in self.monitored_services['services'].items():
            if len(val) == 2:  # means it's the first time to monitor
                self.monitored_services['services'][service_name].append(0)
                # 0 guarantees the first check triggers this service's monitor
            monitor_interval = val[1]
            last_invoke_time = val[2]  # 0 on the first pass
            if time.time() - last_invoke_time > monitor_interval:  # needs to run the plugin
                print(last_invoke_time, time.time())
                self.monitored_services['services'][service_name][2] = time.time()  # update this service's last monitor time
                # start a new thread to call each monitor plugin
                t = threading.Thread(target=self.invoke_plugin, args=(service_name, val))
                t.start()
                print("Going to monitor [%s]" % service_name)
            else:
                print("Going to monitor [%s] in [%s] secs" % (service_name,
                      monitor_interval - (time.time() - last_invoke_time)))
        time.sleep(1)
八、The plugin API
1、What problem does it solve?
1、The commands module used here has been removed in Python 3 (this code targets Python 2).
2、A return status of 0 means the command succeeded; the result is stored in a dictionary. As long as each plugin is guaranteed to return a dictionary, the caller is fine.
3、A nonzero status marks the data as invalid.
4、With this many plugin scripts, how can the worker thread call the right one?
1、Via reflection: the server returns a plugin name.
2、The server-side plugin name and the local script name are not one-to-one; they are certainly different.
3、So the plugin_api file acts as a middle layer mapping each plugin name to its script function one-to-one.
4、The mapped function name effectively becomes the plugin's name.
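The reflection step works the same on a module or a class: check the name with hasattr, resolve it with getattr, then call it. A self-contained sketch, where the `plugin_api` class below is only a stand-in for the real plugins/plugin_api.py module:

```python
class plugin_api(object):
    """Stand-in for the real plugins/plugin_api.py module (illustrative)."""
    @staticmethod
    def LinuxCpuPlugin():
        # a real plugin would shell out and parse; here we return a fixed dict
        return {'status': 0, 'idle': '97.3'}

def invoke_by_name(api, plugin_name):
    # the server only sends a name; reflection resolves it at runtime
    if hasattr(api, plugin_name):
        return getattr(api, plugin_name)()
    return None  # the real client prints an error instead
```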
2、Implementation
1、cpu.py

import commands

def monitor(frist_invoke=1):
    #shell_command = 'sar 1 3| grep "^Average:"'
    # on a Chinese locale the sar summary line is labeled "平均时间:"
    shell_command = 'sar 1 3| grep "^平均时间:"'
    status, result = commands.getstatusoutput(shell_command)
    if status != 0:  # cmd exec error
        value_dic = {'status': status}
    else:
        print('---res:', result)
        user, nice, system, iowait, steal, idle = result.split()[2:]
        value_dic = {
            'user': user,
            'nice': nice,
            'system': system,
            'idle': idle,
            'status': status
        }
    return value_dic

if __name__ == '__main__':
    print monitor()
2、plugin_api.py

from linux import sysinfo, cpu_mac, cpu, memory, network, host_alive

def LinuxCpuPlugin():
    return cpu.monitor()

def host_alive_check():
    return host_alive.monitor()

def GetMacCPU():
    return cpu_mac.monitor()

def LinuxNetworkPlugin():
    return network.monitor()

def LinuxMemoryPlugin():
    return memory.monitor()
九、The invoke_plugin function
1、What problem does it solve?
1、How is a plugin actually invoked?
2、Take the plugin name returned by the server and call the matching script.
3、The plugin_api module has many functions; reflection checks whether the named one exists.
4、Reporting the data then uses a separate URL.
5、Build the data dictionary and tell the server which host and which monitored item the report belongs to.
6、Could multiple services' items be submitted in one report, to reduce the number of submissions?
No, because every service has a different monitor interval; zabbix likewise reports each item separately rather than in a batch.
Take the request method and the data, encode them into the format the server supports, and report them to the backend.
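The per-service report payload described above is just three fields, with the plugin's result dictionary serialized to a JSON string. A sketch (the helper name `build_report` is hypothetical; the field names match the implementation below):

```python
import json

def build_report(host_id, service_name, plugin_result):
    """Shape one service's POST body the way invoke_plugin does."""
    return {
        'client_id': host_id,
        'service_name': service_name,
        'data': json.dumps(plugin_result),  # the plugin dict travels as a JSON string
    }
```

Keeping 'data' as a JSON string lets the server store or parse the plugin result without knowing each plugin's schema.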
2、Implementation

def invoke_plugin(self, service_name, val):
    '''
    invoke the monitor plugin here, and send the data to the monitor server
    after the plugin returns its status data
    :param val: [plugin_name, monitor_interval, last_run_time]
    :return:
    '''
    plugin_name = val[0]
    if hasattr(plugin_api, plugin_name):
        func = getattr(plugin_api, plugin_name)
        plugin_callback = func()
        report_data = {
            'client_id': settings.configs['HostID'],
            'service_name': service_name,
            'data': json.dumps(plugin_callback)
        }
        request_action = settings.configs['urls']['service_report'][1]
        request_url = settings.configs['urls']['service_report'][0]
        print('---report data:', report_data)
        self.url_request(request_action, request_url, params=report_data)
    else:
        print("\033[31;1mCannot find service [%s]'s plugin name [%s] in plugin_api\033[0m" % (service_name, plugin_name))
    print('--plugin:', val)
十、The URL handler function
1、What problem does it solve?
1、extra_data holds the extra parameters; a dictionary cannot be sent as-is, it has to be url-encoded first. When the request finishes, this worker thread exits.
2、Meanwhile the main thread keeps running; one step glossed over earlier:
self.monitored_services['services'][service_name][2] = time.time()
3、The 0 appended next to the monitor interval is this slot:
it records the service's last monitor time, and the line above updates it.
4、The service name and monitor interval were passed in, the service has been monitored, and the data is in hand. What remains?
Reporting it to the server.
2、Implementation

def url_request(self, action, url, **extra_data):
    '''
    communicate with the monitor server over http
    :param action: "get" or "post"
    :param url: which url you want to request from the monitor server
    :param extra_data: extra parameters that need to be submitted
    :return:
    '''
    abs_url = "http://%s:%s/%s" % (settings.configs['Server'],
                                   settings.configs["ServerPort"],
                                   url)
    if action in ('get', 'GET'):
        print(abs_url, extra_data)
        try:
            req = urllib2.Request(abs_url)
            req_data = urllib2.urlopen(req, timeout=settings.configs['RequestTimeout'])
            callback = req_data.read()
            return callback
        except urllib2.URLError as e:
            exit("\033[31;1m%s\033[0m" % e)
    elif action in ('post', 'POST'):
        try:
            data_encode = urllib.urlencode(extra_data['params'])
            req = urllib2.Request(url=abs_url, data=data_encode)
            res_data = urllib2.urlopen(req, timeout=settings.configs['RequestTimeout'])
            callback = res_data.read()
            callback = json.loads(callback)
            print "\033[31;1m[%s]:[%s]\033[0m response:\n%s" % (action, abs_url, callback)
            return callback
        except Exception as e:
            print('---exec', e)
            exit("\033[31;1m%s\033[0m" % e)
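The function above is Python 2 (urllib2, urllib.urlencode, print statements). For reference, a hedged Python 3 counterpart using urllib.request/urllib.parse; the name `url_request3` and module-level `REQUEST_TIMEOUT` are assumptions, not part of the project:

```python
import json
import urllib.parse
import urllib.request

REQUEST_TIMEOUT = 30  # mirrors settings.configs['RequestTimeout']

def url_request3(action, abs_url, params=None):
    """Python 3 sketch of the Python 2 url_request above."""
    if action.lower() == 'get':
        req = urllib.request.Request(abs_url)
    elif action.lower() == 'post':
        # a dict cannot be sent as-is: urlencode it, then encode to bytes
        data = urllib.parse.urlencode(params or {}).encode('utf-8')
        req = urllib.request.Request(abs_url, data=data)
    else:
        raise ValueError("unsupported action: %s" % action)
    with urllib.request.urlopen(req, timeout=REQUEST_TIMEOUT) as resp:
        return json.loads(resp.read().decode('utf-8'))
```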
十一、The rest of the code
1、bin
CrazyClient.py

import sys
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(BASE_DIR)

from core import main

if __name__ == "__main__":
    client = main.command_handler(sys.argv)
2、plugins
1、linux
1、host_alive.py

import subprocess

def monitor(frist_invoke=1):
    shell_command = 'uptime'
    result = subprocess.Popen(shell_command, shell=True, stdout=subprocess.PIPE).stdout.read()
    value_dic = {
        'uptime': result,
        'status': 0
    }
    return value_dic

if __name__ == '__main__':
    print monitor()
2、load.py

import commands

def monitor():
    shell_command = 'uptime'
    status, result = commands.getstatusoutput(shell_command)
    if status != 0:  # cmd exec error
        value_dic = {'status': status}
    else:
        print(result)
        # Linux labels this "load average:"; macOS prints "load averages:"
        load1, load5, load15 = result.split('load averages:')[1].split()
        value_dic = {
            'load1': load1,
            'load5': load5,
            'load15': load15,
            'status': status
        }
    return value_dic

if __name__ == '__main__':
    print(monitor())
3、memory.py

#!/usr/bin/env python
#coding:utf-8
import commands

def monitor(frist_invoke=1):
    monitor_dic = {
        'SwapUsage': 'percentage',
        'MemUsage': 'percentage',
    }
    shell_command = "grep 'MemTotal\|MemFree\|Buffers\|^Cached\|SwapTotal\|SwapFree' /proc/meminfo"
    status, result = commands.getstatusoutput(shell_command)
    if status != 0:  # cmd exec error
        value_dic = {'status': status}
    else:
        value_dic = {'status': status}
        for i in result.split('kB\n'):
            key = i.split()[0].strip(':')  # factor name
            value = i.split()[1]           # factor value
            value_dic[key] = value
        if monitor_dic['SwapUsage'] == 'percentage':
            value_dic['SwapUsage_p'] = str(100 - int(value_dic['SwapFree']) * 100 / int(value_dic['SwapTotal']))
        # real SwapUsage value
        value_dic['SwapUsage'] = int(value_dic['SwapTotal']) - int(value_dic['SwapFree'])
        MemUsage = int(value_dic['MemTotal']) - (int(value_dic['MemFree']) + int(value_dic['Buffers']) + int(value_dic['Cached']))
        if monitor_dic['MemUsage'] == 'percentage':
            value_dic['MemUsage_p'] = str(int(MemUsage) * 100 / int(value_dic['MemTotal']))
        # real MemUsage value
        value_dic['MemUsage'] = MemUsage
    return value_dic

if __name__ == '__main__':
    print monitor()
4、network.py

import subprocess

def monitor(frist_invoke=1):
    shell_command = 'sar -n DEV 1 5 |grep -v IFACE |grep Average'
    result = subprocess.Popen(shell_command, shell=True, stdout=subprocess.PIPE).stdout.readlines()
    value_dic = {'status': 0, 'data': {}}
    for line in result:
        line = line.split()
        nic_name, t_in, t_out = line[1], line[4], line[5]
        value_dic['data'][nic_name] = {"t_in": t_in, "t_out": t_out}
    return value_dic
5、sysinfo.py

import os, sys, subprocess
import commands
import re

def collect():
    filter_keys = ['Manufacturer', 'Serial Number', 'Product Name', 'UUID', 'Wake-up Type']
    raw_data = {}
    for key in filter_keys:
        try:
            cmd_res = commands.getoutput("sudo dmidecode -t system|grep '%s'" % key)
            cmd_res = cmd_res.strip()
            res_to_list = cmd_res.split(':')
            if len(res_to_list) > 1:  # the second element is the wanted string
                raw_data[key] = res_to_list[1].strip()
            else:
                raw_data[key] = -1
        except Exception, e:
            print e
            raw_data[key] = -2  # means cmd went wrong

    data = {"asset_type": 'server'}
    data['manufactory'] = raw_data['Manufacturer']
    data['sn'] = raw_data['Serial Number']
    data['model'] = raw_data['Product Name']
    data['uuid'] = raw_data['UUID']
    data['wake_up_type'] = raw_data['Wake-up Type']

    data.update(cpuinfo())
    data.update(osinfo())
    data.update(raminfo())
    data.update(nicinfo())
    data.update(diskinfo())
    return data

def diskinfo():
    obj = DiskPlugin()
    return obj.linux()

def nicinfo():
    raw_data = commands.getoutput("ifconfig -a")
    raw_data = raw_data.split("\n")
    nic_dic = {}
    next_ip_line = False
    last_mac_addr = None
    for line in raw_data:
        if next_ip_line:
            next_ip_line = False
            nic_name = last_mac_addr.split()[0]
            mac_addr = last_mac_addr.split("HWaddr")[1].strip()
            raw_ip_addr = line.split("inet addr:")
            raw_bcast = line.split("Bcast:")
            raw_netmask = line.split("Mask:")
            if len(raw_ip_addr) > 1:  # has addr
                ip_addr = raw_ip_addr[1].split()[0]
                network = raw_bcast[1].split()[0]
                netmask = raw_netmask[1].split()[0]
            else:
                ip_addr = None
                network = None
                netmask = None
            if mac_addr not in nic_dic:
                nic_dic[mac_addr] = {'name': nic_name,
                                     'macaddress': mac_addr,
                                     'netmask': netmask,
                                     'network': network,
                                     'bonding': 0,
                                     'model': 'unknown',
                                     'ipaddress': ip_addr,
                                     }
            else:  # mac already exists, must be a bonding address
                if '%s_bonding_addr' % (mac_addr) not in nic_dic:
                    random_mac_addr = '%s_bonding_addr' % (mac_addr)
                else:
                    random_mac_addr = '%s_bonding_addr2' % (mac_addr)
                nic_dic[random_mac_addr] = {'name': nic_name,
                                            'macaddress': random_mac_addr,
                                            'netmask': netmask,
                                            'network': network,
                                            'bonding': 1,
                                            'model': 'unknown',
                                            'ipaddress': ip_addr,
                                            }
        if "HWaddr" in line:
            next_ip_line = True
            last_mac_addr = line

    nic_list = []
    for k, v in nic_dic.items():
        nic_list.append(v)
    return {'nic': nic_list}

def raminfo():
    raw_data = commands.getoutput("sudo dmidecode -t 17")
    raw_list = raw_data.split("\n")
    raw_ram_list = []
    item_list = []
    for line in raw_list:
        if line.startswith("Memory Device"):
            raw_ram_list.append(item_list)
            item_list = []
        else:
            item_list.append(line.strip())

    ram_list = []
    for item in raw_ram_list:
        item_ram_size = 0
        ram_item_to_dic = {}
        for i in item:
            data = i.split(":")
            if len(data) == 2:
                key, v = data
                if key == 'Size':
                    if v.strip() != "No Module Installed":
                        ram_item_to_dic['capacity'] = v.split()[0].strip()  # e.g. split "1024 MB"
                        item_ram_size = int(v.split()[0])
                    else:
                        ram_item_to_dic['capacity'] = 0
                if key == 'Type':
                    ram_item_to_dic['model'] = v.strip()
                if key == 'Manufacturer':
                    ram_item_to_dic['manufactory'] = v.strip()
                if key == 'Serial Number':
                    ram_item_to_dic['sn'] = v.strip()
                if key == 'Asset Tag':
                    ram_item_to_dic['asset_tag'] = v.strip()
                if key == 'Locator':
                    ram_item_to_dic['slot'] = v.strip()
        if item_ram_size == 0:  # empty slot, no need to report it
            pass
        else:
            ram_list.append(ram_item_to_dic)

    # get the total size (mb) of ram as well
    raw_total_size = commands.getoutput("cat /proc/meminfo|grep MemTotal ").split(":")
    ram_data = {'ram': ram_list}
    if len(raw_total_size) == 2:  # correct
        total_mb_size = int(raw_total_size[1].split()[0]) / 1024
        ram_data['ram_size'] = total_mb_size
    return ram_data

def osinfo():
    distributor = commands.getoutput(" lsb_release -a|grep 'Distributor ID'").split(":")
    release = commands.getoutput(" lsb_release -a|grep Description").split(":")
    data_dic = {
        "os_distribution": distributor[1].strip() if len(distributor) > 1 else None,
        "os_release": release[1].strip() if len(release) > 1 else None,
        "os_type": "linux",
    }
    return data_dic

def cpuinfo():
    base_cmd = 'cat /proc/cpuinfo'
    raw_data = {
        'cpu_model': "%s |grep 'model name' |head -1 " % base_cmd,
        'cpu_count': "%s |grep 'processor'|wc -l " % base_cmd,
        'cpu_core_count': "%s |grep 'cpu cores' |awk -F: '{SUM +=$2} END {print SUM}'" % base_cmd,
    }
    for k, cmd in raw_data.items():
        try:
            cmd_res = commands.getoutput(cmd)
            raw_data[k] = cmd_res.strip()
        except ValueError, e:
            print e

    data = {
        "cpu_count": raw_data["cpu_count"],
        "cpu_core_count": raw_data["cpu_core_count"]
    }
    cpu_model = raw_data["cpu_model"].split(":")
    if len(cpu_model) > 1:
        data["cpu_model"] = cpu_model[1].strip()
    else:
        data["cpu_model"] = -1
    return data

class DiskPlugin(object):

    def linux(self):
        result = {'physical_disk_driver': []}
        try:
            script_path = os.path.dirname(os.path.abspath(__file__))
            shell_command = "sudo %s/MegaCli -PDList -aALL" % script_path
            output = commands.getstatusoutput(shell_command)
            result['physical_disk_driver'] = self.parse(output[1])
        except Exception, e:
            result['error'] = e
        return result

    def parse(self, content):
        '''
        parse the shell command output
        :param content: the shell command output
        :return: the parsed result
        '''
        response = []
        result = []
        for row_line in content.split("\n\n\n\n"):
            result.append(row_line)
        for item in result:
            temp_dict = {}
            for row in item.split('\n'):
                if not row.strip():
                    continue
                if len(row.split(':')) != 2:
                    continue
                key, value = row.split(':')
                name = self.mega_patter_match(key)
                if name:
                    if key == 'Raw Size':
                        raw_size = re.search('(\d+\.\d+)', value.strip())
                        if raw_size:
                            temp_dict[name] = raw_size.group()
                        else:
                            raw_size = ''
                    else:
                        temp_dict[name] = value.strip()
            if temp_dict:
                response.append(temp_dict)
        return response

    def mega_patter_match(self, needle):
        grep_pattern = {'Slot': 'slot', 'Raw Size': 'capacity', 'Inquiry': 'model', 'PD Type': 'iface_type'}
        for key, value in grep_pattern.items():
            if needle.startswith(key):
                return value
        return False

if __name__ == "__main__":
    print DiskPlugin().linux()
2、windows
sysinfo.py

import platform
import win32com.client
import wmi
import os

def collect():
    data = {
        'os_type': platform.system(),
        'os_release': "%s %s %s " % (platform.release(), platform.architecture()[0], platform.version()),
        'os_distribution': 'Microsoft',
        'asset_type': 'server'
    }
    win32obj = Win32Info()
    data.update(win32obj.get_cpu_info())
    data.update(win32obj.get_ram_info())
    data.update(win32obj.get_server_info())
    data.update(win32obj.get_disk_info())
    data.update(win32obj.get_nic_info())
    return data

class Win32Info(object):
    def __init__(self):
        self.wmi_obj = wmi.WMI()
        self.wmi_service_obj = win32com.client.Dispatch("WbemScripting.SWbemLocator")
        self.wmi_service_connector = self.wmi_service_obj.ConnectServer(".", r"root\cimv2")

    def get_cpu_info(self):
        data = {}
        cpu_lists = self.wmi_obj.Win32_Processor()
        cpu_core_count = 0
        for cpu in cpu_lists:
            cpu_core_count += cpu.NumberOfCores
            cpu_model = cpu.Name
        data["cpu_count"] = len(cpu_lists)
        data["cpu_model"] = cpu_model
        data["cpu_core_count"] = cpu_core_count
        return data

    def get_ram_info(self):
        data = []
        ram_collections = self.wmi_service_connector.ExecQuery("Select * from Win32_PhysicalMemory")
        for item in ram_collections:
            mb = int(1024 * 1024)
            ram_size = int(item.Capacity) / mb
            item_data = {
                "slot": item.DeviceLocator.strip(),
                "capacity": ram_size,
                "model": item.Caption,
                "manufactory": item.Manufacturer,
                "sn": item.SerialNumber,
            }
            data.append(item_data)
        return {"ram": data}

    def get_server_info(self):
        computer_info = self.wmi_obj.Win32_ComputerSystem()[0]
        system_info = self.wmi_obj.Win32_OperatingSystem()[0]
        data = {}
        data['manufactory'] = computer_info.Manufacturer
        data['model'] = computer_info.Model
        data['wake_up_type'] = computer_info.WakeUpType
        data['sn'] = system_info.SerialNumber
        return data

    def get_disk_info(self):
        data = []
        for disk in self.wmi_obj.Win32_DiskDrive():
            item_data = {}
            iface_choices = ["SAS", "SCSI", "SATA", "SSD"]
            for iface in iface_choices:
                if iface in disk.Model:
                    item_data['iface_type'] = iface
                    break
            else:
                item_data['iface_type'] = 'unknown'
            item_data['slot'] = disk.Index
            item_data['sn'] = disk.SerialNumber
            item_data['model'] = disk.Model
            item_data['manufactory'] = disk.Manufacturer
            item_data['capacity'] = int(disk.Size) / (1024 * 1024 * 1024)
            data.append(item_data)
        return {'physical_disk_driver': data}

    def get_nic_info(self):
        data = []
        for nic in self.wmi_obj.Win32_NetworkAdapterConfiguration():
            if nic.MACAddress is not None:
                item_data = {}
                item_data['macaddress'] = nic.MACAddress
                item_data['model'] = nic.Caption
                item_data['name'] = nic.Index
                if nic.IPAddress is not None:
                    item_data['ipaddress'] = nic.IPAddress[0]
                    item_data['netmask'] = nic.IPSubnet
                else:
                    item_data['ipaddress'] = ''
                    item_data['netmask'] = ''
                data.append(item_data)
        return {'nic': data}

if __name__ == "__main__":
    collect()
十二、Test screenshots
Figure 1
Figure 2