python学习笔记之常用模块(第五天)

时间:2021-01-26 22:30:01

 

  参考老师的博客:

       金角:http://www.cnblogs.com/alex3714/articles/5161349.html

       银角:http://www.cnblogs.com/wupeiqi/articles/4963027.html

 

一、常用函数说明:

 ★ lambda

lambda用于在python中创建匿名函数,而用def创建的函数是有名称的。除了表面上有无函数名不一样外,lambda还有哪些和def不一样的地方呢?

1 python lambda会创建一个函数对象,但不会把这个函数对象赋给一个标识符,而def则会把函数对象赋值给一个变量。
2 python lambda它只是一个表达式,而def则是一个语句。

 

lambda语句中,冒号前是参数,可以有多个,用逗号隔开;冒号右边是返回值(一个表达式)。lambda语句构建的其实是一个函数对象。

例:

m = lambda x,y,z: (x-y)*z
print(m(234,122,5))

 

也经常用于生成列表,例:


lst = [i ** i for i in range(10)]   # 避免用 list 作变量名,否则会覆盖内置类型
print(lst)

# Python 3 中 map 返回迭代器,需要用 list() 取出结果
lst_lambda = list(map(lambda x: x ** x, range(10)))
print(lst_lambda)


 

 ★ enumerate(iterable,[start])   iterable为一个可迭代的对象;

enumerate(iterable[, start]) -> iterator for index, value of iterable Return an enumerate object.  iterable must be another object  that supports iteration.  The enumerate object yields pairs containing a count  (from start, which defaults to zero) and a value yielded by the iterable 
 argument. enumerate is useful for obtaining an indexed list:    (0, seq[0]), (1, seq[1]), (2, seq[2]), ...

例:


for k, v in enumerate(['a','b','c',1,2,3], 10):
    print(k, v)


 

★S.format(*args, **kwargs) -> string  字符串的格式输出,类似于格式化输出%s

        Return a formatted version of S, using substitutions from args and kwargs.
        The substitutions are identified by braces ('{' and '}').

s = 'i am {0},{1}'
print(s.format('wang',1))
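
补充一个小示例,演示 format 的位置参数、关键字参数和格式化小数这几种常见写法(示例中的字段名为随意举例):

s1 = '{0} is {1} years old'.format('wang', 25)                       # 按位置取参数
s2 = '{name} lives in {city}'.format(name='wang', city='beijing')   # 按关键字取参数
s3 = '{:.2f}'.format(3.14159)                                        # 保留两位小数
print(s1, s2, s3)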

 

★map(function,sequence) 将sequence的每一项作为参数传给函数,返回由各返回值组成的结果(Python 3 中返回迭代器)

例:


def add(arg):
    return arg + 101
# Python 3 中 map 返回迭代器,用 list() 取出结果查看
print(list(map(add, [12, 23, 34, 56])))


 

★filter(function or None, sequence) -> list, tuple, or string  返回function(item)为true的元素组成的序列
        Return those items of sequence for which function(item) is true.  If
    function is None, return the items that are true.  If sequence is a tuple
    or string, return the same type, else return a list.

例:


def comp(arg):
    if arg < 8:
        return True
    else:
        return False

# Python 3 中 filter 返回迭代器,这里用 list() 取出结果查看
print(list(filter(comp, [1, 19, 21, 8, 5])))
print(list(filter(lambda x: x % 2, [1, 19, 20, 8, 5])))
print(list(filter(lambda x: x % 2, (1, 19, 20, 8, 5))))
print(list(filter(lambda x: x > 'a', 'AbcdE')))


 

★reduce(function, sequence[, initial]) -> value   用一个二元函数对序列做累积计算
Apply a function of two arguments cumulatively to the items of a sequence, from left to right, so as to reduce the sequence to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). If initial is present, it is placed before the items of the sequence in the calculation, and serves as a default when the sequence is empty.

例:


# Python 3 中 reduce 被移到了 functools 模块
from functools import reduce

print(reduce(lambda x,y:x*y,[22,11,8]))
print(reduce(lambda x,y:x*y,[3],10))

print(reduce(lambda x,y:x*y,[],5))   # 序列为空时直接返回 initial


 

★zip(seq1 [, seq2 [...]]) -> [(seq1[0], seq2[0] ...), (...)] 将多个序列中对应位置的元素组合成新元组的序列
Return a list of tuples, where each tuple contains the i-th element from each of the argument sequences.  The returned list is  truncated in length to the length of the shortest argument sequence.

例:


a = [1,2,3,4,5,6]
b = [11,22,33,44,55]
c = [111,222,333,444]
# Python 3 中 zip 返回迭代器,结果长度以最短的序列为准
print(list(zip(a,b,c)))


 

★eval(source[, globals[, locals]]) -> value 将表达式字符串求值为结果,其中globals为全局命名空间,locals为局部命名空间,即在指定的命名空间中执行表达式。
        Evaluate the source in the context of globals and locals.    The source may be a string representing a Python expression  or a code object as returned by compile().    The globals must be a dictionary and locals can be any mapping,    defaulting to the current globals and locals.    If only globals is given, locals defaults to it.

例:


a = '8*(8+20-5%12*23)'
print(eval(a))

d = {'a':5,'b':4}
print(eval('a*b',d))   # 在指定的命名空间 d 中求值


 

★exec(source[, globals[, locals]]) 语句用来执行储存在字符串或文件中的Python语句

例:


a = 'print("nihao")'

b = 'for i in range(10): print i'

exec(a) 

exec(b)


 

★execfile(filename[, globals[, locals]])  读取并执行一个Python脚本文件(仅 Python 2;Python 3 中已移除,可用 exec(open(filename).read()) 代替)
     Read and execute a Python script from a file. The globals and locals are dictionaries, defaulting to the current globals and locals. If only globals is given, locals defaults to it.

 

二、模块 paramiko

paramiko是一个用于做远程控制的模块,使用该模块可以对远程服务器进行命令或文件操作。值得一说的是,fabric和ansible内部的远程管理就是使用paramiko来实现的。

1、下载安装(由于 paramiko 模块内部依赖 pycrypto,所以需要先下载安装 pycrypto)

2、使用模块

#!/usr/bin/env python
# coding:utf-8

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.108', 22, 'alex', '123')
stdin, stdout, stderr = ssh.exec_command('df')
print(stdout.read())
ssh.close()
通过用户名和密码连接服务器
import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('主机名', 端口, '用户名', pkey=key)

stdin, stdout, stderr = ssh.exec_command('df')
print(stdout.read())
ssh.close()
通过密钥连接服务器
import os,sys
import paramiko

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', password='123')
sftp = paramiko.SFTPClient.from_transport(t)
sftp.put('/tmp/test.py', '/tmp/test.py')
t.close()


import os,sys
import paramiko

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', password='123')
sftp = paramiko.SFTPClient.from_transport(t)
sftp.get('/tmp/test.py', '/tmp/test2.py')
t.close()
上传或者下载文件 - 通过用户名和密码
import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', pkey=key)

sftp = paramiko.SFTPClient.from_transport(t)
sftp.put('/tmp/test3.py', '/tmp/test3.py')

t.close()

import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', pkey=key)

sftp = paramiko.SFTPClient.from_transport(t)
sftp.get('/tmp/test3.py', '/tmp/test4.py')

t.close()
上传或下载文件 - 通过密钥

 

三、其他常用模块:

1、random模块:

★random 生成随机数

print(random.random())           # 生成 0-1 之间的随机小数

print(random.randint(1,3))       # 生成随机整数,包含两端端点

print(random.randrange(1,3,2))   # 生成随机整数,不包含 stop 端点

randrange(start, stop=None, step=1)

生成5位由数字和大写字母组成的随机验证码,例:


import random

a = []
for i in range(5):
    if i == random.randint(1,5):
        a.append(str(i))                       # 随机命中时放入当前数字
    else:
        a.append(chr(random.randint(65,90)))   # 否则放入一个随机大写字母
else:
    print(''.join(a))
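
也可以用 random.choice 配合 string 模块把验证码写得更简洁一些,下面是一个思路等价的小示例(写法仅供参考):

import random
import string

# 从大写字母和数字中随机取 5 个字符拼成验证码
code = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))
print(code)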

 

2、MD5、sha、hashlib模块

★生成MD5码

例:


一. 使用 md5 模块(仅 Python 2,Python 3 中已移除)

import md5
src = 'this is a md5 test.'
m1 = md5.new()
m1.update(src)
print m1.hexdigest()

二、使用 sha 模块(仅 Python 2,Python 3 中已移除)

import sha

hash = sha.new()
hash.update('admin')
print hash.hexdigest()

三. 使用hashlib

用于加密相关的操作,代替了md5模块和sha模块,主要提供 SHA1, SHA224, SHA256, SHA384, SHA512 ,MD5 算法

错误:“Unicode-objects must be encoded before hashing”,意思是在进行md5哈希运算前,需要对数据进行编码,使用encode("utf8")

import hashlib
hash = hashlib.md5()
hash.update('this is a md5 test.'.encode("utf8"))

hash.update('admin'.encode("utf8"))   # 在前面加密的基础上继续更新

print(hash.digest())      # 加密后的二进制结果

print(hash.hexdigest())   # 十六进制结果

import hashlib

# ######## md5 ########
hash = hashlib.md5()
hash.update('admin'.encode('utf8'))
print(hash.hexdigest())

# ######## sha1 ########
hash = hashlib.sha1()
hash.update('admin'.encode('utf8'))
print(hash.hexdigest())

# ######## sha256 ########
hash = hashlib.sha256()
hash.update('admin'.encode('utf8'))
print(hash.hexdigest())

# ######## sha384 ########
hash = hashlib.sha384()
hash.update('admin'.encode('utf8'))
print(hash.hexdigest())

# ######## sha512 ########
hash = hashlib.sha512()
hash.update('admin'.encode('utf8'))
print(hash.hexdigest())

推荐使用第三种方法。

 

对以上代码的说明:

1.首先从python直接导入hashlib模块

2.调用hashlib里的md5()生成一个md5 hash对象

3.生成hash对象后,就可以用update方法对字符串进行md5加密的更新处理

4.继续调用update方法会在前面加密的基础上更新加密

5.加密后的二进制结果

6.十六进制结果

如果只需对一条字符串进行加密处理,也可以用一条语句的方式:

print(hashlib.new("md5", "Nobody inspects the spammish repetition").hexdigest())


 

以上加密算法虽然依然非常厉害,但有时候存在缺陷,即:通过撞库可以反解。所以,有必要在加密算法中添加自定义key(加盐)再来做加密。

import hashlib

# ######## md5(加盐) ########
hash = hashlib.md5('898oaFs09f'.encode('utf8'))   # 以自定义 key 作为盐
hash.update('admin'.encode('utf8'))
print(hash.hexdigest())

还不够吊?python 还有一个 hmac 模块,它内部会对我们提供的 key 和内容再做一次处理然后加密

import hmac

h = hmac.new('wueiqi'.encode('utf8'), digestmod='md5')   # Python 3 中 key 需为 bytes,并建议显式指定 digestmod
h.update('hellowo'.encode('utf8'))
print(h.hexdigest())

不能再牛逼了!!!
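
校验签名时,可以配合 hmac.compare_digest 做恒定时间比较,下面是一个简单示意(key 和消息均为示例值):

import hmac

key = b'wueiqi'
msg = b'hellowo'

sig1 = hmac.new(key, msg, digestmod='md5').hexdigest()
sig2 = hmac.new(key, msg, digestmod='md5').hexdigest()

# compare_digest 用于恒定时间比较两个签名,避免时序攻击
print(hmac.compare_digest(sig1, sig2))   # True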

 

3、pickle和json模块:

python对象与文件之间的序列化和反序列化(pickle和json)

用于序列化的两个模块

  • json,用于字符串 和 python数据类型间进行转换
  • pickle,用于python特有的类型 和 python的数据类型间进行转换

Json模块提供了四个功能:dumps、dump、loads、load

pickle模块提供了四个功能:dumps、dump、loads、load

 

pickle模块用来实现python对象的序列化和反序列化。通常地pickle将python对象序列化为二进制流或文件。

 python对象与文件之间的序列化和反序列化:

pickle.dump()

pickle.load()

如果要实现python对象和字符串间的序列化和反序列化,则使用:

pickle.dumps()

pickle.loads()
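
下面用一个小例子演示 dumps/loads 在内存中的序列化与反序列化(数据内容为随意举例):

import pickle

data = {'name': 'alex', 'scores': [90, 85, 77]}

s = pickle.dumps(data)    # 序列化为字节串
print(type(s))            # <class 'bytes'>
print(pickle.loads(s))    # 反序列化,还原为原来的字典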

 

可以被序列化的类型有:

* None,True 和 False;

* 整数,浮点数,复数;

* 字符串,字节流,字节数组;

* 包含可pickle对象的tuples,lists,sets和dictionaries;

* 定义在module顶层的函数:

* 定义在module顶层的内置函数;

* 定义在module顶层的类;

* __dict__ 可被pickle、或实现了__getstate__()/__setstate__()的自定义类型实例;

注意:对于函数或类的序列化是以名字来识别的,所以需要import相应的module。

例:


import pickle

class test():
    def __init__(self, n):
        self.a = n

t = test(123)
t2 = test('abc')

a_list = ['sky', 'mobi', 'mopo']
a_dict = {'a': 1, 'b': 2, '3': 'c'}

with open('test.pickle', 'wb') as f:
    pickle.dump(t, f)
    pickle.dump(a_list, f)
    pickle.dump(t2, f)

with open('test.pickle', 'rb') as g:
    gg = pickle.load(g)
    print(gg.a)
    hh = pickle.load(g)
    print(hh[1])
    ii = pickle.load(g)
    print(ii.a)

注:dump和load一一对应,顺序也不会乱。


 

★JSON(JavaScript Object Notation):一种轻量级数据交换格式,相对于XML而言更简单,也易于阅读和编写,机器也方便解析和生成,Json是JavaScript中的一个子集。

        Python的Json模块序列化与反序列化的过程分别是 encoding和 decoding

encoding:把一个Python对象编码转换成Json字符串
decoding:把Json格式字符串解码转换成Python对象

具体的转化对照如下:

(Python → JSON:dict → object;list, tuple → array;str → string;int, float → number;True → true;False → false;None → null)

 

loads方法返回了原始的对象,但是仍然发生了一些数据类型的转化。比如,上例中‘abc’转化为了unicode类型。从json到python的类型转化对照如下:

 

(JSON → Python:object → dict;array → list;string → str(Python 2 中为 unicode);number(int)→ int;number(real)→ float;true → True;false → False;null → None)

例:


import json

data = {'a': [1, 2.0, 3, 4], 'b': ("character string", "byte string"), 'c': 'abc'}

du = json.dumps(data)
print(du)
print(json.loads(du, encoding='ASCII'))   # encoding 参数仅 Python 2 有意义,Python 3.9 起已移除,可直接写 json.loads(du)

with open('data.json', 'w') as f:
    json.dump(data, f, indent=2, sort_keys=True, separators=(',', ':'))
    # 等价写法: f.write(json.dumps(data))

with open('data.json', 'r') as f:
    data = json.load(f)
    print(repr(data))

注:json并不像pickle一样将python对象序列化为二进制流或文件,所以读写文件时打开模式不能加b;

      同时一次只能针对一个对象进行dump和load,不像pickle可以连续dump多个对象。

      sort_keys 是告诉编码器按照字典的键排序(a到z)输出;

      indent 参数让输出按指定的缩进格式化显示,读起来更加清晰;

      separators 参数的作用是去掉 ',' 和 ':' 后面的空格。从默认的输出结果能看到 ", " 和 ": " 后面都有个空格,这是为了美化输出;但在传输数据的过程中,越精简越好,冗余的东西应全部去掉。


经测试,2.7版本导出的json文件,3.4版本导入会报错:TypeError: the JSON object must be str, not 'bytes'

 

4、正则表达式模块:

re模块用于对python的正则表达式的操作。

字符:

  . 匹配除换行符以外的任意字符
  \w匹配字母或数字或下划线或汉字
  \s匹配任意的空白符
  \d匹配数字
  \b匹配单词的开始或结束
  ^匹配字符串的开始
  $匹配字符串的结束

次数:

  * 重复零次或更多次
  +重复一次或更多次
  ?重复零次或一次
  {n}重复n次
  {n,}重复n次或更多次
  {n,m}重复n到m次

IP:
^(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}$
手机号:
^1[3458]\d{9}$
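
用上面的手机号正则做一次校验的小例子(号码为随意编造):

import re

phone_re = re.compile(r'^1[3458]\d{9}$')

print(bool(phone_re.match('13812345678')))   # True
print(bool(phone_re.match('12812345678')))   # False,第二位不在 3/4/5/8 中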

★re.match的函数原型为:re.match(pattern, string, flags)

第一个参数是正则表达式,这里为"(\w+)\s",如果匹配成功,则返回一个Match,否则返回一个None;

第二个参数表示要匹配的字符串;

第三个参数是标志位,用于控制正则表达式的匹配方式,如:是否区分大小写,多行匹配等等。

 

★re.search的函数原型为: re.search(pattern, string, flags)

每个参数的含意与re.match一样。 

re.match与re.search的区别:re.match只从字符串的开始位置匹配,如果字符串开头不符合正则表达式,则匹配失败,函数返回None;而re.search会在整个字符串中查找,直到找到第一个匹配为止。
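
下面用一个小例子直观对比两者(字符串内容为随意举例):

import re

text = 'abc123'

print(re.match(r'\d+', text))            # None,因为开头不是数字
print(re.search(r'\d+', text).group())   # '123',在整个字符串中查找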

 

★re.findall可以获取字符串中所有匹配的字符串。如:re.findall(r'\w*oo\w*', text);获取字符串中,包含'oo'的所有单词。

 

★re.sub的函数原型为:re.sub(pattern, repl, string, count)

其中第二个参数repl是用来替换的字符串;本例中为'-'

第四个参数指替换个数。默认为0,表示每个匹配项都替换。

re.sub还允许使用函数对匹配项的替换进行复杂的处理。如:re.sub(r'\s', lambda m: '[' + m.group(0) + ']', text, 0);将字符串中的空格' '替换为'[ ]'。
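
把上面这个用函数做替换的写法补全成可运行的小例子:

import re

text = 'hello world python'

# 将每个空格替换为 '[ ]' 的形式
result = re.sub(r'\s', lambda m: '[' + m.group(0) + ']', text, 0)
print(result)   # hello[ ]world[ ]python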

 

★re.split用于分割字符串,如:re.split(r'\s+', text);将字符串按空格分割成一个单词列表。

根据指定匹配进行分组

content = "'1 - 2 * ((60-30+1*(9-2*5/3+7/3*99/4*2998+10*568/14))-(-4*3)/(16-3*2) )'"
new_content = re.split('\*', content)
# new_content = re.split('\*', content, 1)
print new_content
content = "'1 - 2 * ((60-30+1*(9-2*5/3+7/3*99/4*2998+10*568/14))-(-4*3)/(16-3*2) )'"
new_content = re.split('[\+\-\*\/]+', content)
# new_content = re.split('\*', content, 1)
print new_content
inpp = '1-2*((60-30 +(-40-5)*(9-2*5/3 + 7 /3*99/4*2998 +10 * 568/14 )) - (-4*3)/ (16-3*2))'
inpp = re.sub('\s*','',inpp)
new_content = re.split('\(([\+\-\*\/]?\d+[\+\-\*\/]?\d+){1}\)', inpp, 1)
print new_content

★re.compile可以把正则表达式编译成一个正则表达式对象。可以把那些经常使用的正则表达式编译成正则表达式对象,这样可以提高一定的效率。下面是一个正则表达式对象的一个例子:

例:


import re

r = re.compile('\d+')
r1 = r.match('adfaf123asdf1asf1123aa')
if r1:
    print(r1.group())
else:
    print('no match')

r2 = r.search('adfaf123asdf1asf1123aa')
if r2:
    print(r2.group())
    print(r2.groups())
else:
    print('no match')
    
r3 = r.findall('adfaf123asdf1asf1123aa')
if r3:
    print(r3)
else:
    print('no match')
    
r4 = r.sub('###','adfaf123asdf1asf1123aa')
print(r4)

r5 = r.subn('###','adfaf123asdf1asf1123aa')
print(r5)

r6 = r.split('adfaf123asdf1asf1123aa',maxsplit=2)
print(r6)


注:re执行分两步:首先编译,然后执行。故可先用re.compile对要查询的正则表达式进行编译,之后的操作无需再次编译,可以提高一定的效率。

匹配IP具体实例:

 


ip = '12aa13.12.15aasdfa12.32aasdf192.168.12.13asdfafasf12abadaf12.13'
res = re.findall('(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})',ip)
print(res)
res1 = re.findall('(?:\d{1,3}\.){3}\d{1,3}',ip)
print(res1)


 

     而group、groups的结果主要取决于正则表达式中是否使用了分组(括号),一般用于search和match的返回结果。例如分别用'\d+'和('\d+')去匹配'123',group()的输出结果为123,而groups()的输出结果为('123',)。

import re

a = 'Oldboy School,Beijing Changping shahe:010-8343245'

match = re.search(r'(\D+),(\D+):(\S+)', a)
print(match.group(1))
print(match.group(2))
print(match.group(3))

print("##########################")

match2 = re.search(r'(?P<name>\D+),(?P<address>\D+):(?P<phone>\S+)', a)
print(match2.group('name'))
print(match2.group('address'))
print(match2.group('phone'))

 

 

5、time模块

time模块提供各种操作时间的函数

#1、时间戳    1970年1月1日之后的秒
#2、元组 包含了:年、日、星期等... time.struct_time
#3、格式化的字符串    2014-11-11 11:11

(图:时间戳、struct_time、格式化字符串三种时间格式之间的相互转换关系)

 


import time
import datetime

print(time.clock())          # 返回处理器时间,3.3开始已废弃(3.8中移除)
print(time.process_time())   # 返回当前进程的处理器时间
print(time.time())           # 返回当前系统时间戳
print(time.ctime())          # 输出 Tue Jan 26 18:23:48 2016,当前系统时间
print(time.ctime(time.time()-86640))       # 将时间戳转为字符串格式
print(time.gmtime(time.time()-86640))      # 将时间戳转换成struct_time格式
print(time.localtime(time.time()-86640))   # 将时间戳转换成struct_time格式,但返回的是本地时间
print(time.mktime(time.localtime()))       # 与time.localtime()功能相反,将struct_time格式转回成时间戳格式
# time.sleep(4)   # sleep 4秒
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime()))   # 将struct_time格式转成指定的字符串格式
print(time.strptime("2016-01-28", "%Y-%m-%d"))             # 将字符串格式转换成struct_time格式

# datetime module

print(datetime.date.today())    # 输出格式 2016-01-26
print(datetime.date.fromtimestamp(time.time()-864400))   # 2016-01-16 将时间戳转成日期格式
current_time = datetime.datetime.now()
print(current_time)               # 输出 2016-01-26 19:04:30.335935
print(current_time.timetuple())   # 返回struct_time格式

# datetime.replace([year[, month[, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]]]]])
print(current_time.replace(2014, 9, 12))   # 输出2014-09-12 19:06:24.074900,返回当前时间,但指定的值将被替换

str_to_date = datetime.datetime.strptime("21/11/06 16:30", "%d/%m/%y %H:%M")   # 将字符串转换成日期格式
new_date = datetime.datetime.now() + datetime.timedelta(days=10)      # 比现在加10天
new_date = datetime.datetime.now() + datetime.timedelta(days=-10)     # 比现在减10天
new_date = datetime.datetime.now() + datetime.timedelta(hours=-10)    # 比现在减10小时
new_date = datetime.datetime.now() + datetime.timedelta(seconds=120)  # 比现在+120s
print(new_date)
 

6、shutil模块

高级的 文件、文件夹、压缩包 处理模块

shutil.copyfileobj(fsrc, fdst[, length])
将文件内容拷贝到另一个文件中,可以只拷贝部分内容

shutil.copyfile(src, dst) 拷贝文件

shutil.copymode(src, dst)

仅拷贝权限。内容、组、用户均不变

shutil.copystat(src, dst)

拷贝状态的信息,包括:mode bits, atime, mtime, flags

shutil.copy(src, dst)

拷贝文件和权限

shutil.copy2(src, dst)

拷贝文件和状态信息

shutil.ignore_patterns(*patterns)
shutil.copytree(src, dst, symlinks=False, ignore=None)
递归地拷贝文件夹

例如:copytree(source, destination, ignore=ignore_patterns('*.pyc', 'tmp*'))
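
下面是一个可直接运行的小示例,把目录 src_dir 拷贝为 dst_dir,同时忽略 *.pyc 和 tmp 开头的文件(目录名为假设的示例路径):

import shutil

# 递归拷贝整个目录树;dst_dir 必须是尚不存在的目标路径
shutil.copytree('src_dir', 'dst_dir',
                ignore=shutil.ignore_patterns('*.pyc', 'tmp*'))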

shutil.rmtree(path[, ignore_errors[, onerror]])

递归地删除文件夹及其内容

shutil.move(src, dst)
递归的去移动文件

shutil.make_archive(base_name, format,...)

创建压缩包并返回文件路径,例如:zip、tar

  • base_name: 压缩包的文件名,也可以是压缩包的路径。只是文件名时,则保存至当前目录,否则保存至指定路径,
    如:www                        =>保存至当前路径
    如:/Users/wupeiqi/www =>保存至/Users/wupeiqi/
  • format: 压缩包种类,“zip”, “tar”, “bztar”,“gztar”
  • root_dir:要压缩的文件夹路径(默认当前目录)
  • owner: 用户,默认当前用户
  • group: 组,默认当前组
  • logger: 用于记录日志,通常是logging.Logger对象
# 将 /Users/wupeiqi/Downloads/test 下的文件打包放置当前程序目录
import shutil
ret = shutil.make_archive("wwwwwwwwww", 'gztar', root_dir='/Users/wupeiqi/Downloads/test')

# 将 /Users/wupeiqi/Downloads/test 下的文件打包放置 /Users/wupeiqi/ 目录
import shutil
ret = shutil.make_archive("/Users/wupeiqi/wwwwwwwwww", 'gztar', root_dir='/Users/wupeiqi/Downloads/test')

shutil 对压缩包的处理是调用 ZipFile 和 TarFile 两个模块来进行的。
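
既然底层调用的是这两个模块,也可以直接使用 zipfile 来压缩和解压,下面是一个简单示意(文件名均为举例):

import zipfile

# 压缩:把两个文件写入 test.zip
with zipfile.ZipFile('test.zip', 'w') as z:
    z.write('a.txt')
    z.write('b.txt')

# 解压:把 test.zip 全部解压到当前目录
with zipfile.ZipFile('test.zip', 'r') as z:
    z.extractall()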

 

7、ConfigParser

用于对特定的配置进行操作,当前模块的名称在 python 3.x 版本中变更为 configparser。

1.基本的读取配置文件

-read(filename) 直接读取ini文件内容

-sections() 得到所有的section,并以列表的形式返回

-options(section) 得到该section的所有option

-items(section) 得到该section的所有键值对

-get(section,option) 得到section中option的值,返回为string类型

-getint(section,option) 得到section中option的值,返回为int类型,还有相应的getboolean()和getfloat() 函数。

 

2.基本的写入配置文件

-add_section(section) 添加一个新的section

-set( section, option, value) 对section中的option进行设置,需要调用write将内容写入配置文件。
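
下面给出一个写配置文件的小例子(以 Python 2 的 ConfigParser 为例,Python 3 中模块名为 configparser;文件名为举例):

import ConfigParser   # Python 3 中改为: import configparser

config = ConfigParser.ConfigParser()
config.add_section('portal')                # 新增一个 section
config.set('portal', 'host', 'localhost')   # 设置 option
config.set('portal', 'port', '8080')

# 调用 write 才会真正写入文件
with open('test.conf', 'w') as f:
    config.write(f)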

 

3.Python的ConfigParser Module中定义了3个类对INI文件进行操作。

分别是RawConfigParser、ConfigParser、 SafeConfigParser。

RawConfigParser是最基础的INI文件读取类;

ConfigParser、 SafeConfigParser支持对%(value)s变量的解析。 

 

设定配置文件test.conf


[portal] 

url = http://%(host)s:%(port)s/Portal 

host = localhost 

port = 8080 


使用RawConfigParser:

 


import ConfigParser 

file1 = ConfigParser.RawConfigParser()

file1.read('aa.txt')

print(file1.get('portal','url'))

得到终端输出:

http://%(host)s:%(port)s/Portal


 

使用ConfigParser:


import ConfigParser 

file2 = ConfigParser.ConfigParser()

file2.read('aa.txt')
print(file2.get('portal','url'))

得到终端输出:

http://localhost:8080/Portal


 

使用SafeConfigParser:


import ConfigParser 

 

cf = ConfigParser.SafeConfigParser() 

file3 = ConfigParser.SafeConfigParser()

file3.read('aa.txt')
print(file3.get('portal','url'))

得到终端输出(效果同ConfigParser):

http://localhost:8080/Portal

 

举例说明:

# 注释1
; 注释2

[section1]
k1 = v1
k2:v2

[section2]
k1 = v1

import ConfigParser

config = ConfigParser.ConfigParser()
config.read('i.cfg')

# ########## 读 ##########
# secs = config.sections()
# print secs

# options = config.options('group2')
# print options

# item_list = config.items('group2')
# print item_list

# val = config.get('group1','key')
# val = config.getint('group1','key')

# ########## 改写 ##########
# sec = config.remove_section('group1')
# config.write(open('i.cfg', "w"))

# sec = config.has_section('wupeiqi')
# sec = config.add_section('wupeiqi')
# config.write(open('i.cfg', "w"))

# config.set('group2','k1',11111)
# config.write(open('i.cfg', "w"))

# config.remove_option('group2','age')
# config.write(open('i.cfg', "w"))

 

8、logging模块:

 

用于便捷记录日志且线程安全的模块

 

1.简单的将日志打印到屏幕

 

import logging

logging.debug('This is debug message')
logging.info('This is info message')
logging.warning('This is warning message')

 

屏幕上打印:
WARNING:root:This is warning message

默认情况下,logging将日志打印到屏幕,日志级别为WARNING;
日志级别大小关系为:CRITICAL > ERROR > WARNING > INFO > DEBUG > NOTSET,当然也可以自己定义日志级别。

2.通过logging.basicConfig函数对日志的输出格式及方式做相关配置

import logging

logging.basicConfig(level=logging.DEBUG,
                format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
                datefmt='%a, %d %b %Y %H:%M:%S',
                filename='myapp.log',
                filemode='w')
    
logging.debug('This is debug message')
logging.info('This is info message')
logging.warning('This is warning message')

 

./myapp.log文件中内容为:
Sun, 24 May 2009 21:48:54 demo2.py[line:11] DEBUG This is debug message
Sun, 24 May 2009 21:48:54 demo2.py[line:12] INFO This is info message
Sun, 24 May 2009 21:48:54 demo2.py[line:13] WARNING This is warning message

logging.basicConfig函数各参数:
filename: 指定日志文件名
filemode: 和file函数意义相同,指定日志文件的打开模式,'w'或'a'
format: 指定输出的格式和内容,format可以输出很多有用信息,如上例所示:
 %(levelno)s: 打印日志级别的数值
 %(levelname)s: 打印日志级别名称
 %(pathname)s: 打印当前执行程序的路径,其实就是sys.argv[0]
 %(filename)s: 打印当前执行程序名
 %(funcName)s: 打印日志的当前函数
 %(lineno)d: 打印日志的当前行号
 %(asctime)s: 打印日志的时间
 %(thread)d: 打印线程ID
 %(threadName)s: 打印线程名称
 %(process)d: 打印进程ID
 %(message)s: 打印日志信息
datefmt: 指定时间格式,同time.strftime()
level: 设置日志级别,默认为logging.WARNING
stream: 指定将日志的输出流,可以指定输出到sys.stderr,sys.stdout或者文件,默认输出到sys.stderr,当stream和filename同时指定时,stream被忽略

3.将日志同时输出到文件和屏幕

import logging

logging.basicConfig(level=logging.DEBUG,
                format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
                datefmt='%a, %d %b %Y %H:%M:%S',
                filename='myapp.log',
                filemode='w')

#################################################################################################
#定义一个StreamHandler,将INFO级别或更高的日志信息打印到标准错误,并将其添加到当前的日志处理对象#
console = logging.StreamHandler()
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
console.setFormatter(formatter)
logging.getLogger('').addHandler(console)
#################################################################################################

logging.debug('This is debug message')
logging.info('This is info message')
logging.warning('This is warning message')

 

屏幕上打印:
root        : INFO     This is info message
root        : WARNING  This is warning message

./myapp.log文件中内容为:
Sun, 24 May 2009 21:48:54 demo2.py[line:11] DEBUG This is debug message
Sun, 24 May 2009 21:48:54 demo2.py[line:12] INFO This is info message
Sun, 24 May 2009 21:48:54 demo2.py[line:13] WARNING This is warning message

4.logging之日志回滚

import logging
from logging.handlers import RotatingFileHandler

#################################################################################################
#定义一个RotatingFileHandler,最多备份5个日志文件,每个日志文件最大10M
Rthandler = RotatingFileHandler('myapp.log', maxBytes=10*1024*1024,backupCount=5)
Rthandler.setLevel(logging.INFO)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
Rthandler.setFormatter(formatter)
logging.getLogger('').addHandler(Rthandler)
################################################################################################

从上例和本例可以看出,logging有一个日志处理的主对象,其它处理方式都是通过addHandler添加进去的。
logging的几种handle方式如下:

 

logging.StreamHandler: 日志输出到流,可以是sys.stderr、sys.stdout或者文件
logging.FileHandler: 日志输出到文件

日志回滚方式,实际使用时用RotatingFileHandler和TimedRotatingFileHandler
logging.handlers.BaseRotatingHandler
logging.handlers.RotatingFileHandler
logging.handlers.TimedRotatingFileHandler

logging.handlers.SocketHandler: 远程输出日志到TCP/IP sockets
logging.handlers.DatagramHandler:  远程输出日志到UDP sockets
logging.handlers.SMTPHandler:  远程输出日志到邮件地址
logging.handlers.SysLogHandler: 日志输出到syslog
logging.handlers.NTEventLogHandler: 远程输出日志到Windows NT/2000/XP的事件日志
logging.handlers.MemoryHandler: 日志输出到内存中的指定buffer
logging.handlers.HTTPHandler: 通过"GET"或"POST"远程输出到HTTP服务器

 

由于StreamHandler和FileHandler是常用的日志处理方式,所以直接包含在logging模块中,而其他方式则包含在logging.handlers模块中,
上述其它处理方式的使用请参见python2.5手册!
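
作为补充,下面给出一个用 TimedRotatingFileHandler 按天切割日志的小示意(文件名与参数均为示例值):

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)

# 每天午夜切割一次日志,最多保留 7 份历史文件
handler = TimedRotatingFileHandler('myapp.log', when='midnight', backupCount=7)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(handler)

logger.info('This is info message')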

5.通过logging.config模块配置日志

#logger.conf

###############################################

[loggers]
keys=root,example01,example02

[logger_root]
level=DEBUG
handlers=hand01,hand02

[logger_example01]
handlers=hand01,hand02
qualname=example01
propagate=0

[logger_example02]
handlers=hand01,hand03
qualname=example02
propagate=0

###############################################

[handlers]
keys=hand01,hand02,hand03

[handler_hand01]
class=StreamHandler
level=INFO
formatter=form02
args=(sys.stderr,)

[handler_hand02]
class=FileHandler
level=DEBUG
formatter=form01
args=('myapp.log', 'a')

[handler_hand03]
class=handlers.RotatingFileHandler
level=INFO
formatter=form02
args=('myapp.log', 'a', 10*1024*1024, 5)

###############################################

[formatters]
keys=form01,form02

[formatter_form01]
format=%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s
datefmt=%a, %d %b %Y %H:%M:%S

[formatter_form02]
format=%(name)-12s: %(levelname)-8s %(message)s
datefmt=

上例3:

import logging
import logging.config

logging.config.fileConfig("logger.conf")
logger = logging.getLogger("example01")

logger.debug('This is debug message')
logger.info('This is info message')
logger.warning('This is warning message')

上例4:

 

import logging
import logging.config

logging.config.fileConfig("logger.conf")
logger = logging.getLogger("example02")

logger.debug('This is debug message')
logger.info('This is info message')
logger.warning('This is warning message')

 

 

9、shelve 模块

   shelve模块是一个简单的、将内存中的 k/v 数据通过文件持久化的模块,可以持久化任何pickle可支持的python数据格式

   shelve是一个简单的数据存储方案,它只有一个函数就是open(),这个函数接收一个参数就是文件名,然后返回一个shelf对象,你可以用它来存储东西,可以简单地把它当作一个字典;当你存储完毕的时候,调用close函数来关闭即可

import shelve

d = shelve.open('shelve_file')   # 打开一个文件

class Test(object):
    def __init__(self, n):
        self.a = n


t = Test(123)
t2 = Test(123334)

name = ["alex", "rain", "test"]
d["test"] = name   # 持久化列表
d["t1"] = t        # 持久化类实例
d["t2"] = t2

d.close()

f = shelve.open('shelve_file')
print(f['t1'].a)
print(f['test'])
f.close()

 

则会生成三类文件:shelve_file.dat、shelve_file.dir、shelve_file.bak,用来存放数据。

多关注其中的writeback=True参数
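
writeback=True 的效果可以用下面的小例子体会(文件名为举例):

import shelve

# writeback=True 时,访问过的条目会缓存在内存中,close() 时统一写回
d = shelve.open('shelve_file2', writeback=True)
d['x'] = ['a', 'b', 'c']
d['x'].append('d')    # 直接对取出的对象做修改也会被写回
d.close()

d = shelve.open('shelve_file2')
print(d['x'])         # ['a', 'b', 'c', 'd']
d.close()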

详情见shelve模块:
"""Manage shelves of pickled objects.

A "shelf" is a persistent, dictionary-like object. The difference
with dbm databases is that the values (not the keys!) in a shelf can
be essentially arbitrary Python objects -- anything that the "pickle"
module can handle. This includes most class instances, recursive data
types, and objects containing lots of shared sub-objects. The keys
are ordinary strings.

To summarize the interface (key is a string, data is an arbitrary
object):

import shelve
d = shelve.open(filename) # open, with (g)dbm filename -- no suffix

d[key] = data # store data at key (overwrites old data if
# using an existing key)
data = d[key] # retrieve a COPY of the data at key (raise
# KeyError if no such key) -- NOTE that this
# access returns a *copy* of the entry!
del d[key] # delete data stored at key (raises KeyError
# if no such key)
flag = key in d # true if the key exists
list = d.keys() # a list of all existing keys (slow!)

d.close() # close it

Dependent on the implementation, closing a persistent dictionary may
or may not be necessary to flush changes to disk.

Normally, d[key] returns a COPY of the entry. This needs care when
mutable entries are mutated: for example, if d[key] is a list,
d[key].append(anitem)
does NOT modify the entry d[key] itself, as stored in the persistent
mapping -- it only modifies the copy, which is then immediately
discarded, so that the append has NO effect whatsoever. To append an
item to d[key] in a way that will affect the persistent mapping, use:
data = d[key]
data.append(anitem)
d[key] = data

To avoid the problem with mutable entries, you may pass the keyword
argument writeback=True in the call to shelve.open. When you use:
d = shelve.open(filename, writeback=True)
then d keeps a cache of all entries you access, and writes them all back
to the persistent mapping when you call d.close(). This ensures that
such usage as d[key].append(anitem) works as intended.

However, using keyword argument writeback=True may consume vast amount
of memory for the cache, and it may make d.close() very slow, if you
access many of d's entries after opening it in this way: d has no way to
check which of the entries you access are mutable and/or which ones you
actually mutate, so it must cache, and write back at close, all of the
entries that you access. You can call d.sync() to write back all the
entries in the cache, and empty the cache (d.sync() also synchronizes
the persistent dictionary on disk, if feasible).
"""

from pickle import Pickler, Unpickler
from io import BytesIO

import collections

__all__ = ["Shelf", "BsdDbShelf", "DbfilenameShelf", "open"]

class _ClosedDict(collections.MutableMapping):
    'Marker for a closed dict.  Access attempts raise a ValueError.'

    def closed(self, *args):
        raise ValueError('invalid operation on closed shelf')
    __iter__ = __len__ = __getitem__ = __setitem__ = __delitem__ = keys = closed

    def __repr__(self):
        return '<Closed Dictionary>'


class Shelf(collections.MutableMapping):
    """Base class for shelf implementations.

    This is initialized with a dictionary-like object.
    See the module's __doc__ string for an overview of the interface.
    """

    def __init__(self, dict, protocol=None, writeback=False,
                 keyencoding="utf-8"):
        self.dict = dict
        if protocol is None:
            protocol = 3
        self._protocol = protocol
        self.writeback = writeback
        self.cache = {}
        self.keyencoding = keyencoding

    def __iter__(self):
        for k in self.dict.keys():
            yield k.decode(self.keyencoding)

    def __len__(self):
        return len(self.dict)

    def __contains__(self, key):
        return key.encode(self.keyencoding) in self.dict

    def get(self, key, default=None):
        if key.encode(self.keyencoding) in self.dict:
            return self[key]
        return default

    def __getitem__(self, key):
        try:
            value = self.cache[key]
        except KeyError:
            f = BytesIO(self.dict[key.encode(self.keyencoding)])
            value = Unpickler(f).load()
            if self.writeback:
                self.cache[key] = value
        return value

    def __setitem__(self, key, value):
        if self.writeback:
            self.cache[key] = value
        f = BytesIO()
        p = Pickler(f, self._protocol)
        p.dump(value)
        self.dict[key.encode(self.keyencoding)] = f.getvalue()

    def __delitem__(self, key):
        del self.dict[key.encode(self.keyencoding)]
        try:
            del self.cache[key]
        except KeyError:
            pass

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()

    def close(self):
        self.sync()
        try:
            self.dict.close()
        except AttributeError:
            pass
        # Catch errors that may happen when close is called from __del__
        # because CPython is in interpreter shutdown.
        try:
            self.dict = _ClosedDict()
        except (NameError, TypeError):
            self.dict = None

    def __del__(self):
        if not hasattr(self, 'writeback'):
            # __init__ didn't succeed, so don't bother closing
            # see http://bugs.python.org/issue1339007 for details
            return
        self.close()

    def sync(self):
        if self.writeback and self.cache:
            self.writeback = False
            for key, entry in self.cache.items():
                self[key] = entry
            self.writeback = True
            self.cache = {}
        if hasattr(self.dict, 'sync'):
            self.dict.sync()


class BsdDbShelf(Shelf):
    """Shelf implementation using the "BSD" db interface.

    This adds methods first(), next(), previous(), last() and
    set_location() that have no counterpart in [g]dbm databases.

    The actual database must be opened using one of the "bsddb"
    modules "open" routines (i.e. bsddb.hashopen, bsddb.btopen or
    bsddb.rnopen) and passed to the constructor.

    See the module's __doc__ string for an overview of the interface.
    """

    def __init__(self, dict, protocol=None, writeback=False,
                 keyencoding="utf-8"):
        Shelf.__init__(self, dict, protocol, writeback, keyencoding)

    def set_location(self, key):
        (key, value) = self.dict.set_location(key)
        f = BytesIO(value)
        return (key.decode(self.keyencoding), Unpickler(f).load())

    def next(self):
        (key, value) = next(self.dict)
        f = BytesIO(value)
        return (key.decode(self.keyencoding), Unpickler(f).load())

    def previous(self):
        (key, value) = self.dict.previous()
        f = BytesIO(value)
        return (key.decode(self.keyencoding), Unpickler(f).load())

    def first(self):
        (key, value) = self.dict.first()
        f = BytesIO(value)
        return (key.decode(self.keyencoding), Unpickler(f).load())

    def last(self):
        (key, value) = self.dict.last()
        f = BytesIO(value)
        return (key.decode(self.keyencoding), Unpickler(f).load())


class DbfilenameShelf(Shelf):
    """Shelf implementation using the "dbm" generic dbm interface.

    This is initialized with the filename for the dbm database.
    See the module's __doc__ string for an overview of the interface.
    """

    def __init__(self, filename, flag='c', protocol=None, writeback=False):
        import dbm
        Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)


def open(filename, flag='c', protocol=None, writeback=False):
    """Open a persistent dictionary for reading and writing.

    The filename parameter is the base filename for the underlying
    database.  As a side-effect, an extension may be added to the
    filename and more than one file may be created.  The optional flag
    parameter has the same interpretation as the flag parameter of
    dbm.open().  The optional protocol parameter specifies the
    version of the pickle protocol (0, 1, or 2).

    See the module's __doc__ string for an overview of the interface.
    """

    return DbfilenameShelf(filename, flag, protocol, writeback)


 

但上面的使用有一个潜在的小问题,如下:

>>> import shelve
>>> s = shelve.open('test.dat')
>>> s['x'] = ['a', 'b', 'c']
>>> s['x'].append('d')
>>> s['x']
['a', 'b', 'c']

存储的d到哪里去了呢?其实很简单,d没有写回。你把['a', 'b', 'c']存到了x,当你再次读取s['x']的时候,s['x']只是一个拷贝,而你没有把修改后的拷贝写回,所以再次读取s['x']时,它又从源中读取了一个新的拷贝,你新修改的内容自然不会出现在这个拷贝中。解决的办法之一是利用一个缓存变量,如下所示:

>>> temp = s['x']
>>> temp.append('d')
>>> s['x'] = temp
>>> s['x']
['a', 'b', 'c', 'd']

从python2.4开始还有另外的方法,就是把open方法的writeback参数赋为True。这样的话,open之后你访问过的所有条目都会缓存在内存中,当你close的时候再一次性写回硬盘。如果数据量不是很大,建议这么做。

例:基于shelve的简单数据库的代码
#database.py
import sys, shelve

def store_person(db):
    """
    Query user for data and store it in the shelf object
    """
    pid = raw_input('Enter unique ID number: ')
    person = {}
    person['name'] = raw_input('Enter name: ')
    person['age'] = raw_input('Enter age: ')
    person['phone'] = raw_input('Enter phone number: ')
    db[pid] = person

def lookup_person(db):
    """
    Query user for ID and desired field, and fetch the corresponding data from
    the shelf object
    """
    pid = raw_input('Enter ID number: ')
    field = raw_input('What would you like to know? (name, age, phone) ')
    field = field.strip().lower()
    print field.capitalize() + ':', \
        db[pid][field]

def print_help():
    print 'The available commons are: '
    print 'store :Stores information about a person'
    print 'lookup :Looks up a person from ID number'
    print 'quit :Save changes and exit'
    print '? :Print this message'

def enter_command():
    cmd = raw_input('Enter command (? for help): ')
    cmd = cmd.strip().lower()
    return cmd

def main():
    database = shelve.open('database.dat')
    try:
        while True:
            cmd = enter_command()
            if cmd == 'store':
                store_person(database)
            elif cmd == 'lookup':
                lookup_person(database)
            elif cmd == '?':
                print_help()
            elif cmd == 'quit':
                return
    finally:
        database.close()

if __name__ == '__main__':
    main()

shelve数据库实例

 

 10、xml模块

xml是实现不同语言或程序之间进行数据交换的协议,跟json差不多,但json使用起来更简单,不过,古时候,在json还没诞生的黑暗年代,大家只能选择用xml呀,至今很多传统公司如金融行业的很多系统的接口还主要是xml。

xml的格式如下,就是通过<>节点来区别数据结构的:

<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank updated="yes">2</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank updated="yes">5</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank updated="yes">69</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>

xml文件格式
xml文件格式

 

xml协议在各个语言里都是支持的,在python中可以用以下模块操作xml

在实际应用中,需要对xml配置文件进行实时修改, 

 1.增加、删除 某些节点

 2.增加,删除,修改某个节点下的某些属性

 3.增加,删除,修改某些节点的文本

 

实现思想:

使用ElementTree,先将文件读入,解析成树,之后,根据路径,可以定位到树的每个节点,再对节点进行修改,最后直接将其输出

 

具体代码如下:

from xml.etree.ElementTree import ElementTree, Element

def read_xml(in_path):
    '''读取并解析xml文件
    in_path: xml路径
    return: ElementTree
    '''
    tree = ElementTree()
    tree.parse(in_path)
    root = tree.getroot()
    return (tree, root)

def write_xml(tree, out_path):
    '''将xml文件写出
    tree: xml树
    out_path: 写出路径
    '''
    tree.write(out_path, encoding="utf-8", xml_declaration=True)

def if_match(node, kv_map):
    '''判断某个节点是否包含所有传入参数属性
    node: 节点
    kv_map: 属性及属性值组成的map
    '''
    for key in kv_map:
        # if node.get(key) != kv_map.get(key):
        if node.get(key) != kv_map[key]:
            return False
    return True

def if_match_text(node, text, mode='eq'):
    '''判断某个节点的文本与传入值的大小关系
    node: 节点
    text: 标签具体值
    mode: 数值判断方式 eq/gt/lt
    '''
    if mode == 'eq':
        express = "{0} == {1}"
    elif mode == 'gt':
        express = "{0} > {1}"
    elif mode == 'lt':
        express = "{0} < {1}"
    else:
        print('the mode is error')
        return False

    flag = eval(express.format(int(node.text), text))

    if flag:
        return True
    return False


# ---------------search -----

def find_nodes(tree, path):
    '''查找某个路径匹配的所有节点
    tree: xml树
    path: 节点路径
    '''
    return tree.findall(path)


def get_node_by_keyvalue(nodelist, kv_map):
    '''根据属性及属性值定位符合的节点,返回节点
    nodelist: 节点列表
    kv_map: 匹配属性及属性值map
    '''
    result_nodes = []
    for node in nodelist:
        if if_match(node, kv_map):
            result_nodes.append(node)
    return result_nodes

# ---------------change -----

def change_node_properties(nodelist, kv_map, is_delete=False):
    '''修改/增加/删除 节点的属性及属性值
    nodelist: 节点列表
    kv_map: 属性及属性值map
    '''
    for node in nodelist:
        for key in kv_map:
            if is_delete:
                if key in node.attrib:
                    del node.attrib[key]
            else:
                node.set(key, kv_map.get(key))

def change_node_text(nodelist, text, is_add=False, is_delete=False):
    '''改变/增加/删除一个节点的文本
    nodelist: 节点列表
    text: 更新后的文本
    '''
    for node in nodelist:
        if is_add:
            node.text = text * 3
        elif is_delete:
            node.text = ""
        else:
            node.text = text

def create_node(tag, property_map, content):
    '''新造一个节点
    tag: 节点标签
    property_map: 属性及属性值map
    content: 节点闭合标签里的文本内容
    return: 新节点
    '''
    element = Element(tag, property_map)
    element.text = content
    return element

def add_child_node(nodelist, element):
    '''给一个节点添加子节点
    nodelist: 节点列表
    element: 子节点
    '''
    for node in nodelist:
        node.append(element)

def del_node_by_tagkeyvalue(nodelist, tag, kv_map):
    '''通过属性及属性值定位一个子节点,并删除之
    nodelist: 父节点列表
    tag: 子节点标签
    kv_map: 属性及属性值列表
    '''
    for parent_node in nodelist:
        children = parent_node.getchildren()

        for child in children:
            if child.tag == tag and if_match(child, kv_map):
                parent_node.remove(child)


def del_node_by_tagtext(nodelist, tag, text, mode='eq', flag=1):
    '''通过子节点标签及文本定位节点,并删除之
    nodelist: 父节点列表
    tag: 子节点标签
    text: 标签具体值
    '''
    for parent_node in nodelist:

        children = parent_node.getchildren()

        for child in children:
            if child.tag == tag and if_match_text(child, text, mode):
                if flag == 1:
                    parent_node.remove(child)
                else:
                    root.remove(parent_node)


if __name__ == "__main__":

    # 1. 读取xml文件
    (tree, root) = read_xml("server.xml")
    print(root)

    # 2. 属性修改
    # A. 找到父节点
    nodes = find_nodes(tree, "country/neighbor")

    # B. 通过属性准确定位子节点
    result_nodes = get_node_by_keyvalue(nodes, {"direction": "E"})

    # C. 修改节点属性
    change_node_properties(result_nodes, {"age": "10", "position": 'asiaasia'})

    # D. 删除节点属性
    change_node_properties(result_nodes, {"age": "10"}, is_delete=True)

    # 3. 节点修改
    # A. 新建节点
    a = create_node("person", {"age": "15", "money": "200000"}, "this is the firest content")

    # B. 插入到父节点之下
    add_child_node(result_nodes, a)

    # 4. 删除节点
    # 定位父节点
    del_parent_nodes = find_nodes(tree, "country")

    # 根据attrib准确定位子节点并删除之
    target_del_node = del_node_by_tagkeyvalue(del_parent_nodes, "neighbor", {"direction": "N"})

    # 根据text准确定位子节点并删除之
    target_del_node = del_node_by_tagtext(del_parent_nodes, "gdppc", 60000, 'gt', 1)

    # 根据text准确定位country并删除之
    target_del_element = del_node_by_tagtext(del_parent_nodes, "rank", 20, 'gt', 0)

    # 5. 修改节点文本
    # 定位节点
    text_nodes = get_node_by_keyvalue(find_nodes(tree, "country/neighbor"), {"direction": "E"})
    change_node_text(text_nodes, "east", is_add=True)

    # 6. 输出到结果文件
    write_xml(tree, "test.xml")

xml增删改查

 

自己创建xml文档:

import xml.etree.ElementTree as ET


new_xml = ET.Element("namelist")
name = ET.SubElement(new_xml,"name",attrib={"enrolled":"yes"})
age = ET.SubElement(name,"age",attrib={"checked":"no"})
sex = ET.SubElement(name,"sex")
sex.text = '33'
name2 = ET.SubElement(new_xml,"name",attrib={"enrolled":"no"})
age = ET.SubElement(name2,"age")
age.text = '19'

et = ET.ElementTree(new_xml) #生成文档对象
et.write("test.xml", encoding="utf-8",xml_declaration=True)

ET.dump(new_xml) #打印生成的格式
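
再补充一个用 ElementTree 解析并遍历 xml 的小例子(假设已把前面"xml文件格式"中的内容保存为 data.xml,文件名为假设):

import xml.etree.ElementTree as ET

tree = ET.parse("data.xml")   # 解析整个文档
root = tree.getroot()
print(root.tag)

# 遍历根节点下的所有子节点及其属性、文本
for child in root:
    print(child.tag, child.attrib)
    for sub in child:
        print(sub.tag, sub.text)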