This article shows how to fetch the content of a fund website with Python and analyze the HTML with the BeautifulSoup library. It is shared here for your reference; the details are as follows:
Fetching the webpage content with the urllib package
# import urlopen from the urllib package
from urllib.request import urlopen

response = urlopen("http://fund.eastmoney.com/fund.html")
html = response.read()
# this page is encoded in gb2312
# print(html.decode("gb2312"))

# save the html content to a file, re-encoded as utf-8
with open("1.txt", "wb") as f:
    f.write(html.decode("gb2312").encode("utf8"))
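If the intermediate file is not needed, the response can also be decoded in memory right after it is downloaded. The following is a minimal sketch of that variant; it reuses only the URL and the gb2312 encoding shown above, and the print of the first 100 characters is purely for illustration.

from urllib.request import urlopen

# download the fund list page and decode it directly (the site serves gb2312)
response = urlopen("http://fund.eastmoney.com/fund.html")
html = response.read().decode("gb2312")

# html is now a regular Python string and can be passed straight to BeautifulSoup
print(html[:100])  # peek at the first 100 characters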
Parsing the HTML with BeautifulSoup
from bs4 import BeautifulSoup

# read the file content saved in the previous step
with open("1.txt", "rb") as f:
    html = f.read().decode("utf8")

# parse the html content
soup = BeautifulSoup(html, "html.parser")
# print the page title
print(soup.title)  # <title>每日开放式基金净值表 _ 天天基金网</title>
# fund codes: the td cells with class "bzdm" inside the table with id "otable"
codes = soup.find("table", id="otable").tbody.find_all("td", "bzdm")
result = ()  # initialize an empty tuple
for code in codes:
    result += ({
        "code": code.get_text(),                                           # fund code
        "name": code.next_sibling.find("a").get_text(),                    # fund name
        "nav": code.next_sibling.next_sibling.get_text(),                  # net asset value
        "accnav": code.next_sibling.next_sibling.next_sibling.get_text()   # accumulated NAV
    },)
# print the result
print(result[0]["name"])
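Accumulating tuples works, but collecting a list of dictionaries is the more common Python idiom and makes it easy to save the results to disk. The sketch below assumes the same column layout implied by the code above (code, name, NAV, accumulated NAV in adjacent td cells) and uses find_next_sibling("td") so whitespace nodes between cells are skipped; the funds.json file name is purely illustrative and not part of the original example.

import json
from bs4 import BeautifulSoup

with open("1.txt", "rb") as f:
    html = f.read().decode("utf8")

soup = BeautifulSoup(html, "html.parser")

funds = []
for code_cell in soup.find("table", id="otable").tbody.find_all("td", "bzdm"):
    # walk the sibling cells in the same row, mirroring the example above
    name_cell = code_cell.find_next_sibling("td")
    nav_cell = name_cell.find_next_sibling("td")
    accnav_cell = nav_cell.find_next_sibling("td")
    funds.append({
        "code": code_cell.get_text(strip=True),
        "name": name_cell.find("a").get_text(strip=True),
        "nav": nav_cell.get_text(strip=True),
        "accnav": accnav_cell.get_text(strip=True),
    })

# persist the parsed records; "funds.json" is just an illustrative file name
with open("funds.json", "w", encoding="utf8") as f:
    json.dump(funds, f, ensure_ascii=False, indent=2)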
I hope this article is helpful to everyone's Python programming.
Original article: https://blog.csdn.net/github_26672553/article/details/78529734