A Complete Example of Scraping Images from a Dynamic Web Page with Python

Posted: 2022-04-09 06:39:22

Scraping dynamic web pages is one of the harder topics when learning web crawling. This article uses the well-known illustration site pixiv as an example to briefly introduce how to scrape a dynamic page.

Preface

This script takes an artist's pixiv id as input and downloads all of that artist's illustrations. Due to the limits of my own skill, the code cannot log in to pixiv automatically; you need to enter the site's cookie value by hand when running it.

Key points: building the request headers, locating the JSON endpoints, and extracting the information from the JSON.

Analysis

Creating the folder

Create a folder named after the artist's id (adjust the path for your own machine).

def makefolder(id): # create a folder named after the artist's id
    try:
        folder = os.path.join(r'E:\pixivimages', id) # raw string so '\p' is not treated as an escape
        os.mkdir(folder)
        return folder
    except FileExistsError:
        print('the folder exists!')
        exit()
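A note in passing: `os.mkdir` raises when the folder already exists, which is why the snippet above exits on `FileExistsError`. If you would rather reuse an existing folder (so a rerun can resume downloading), `os.makedirs` with `exist_ok=True` is a common alternative. This is a sketch of mine, not part of the original code, and the base path is just an example:

```python
import os

def makefolder_reuse(base, artist_id):
    # Create (or silently reuse) a per-artist folder; exist_ok=True
    # suppresses FileExistsError so repeated runs do not abort.
    folder = os.path.join(base, str(artist_id))
    os.makedirs(folder, exist_ok=True)
    return folder
```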

Getting the ids of all the artist's images

Visit the url https://pixiv.net/ajax/user/&lt;artist id&gt;/profile/all (this JSON can be found in the developer panel while browsing the artist's home page, https://www.pixiv.net/users/&lt;artist id&gt;).


The JSON contents (screenshot omitted): the keys of `body.illusts` are the illustration ids.

Convert the JSON document into a Python dict and pull out the corresponding element to obtain all the illustration ids.

def getAuthorAllPicID(id, cookie): # get the ids of all the artist's images
    url = 'https://pixiv.net/ajax/user/' + id + '/profile/all' # endpoint that lists all of the artist's works
    headers = {
        'User-Agent': user_agent,
        'Cookie': cookie,
        'Referer': 'https://www.pixiv.net/artworks/'
        # the Referer must not be omitted, or the server returns 403
    }
    res = requests.get(url, headers=headers, proxies=proxies)
    if res.status_code == 200:
        resdict = json.loads(res.content)['body']['illusts'] # parse the JSON into a dict and extract the element
        return [key for key in resdict] # return all the image ids
    else:
        print("Can not get the author's picture ids!")
        exit()
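The list comprehension works because in the `profile/all` response the illustration ids are the *keys* of `body.illusts`, not its values. A minimal offline sketch of the same extraction (the sample JSON below only mimics that shape; the ids are made up):

```python
import json

# Fabricated sample shaped like the pixiv /profile/all response body
sample = '{"error": false, "body": {"illusts": {"90000001": null, "90000002": null}}}'

resdict = json.loads(sample)['body']['illusts']
ids = [key for key in resdict]  # iterating a dict yields its keys, i.e. the ids
print(ids)  # → ['90000001', '90000002']
```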

Getting each image's real url and downloading it

Visit the url https://www.pixiv.net/ajax/illust/&lt;image id&gt;?lang=zh to see the JSON that stores the image's real address (this JSON can be found in the developer panel of the image's page, https://www.pixiv.net/artworks/&lt;image id&gt;).


Extract the useful elements from this JSON in the same way:

def getPictures(folder, IDlist, cookie): # visit the real address where each image is stored
    for picid in IDlist:
        url1 = 'https://www.pixiv.net/artworks/{}'.format(picid) # the Referer is mandatory here; without it the server returns 403
        headers = {
            'User-Agent': user_agent,
            'Cookie': cookie,
            'Referer': url1
        }
        url = 'https://www.pixiv.net/ajax/illust/' + str(picid) + '?lang=zh' # the JSON that stores the image's address
        res = requests.get(url, headers=headers, proxies=proxies)
        if res.status_code == 200:
            data = json.loads(res.content)
            picurl = data['body']['urls']['original'] # find the image's path and title in the dict
            title = data['body']['title']
            title = changeTitle(title) # adjust the title
            print(title)
            print(picurl)
            download(folder, picurl, title, headers)
        else:
            print("Can not get the urls of the pictures!")
            exit()


def changeTitle(title): # sanitize the title so it can be used as a file name
    global i
    title = re.sub('[*:]', "", title) # these characters can prevent the image from being saved
    # note there may be many other characters that are illegal in file names; add any you find to the regular expression
    if title == '無題': # many pixiv works are titled '無題' (Japanese for 'untitled'); number them so they do not overwrite one another
        title = title + str(i)
        i = i + 1
    return title


def download(folder, picurl, title, headers): # download the image into the folder
    img = requests.get(picurl, headers=headers, proxies=proxies)
    if img.status_code == 200:
        with open(folder + '\\' + title + '.jpg', 'wb') as file: # save the image
            print("downloading:" + title)
            file.write(img.content)
    else:
        print("download pictures error!")
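As the comment in `changeTitle` warns, more characters than `*` and `:` are illegal in Windows file names. A broader sanitizer might strip the full set `\ / : * ? " < > |`; this is a sketch of mine based on the Windows naming rules, not part of the original code:

```python
import re

def sanitize_title(title):
    # Remove every character that Windows forbids in file names.
    return re.sub(r'[\\/:*?"<>|]', '', title)

print(sanitize_title('one:two*three?'))  # → onetwothree
```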

Complete code

import requests
from fake_useragent import UserAgent
import json
import re
import os

global i
i = 0
ua = UserAgent() # generate a fake browser User-Agent so the ip does not get banned
user_agent = ua.random # pick a random browser
proxies = {'http': 'http://127.0.0.1:51837', 'https': 'http://127.0.0.1:51837'} # proxy; adjust to your own setup, and never forget to pass it with each request!


def makefolder(id): # create a folder named after the artist's id
    try:
        folder = os.path.join(r'E:\pixivimages', id) # raw string so '\p' is not treated as an escape
        os.mkdir(folder)
        return folder
    except FileExistsError:
        print('the folder exists!')
        exit()


def getAuthorAllPicID(id, cookie): # get the ids of all the artist's images
    url = 'https://pixiv.net/ajax/user/' + id + '/profile/all' # endpoint that lists all of the artist's works
    headers = {
        'User-Agent': user_agent,
        'Cookie': cookie,
        'Referer': 'https://www.pixiv.net/artworks/'
        # the Referer must not be omitted, or the server returns 403
    }
    res = requests.get(url, headers=headers, proxies=proxies)
    if res.status_code == 200:
        resdict = json.loads(res.content)['body']['illusts'] # parse the JSON into a dict and extract the element
        return [key for key in resdict] # return all the image ids
    else:
        print("Can not get the author's picture ids!")
        exit()


def getPictures(folder, IDlist, cookie): # visit the real address where each image is stored
    for picid in IDlist:
        url1 = 'https://www.pixiv.net/artworks/{}'.format(picid) # the Referer is mandatory here; without it the server returns 403
        headers = {
            'User-Agent': user_agent,
            'Cookie': cookie,
            'Referer': url1
        }
        url = 'https://www.pixiv.net/ajax/illust/' + str(picid) + '?lang=zh' # the JSON that stores the image's address
        res = requests.get(url, headers=headers, proxies=proxies)
        if res.status_code == 200:
            data = json.loads(res.content)
            picurl = data['body']['urls']['original'] # find the image's path and title in the dict
            title = data['body']['title']
            title = changeTitle(title) # adjust the title
            print(title)
            print(picurl)
            download(folder, picurl, title, headers)
        else:
            print("Can not get the urls of the pictures!")
            exit()


def changeTitle(title): # sanitize the title so it can be used as a file name
    global i
    title = re.sub('[*:]', "", title) # these characters can prevent the image from being saved
    # note there may be many other characters that are illegal in file names; add any you find to the regular expression
    if title == '無題': # many pixiv works are titled '無題' (Japanese for 'untitled'); number them so they do not overwrite one another
        title = title + str(i)
        i = i + 1
    return title


def download(folder, picurl, title, headers): # download the image into the folder
    img = requests.get(picurl, headers=headers, proxies=proxies)
    if img.status_code == 200:
        with open(folder + '\\' + title + '.jpg', 'wb') as file: # save the image
            print("downloading:" + title)
            file.write(img.content)
    else:
        print("download pictures error!")


def main():
    global i
    id = input('input the id of the artist:')
    cookie = input('input your cookie:') # semi-automatic crawler: log in to pixiv yourself beforehand to obtain the cookie
    folder = makefolder(id)
    IDlist = getAuthorAllPicID(id, cookie)
    getPictures(folder, IDlist, cookie)


if __name__ == '__main__':
    main()
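Since `main()` builds URLs directly from raw `input`, a stray space or non-numeric paste ends up in the request. A small guard like the one below could be added before calling `makefolder`; this is an addition of mine, not part of the original script:

```python
def clean_artist_id(raw):
    # Strip surrounding whitespace and insist on a purely numeric pixiv user id.
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError('pixiv artist ids are numeric, got: %r' % raw)
    return raw

print(clean_artist_id(' 114514 '))  # → 114514
```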

Result

(Screenshot of the downloaded illustrations omitted.)

Summary

That concludes this article on scraping images from a dynamic web page with Python. Thank you for reading, and I hope you find it useful.

Original article: https://blog.csdn.net/m0_51908955/article/details/114459226