Python Web Scraping: the Urllib Library (Part 2)

Posted: 2024-03-02 20:31:45

Encoding and decoding

Problem introduction

For example:

https://www.baidu.com/s?wd=章若楠

https://www.baidu.com/s?wd=%E7%AB%A0%E8%8B%A5%E6%A5%A0

The seemingly garbled string in the second URL is just 章若楠 in percent-encoded form.

If 章若楠 is written into the URL directly, the request fails with an error, because the URL sent over HTTP may only contain ASCII characters.
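A minimal sketch of what goes wrong, assuming the Chinese characters are passed to urlopen() unencoded:

import urllib.request

# Non-ASCII characters cannot be sent in the HTTP request line as-is;
# this call fails with a UnicodeEncodeError ("'ascii' codec can't encode characters ...")
urllib.request.urlopen("https://www.baidu.com/s?wd=章若楠")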

So for GET requests we can use the quote() method.

The quote() method for GET requests

urllib.parse.quote("章若楠") converts the Chinese characters in a parameter into their percent-encoded (URL-encoded) form.
import urllib.request
import urllib.parse

url = "https://www.baidu.com/s?wd="

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
}
# Percent-encode the characters 章若楠 with urllib.parse.quote
name = urllib.parse.quote("章若楠")
# Concatenate to get the final URL
url = url + name

# urlopen() cannot take a dict of headers, so headers cannot be passed to it directly
# Request object customization
request = urllib.request.Request(url=url, headers=headers)
# Simulate a browser sending the request
response = urllib.request.urlopen(request)
# Get the response content
content = response.read().decode("utf-8")

print(content)

The query result comes back successfully.


The urlencode() method for GET requests

Use case: when there are multiple parameters.

For example, the URL below carries two parameters, 章若楠 and 女. quote() could still be used, but encoding and concatenating each value by hand is tedious.

url = "https://www.baidu.com/s?wd=章若楠&sex=女"

But with the urlencode() method it is much easier:

data = {
    "wd": "章若楠",
    "sex": "女",
}
a = urllib.parse.urlencode(data)
print(a)
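For reference, the printed result should look like this (each value percent-encoded as UTF-8 and the pairs joined with &):

wd=%E7%AB%A0%E8%8B%A5%E6%A5%A0&sex=%E5%A5%B3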

Full code example:

import urllib.request
import urllib.parse

url = "https://www.baidu.com/s?"

data = {
    "wd": "章若楠",
    "sex": "女",
    "location": "浙江"
}
new_data = urllib.parse.urlencode(data)
# Build the final request resource path
url = url + new_data

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
}

# Request object customization
request = urllib.request.Request(url=url, headers=headers)

# Simulate a browser sending the request
response = urllib.request.urlopen(request)

# Get the page source
content = response.read().decode("utf-8")
print(content)


POST request: Baidu Translate

(1) The parameters of a POST request must be encoded:

new_data = urllib.parse.urlencode(data)

(2) The encoded parameters are placed in the data argument of the Request object:

request = urllib.request.Request(url=url, data=new_data, headers=headers)

(3) After urlencode(), the encode() method must also be called (the data must be bytes), otherwise an error is raised:

new_data = urllib.parse.urlencode(data).encode("utf-8")

But even after encoding data with encode(), the printed content still looks garbled: the Chinese appears as \uXXXX escape sequences. At this point the content string needs to be converted into a JSON object.
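A minimal illustration of the difference (the response snippet below is made up; only the \uXXXX escaping behaviour matters):

import json

# In the raw decoded string the Chinese characters still appear as \uXXXX escapes;
# json.loads() parses them back into a normal Python object with readable text.
raw = '{"data": [{"k": "spider", "v": "\\u8718\\u86db"}]}'
print(raw)              # escapes are still visible in the plain string
print(json.loads(raw))  # {'data': [{'k': 'spider', 'v': '蜘蛛'}]}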

The full code is as follows:

import urllib.request
import urllib.parse
import json

# POST request
url = "https://fanyi.baidu.com/sug"

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
}

data = {
    "kw": "spider",
}
# The POST parameters must be encoded (urlencode, then encode to bytes)
new_data = urllib.parse.urlencode(data).encode("utf-8")

# Request object customization
# POST parameters are not appended to the URL; they are passed via the data argument
request = urllib.request.Request(url=url, data=new_data, headers=headers)

# Simulate a browser sending the request
response = urllib.request.urlopen(request)

# Get the response data
content = response.read().decode("utf-8")

# Convert the string into a JSON object
obj = json.loads(content)
print(obj)
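Since json.loads() returns an ordinary Python dict, individual fields can be read directly. A rough sketch, assuming the usual errno/data layout of the sug response (the field names are an assumption; adjust them to what you actually receive):

# Hypothetical field layout; check the real response before relying on it
if obj.get("errno") == 0:
    for item in obj.get("data", []):
        print(item.get("k"), "->", item.get("v"))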

POST request: Baidu Translate's detailed translation

Baidu Translate also has a detailed-translation interface; its location is shown in the figure below.

Then, after some fiddling around, we get the code below:

import urllib.request
import urllib.parse
import json

# POST request
url = "https://fanyi.baidu.com/v2transapi?from=en&to=zh"

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
}

data = {
    "from": "en",
    "to": "zh",
    "query": "love",
    "transtype": "realtime",
    "simple_means_flag": "3",
    "sign": "198772.518981",
    "token": "cdd52406abbf29bdf0d424e2889d9724",
    "domain": "common",
    "ts": "1709212364268"
}
# The POST parameters must be encoded
new_data = urllib.parse.urlencode(data).encode("utf-8")

# Request object customization
# POST parameters are not appended to the URL; they are passed via the data argument
request = urllib.request.Request(url=url, data=new_data, headers=headers)

# Simulate a browser sending the request
response = urllib.request.urlopen(request)

# Get the response data
content = response.read().decode("utf-8")

# Convert the string into a JSON object
obj = json.loads(content)
print(obj)

And we get the following result:

wtf, what happened   o((⊙﹏⊙))o

Caught by the anti-crawling measures again  o(╥﹏╥)o

Let's look at the request headers. A real browser sends all of these fields along with the request, while we sent only a User-Agent, so we were obviously found out.

So let's add all of these fields to the headers:

headers = {
    "Accept": "*/*",
    # "Accept-Encoding": "gzip, deflate, br, zstd",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Acs-Token": "1709208007739_1709212364277_2rynw+ePk52zCeBqFrnpVyboCMK+LPtSWG7fFss9tB46byfbwCQfYELvJyCkm1etX3UxQpeq1u0RZgDNoBMV4TZMgoBePG0jlPUTwV8YiGfTxR3L02wu6DP3wBEe6UBFONiLTSWESnmEOBRoQ3yX7KBs+A8w1QV8BHgguDCGc9Q/foG9jowZncaCVGl2AYTUbzGjkPg8xb4EZ62L2FIjpVZ1oVatDtgSFqtAVEO5W3z7tRVaI0JxFF2kkhw6bxnVHPNSiSkoKD3AXdrFhj2GatPAyn9YXlLw20qoyE+UjZIyRat4xdWkFsdTG/kvPlVLTh7qoabs+NaNVC8a21dlyWxgBsmrTbUEojKiYyaURQG0COiv/u0teilELxPLCo+FwatSE0yD8alqLGXSbi6v/yOOphDWau7zRYMynAEaxaLrQTuOgHfvllflSel+GMBctvdS6RtLdhQb+pIa3Sp1c8C2JvJ/DM/1Th2s+7pdaqE=",
    "Connection": "keep-alive",
    "Content-Length": "152",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Cookie": 'BIDUPSID=2DC3FD925EDB9E9310057AAA4313A978; PSTM=1679797623; BAIDUID=2DC3FD925EDB9E939299595287C491C9:FG=1; REALTIME_TRANS_SWITCH=1; FANYI_WORD_SWITCH=1; HISTORY_SWITCH=1; SOUND_SPD_SWITCH=1; SOUND_PREFER_SWITCH=1; MCITY=-75%3A; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BAIDUID_BFESS=2DC3FD925EDB9E939299595287C491C9:FG=1; ZFY=KUd37zEBYu5HusDOqV1jxs1znlRRBUOop2UvOac44TU:C; RT="z=1&dm=baidu.com&si=8d0cddbe-c90e-4db5-b3a0-3fd3a4f6ea21&ss=lt6jrqb7&sl=3&tt=rei&bcn=https%3A%2F%2Ffclog.baidu.com%2Flog%2Fweirwood%3Ftype%3Dperf&nu=9y8m6cy&cl=6qwh&ld=6pgv&ul=7z34&hd=7z3q"; BA_HECTOR=2k802l8l0l010184242k04a598vrdh1iu0cmp1t; H_PS_PSSID=40009_39661_40206_40211_40215_40222_40246_40274_40294_40289_40286_40317_40080; PSINO=1; delPer=0; APPGUIDE_10_6_9=1; Hm_lvt_64ecd82404c51e03dc91cb9e8c025574=1709210293; Hm_lpvt_64ecd82404c51e03dc91cb9e8c025574=1709210293; ab_sr=1.0.1_MGY0MDFkY2E0MjFjNzAwODk0Yjg1NTk1M2ZmYmUxMjlmMGEyZGRjNTk0MDM4NWE2NmM0ZmQzNzE4NzhhMDBhZWM5M2QxNDEwNzljNjhlNTE1MThhMTg3OWI0NmQ4OTAwOTlhMGExODIxNWM3ZDVmNmJmZTQ1MTIyM2JkNDIzMTRhOWMzYzM2ZTFjZTcyZDQ4MTUxNzBlZjE2NmFmODczYw==',
    "Host": 'fanyi.baidu.com',
    "Origin": 'https://fanyi.baidu.com',
    "Referer": 'https://fanyi.baidu.com/?ext_channel=DuSearch',
    "Sec-Ch-Ua": '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
    'Sec-Ch-Ua-Mobile': '?0',
    'Sec-Ch-Ua-Platform': '"Windows"',
    'Sec-Fetch-Dest': 'empty',
    'Sec-Fetch-Mode': 'cors',
    'Sec-Fetch-Site': 'same-origin',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
    'X-Requested-With': 'XMLHttpRequest'
}

Did we succeed?!

Nope. So what went wrong?

The Accept-Encoding header asks for a compressed response (gzip, deflate, br), not utf-8 text, so if we send it the body we get back cannot be decoded with utf-8. That is why we do not carry this header line.
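If you did want to keep the Accept-Encoding header, the response body would have to be decompressed before decoding. A rough sketch for the gzip case only, reusing the response object from the code above (br/zstd would need extra third-party libraries):

import gzip

raw = response.read()
# Only decompress when the server actually answered with gzip
if response.headers.get("Content-Encoding") == "gzip":
    raw = gzip.decompress(raw)
content = raw.decode("utf-8")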

In fact, you can delete about 90% of these headers and keep only the Cookie, because only the Cookie is actually used for verification here.

import urllib.request
import urllib.parse
import json

# POST request
url = "https://fanyi.baidu.com/v2transapi?from=en&to=zh"

headers = {
    # "Accept": "*/*",
    # "Accept-Encoding": "gzip, deflate, br, zstd",
    # "Accept-Language": "zh-CN,zh;q=0.9",
    # "Acs-Token": "1709208007739_1709212364277_2rynw+ePk52zCeBqFrnpVyboCMK+LPtSWG7fFss9tB46byfbwCQfYELvJyCkm1etX3UxQpeq1u0RZgDNoBMV4TZMgoBePG0jlPUTwV8YiGfTxR3L02wu6DP3wBEe6UBFONiLTSWESnmEOBRoQ3yX7KBs+A8w1QV8BHgguDCGc9Q/foG9jowZncaCVGl2AYTUbzGjkPg8xb4EZ62L2FIjpVZ1oVatDtgSFqtAVEO5W3z7tRVaI0JxFF2kkhw6bxnVHPNSiSkoKD3AXdrFhj2GatPAyn9YXlLw20qoyE+UjZIyRat4xdWkFsdTG/kvPlVLTh7qoabs+NaNVC8a21dlyWxgBsmrTbUEojKiYyaURQG0COiv/u0teilELxPLCo+FwatSE0yD8alqLGXSbi6v/yOOphDWau7zRYMynAEaxaLrQTuOgHfvllflSel+GMBctvdS6RtLdhQb+pIa3Sp1c8C2JvJ/DM/1Th2s+7pdaqE=",
    # "Connection": "keep-alive",
    # "Content-Length": "152",
    # "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Cookie": 'BIDUPSID=2DC3FD925EDB9E9310057AAA4313A978; PSTM=1679797623; BAIDUID=2DC3FD925EDB9E939299595287C491C9:FG=1; REALTIME_TRANS_SWITCH=1; FANYI_WORD_SWITCH=1; HISTORY_SWITCH=1; SOUND_SPD_SWITCH=1; SOUND_PREFER_SWITCH=1; MCITY=-75%3A; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BAIDUID_BFESS=2DC3FD925EDB9E939299595287C491C9:FG=1; ZFY=KUd37zEBYu5HusDOqV1jxs1znlRRBUOop2UvOac44TU:C; RT="z=1&dm=baidu.com&si=8d0cddbe-c90e-4db5-b3a0-3fd3a4f6ea21&ss=lt6jrqb7&sl=3&tt=rei&bcn=https%3A%2F%2Ffclog.baidu.com%2Flog%2Fweirwood%3Ftype%3Dperf&nu=9y8m6cy&cl=6qwh&ld=6pgv&ul=7z34&hd=7z3q"; BA_HECTOR=2k802l8l0l010184242k04a598vrdh1iu0cmp1t; H_PS_PSSID=40009_39661_40206_40211_40215_40222_40246_40274_40294_40289_40286_40317_40080; PSINO=1; delPer=0; APPGUIDE_10_6_9=1; Hm_lvt_64ecd82404c51e03dc91cb9e8c025574=1709210293; Hm_lpvt_64ecd82404c51e03dc91cb9e8c025574=1709210293; ab_sr=1.0.1_MGY0MDFkY2E0MjFjNzAwODk0Yjg1NTk1M2ZmYmUxMjlmMGEyZGRjNTk0MDM4NWE2NmM0ZmQzNzE4NzhhMDBhZWM5M2QxNDEwNzljNjhlNTE1MThhMTg3OWI0NmQ4OTAwOTlhMGExODIxNWM3ZDVmNmJmZTQ1MTIyM2JkNDIzMTRhOWMzYzM2ZTFjZTcyZDQ4MTUxNzBlZjE2NmFmODczYw==',
    # "Host": 'fanyi.baidu.com',
    # "Origin": 'https://fanyi.baidu.com',
    # "Referer": 'https://fanyi.baidu.com/?ext_channel=DuSearch',
    # "Sec-Ch-Ua": '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
    # 'Sec-Ch-Ua-Mobile': '?0',
    # 'Sec-Ch-Ua-Platform': '"Windows"',
    # 'Sec-Fetch-Dest': 'empty',
    # 'Sec-Fetch-Mode': 'cors',
    # 'Sec-Fetch-Site': 'same-origin',
    # 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
    # 'X-Requested-With': 'XMLHttpRequest'
}

data = {
    "from": "en",
    "to": "zh",
    "query": "love",
    "transtype": "realtime",
    "simple_means_flag": "3",
    "sign": "198772.518981",
    "token": "cdd52406abbf29bdf0d424e2889d9724",
    "domain": "common",
    "ts": "1709212364268"
}
# The POST parameters must be encoded
new_data = urllib.parse.urlencode(data).encode("utf-8")

# Request object customization
# POST parameters are not appended to the URL; they are passed via the data argument
request = urllib.request.Request(url=url, data=new_data, headers=headers)

# Simulate a browser sending the request
response = urllib.request.urlopen(request)

# Get the response data
content = response.read().decode("utf-8")

# Convert the string into a JSON object
obj = json.loads(content)
print(obj)

This is all the verification Baidu Translate needs: notice that not even the User-Agent is required. Different websites' anti-crawling mechanisms require different header values; here, a single Cookie is enough.


Summary

Tired now, I'll write the summary another time ヾ( ̄▽ ̄) Bye~Bye~