Tangling with crawlers ^O^

Time: 2021-04-22 20:41:25
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import requests
import re
from bs4 import BeautifulSoup

# Walk the paginated article list (7 entries per page).
for x in range(0, 637, 7):
    url = 'http://jingyan.baidu.com/user/npublic?uid=d1b612bceb0dc22ba8ffe137&pn=' + str(x)
    response = requests.get(url)
    response.encoding = 'utf-8'
    html = response.text

    # Pull the article links off the list page.
    uu = re.findall(r'<a href="(/article/\w+\.html)" title="', html)
    for t in uu:
        url = 'http://jingyan.baidu.com' + t
        r = requests.get(url)
        r.encoding = 'utf-8'
        soup = BeautifulSoup(r.text, 'html.parser')
        # Print each article's <title>.
        for i in soup.find_all('title'):
            print(i.string, '\n')
        print(url, '\n')
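
As an aside, since BeautifulSoup is already imported, the article links could probably be pulled out without running the regex over the raw HTML. This is only a rough sketch of that idea; it assumes the list page exposes plain <a> tags whose href matches /article/xxx.html, which I haven't verified:

# Sketch: extract the same article links with BeautifulSoup instead of re.findall.
# Assumption: the list-page markup is plain <a href="/article/....html"> tags.
import re
import requests
from bs4 import BeautifulSoup

list_url = 'http://jingyan.baidu.com/user/npublic?uid=d1b612bceb0dc22ba8ffe137&pn=0'
resp = requests.get(list_url)
resp.encoding = 'utf-8'
soup = BeautifulSoup(resp.text, 'html.parser')

# Filter <a> tags by the same href pattern the regex above matches.
links = [a['href'] for a in soup.find_all('a', href=re.compile(r'^/article/\w+\.html$'))]
print(links)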

After running it:
The crawl stopped partway through. Had my computer's ID been banned?
......
It turned out later that the ID hadn't been banned; the site was asking for a captcha instead.
The crawler was probably running too fast, so I tucked a little sleeper into it and fetched one page every 2 seconds:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import requests
import re
import time
from bs4 import BeautifulSoup

for x in range(0, 637, 7):
    url = 'http://jingyan.baidu.com/user/npublic?uid=d1b612bceb0dc22ba8ffe137&pn=' + str(x)
    response = requests.get(url)
    response.encoding = 'utf-8'
    html = response.text

    uu = re.findall(r'<a href="(/article/\w+\.html)" title="', html)
    for t in uu:
        url = 'http://jingyan.baidu.com' + t
        r = requests.get(url)
        r.encoding = 'utf-8'
        soup = BeautifulSoup(r.text, 'html.parser')
        for i in soup.find_all('title'):
            print(i.string, '\n')
        print(url, '\n')
        # Pause 2 seconds between article requests to slow the crawl down.
        time.sleep(2)
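
If the captcha still shows up even with the 2-second pause, one further idea (not something tested in this post) would be to reuse a single requests.Session, send a browser-like User-Agent, and randomize the delay a little. A minimal sketch; the header string and delay range are made-up values, not taken from the original script:

# Sketch of a gentler crawl: one shared session, a browser-like User-Agent,
# and a randomized delay. The specific values here are assumptions.
import random
import time
import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})

def fetch(url):
    r = session.get(url)
    r.encoding = 'utf-8'
    # Sleep somewhere between 2 and 5 seconds before the next request.
    time.sleep(random.uniform(2, 5))
    return r.text

html = fetch('http://jingyan.baidu.com/user/npublic?uid=d1b612bceb0dc22ba8ffe137&pn=0')
print(len(html))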

Don't get itchy fingers and go try this yourselves, okay. ^O^ ^O^