Scrapy: Crawling Douban Movie Top 250 Data and Posters, with MySQL Storage

Date: 2022-08-15 09:30:21

The complete project is available on GitHub (https://github.com/daleyzou/douban.git).
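To grab it locally:

git clone https://github.com/daleyzou/douban.git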

1. Results

Database (screenshot omitted)

Local poster images (screenshot omitted)

2. Environment

(1) PyCharm with Scrapy installed

(2) MySQL

(3) A computer with an Internet connection
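If the Python side is not set up yet, it typically comes down to one install; mysqlclient provides the MySQLdb module imported later, and Pillow is required by Scrapy's ImagesPipeline:

pip install scrapy mysqlclient Pillow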

3. Entity Class Design

(entity class diagram omitted; the fields are title, bd, star, quote, img_url, and pic_path, mirroring items.py below)

4. Code

items.py

import scrapy


class DoubanItem(scrapy.Item):
    title = scrapy.Field()     # movie title
    bd = scrapy.Field()        # details line (director, year, genre, ...)
    star = scrapy.Field()      # rating
    quote = scrapy.Field()     # one-line quote
    img_url = scrapy.Field()   # poster URL
    pic_path = scrapy.Field()  # local poster path
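Scrapy items behave like dicts; a quick interactive sanity check with made-up values:

>>> from douban.items import DoubanItem
>>> item = DoubanItem(title='The Shawshank Redemption', star='9.7')
>>> dict(item)
{'title': 'The Shawshank Redemption', 'star': '9.7'}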

doubanmovie.py (the spider class)

# -*- coding: utf-8 -*-
import scrapy

from douban.items import DoubanItem


class DoubanmovieSpider(scrapy.Spider):
    name = 'doubanmovie'
    allowed_domains = ['douban.com']
    offset = 0
    url = "https://movie.douban.com/top250?start="
    start_urls = [url + str(offset)]

    def parse(self, response):
        movies = response.xpath("//div[@class='info']")
        links = response.xpath("//div[@class='pic']//img/@src").extract()
        for each, link in zip(movies, links):
            item = DoubanItem()
            # Title
            item['title'] = each.xpath('.//span[@class="title"][1]/text()').extract()[0]
            # Details (director, year, genre, ...)
            item['bd'] = each.xpath('.//div[@class="bd"][1]/p/text()').extract()[0]
            # Rating
            item['star'] = each.xpath('.//div[@class="star"]/span[@class="rating_num"]/text()').extract()[0]
            # One-line quote; it can be missing, so check before indexing
            quote = each.xpath('.//p[@class="quote"]/span/text()').extract()
            item['quote'] = quote[0] if quote else ''
            # Poster URL, paired with this movie via zip
            item['img_url'] = link
            yield item
        # Follow the next page until all 250 movies (10 pages of 25) are crawled
        if self.offset < 225:
            self.offset += 25
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
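The offset stepping means the spider walks exactly ten listing pages; the offsets it requests can be checked in a REPL:

>>> list(range(0, 250, 25))
[0, 25, 50, 75, 100, 125, 150, 175, 200, 225]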

pipelines.py

# -*- coding: utf-8 -*-
import MySQLdb

from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline


class DoubanPipeline(object):
    def __init__(self):
        self.conn = MySQLdb.connect(host='localhost', port=3306, db='douban',
                                    user='root', passwd='root', charset='utf8')
        self.cur = self.conn.cursor()

    def process_item(self, item, spider):
        print('--------------------------------------------')
        print(item['title'])
        print('--------------------------------------------')
        try:
            # Parameterized query: the driver handles quoting and escaping,
            # so titles containing quotes cannot break the statement
            sql = ("INSERT IGNORE INTO doubanmovies(title, bd, star, quote_mv, img_url) "
                   "VALUES (%s, %s, %s, %s, %s)")
            self.cur.execute(sql, (item['title'], item['bd'], float(item['star']),
                                   item['quote'], item['title'] + ".jpg"))
            self.conn.commit()
        except Exception as e:
            print("---------------------- insert failed -------------------------------")
            print(e)
        return item

    def close_spider(self, spider):
        print('-----------------------quit-------------------------------------------')
        # Close the database connection
        self.cur.close()
        self.conn.close()


# Download the poster images
class DownloadImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        image_url = item['img_url']
        # Pass the title via meta so file_path() can rename the file
        yield Request(image_url, meta={'title': item['title']})

    def file_path(self, request, response=None, info=None):
        title = request.meta['title']     # the title passed through meta above
        ext = request.url.split('.')[-1]  # file extension, e.g. 'jpg'
        filename = u'{0}.{1}'.format(title, ext)
        print('++++++++++++++++++++++++++++++++++++++++++++++++')
        print(filename)
        print('++++++++++++++++++++++++++++++++++++++++++++++++')
        return filename
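On Scrapy 2.4 or newer, file_path() also receives the item itself, so the meta round-trip above becomes optional; a minimal variant under that assumption:

def file_path(self, request, response=None, info=None, *, item=None):
    ext = request.url.split('.')[-1]  # e.g. 'jpg'
    return u'{0}.{1}'.format(item['title'], ext)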

middlewares.py

import base64
import random

from douban.settings import PROXIES, USER_AGENTS


class RandomUserAgent(object):
    def process_request(self, request, spider):
        # Attach a randomly chosen User-Agent to every request
        useragent = random.choice(USER_AGENTS)
        request.headers.setdefault("User-Agent", useragent)


class RandomProxy(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        print('---------------------')
        print(proxy)
        if not proxy['user_passwd']:
            # Proxy without authentication
            request.meta['proxy'] = "http://" + proxy['ip_port']
        else:
            # base64-encode "username:password" for Basic auth
            base64_userpasswd = base64.b64encode(
                proxy['user_passwd'].encode('utf-8')).decode('ascii')
            request.meta['proxy'] = "http://" + proxy['ip_port']
            # This matches the Proxy-Authorization header in the proxy handshake
            request.headers['Proxy-Authorization'] = 'Basic ' + base64_userpasswd

Why HTTP proxies use base64 encoding

An HTTP proxy works in a simple way: the client opens an HTTP connection to the proxy server, and the CONNECT request carries the IP address and port of the remote host to reach, plus authorization information if authentication is required. The proxy first verifies the credentials, then connects to the remote host, and on success returns 200 to the client to confirm the tunnel. The concrete request looks like this:

CONNECT 59.64.128.198:21 HTTP/1.1
Host: 59.64.128.198:21
Proxy-Authorization: Basic bGV2I1TU5OTIz
User-Agent: OpenFetion

Proxy-Authorization carries the credentials: the string after Basic is the base64 encoding of the username and password joined as username:password.
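For example, encoding a made-up credential pair in Python:

import base64

creds = "username:password"  # hypothetical credentials
token = base64.b64encode(creds.encode('ascii')).decode('ascii')
print('Proxy-Authorization: Basic ' + token)
# prints: Proxy-Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=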

HTTP/1.0 200 Connection established

When the client receives the response above, the tunnel is established, and data intended for the remote host is from then on sent to the proxy server. After establishing each onward connection, the proxy stores it in a cache keyed by IP address and port; when more data arrives it looks up the matching connection by that key and forwards the data through it.

settings.py

USER_AGENTS = [
    'Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2)',
    'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
    'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
    'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
    'Mozilla/5.0 (Windows; U; Windows NT 5.2) AppleWebKit/525.13 (KHTML, like Gecko) Version/3.1 Safari/525.13',
    'Mozilla/5.0 (iPhone; U; CPU like Mac OS X) AppleWebKit/420.1 (KHTML, like Gecko) Version/3.0 Mobile/4A93 Safari/419.3',
    'Mozilla/5.0 (Linux; U; Android 4.0.3; zh-cn; M032 Build/IML74K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30'
]

PROXIES = [
    {"ip_port": "202.103.14.155:8118", "user_passwd": ""},
    {"ip_port": "110.73.11.21:8123", "user_passwd": ""}
]

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'douban.pipelines.DoubanPipeline': 1,
    'douban.pipelines.DownloadImagesPipeline': 100
}

# Raw string so the backslashes are not treated as escape sequences
IMAGES_STORE = r'D:\Python\Scrapy\douban\Images'
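One thing this excerpt never shows is the registration of the two downloader middlewares; without it, RandomUserAgent and RandomProxy never run. A minimal sketch, assuming they live in douban/middlewares.py:

DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.RandomUserAgent': 100,
    'douban.middlewares.RandomProxy': 110,
}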

5. Running

(1) Start the local MySQL server

(2) Create a douban database containing a doubanmovies table, matching the table name used in pipelines.py (a SQL sketch follows this list)

(3) Update the host, port, username, and password in the database-connection code

(4) Change into /douban/douban/spiders under the project directory

(5) Run scrapy crawl doubanmovie
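For step (2), a schema matching the INSERT in pipelines.py might look like the sketch below; the column types and lengths are assumptions, not taken from the original project:

CREATE DATABASE IF NOT EXISTS douban DEFAULT CHARACTER SET utf8;

USE douban;
CREATE TABLE IF NOT EXISTS doubanmovies (
    title    VARCHAR(100) NOT NULL,   -- movie title
    bd       VARCHAR(500),            -- details line
    star     FLOAT,                   -- rating
    quote_mv VARCHAR(500),            -- one-line quote
    img_url  VARCHAR(200),            -- stored poster filename
    UNIQUE KEY uk_title (title)       -- lets INSERT IGNORE skip duplicates
);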