Python crawler for Dangdang.com

Time: 2025-03-17 12:01:45

   The goal is to crawl the book listings on Dangdang.com — book title, product URL, and number of comments — and store them in a database. (First create the database: the database is named dd, the table is books, and the fields are title, link, and comment.)
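If MySQL is running locally, the database and table can be created straight from Python with pymysql. This is a minimal sketch; the host, user, and password values are placeholders to replace with your own:

import pymysql

# connect without selecting a database so that dd can be created first
# (host/user/passwd are assumptions; use your own MySQL credentials)
conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='123456', charset='utf8mb4')
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS dd DEFAULT CHARACTER SET utf8mb4")
cursor.execute(
    "CREATE TABLE IF NOT EXISTS dd.books ("
    "  title   VARCHAR(255),"
    "  link    VARCHAR(255),"
    "  comment VARCHAR(100)"
    ")"
)
conn.commit()
cursor.close()
conn.close()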

1. Create the project: scrapy startproject dangdang

2. Go into the project folder and generate the spider file:

>scrapy genspider -t basic dd dangdang.com

3. Open the project in PyCharm.

Edit the items.py file:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class DangdangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    comment = scrapy.Field()
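A DangdangItem behaves like a dictionary restricted to the declared fields, which is why the spider below can assign to item['title'] and the pipeline can read it back. A quick interactive sketch (not one of the project files):

from dangdang.items import DangdangItem

item = DangdangItem()
item['title'] = ['book A', 'book B']   # any value can be stored; the spider stores lists
print(item['title'])
# assigning to a field that was not declared raises KeyError, which catches typos early
# item['price'] = 99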

Edit the spider file spiders/dd.py:

# -*- coding: utf-8 -*-
import scrapy
from dangdang.items import DangdangItem
from scrapy.http import Request


class DdSpider(scrapy.Spider):
    name = 'dd'
    allowed_domains = ['dangdang.com']
    # first page of the book category; the exact category path is reconstructed
    # from the loop below and may need adjusting to the page you want to crawl
    start_urls = ['http://category.dangdang.com/pg1-cp01.54.06.00.00.00.html']

    def parse(self, response):
        item = DangdangItem()
        item['title'] = response.xpath('//a[@class="pic"]/@title').extract()
        item['link'] = response.xpath('//a[@class="pic"]/@href').extract()
        item['comment'] = response.xpath('//a[@class="search_comment_num"]/text()').extract()
        yield item
        for i in range(2, 101):  # follow pages 2-100 of the same category
            url = 'http://category.dangdang.com/pg' + str(i) + '-cp01.54.06.00.00.00.html'
            yield Request(url, callback=self.parse)
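The three XPath expressions can be sanity-checked in scrapy shell before running the full crawl; the category URL below follows the pattern used in parse() and is an assumption to adjust to the page you actually target:

scrapy shell "http://category.dangdang.com/pg1-cp01.54.06.00.00.00.html"
>>> response.xpath('//a[@class="pic"]/@title').extract()[:3]
>>> response.xpath('//a[@class="pic"]/@href').extract()[:3]
>>> response.xpath('//a[@class="search_comment_num"]/text()').extract()[:3]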

Enable the pipeline in the settings.py file:

ITEM_PIPELINES = {
    'dangdang.pipelines.DangdangPipeline': 300,
}
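Depending on the Scrapy version and the site's robots.txt, the default settings may block the category pages; if nothing gets scraped, this switch in the same settings.py is worth checking (a judgment call for your own setup):

# settings.py
ROBOTSTXT_OBEY = False   # only if robots.txt rules would otherwise exclude the category pages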

Edit the pipelines.py file to write the data into the database:

# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql


class DangdangPipeline(object):
    def process_item(self, item, spider):
        # connection parameters must match your local MySQL setup
        conn = pymysql.connect(host='localhost', port=3306, user='root',
                               passwd='123456', db='dd', charset='utf8mb4')
        cursor = conn.cursor()
        for i in range(0, len(item['title'])):
            title = item['title'][i]
            link = item['link'][i]
            comment = item['comment'][i]
            # parameterized query avoids quoting problems in titles and comments
            sql = "insert into books(title, link, comment) values (%s, %s, %s)"
            cursor.execute(sql, (title, link, comment))
        conn.commit()
        cursor.close()
        conn.close()
        return item
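With all the files in place, start the crawl from the project root with scrapy crawl dd. A minimal pymysql check, assuming the same credentials as in the pipeline, confirms that rows were written:

import pymysql

conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='123456', db='dd')
cursor = conn.cursor()
cursor.execute("SELECT title, link, comment FROM books LIMIT 5")
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()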