Scrapy URLError

Posted: 2020-12-09 01:44:20

The error message is as follows:

2015-12-03 16:05:08 [scrapy] INFO: Scrapy 1.0.3 started (bot: LabelCrawler)
2015-12-03 16:05:08 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-12-03 16:05:08 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'LabelCrawler.spiders', 'SPIDER_MODULES': ['LabelCrawler.spiders'], 'BOT_NAME': 'LabelCrawler'}
2015-12-03 16:05:08 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-03 16:05:09 [boto] DEBUG: Retrieving credentials from metadata server.
2015-12-03 16:05:09 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
File "D:\Anaconda\lib\site-packages\boto\utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "D:\Anaconda\lib\urllib2.py", line 431, in open
response = self._open(req, data)
File "D:\Anaconda\lib\urllib2.py", line 449, in _open
'_open', req)
File "D:\Anaconda\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "D:\Anaconda\lib\urllib2.py", line 1227, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "D:\Anaconda\lib\urllib2.py", line 1197, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 10051] >
2015-12-03 16:05:09 [boto] ERROR: Unable to read instance data, giving up
2015-12-03 16:05:09 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-03 16:05:09 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-03 16:05:09 [scrapy] INFO: Enabled item pipelines:
2015-12-03 16:05:09 [scrapy] INFO: Spider opened
2015-12-03 16:05:09 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-03 16:05:09 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-12-03 16:05:09 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2015-12-03 16:05:09 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)

The cause is as follows:

That particular error message is generated by boto (boto 2.38.0 py27_0), which Scrapy can use to connect to Amazon S3; Scrapy doesn't have this enabled by default. At startup boto tries to fetch AWS credentials from the EC2 instance-metadata server, and on a machine that isn't an EC2 instance that request fails with the URLError above.
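You can see what boto is doing in the "Retrieving credentials from metadata server" line above: it polls the EC2 instance-metadata endpoint for credentials. The following standalone sketch (my own reproduction, not part of Scrapy) triggers the same failure on any machine that is not an EC2 instance:

# Sketch: reproduce boto's startup metadata lookup outside EC2.
# 169.254.169.254 is a link-local address that only answers on EC2,
# so elsewhere this fails with URLError (Errno 10051 on Windows
# means "network is unreachable").
try:
    from urllib2 import urlopen, URLError  # Python 2, as in the traceback
except ImportError:
    from urllib.request import urlopen  # Python 3 fallback
    from urllib.error import URLError

METADATA_URL = 'http://169.254.169.254/latest/meta-data/'

try:
    print(urlopen(METADATA_URL, timeout=2).read())
except URLError as exc:
    print('Metadata server unreachable (not on EC2): %s' % exc)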

Solution:

1. In settings.py, add:

DOWNLOAD_HANDLERS = {'s3': None,}
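One caveat: the keys of DOWNLOAD_HANDLERS are URL schemes, and Scrapy resolves each request's handler by its lowercase scheme, so the key must be exactly 's3'; a capitalised 'S3' entry is silently ignored. And if you only want a quiet log rather than to disable the handler, silencing boto's logger should also work (a sketch of my own using only the standard logging module):

# settings.py -- hide boto's ERROR lines without changing any
# crawling behaviour; boto still runs, its messages are just dropped.
import logging
logging.getLogger('boto').setLevel(logging.CRITICAL)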

But this method didn't work for me, so I added the following to spider.py instead:

from scrapy import optional_features
optional_features.remove('boto')
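For context, scrapy.optional_features is a plain Python set (its contents are what the "Optional features available" line at the top of the log prints), so remove('boto') raises KeyError if boto was never detected. A slightly more defensive sketch of the same idea:

# Run this before the crawler starts, e.g. at the top of spider.py.
# set.discard() is a no-op when 'boto' is absent, unlike set.remove().
from scrapy import optional_features
optional_features.discard('boto')

With 'boto' removed from the feature set, Scrapy skips the S3 download handler at startup and the metadata lookup never happens (at least on the Scrapy 1.0.x used here).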

And that solved the problem.

To be honest, even with the error the crawler runs just fine, but I'm a bit obsessive about these things....