python urllib2 - wait for the page to finish loading/redirecting before scraping?

Time: 2022-04-27 20:22:55

I'm learning to make web scrapers and want to scrape TripAdvisor for a personal project, grabbing the html using urllib2. However, I'm running into a problem where, using the code below, the html I get back is not correct as the page seems to take a second to redirect (you can verify this by visiting the url) - instead I get the code from the page that initially briefly appears.

Is there some behavior or parameter to set to make sure the page has completely finished loading/redirecting before getting the website content?
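Worth noting: urllib2 follows genuine HTTP 3xx redirects on its own. What it cannot follow are client-side redirects done in JavaScript or via a `<meta http-equiv="refresh">` tag. As a rough sketch (the sample HTML here is made up, and Python 3's stdlib html.parser is used purely for illustration), you can detect a meta-refresh in the fetched HTML and follow its target URL manually:

```python
from html.parser import HTMLParser

class MetaRefreshFinder(HTMLParser):
    """Collects the target URL of any <meta http-equiv="refresh"> tag."""
    def __init__(self):
        super().__init__()
        self.redirect_url = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("http-equiv", "").lower() == "refresh":
            # content typically looks like "0; url=http://example.com/next"
            for part in attrs.get("content", "").split(";"):
                part = part.strip()
                if part.lower().startswith("url="):
                    self.redirect_url = part[4:]

# Hypothetical interstitial page of the kind urlopen would hand back verbatim
page = '<html><head><meta http-equiv="refresh" content="0; url=http://example.com/real"></head></html>'
finder = MetaRefreshFinder()
finder.feed(page)
print(finder.redirect_url)  # http://example.com/real
```

If `redirect_url` is set, you would issue a second `urlopen` against it; a redirect injected by JavaScript, however, never shows up in the HTML at all, which is what happens here.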

import urllib2
from bs4 import BeautifulSoup

bostonPage = urllib2.urlopen("http://www.tripadvisor.com/HACSearch?geo=34438#02,1342106684473,rad:S0,sponsors:ABEST_WESTERN,style:Szff_6")
soup = BeautifulSoup(bostonPage)
print soup.prettify()

Edit: The answer is thorough, however, in the end what solved my problem was this: https://*.com/a/3210737/1157283

1 Answer

#1

Score: 6

Interestingly, the problem isn't a redirect: the page modifies its content using JavaScript. urllib2 has no JS engine; it just GETs the raw data. If you disable JavaScript in your browser, you will notice it loads essentially the same content that urllib2 returns.

import urllib2
from BeautifulSoup import BeautifulSoup

bostonPage = urllib2.urlopen("http://www.tripadvisor.com/HACSearch?geo=34438#02,1342106684473,rad:S0,sponsors:ABEST_WESTERN,style:Szff_6")
value = bostonPage.read()            # read the raw HTML once
soup = BeautifulSoup(value)          # parse it
open('test.html', 'w').write(value)  # save the same HTML to compare in a browser

Opening test.html and viewing the live page with JS disabled in your browser (easiest in Firefox: Content -> uncheck "Enable JavaScript") produces essentially identical results.
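To see concretely why the fetched HTML differs from what a browser renders, here is a small sketch (the sample page is made up, and Python 3's stdlib html.parser stands in for BeautifulSoup) that extracts visible text the way a JS-less client would:

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Gathers visible text, skipping <script> bodies -
    roughly what a fetch without a JS engine 'sees'."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

# Made-up page: the placeholder is in the HTML,
# but "Hotel results" only exists after the script runs in a browser.
page = """<html><body>
<div id="results">Loading...</div>
<script>document.getElementById('results').innerHTML = 'Hotel results';</script>
</body></html>"""

c = TextCollector()
c.feed(page)
print(c.chunks)  # ['Loading...'] - the JS-generated text never appears
```

urllib2 (or any plain HTTP client) only ever sees the "Loading..." placeholder; the real content is assembled client-side.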

So what can we do? First, we should check whether the site offers an API - scraping tends to be frowned upon: http://www.tripadvisor.com/help/what_type_of_tripadvisor_content_is_available

Travel/Hotel APIs? It looks like they might offer one, though with some restrictions.

But if we still need to scrape the JS-generated content, we can use Selenium (http://seleniumhq.org/). It's mainly used for testing, but it's easy to pick up and has fairly good docs.

I also found this question - "Scraping websites with Javascript enabled?" - and this: http://grep.codeconsult.ch/2007/02/24/crowbar-scrape-javascript-generated-pages-via-gecko-and-rest/

Hope that helps.

As a side note: the parsed soup has no read() method, so read the response once and reuse the string:

>>> import urllib2
>>> from bs4 import BeautifulSoup
>>> 
>>> bostonPage = urllib2.urlopen("http://www.tripadvisor.com/HACSearch?geo=34438#02,1342106684473,rad:S0,sponsors:ABEST_WESTERN,style:Szff_6")
>>> value = bostonPage.read()
>>> soup = BeautifulSoup(value)
>>> open('test.html', 'w').write(value)
