1. Introduction
Modern web pages often ship only a bare HTML skeleton, with the actual data fetched via AJAX or similar mechanisms and filled in afterwards. This pattern is an obstacle to crawlers, but with some practice the data is not hard to obtain. Taking Tmall comments as an example, this article briefly walks through fetching dynamically loaded data and cleaning it with a custom Pipeline.
2. Crawling Product Information
Visiting s.taobao.com/search?q=<your keyword> easily yields the search result page. It is not hard to discover that Taobao embeds the search result data in a script inside that page's head tag; it can be extracted with a simple XPath expression and assembled into a JSON document. You will notice that the Chinese parts are encoded as Unicode escape sequences, which you can decode with a convert function of your own.
Here is a simple convert function that decodes only the Unicode escapes in a text:
public class Unicode2utf8Utils {
    public static String convert(String unicodeString) {
        StringBuilder stringBuilder = new StringBuilder();
        int i;
        int pos = 0;
        while ((i = unicodeString.indexOf("\\u", pos)) != -1) {
            // Copy the plain text before this escape sequence
            stringBuilder.append(unicodeString.substring(pos, i));
            if (i + 5 < unicodeString.length()) {
                // Decode the four hex digits of the escape into a single char
                stringBuilder.append((char) Integer.parseInt(unicodeString.substring(i + 2, i + 6), 16));
                pos = i + 6;
            } else {
                // Truncated escape at the end of the input: keep it verbatim
                pos = i;
                break;
            }
        }
        // Copy whatever follows the last escape sequence
        stringBuilder.append(unicodeString.substring(pos));
        return stringBuilder.toString();
    }
}
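A quick usage sketch (the escaped string here is just an illustration):

String escaped = "\\u5929\\u732b";
// Prints 天猫
System.out.println(Unicode2utf8Utils.convert(escaped));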
Because Taobao sends this data in JSON format, the JsonPathSelector utility makes it easy to pull out the strings we want. Before using it, though, the data needs some cleanup so that it becomes a well-formed, easily parsed JSON text. (The cleanup below was worked out by inspecting what was actually crawled; you will have to experiment to see what needs stripping in your case.)
else if (page.getUrl().regex(urlList).match()) {
    // Grab the page script and decode the Unicode escapes into Chinese
    String origin = Unicode2utf8Utils.convert(page.getHtml().xpath("//head/script[7]").toString());
    // Extract the JSON object from the script
    Matcher jsonMatcher = Pattern.compile("\\{.*\\}").matcher(origin);
    // If the JSON data was found
    if (jsonMatcher.find()) {
        // Strip the fields that break parsing
        String jsonString = jsonMatcher.group().replaceAll("\"navEntries\".*?,", "")
                .replaceAll(",\"p4pdata\".*?\\\"\\}\"", "").replaceAll("\"spuList\".*?,", "");
        // Select the auctions list
        List<String> auctions = new JsonPathSelector("mods.itemlist.data.auctions[*]").selectList(jsonString);
For the syntax accepted by JsonPathSelector (JsonPath), see JSONPath - XPath for JSON. Usage is simple: construct a JsonPathSelector and call its select or selectList method to retrieve the elements you want. For processing each element, I recommend Alibaba's fastjson library.
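Before continuing with the crawler, here is a minimal standalone sketch of JsonPathSelector (the JSON literal below is made up purely to illustrate the path syntax):

String json = "{\"mods\":{\"itemlist\":{\"data\":{\"auctions\":[{\"title\":\"a\"},{\"title\":\"b\"}]}}}}";
// selectList returns every matching element as a JSON string
List<String> items = new JsonPathSelector("mods.itemlist.data.auctions[*]").selectList(json);
// select returns only the first match
String firstTitle = new JsonPathSelector("mods.itemlist.data.auctions[0].title").select(json);

With that in hand, the crawler processes each auction as follows: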
        // For each product item
        for (String auction : auctions) {
            Map map = JSON.parseObject(auction);
            // Get the comment URL
            String commentUrl = (String) map.get("comment_url");
            if (commentUrl == null) continue;
            // Extract the item ID
            Matcher itemIdMatcher = Pattern.compile("id=\\d+").matcher(commentUrl);
            String itemIdString = null;
            if (itemIdMatcher.find()) itemIdString = itemIdMatcher.group().replace("id=", "");
            else continue;
            // Extract the shop ID
            String shopLink = new JsonPathSelector("shopLink").select(auction);
            Matcher shopIdMatcher = Pattern.compile("user_number_id=\\d+").matcher(shopLink);
            String shopIdString = null;
            if (shopIdMatcher.find()) shopIdString = shopIdMatcher.group().replace("user_number_id=", "");
            else continue;
            // Record the information
            map.put("itemId", itemIdString);
            map.put("sellerId", shopIdString);
            page.putField(itemIdString, map);
3. Crawling Product Comments
At this point the crawler can fetch the first page of products for each keyword. So how do we get their comments? Note that the code above extracted itemId and sellerId; with these two values we can request the comments.
Open a comment page and inspect it with the browser's developer tools. Going through the requests listed in the Network panel one by one, we find that for Taobao comments a request like this is sent:
https://rate.taobao.com/feedRateList.htm?auctionNumId=551058447857&userNumId=3167078258&currentPageNum=1&pageSize=20&rateType=&orderType=sort_weight&attribute=&sku=&hasSku=false&folded=0&ua=094%23UVQ6qM6U6l36u6ty666666BojjfaWoDLGsIU6Sf5RfSra4LjKWEeohUmxRUbiVNjH6Q6tusO%2Fbxm6M6QjLTM%2BR4t66W6nSkS1aQ6tHI6a486atAt6tlORWGWZHnBps8hHD80ee8tloiOPTTML6QtKBSv%2B6n0%2FxnKTeCbb1gD%2BlJiqwmPHGbgsSPNHG%2Fs%2FzmRR4pkHLd0%2BNJ9fpEJ%2FD76rYHg9yg9INGuG3hVxw01f9A2qP0vzP16Jjblbb%2FxxNnBuAmVHHEes9Jvkohr1G03Sv0yDg%2FAb9bnGzTspoo9%2B2raJp00HTElcncOzLSPIzvjT9n9zyoza5a2V7L%2BHpZYCWLYD7m%2F4est8Rws41d1V2R2D1jxDbS7Cn8Ez7C9w3FR3RoiAo0VcxtIsPgvI7SQVEjh9HS1bepYoRZep8Hws5zeVgc%2BApH6k0jKpgPs%2FzsBLuDczud0%2BNaAagcbpbHNvVTWALd414oEy3hV47Pv%2BU6Aa1ce1PZgkjc62Ty1ex77LRFAwLAk6M64jLTM%2B5PyzTXNAeTI09%2F0PkZvfRCGC15qjLTXi5Pyz9fNAehU09%2F0CRut66lLAeoM%2FDyr6M6ujLTWvMon0CR%3D&_ksTS=1498362441431_2073&callback=jsonp_tbcrate_reviews_list
For Tmall comments, we find a request like this:
https://rate.tmall.com/list_detail_rate.htm?itemId=549440936281&spuId=846223934&sellerId=1996270577&order=3&currentPage=1&append=0&content=1&tagId=&posi=&picture=&ua=096UW5TcyMNYQwiAiwQRHhBfEF8QXtHcklnMWc%3D%7CUm5Ockt%2FSnRPcEh0T3pCfCo%3D%7CU2xMHDJ7G2AHYg8hAS8XIw0tA18%2BWDRTLVd5L3k%3D%7CVGhXd1llXGhdY1hnX2NYbVVrXGFDf0tyT3FJdEF8RHBNcU1zSnRaDA%3D%3D%7CVWldfS0SMg02Dy8QMB4jHzFnMQ%3D%3D%7CVmhIGCUFOBgkGiIePgc6BzsbJxkiFzcDPwAgHCIZLAw5AzxqPA%3D%3D%7CV2xMHDJXLwEhHSIcPAEhHSMeJHIk%7CWGFBET8RMQo%2BBiYdKBAwCz4GPmg%2B%7CWWBAED4QMAgxCioWKREtDTcPMApcCg%3D%3D%7CWmNDEz0TMwoxCSkVKhQvDzUBNQBWAA%3D%3D%7CW2NDEz0TM2NaZVx8QH9Dfl5gWmBAfkN8XmJWblBsU2tLd0p%2FX2NbDS0QMB4wECUcIRxKHA%3D%3D%7CXGVYZUV4WGdHe0J%2BXmBYYkJ7W2VYeExsWXlDY19nMQ%3D%3D&isg=AoKCefv4bxJaWnPIzoTSiRgq04hIRrnyMb30i8ybA_WoHyOZtOJ-fD4dtS2Y&needFold=0&_ksTS=1498362526461_1756&callback=jsonp1757
Comparing these two URLs, we see that for the Tmall link the required parameters are itemId, sellerId, and currentPage. The first two can be pulled out of the product's comment_url and shopLink fields with regular expressions, and the last one is simply the page number. The Taobao link is very similar, only with different parameter names. We can therefore issue AJAX requests for both kinds of links ourselves: rebuilding the links from these parameters successfully returns the comment data.
if (map.get("comment_count") != null && !map.get("comment_count").toString().isEmpty()) {
    if (commentUrl.contains("taobao")) {
        for (int i = 1; i <= 5; ++i) {
            String taoBaoUrl = "https://rate.taobao.com/feedRateList.htm?auctionNumId=" + itemIdString + "&userNumId=" + shopIdString + "&currentPageNum=" + i;
            page.addTargetRequest(taoBaoUrl);
        }
    } else {
        for (int i = 1; i <= 5; ++i) {
            String tmallUrl = "https://rate.tmall.com/list_detail_rate.htm?itemId=" + itemIdString + "&sellerId=" + shopIdString + "&currentPage=" + i;
            page.addTargetRequest(tmallUrl);
        }
    }
}
The comment data itself is extracted as follows:
if (page.getUrl().regex(tmallComment).match()) {
    String text = page.getRawText().replace("\"rateDetail\":", "");
    // Record the information
    Map map = JSON.parseObject(text);
    if (map.get("rateList") == null) return;
    Matcher itemIdMatcher = Pattern.compile("itemId=\\d+").matcher(page.getRequest().getUrl());
    String itemIdString = null;
    if (itemIdMatcher.find()) itemIdString = itemIdMatcher.group().replace("itemId=", "");
    Matcher shopIdMatcher = Pattern.compile("sellerId=\\d+").matcher(page.getRequest().getUrl());
    String shopIdString = null;
    if (shopIdMatcher.find()) shopIdString = shopIdMatcher.group().replace("sellerId=", "");
    Matcher currentPageMatcher = Pattern.compile("currentPage=\\d+").matcher(page.getRequest().getUrl());
    String currentPageString = null;
    if (currentPageMatcher.find()) currentPageString = currentPageMatcher.group().replace("currentPage=", "");
    map.put("currentPage", currentPageString);
    map.put("itemId", itemIdString);
    map.put("sellerId", shopIdString);
    map.put("url", page.getRequest().getUrl());
    page.putField(itemIdString, map);
} else if (page.getUrl().regex(tbComment).match()) {
    Matcher jsonMatcher = Pattern.compile("\\{.*\\}").matcher(page.getRawText());
    if (jsonMatcher.find()) {
        Map map = JSON.parseObject(jsonMatcher.group());
        // If the anti-crawler check was triggered, report it and bail out
        if (map.get("url") != null && map.get("url").toString().matches(urlSec)) {
            System.out.println("Meet the anti-Spider!");
            return;
        }
        if (map.get("comments") == null) return;
        Matcher itemIdMatcher = Pattern.compile("auctionNumId=\\d+").matcher(page.getRequest().getUrl());
        String itemIdString = null;
        if (itemIdMatcher.find()) itemIdString = itemIdMatcher.group().replace("auctionNumId=", "");
        Matcher shopIdMatcher = Pattern.compile("userNumId=\\d+").matcher(page.getRequest().getUrl());
        String shopIdString = null;
        if (shopIdMatcher.find()) shopIdString = shopIdMatcher.group().replace("userNumId=", "");
        Matcher currentPageMatcher = Pattern.compile("currentPageNum=\\d+").matcher(page.getRequest().getUrl());
        String currentPageString = null;
        if (currentPageMatcher.find()) currentPageString = currentPageMatcher.group().replace("currentPageNum=", "");
        map.put("currentPage", currentPageString);
        map.put("itemId", itemIdString);
        map.put("sellerId", shopIdString);
        map.put("url", page.getRequest().getUrl());
        page.putField(itemIdString, map);
    }
}
As you can see, handling Taobao and Tmall comments is very similar. However, Tmall comments come back unimpeded, while Taobao guards its comment endpoint with anti-crawler measures. None of my simple evasion methods worked, and I was unable to work out how the detection operates, so this crawler can effectively fetch only one page of Taobao comments per product. Getting more would require finding a way around the anti-crawler checks, which is why the title only promises Tmall comments.
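For reference, the "simple methods" I mean are Site-level tweaks in webmagic, along the lines of the sketch below; the values are illustrative (a custom User-Agent, a Referer header, and throttling), and none of this was enough to get past Taobao's check:

Site site = Site.me()
        .setUserAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
        .addHeader("Referer", "https://www.taobao.com/")
        // Slow down to one request every 3 seconds
        .setSleepTime(3000)
        .setRetryTimes(3);

This Site object would be returned from the PageProcessor's getSite() method.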
4. Data Cleaning
webmagic's Pipeline is customizable, so data cleaning can be done there and run automatically after each page is crawled. Here is my custom Pipeline:
import com.alibaba.fastjson.JSON;
import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;
import us.codecraft.webmagic.utils.FilePersistentBase;

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Iterator;
import java.util.Map;

public class MyTBJsonPipeline extends FilePersistentBase implements Pipeline {
    public MyTBJsonPipeline(String path) {
        this.setPath(path);
    }

    @Override
    public void process(ResultItems resultItems, Task task) {
        try {
            Iterator iterator = resultItems.getAll().values().iterator();
            while (iterator.hasNext()) {
                Map map = (Map) iterator.next();
                // Name product files by itemId; comment files get a suffix
                String name = map.get("itemId").toString();
                if (map.get("raw_title") == null) {
                    // No raw_title means this record is a comment page, not a product
                    if (map.get("rateList") != null)
                        name += "_tmall_comment";
                    else name += "_taobao_comment";
                    name += "_" + map.get("currentPage");
                }
                // Write each record to its own JSON file
                PrintWriter printWriter = new PrintWriter(new FileWriter(this.getFile(path + name + ".json")));
                printWriter.write(JSON.toJSONString(map));
                printWriter.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
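Hooking the Pipeline into a crawler is then straightforward. A sketch, where TaoBaoProcessor stands in for the PageProcessor containing the code above and the output path is arbitrary:

Spider.create(new TaoBaoProcessor())
        .addUrl("https://s.taobao.com/search?q=dress")
        // Write cleaned results as per-item JSON files
        .addPipeline(new MyTBJsonPipeline("/data/taobao/"))
        .run();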
I store each product in its own file and store its comments separately under related file names (for example, 549440936281.json for a product and 549440936281_tmall_comment_1.json for page 1 of its Tmall comments), which keeps the information clearly organized.
The full code is available on my GitHub: https://github.com/CieloSun/FashionSpider