Java Web Crawler "Hello World": Fetching the Baidu Homepage with HttpClient

Posted: 2023-03-08 18:56:41

1. Create a Maven project

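The original post illustrates this step with an IDE screenshot. As a rough command-line equivalent (a sketch, not part of the original post; it reuses the groupId and artifactId from the pom.xml shown below), the project can also be generated with the standard Maven quickstart archetype:

 mvn archetype:generate -DgroupId=com.gxy.blogs -DartifactId=Demo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false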

2. HttpClient Maven coordinates

 <dependency>
     <groupId>org.apache.httpcomponents</groupId>
     <artifactId>httpclient</artifactId>
     <version>4.5.5</version>
 </dependency>

Add the HttpClient jar dependency to the pom.xml file:

 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.gxy.blogs</groupId>
     <artifactId>Demo</artifactId>
     <version>0.0.1-SNAPSHOT</version>

     <dependencies>
         <dependency>
             <groupId>org.apache.httpcomponents</groupId>
             <artifactId>httpclient</artifactId>
             <version>4.5.5</version>
         </dependency>
     </dependencies>
 </project>

3. Main code

 package cha01;

 import java.io.IOException;

 import org.apache.http.HttpEntity;
 import org.apache.http.client.methods.CloseableHttpResponse;
 import org.apache.http.client.methods.HttpGet;
 import org.apache.http.impl.client.CloseableHttpClient;
 import org.apache.http.impl.client.HttpClients;
 import org.apache.http.util.EntityUtils;

 public class Test {
     public static void main(String[] args) throws IOException {
         // Create a default HttpClient instance
         CloseableHttpClient httpclient = HttpClients.createDefault();
         // Build a GET request for the Baidu homepage
         HttpGet httpget = new HttpGet("http://www.baidu.com");
         // Send the request and receive the response
         CloseableHttpResponse response = httpclient.execute(httpget);
         // The entity holds the response body
         HttpEntity entity = response.getEntity();
         System.out.println(entity);
         // Decode the body as a UTF-8 string (the HTML of the page)
         String page = EntityUtils.toString(entity, "utf-8");
         System.out.println(page);
         // Release the response and the client
         response.close();
         httpclient.close();
     }
 }
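
The code above closes the response and the client by hand and ignores the HTTP status code. As a variant (a sketch of my own, not part of the original post; the class name TestTryWithResources is made up), the same request can be written with try-with-resources so that both resources are closed even if an exception is thrown, and the body is only read on a 200 response:

 package cha01;

 import java.io.IOException;

 import org.apache.http.HttpEntity;
 import org.apache.http.client.methods.CloseableHttpResponse;
 import org.apache.http.client.methods.HttpGet;
 import org.apache.http.impl.client.CloseableHttpClient;
 import org.apache.http.impl.client.HttpClients;
 import org.apache.http.util.EntityUtils;

 public class TestTryWithResources {
     public static void main(String[] args) throws IOException {
         // The client and the response are closed automatically at the end of each try block
         try (CloseableHttpClient httpclient = HttpClients.createDefault()) {
             HttpGet httpget = new HttpGet("http://www.baidu.com");
             try (CloseableHttpResponse response = httpclient.execute(httpget)) {
                 // Only read the body when the server answered 200 OK
                 int status = response.getStatusLine().getStatusCode();
                 if (status == 200) {
                     HttpEntity entity = response.getEntity();
                     String page = EntityUtils.toString(entity, "utf-8");
                     System.out.println(page);
                 } else {
                     System.out.println("Request failed with status " + status);
                 }
             }
         }
     }
 }

Since CloseableHttpClient and CloseableHttpResponse both implement Closeable (HttpClient 4.3 and later), this form needs no explicit close() calls.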

4. Run results

(Screenshot of the console output: the response entity summary followed by the HTML source of the Baidu homepage.)