Volley Source Code Study Notes

Date: 2023-03-09 15:27:07

Tags (space-separated): Volley


Creating a RequestQueue

When using Volley, we first need to create a RequestQueue object, to which all requests are added. It is created via Volley.newRequestQueue(Context context):

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}

As you can see, Volley.newRequestQueue() ultimately calls its two-argument overload. On API level 9 (Android 2.3, Gingerbread) and above it creates a HurlStack, which uses HttpURLConnection for communication; below that it creates an HttpClientStack, which uses HttpClient. Next, a DiskBasedCache object is created and passed into the new RequestQueue; DiskBasedCache stores cached responses on disk, under the app's cache directory obtained from context.getCacheDir(). Finally, queue.start() is called to start the queue.

Note: we can also construct a RequestQueue ourselves, which lets us customize the cache directory, the thread pool size, and so on.

Let's first look at the queue's member fields, which will make the later steps easier to follow:

// Staging area for duplicate requests with the same cache key
private final Map<String, Queue<Request>> mWaitingRequests =
        new HashMap<String, Queue<Request>>();

// The set of all requests currently being processed by this queue
private final Set<Request> mCurrentRequests = new HashSet<Request>();

// The queue of requests waiting for cache triage
private final PriorityBlockingQueue<Request> mCacheQueue =
        new PriorityBlockingQueue<Request>();

// The queue of requests that go out to the network
private final PriorityBlockingQueue<Request> mNetworkQueue =
        new PriorityBlockingQueue<Request>();

// Default number of network dispatcher threads
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

// Cache interface for storing and retrieving responses
private final Cache mCache;

// Network interface for performing requests
private final Network mNetwork;

// Mechanism for delivering responses
private final ResponseDelivery mDelivery;

// The network dispatchers
private NetworkDispatcher[] mDispatchers;

// The cache dispatcher
private CacheDispatcher mCacheDispatcher;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    // Note: a Handler bound to the main thread's Looper is created here,
    // so response results are delivered back on the main thread
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
Next, let's look at the queue.start() method:
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

This launches the cache dispatcher and the four (by default) network dispatchers. Each of them loops forever, taking requests from its queue; if the queue is empty, the take blocks until a request arrives.
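The blocking take-loop the dispatchers run can be sketched in plain Java, with no Android dependencies. This is only an illustration: DemoDispatcher, the String "requests", and runDemo() are invented stand-ins; Volley's real dispatchers take Request objects from the queues shown above.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.PriorityBlockingQueue;

public class DemoDispatcher extends Thread {
    private final PriorityBlockingQueue<String> queue;
    private final List<String> processed = new CopyOnWriteArrayList<>();
    private final CountDownLatch done;
    private volatile boolean quit = false;

    DemoDispatcher(PriorityBlockingQueue<String> queue, CountDownLatch done) {
        this.queue = queue;
        this.done = done;
    }

    @Override
    public void run() {
        while (true) {
            String request;
            try {
                request = queue.take(); // blocks while the queue is empty
            } catch (InterruptedException e) {
                if (quit) {
                    return; // quit() was called; exit the loop
                }
                continue;
            }
            processed.add(request);
            done.countDown();
        }
    }

    void quit() {
        quit = true;
        interrupt();
    }

    /** Feeds two requests through a dispatcher and returns what it processed. */
    static List<String> runDemo() {
        PriorityBlockingQueue<String> queue = new PriorityBlockingQueue<>();
        CountDownLatch done = new CountDownLatch(2);
        DemoDispatcher dispatcher = new DemoDispatcher(queue, done);
        dispatcher.start();
        queue.add("req-a");
        queue.add("req-b");
        try {
            done.await(); // wait until both requests have been processed
        } catch (InterruptedException ignored) {
        }
        dispatcher.quit();
        return dispatcher.processed;
    }
}
```

Interrupting the thread is also how Volley's CacheDispatcher.quit() and NetworkDispatcher.quit() break out of the blocking take().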

Once we have a RequestQueue, we can call its add() method to put requests on the queue, and Volley handles them from there. Let's look at the add() source:
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        // Caching is disabled, so the request goes directly to the network queue
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        // If mWaitingRequests contains this key, an identical request has come
        // through before. Volley stages duplicates in a queue, so fetch it first.
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            // A null queue means the earlier request was the first of its kind
            // and has already been added to the cache queue
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            // Update the staged queue
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            // This request is the first with this cache key, so add it straight to the
            // cache queue and record a null entry in mWaitingRequests
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

The request is first added to mCurrentRequests, then add() checks whether the request should be cached; if not, it goes straight to the network queue. Otherwise, add() checks whether mWaitingRequests already contains the request's cache key. If it does, an identical request has already come through; since Volley stages duplicates in a queue, we fetch the queue for that cache key. A null queue means the earlier request was the first of its kind and has already been added to mCacheQueue, so a fresh queue is created; either way, the new request is appended to the queue and staged.

Note: of all requests added via RequestQueue#add that share a cache key, only the first enters mCacheQueue; the rest are staged in the waiting queue until that first request finishes.
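The staging logic can be modeled outside Android. In this sketch, RequestStager and the String request IDs are invented for illustration; real Volley stages Request objects, keyed by their cache key, but the null-value convention is the same as mWaitingRequests.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class RequestStager {
    // Maps a cache key to the queue of duplicate requests waiting on it.
    // A null value means one request with that key is in flight but has
    // no duplicates staged yet -- the same convention as mWaitingRequests.
    private final Map<String, Queue<String>> waiting = new HashMap<>();

    /**
     * Returns true if the request should be dispatched now (first of its
     * kind), or false if it was staged behind an in-flight duplicate.
     */
    public boolean add(String cacheKey, String requestId) {
        if (waiting.containsKey(cacheKey)) {
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new LinkedList<>();
            }
            staged.add(requestId);
            waiting.put(cacheKey, staged);
            return false; // put on hold behind the in-flight request
        }
        waiting.put(cacheKey, null); // mark the key as in flight
        return true; // dispatch to the cache queue
    }
}
```

Note that containsKey(), not get(), distinguishes "key present with null queue" from "key absent" — the same reason Volley's add() uses containsKey().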

Now let's look at the queue's finish() method:

void finish(Request<?> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}

As you can see, the request is first removed from mCurrentRequests; the rest of the method only runs when the request is cacheable. Let's break down what the code inside the if block does:

As described earlier, a request that should not be cached goes straight to the network queue. For cacheable requests, only one request per cache key enters mCacheQueue; the rest are staged in a queue stored in mWaitingRequests. So when finish() is called, the request that entered mCacheQueue has completed its round trip, and the staged requests with the same cache key can now be served: the queue is removed from mWaitingRequests and all of its requests are added to the cache queue, where they will typically be answered by the cache entry that the finished request just primed.
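The release step can be sketched under the same staging convention. WaitingRequestRelease and the String IDs are invented; Volley works with Request objects, and its mCacheQueue is a PriorityBlockingQueue rather than a list.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

public class WaitingRequestRelease {
    // cacheKey -> staged duplicates waiting for the in-flight request
    private final Map<String, Queue<String>> waiting = new HashMap<>();
    // stand-in for mCacheQueue: requests released for cache triage
    private final List<String> cacheQueue = new ArrayList<>();

    /** Stages a duplicate request behind the in-flight one. */
    public void stage(String cacheKey, String requestId) {
        Queue<String> staged = waiting.get(cacheKey);
        if (staged == null) {
            staged = new LinkedList<>();
        }
        staged.add(requestId);
        waiting.put(cacheKey, staged);
    }

    /** Called when the in-flight request for cacheKey completes. */
    public void finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) {
            // Release every staged duplicate to the cache queue; the cache
            // has just been primed by the finished request.
            cacheQueue.addAll(staged);
        }
    }

    public List<String> released() {
        return cacheQueue;
    }
}
```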


The RequestQueue#start method above launches one cache dispatcher and four network dispatchers. These are Thread subclasses that process requests in the background, fetching data from the cache and the network respectively.

CacheDispatcher

The cache dispatcher processes requests in the background, serving data from the cache. Since it is a Thread subclass, let's look at its run() method:
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
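The two expiry checks above come down to comparing timestamps on the cache entry: isExpired() compares the entry's ttl against the current time, and refreshNeeded() does the same with softTtl. A minimal sketch of the triage decision follows; CacheTriage and its method signatures are invented for illustration (Volley's Cache.Entry does expose ttl and softTtl fields, but checks them against System.currentTimeMillis() internally).

```java
public class CacheTriage {
    // Stand-in for the two timestamps on Volley's Cache.Entry
    static class Entry {
        long ttl;     // absolute time (ms) after which the entry is fully expired
        long softTtl; // absolute time (ms) after which a refresh is needed

        Entry(long ttl, long softTtl) {
            this.ttl = ttl;
            this.softTtl = softTtl;
        }

        boolean isExpired(long now) { return ttl < now; }
        boolean refreshNeeded(long now) { return softTtl < now; }
    }

    /** Mirrors the cache dispatcher's decision for a request's cache entry. */
    static String triage(Entry entry, long now) {
        if (entry == null) {
            return "cache-miss";                 // send to the network queue
        }
        if (entry.isExpired(now)) {
            return "cache-hit-expired";          // send to the network queue
        }
        if (entry.refreshNeeded(now)) {
            return "cache-hit-refresh-needed";   // deliver cached data, then refresh
        }
        return "cache-hit";                      // deliver straight from the cache
    }
}
```

The "refresh-needed" branch is what makes the intermediate-response mechanism necessary: the cached data is delivered immediately, while the same request is re-queued for the network.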

NetworkDispatcher

Like `CacheDispatcher`, it is a `Thread` subclass, responsible for taking requests from the network queue. Here is its run() method:
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            // Tag the request (if API >= 14)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
            }

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            // (The request was only checking freshness; a 304 means we can stop here.)
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}

The flow is clear: the dispatcher loops, taking requests from the queue. If the request was only a freshness check and the server returned 304 (Not Modified, i.e. the cached copy is still fresh) with a response already delivered, the loop simply continues. Otherwise the raw result is parsed into a Response object, written to mCache when caching is enabled, and finally handed to mDelivery for dispatch.
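The worker thread's post-request decisions reduce to a few boolean checks. NetworkFlow and its marker strings are invented for illustration (the marker names echo the addMarker() calls in the real dispatcher):

```java
public class NetworkFlow {
    /** Mirrors the worker-thread decisions after performRequest() returns. */
    static String outcome(boolean canceled, boolean notModified,
                          boolean alreadyDelivered, boolean shouldCache) {
        if (canceled) {
            return "network-discard-cancelled"; // dropped before any network work
        }
        if (notModified && alreadyDelivered) {
            return "not-modified"; // 304 and the cached response already went out
        }
        // Parse, optionally write to cache, then deliver
        return shouldCache ? "deliver-and-cache" : "deliver";
    }
}
```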

Delivery

In Volley, whenever the cache dispatcher or a network dispatcher completes a request, the Delivery is responsible for transporting the response result from the worker thread back to the main thread, using the familiar Handler mechanism. Let's look at the implementation:

public interface ResponseDelivery {
    /**
     * Parses a response from the network or cache and delivers it.
     */
    public void postResponse(Request<?> request, Response<?> response);

    /**
     * Parses a response from the network or cache and delivers it. The provided
     * Runnable will be executed after delivery.
     */
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

    /**
     * Posts an error for the given request.
     */
    public void postError(Request<?> request, VolleyError error);
}

This interface defines the Delivery's result-transport methods; its implementation is ExecutorDelivery. Here is its source:

/** Used for posting responses, typically to the main thread. */
private final Executor mResponsePoster;

/**
 * Creates a new response delivery interface.
 * @param handler {@link Handler} to post responses on
 */
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

Recall the RequestQueue constructor from earlier:

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

To summarize: ExecutorDelivery delivers results through postResponse(), which ultimately reaches mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable)). mResponsePoster is an Executor with a single execute() method which, as the ExecutorDelivery constructor shows, simply calls handler.post(Runnable), and that handler is the new Handler(Looper.getMainLooper()) supplied by RequestQueue. That is how response results cross from the worker thread to the main thread.
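The Executor-wraps-Handler trick can be modeled without Android by replacing the main thread's Looper with a queue that one designated thread drains. Everything here is an invented stand-in (HandlerLikeDelivery, loopOnce(), demo()): the point is only that execute() enqueues the Runnable for another thread instead of running it.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

public class HandlerLikeDelivery {
    // Stand-in for the main thread's message queue (what Looper drains)
    private final BlockingQueue<Runnable> mainQueue = new LinkedBlockingQueue<>();

    // Like ExecutorDelivery's mResponsePoster: an Executor that only
    // enqueues the Runnable onto the "main" queue instead of running it
    private final Executor responsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            mainQueue.add(command); // handler.post(command) in real Volley
        }
    };

    public Executor poster() {
        return responsePoster;
    }

    /** Drains one posted Runnable, playing the role of the main thread. */
    public void loopOnce() throws InterruptedException {
        mainQueue.take().run();
    }

    /** Demo: a worker posts a result; the "main thread" observes it. */
    public static String demo() {
        final HandlerLikeDelivery delivery = new HandlerLikeDelivery();
        final AtomicReference<String> result = new AtomicReference<>();
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                delivery.poster().execute(new Runnable() {
                    @Override
                    public void run() {
                        result.set("delivered-on-main");
                    }
                });
            }
        });
        worker.start();
        try {
            worker.join();
            delivery.loopOnce(); // runs the posted Runnable on this thread
        } catch (InterruptedException e) {
            return "interrupted";
        }
        return result.get();
    }
}
```

The design choice is the same as Volley's: the worker thread never touches UI state directly; it only hands a Runnable to whatever thread owns the queue.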


Delivering results to the callback interfaces

Continuing the analysis above: in ExecutorDelivery's postResponse() method, mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable)) creates a ResponseDeliveryRunnable to deliver the result. Let's look at ResponseDeliveryRunnable:

private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }

        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }

        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}

Here, run() checks whether the Response succeeded and calls the request's deliverResponse() or deliverError() accordingly; those methods invoke the success or error listener that was supplied when the Request was created.
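The delivery decision above can be replayed as a small pure function. DeliveryDemo and its event strings are invented for illustration (the strings echo the finish() reasons and markers in the real runnable); Volley's Request subclasses define the actual deliverResponse()/deliverError() bodies.

```java
import java.util.ArrayList;
import java.util.List;

public class DeliveryDemo {
    /** Replays ResponseDeliveryRunnable.run() and returns the events it would produce. */
    static List<String> deliver(boolean canceled, boolean success, boolean intermediate) {
        List<String> events = new ArrayList<>();
        if (canceled) {
            events.add("canceled-at-delivery");
            return events; // nothing is delivered for a canceled request
        }
        // Success goes to the success listener, failure to the error listener
        events.add(success ? "deliverResponse" : "deliverError");
        // A soft-expired cache hit is delivered as intermediate; the request
        // is only finished once the final (network) response arrives
        events.add(intermediate ? "intermediate-response" : "done");
        return events;
    }
}
```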