The previous articles analyzed the Camera2 initialization and preview flows in detail; this article analyzes the Camera2 capture (still picture) flow.
The Camera2 article series is organized as follows:
Android 6.0 source code analysis: an introduction to Camera API 2.0
Android 6.0 source code analysis: the Camera2 HAL
Android 6.0 source code analysis: the initialization flow under Camera API 2.0
Android 6.0 source code analysis: the preview flow under Camera API 2.0
Android 6.0 source code analysis: the capture flow under Camera API 2.0
Android 6.0 source code analysis: the video flow under Camera API 2.0
Applying Camera API 2.0
As discussed in the preview analysis, the ShutterButton is enabled once the preview is up, so a picture can be taken. Its click listener is the onShutterButtonClick method:
//CaptureModule.java
@Override
public void onShutterButtonClick() {
// Camera is not open
if (mCamera == null) {
return;
}
int countDownDuration = mSettingsManager.getInteger(
        SettingsManager.SCOPE_GLOBAL, Keys.KEY_COUNTDOWN_DURATION);
if (countDownDuration > 0) {
// Start the countdown
mAppController.getCameraAppUI().transitionToCancel();
mAppController.getCameraAppUI().hideModeOptions();
mUI.setCountdownFinishedListener(this);
mUI.startCountdown(countDownDuration);
// Will take picture later via listener callback.
} else {
// Take the picture immediately
takePictureNow();
}
}
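For completeness, the countdown path ends up in the same place: CaptureModule registers itself as the countdown-finished listener, and when the countdown ends it also calls takePictureNow. A simplified sketch of that listener (paraphrased, not verbatim AOSP code):
//CaptureModule.java (simplified sketch of the countdown path)
@Override
public void onCountDownFinished() {
    // Restore the normal capture UI once the countdown has ended
    mAppController.getCameraAppUI().transitionToCapture();
    mAppController.getCameraAppUI().showModeOptions();
    if (mPaused) {
        return;
    }
    // Both the countdown path and the immediate path converge here
    takePictureNow();
}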
onShutterButtonClick first reads the camera settings to decide whether a countdown (delayed capture) is configured. Here we analyze the no-countdown case, which calls takePictureNow directly:
//CaptureModule.java
private void takePictureNow() {
if (mCamera == null) {
Log.i(TAG, "Not taking picture since Camera is closed.");
return;
}
// Create a capture session and start it
CaptureSession session = createAndStartCaptureSession();
// Get the current device orientation
int orientation = mAppController.getOrientationManager()
.getDeviceOrientation().getDegrees();
// Build the photo capture parameters
PhotoCaptureParameters params = new PhotoCaptureParameters(
session.getTitle(), orientation, session.getLocation(),
mContext.getExternalCacheDir(), this, mPictureSaverCallback,
mHeadingSensor.getCurrentHeading(), mZoomValue, 0);
// Decorate the session at capture time
decorateSessionAtCaptureTime(session);
// Take the picture
mCamera.takePicture(params, session);
}
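For orientation, PhotoCaptureParameters is essentially a value holder bundling everything needed for one shot, including the picture callback that OneCameraImpl will invoke later. The shape below is a paraphrased sketch with field names inferred from the constructor call above, not the verbatim AOSP class:
//PhotoCaptureParameters.java (paraphrased sketch)
public class PhotoCaptureParameters {
    public final String title;                       // picture title, later the file name
    public final int orientation;                    // device orientation in degrees
    public final Location location;                  // where the picture was taken
    public final File debugDataFolder;               // external cache dir for debug dumps
    public final PictureCallback callback;           // here: the CaptureModule instance
    public final PictureSaverCallback saverCallback; // notified when the file is saved
    public final float heading;                      // heading sensor value
    public final float zoom;                         // current zoom value
    public final int timerSeconds;                   // countdown duration, 0 in this path

    public PhotoCaptureParameters(String title, int orientation, Location location,
            File debugDataFolder, PictureCallback callback,
            PictureSaverCallback saverCallback, float heading, float zoom,
            int timerSeconds) {
        this.title = title;
        this.orientation = orientation;
        this.location = location;
        this.debugDataFolder = debugDataFolder;
        this.callback = callback;
        this.saverCallback = saverCallback;
        this.heading = heading;
        this.zoom = zoom;
        this.timerSeconds = timerSeconds;
    }
}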
takePictureNow first calls createAndStartCaptureSession to create and start a CaptureSession, then builds the PhotoCaptureParameters, registering the CaptureModule itself (the this argument) as the picture callback (analyzed later). Let's look at createAndStartCaptureSession first:
//CaptureModule.java
private CaptureSession createAndStartCaptureSession() {
// Session timestamp
long sessionTime = getSessionTime();
// Current location
Location location = mLocationManager.getCurrentLocation();
// Generate the picture title
String title = CameraUtil.instance().createJpegName(sessionTime);
// Create the session
CaptureSession session = getServices().getCaptureSessionManager()
.createNewSession(title, sessionTime, location);
// Start the session
session.startEmpty(new CaptureStats(mHdrPlusEnabled),
        new Size((int) mPreviewArea.width(), (int) mPreviewArea.height()));
return session;
}
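The title generated here later becomes the JPEG file name. As a rough illustration only (a hypothetical sketch, not the actual CameraUtil.createJpegName implementation), such a name can be derived from the session timestamp:
// Hypothetical illustration of a timestamp-based JPEG title; the real CameraUtil
// may format names differently.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

final class JpegNameSketch {
    static String createJpegName(long sessionTimeMillis) {
        // e.g. "IMG_20160321_093015"
        SimpleDateFormat format =
                new SimpleDateFormat("'IMG'_yyyyMMdd_HHmmss", Locale.US);
        return format.format(new Date(sessionTimeMillis));
    }
}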
createAndStartCaptureSession first gathers the session parameters, including the session time, the picture title and the location, then asks the CaptureSessionManager to create a CaptureSession, and finally starts that session. With the session created and started, we can return to the capture flow above, which calls OneCameraImpl's takePicture method:
//OneCameraImpl.java
@Override
public void takePicture(final PhotoCaptureParameters params, final CaptureSession session) {
...
// Broadcast a not-ready state until this capture has returned
broadcastReadyState(false);
// Wrap the capture in a Runnable so it can be deferred if needed
mTakePictureRunnable = new Runnable() {
@Override
public void run() {
// Take the picture
takePictureNow(params, session);
}
};
// Save the callback (analyzed later); it is the CaptureModule, which implements PictureCallback
mLastPictureCallback = params.callback;
mTakePictureStartMillis = SystemClock.uptimeMillis();
// If an autofocus scan is still in progress, defer the capture until the lens stops
if (mLastResultAFState == AutoFocusState.ACTIVE_SCAN) {
mTakePictureWhenLensIsStopped = true;
} else {
// Take the picture immediately
takePictureNow(params, session);
}
}
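The deferred branch is resolved later in the autofocus handling: when a capture result shows that the lens has stopped scanning, the stored runnable is executed. A simplified sketch of that idea (paraphrased, not the exact AOSP autofocusStateChangeDispatcher):
//OneCameraImpl.java (simplified sketch of the deferred-capture path, not verbatim)
private void autofocusStateChangeDispatcher(CaptureResult result) {
    Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
    if (afState == null) {
        return;
    }
    boolean lensIsStopped = afState != CaptureResult.CONTROL_AF_STATE_ACTIVE_SCAN
            && afState != CaptureResult.CONTROL_AF_STATE_PASSIVE_SCAN;
    // Fire the capture that was deferred in takePicture() once the lens has stopped
    if (mTakePictureWhenLensIsStopped && lensIsStopped) {
        mTakePictureWhenLensIsStopped = false;
        mTakePictureRunnable.run();
    }
}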
takePicture first broadcasts the not-ready state and records the picture callback. It then checks the autofocus state: if an AF scan is still active, mTakePictureWhenLensIsStopped is set to true so the capture is deferred until the lens stops moving; otherwise OneCameraImpl's takePictureNow is called right away to issue the capture request:
//OneCameraImpl.java
public void takePictureNow(PhotoCaptureParameters params, CaptureSession
session) {
long dt = SystemClock.uptimeMillis() - mTakePictureStartMillis;
try {
// Build the still-capture request
CaptureRequest.Builder builder = mDevice.createCaptureRequest(
CameraDevice.TEMPLATE_STILL_CAPTURE);
builder.setTag(RequestTag.CAPTURE);
addBaselineCaptureKeysToRequest(builder);
// Enable lens-shading correction for even better DNGs.
if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
builder.set(CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE,
CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE_ON);
} else if (sCaptureImageFormat == ImageFormat.JPEG) {
builder.set(CaptureRequest.JPEG_QUALITY, JPEG_QUALITY);
builder.set(CaptureRequest.JPEG_ORIENTATION, CameraUtil
        .getJpegRotation(params.orientation, mCharacteristics));
}
// Target the preview surface so the preview keeps running
builder.addTarget(mPreviewSurface);
// Target the ImageReader surface that receives the captured image
builder.addTarget(mCaptureImageReader.getSurface());
CaptureRequest request = builder.build();
if (DEBUG_WRITE_CAPTURE_DATA) {
final String debugDataDir = makeDebugDir(params.debugDataFolder,
"normal_capture_debug");
Log.i(TAG, "Writing capture data to: " + debugDataDir);
CaptureDataSerializer.toFile("Normal Capture", request,
new File(debugDataDir,"capture.txt"));
}
// Issue the capture; mCaptureCallback is the callback
mCaptureSession.capture(request, mCaptureCallback, mCameraHandler);
} catch (CameraAccessException e) {
Log.e(TAG, "Could not access camera for still image capture.");
broadcastReadyState(true);
params.callback.onPictureTakingFailed();
return;
}
synchronized (mCaptureQueue) {
mCaptureQueue.add(new InFlightCapture(params, session));
}
}
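For comparison, the same still-capture pattern expressed directly against the public Camera2 API looks roughly like this (a minimal sketch; it assumes an opened cameraDevice, a configured captureSession, a previewSurface, a JPEG-format imageReader, a handler and a TAG already exist):
// Minimal still-capture sketch using the public Camera2 API.
try {
    CaptureRequest.Builder builder =
            cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    builder.set(CaptureRequest.JPEG_QUALITY, (byte) 90);
    builder.addTarget(previewSurface);               // keep the preview running
    builder.addTarget(imageReader.getSurface());     // receive the JPEG here
    captureSession.capture(builder.build(),
            new CameraCaptureSession.CaptureCallback() {
                @Override
                public void onCaptureCompleted(CameraCaptureSession session,
                        CaptureRequest request, TotalCaptureResult result) {
                    // The JPEG itself is delivered through the ImageReader's
                    // OnImageAvailableListener, not through this callback.
                }
            }, handler);
} catch (CameraAccessException e) {
    Log.e(TAG, "Still capture failed", e);
}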
As with preview, communication with the camera goes through a CaptureRequest; the picture is taken via the session's capture method, with mCaptureCallback registered as the callback:
//CameraCaptureSessionImpl.java
@Override
public synchronized int capture(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    ...
    handler = checkHandler(handler, callback);
    return addPendingSequence(mDeviceImpl.capture(request,
            createCaptureCallbackProxy(handler, callback), mDeviceHandler));
}
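The int returned here is a capture sequence id; the framework reports the same id back through onCaptureSequenceCompleted once the whole sequence has been processed. A minimal usage sketch with the public API (assuming a request, a captureSession, a handler and a TAG already exist):
// Minimal sketch: the sequence id returned by capture() reappears in the callback.
try {
    int sequenceId = captureSession.capture(request,
            new CameraCaptureSession.CaptureCallback() {
                @Override
                public void onCaptureSequenceCompleted(CameraCaptureSession session,
                        int completedSequenceId, long frameNumber) {
                    // Called once every request in this sequence has been processed
                    Log.d(TAG, "Capture sequence " + completedSequenceId + " completed");
                }
            }, handler);
    Log.d(TAG, "Submitted capture sequence " + sequenceId);
} catch (CameraAccessException e) {
    Log.e(TAG, "capture() failed", e);
}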
CameraCaptureSessionImpl's capture is similar to the preview case: the request is added to the set of pending sequences. Now let's look at the CaptureCallback:
//OneCameraImpl.java
private final CameraCaptureSession.CaptureCallback mCaptureCallback =
        new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureStarted(CameraCaptureSession session, CaptureRequest request,
            long timestamp, long frameNumber) {
        // Similar to preview: notify the picture callback that exposure has started
        if (request.getTag() == RequestTag.CAPTURE && mLastPictureCallback != null) {
            mLastPictureCallback.onQuickExpose();
        }
    }
...
    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
            TotalCaptureResult result) {
        autofocusStateChangeDispatcher(result);
        if (result.get(CaptureResult.CONTROL_AF_STATE) == null) {
            // Check the autofocus state (it should never be missing from the result)
            AutoFocusHelper.checkControlAfState(result);
        }
        ...
        if (request.getTag() == RequestTag.CAPTURE) {
            InFlightCapture capture = null;
            synchronized (mCaptureQueue) {
                if (mCaptureQueue.getFirst().setCaptureResult(result)
                        .isCaptureComplete()) {
                    capture = mCaptureQueue.removeFirst();
                }
            }
            if (capture != null) {
                // Both the image and its result are available; finish this capture
                OneCameraImpl.this.onCaptureCompleted(capture);
            }
        }
        super.onCaptureCompleted(session, request, result);
    }
...
};
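The entries of mCaptureQueue used above are InFlightCapture objects: each one pairs the capture parameters and session with the Image delivered by the ImageReader and the TotalCaptureResult delivered by this callback, and isCaptureComplete() returns true only once both have arrived. Roughly (a paraphrased sketch, not verbatim AOSP code):
//OneCameraImpl.java (paraphrased sketch of the InFlightCapture queue entry)
private static class InFlightCapture {
    final PhotoCaptureParameters parameters;
    final CaptureSession session;
    Image image;                            // set by the ImageReader listener
    TotalCaptureResult totalCaptureResult;  // set by onCaptureCompleted

    InFlightCapture(PhotoCaptureParameters parameters, CaptureSession session) {
        this.parameters = parameters;
        this.session = session;
    }

    InFlightCapture setImage(Image image) {
        this.image = image;
        return this;
    }

    InFlightCapture setCaptureResult(TotalCaptureResult result) {
        this.totalCaptureResult = result;
        return this;
    }

    // Complete once both the image and the capture result are available
    boolean isCaptureComplete() {
        return image != null && totalCaptureResult != null;
    }
}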
When the native layer processes the request, the corresponding callbacks are invoked: onCaptureStarted when the capture starts (this was analyzed in the preview article), and onCaptureCompleted when the capture finishes. The latter checks the autofocus state from the CaptureResult, and if the request's tag marks it as a capture, it checks whether the first entry in the queue is now complete; if so, that entry is removed from the queue (the request has been handled) and OneCameraImpl's onCaptureCompleted is called to process it:
//OneCameraImpl.java
private void onCaptureCompleted(InFlightCapture capture) {
    if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
        ...
        File dngFile = new File(RAW_DIRECTORY, capture.session.getTitle() + ".dng");
        writeDngBytesAndClose(capture.image, capture.totalCaptureResult,
                mCharacteristics, dngFile);
    } else {
        // Extract the JPEG bytes from the captured image and close it
        byte[] imageBytes = acquireJpegBytesAndClose(capture.image);
        // Save the JPEG picture
        saveJpegPicture(imageBytes, capture.parameters, capture.session,
                capture.totalCaptureResult);
    }
    broadcastReadyState(true);
    // Notify the callback that the picture has been taken
    capture.parameters.callback.onPictureTaken(capture.session);
}
As the code shows, the JPEG bytes are first extracted from the captured image, then saveJpegPicture is called to save them, and finally onPictureTaken is invoked on the callback (which, as noted when the parameters were built, is the CaptureModule, the implementer of the PictureCallback interface). Let's look at saveJpegPicture first:
//OneCameraImpl.java
private void saveJpegPicture(byte[] jpegData, final PhotoCaptureParameters captureParams,
        CaptureSession session, CaptureResult result) {
    ...
    ListenableFuture<Optional<Uri>> futureUri = session.saveAndFinish(jpegData, width,
            height, rotation, exif);
    Futures.addCallback(futureUri, new FutureCallback<Optional<Uri>>() {
        @Override
        public void onSuccess(Optional<Uri> uriOptional) {
            captureParams.callback.onPictureSaved(uriOptional.orNull());
        }

        @Override
        public void onFailure(Throwable throwable) {
            captureParams.callback.onPictureSaved(null);
        }
    });
}
Once session.saveAndFinish completes, the resulting Uri is delivered to the onPictureSaved callback, so we need to look at CaptureModule's onPictureSaved method:
//CaptureModule.java
@Override
public void onPictureSaved(Uri uri){
mAppController.notifyNewMedia(uri);
}
mAppController is implemented by CameraActivity, so we analyze its notifyNewMedia method:
//CameraActivity.java
@Override
public void notifyNewMedia(Uri uri){
...
if(FilmstripItemUtils.isMimeTypeVideo(mimeType)){
// A newly recorded video
sendBroadcast(new Intent(CameraUtil.ACTION_NEW_VIDEO,uri));
newData = mVideoItemFactory.queryContentUri(uri);
...
}else if(FilmstripItemUtils.isMimeTypeImage(mimeType)){
// A newly captured picture
CameraUtil.broadcastNewPicture(mAppContext, uri);
newData = mPhotoItemFactory.queryContentUri(uri);
...
}else{
return;
}
new AsyncTask<FilmstripItem, Void, FilmstripItem>() {
    @Override
    protected FilmstripItem doInBackground(FilmstripItem... params) {
        FilmstripItem data = params[0];
        MetadataLoader.loadMetadata(getAndroidContext(), data);
        return data;
    }
    ...
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, newData);
}
As the code shows, two kinds of media are handled here: video and image. Since we are analyzing captured picture data, the corresponding photo item is first queried via mPhotoItemFactory using the Uri passed to the callback, and an asynchronous task is then started to process it, loading its metadata through MetadataLoader's loadMetadata and returning the data. At this point the capture flow analysis is essentially complete; the sequence diagram of the whole capture flow is given below: