As is well known, the Android camera stack uses a client/server architecture, in which the camera server and the camera client communicate through the Android Binder IPC mechanism.
Within the camera framework, everything from the HAL layer down to the driver is written in C, while the rest is implemented in the two classic object-oriented languages, C++ and Java.
Most analyses on the net follow a single client-to-server call, stepping down level by level until they reach the driver. I prefer to trace the camera code from the point of view of its objects instead.
In fact, the whole camera framework is, at its core, built around two kinds of objects, which can be simplified to just two: a camera server object and a camera client object.
These two objects talk to each other through third parties, mainly Binder objects, together with a few other helper objects.
After reading a large number of excellent blog posts and the source code itself, I am putting down a brief note of my own analysis here.
I. The Camera Server object
1. Definition of the Camera Server object
class CameraService is defined in frameworks/av/services/camera/libcameraservice/CameraService.h:
(The full class runs to more than 300 lines, so it is abbreviated here.)
class CameraService :
    public BinderService<CameraService>,
    public BnCameraService,
    public IBinder::DeathRecipient,
    public camera_module_callbacks_t
{
public:
    // Implementation of BinderService<T>
    static char const* getServiceName() { return "media.camera"; }

    CameraService();
    virtual ~CameraService();
    //...

    /////////////////////////////////////////////////////////////////////
    // HAL Callbacks
    virtual void        onDeviceStatusChanged(int cameraId,
                                              int newStatus);
    //...

    /////////////////////////////////////////////////////////////////////
    // ICameraService
    virtual int32_t     getNumberOfCameras();
    virtual status_t    getCameraInfo(int cameraId,
                                      struct CameraInfo* cameraInfo);
    virtual status_t    getCameraCharacteristics(int cameraId,
                                      CameraMetadata* cameraInfo);
    virtual status_t    getCameraVendorTagDescriptor(/*out*/ sp<VendorTagDescriptor>& desc);

    virtual status_t connect(const sp<ICameraClient>& cameraClient, int cameraId,
            const String16& clientPackageName, int clientUid,
            /*out*/
            sp<ICamera>& device);

    virtual status_t connectLegacy(const sp<ICameraClient>& cameraClient, int cameraId,
            int halVersion, const String16& clientPackageName, int clientUid,
            /*out*/
            sp<ICamera>& device);

    virtual status_t connectPro(const sp<IProCameraCallbacks>& cameraCb,
            int cameraId, const String16& clientPackageName, int clientUid,
            /*out*/
            sp<IProCameraUser>& device);

    virtual status_t connectDevice(
            const sp<ICameraDeviceCallbacks>& cameraCb,
            int cameraId,
            const String16& clientPackageName,
            int clientUid,
            /*out*/
            sp<ICameraDeviceUser>& device);

    virtual status_t    addListener(const sp<ICameraServiceListener>& listener);
    virtual status_t    removeListener(
                                    const sp<ICameraServiceListener>& listener);

    virtual status_t    getLegacyParameters(
            int cameraId,
            /*out*/
            String16* parameters);

    // OK = supports api of that version, -EOPNOTSUPP = does not support
    virtual status_t    supportsCameraApi(
            int cameraId, int apiVersion);

    // Extra permissions checks
    virtual status_t    onTransact(uint32_t code, const Parcel& data,
                                   Parcel* reply, uint32_t flags);
    //...

    /////////////////////////////////////////////////////////////////////
    // CameraClient functionality
    class BasicClient : public virtual RefBase {
    public:
        virtual status_t    initialize(camera_module_t *module) = 0;
        virtual void        disconnect();
        //....
    };
    //...

    class Client : public BnCamera, public BasicClient
    {
    public:
        typedef ICameraClient TCamCallbacks;

        // ICamera interface (see ICamera for details)
        virtual void          disconnect();
        virtual status_t      connect(const sp<ICameraClient>& client) = 0;
        virtual status_t      lock() = 0;
        virtual status_t      unlock() = 0;
        virtual status_t      setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) = 0;
        virtual void          setPreviewCallbackFlag(int flag) = 0;
        virtual status_t      setPreviewCallbackTarget(
                const sp<IGraphicBufferProducer>& callbackProducer) = 0;
        virtual status_t      startPreview() = 0;
        virtual void          stopPreview() = 0;
        virtual bool          previewEnabled() = 0;
        virtual status_t      storeMetaDataInBuffers(bool enabled) = 0;
        virtual status_t      startRecording() = 0;
        virtual void          stopRecording() = 0;
        virtual bool          recordingEnabled() = 0;
        virtual void          releaseRecordingFrame(const sp<IMemory>& mem) = 0;
        virtual status_t      autoFocus() = 0;
        virtual status_t      cancelAutoFocus() = 0;
        virtual status_t      takePicture(int msgType) = 0;
        virtual status_t      setParameters(const String8& params) = 0;
        virtual String8       getParameters() const = 0;
        virtual status_t      sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) = 0;

        // Interface used by CameraService
        Client(const sp<CameraService>& cameraService,
               const sp<ICameraClient>& cameraClient,
               const String16& clientPackageName,
               int cameraId,
               int cameraFacing,
               int clientPid,
               uid_t clientUid,
               int servicePid);
        ~Client();

        // return our camera client
        const sp<ICameraClient>& getRemoteCallback() {
            return mRemoteCallback;
        }

        virtual sp<IBinder> asBinderWrapper() {
            return asBinder();
        }

    protected:
        static Mutex*        getClientLockFromCookie(void* user);
        // convert client from cookie. Client lock should be acquired before getting Client.
        static Client*       getClientFromCookie(void* user);

        virtual void         notifyError(ICameraDeviceCallbacks::CameraErrorCode errorCode,
                                         const CaptureResultExtras& resultExtras);

        // Initialized in constructor
        // - The app-side Binder interface to receive callbacks from us
        sp<ICameraClient>    mRemoteCallback;
    }; // class Client

    class ProClient : public BnProCameraUser, public BasicClient {
        //...
    };

private:
    // Delay-load the Camera HAL module
    virtual void onFirstRef();

    // Step 1. Check if we can connect, before we acquire the service lock.
    status_t validateConnect(int cameraId,
                             /*inout*/
                             int& clientUid) const;

    // Step 2. Check if we can connect, after we acquire the service lock.
    bool canConnectUnsafe(int cameraId,
                          const String16& clientPackageName,
                          const sp<IBinder>& remoteCallback,
                          /*out*/
                          sp<BasicClient> &client);

    // When connection is successful, initialize client and track its death
    status_t connectFinishUnsafe(const sp<BasicClient>& client,
                                 const sp<IBinder>& remoteCallback);

    virtual sp<BasicClient> getClientByRemote(const wp<IBinder>& cameraClient);
    //....

    camera_module_t *mModule;
    //...
};
This class defines a lot, more than 300 lines in all. In short, it breaks down into the following parts:
(1). Things related to Binder communication, for example:
static char const* getServiceName() { return "media.camera"; }
(2). Things related to the client:
CameraService defines an inner Client class. When a remote client invokes an operation on the server, that operation is ultimately carried out by an instance of this inner Client class.
class Client : public BnCamera, public BasicClient
{
    //...
};
(3). Things related to the HAL layer, for example:
// HAL Callbacks
virtual void onDeviceStatusChanged(int cameraId,
                                   int newStatus);
---
camera_module_t *mModule;
Taken as a whole, these are the three main parts of CameraService. There are of course other important pieces (MediaPlayer and so on) that are left out here to keep things simple.
From the three parts listed above we already get a rough picture of what the camera server does: it talks to clients through Binder, and it talks to the underlying driver through the HAL.
2. How the Camera Server object is created
Because of Binder, the camera server object has to register itself with Binder's ServiceManager; otherwise clients have no way to find it through Binder. Once registered, the camera server simply waits for clients to arrive.
(1). The init.rc file
Taking the Spreadtrum sc7731 as an example, the file device/sprd/scx35/recovery/init.rc puts camera into the group list of the media service:
service media /system/bin/mediaserver
class factorytest
user media
group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc mediadrm
ioprio rt
This file is parsed by /init, the very first user-space program on Android, in system/core/init/init.c:
int main(int argc, char **argv)
{
//...
init_parse_config_file("/init.rc");
//...
}
This is where the mediaserver service entry above gets parsed; the details of what happens to it after parsing are omitted here.
(2). Registering CameraService with the Binder ServiceManager
The registration is done in frameworks/av/media/mediaserver/main_mediaserver.cpp:
int main(int argc __unused, char** argv)
{
//...
CameraService::instantiate();
//...
}
CameraService::instantiate() itself is already implemented in the template base class BinderService, in frameworks/native/include/binder/BinderService.h:
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void instantiate() { publish(); }
    //...
};
Substituting CameraService for SERVICE, SERVICE::getServiceName() becomes CameraService::getServiceName(). In frameworks/av/services/camera/libcameraservice/CameraService.h:
class CameraService :
public BinderService<CameraService>,
public BnCameraService,
public IBinder::DeathRecipient,
public camera_module_callbacks_t
{
static char const* getServiceName() { return "media.camera"; }
};
Likewise, new SERVICE() becomes new CameraService(), and this is where the interesting part happens: just like that, a camera server object is created and registered with the ServiceManager under the name "media.camera".
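On the other side of Binder, a client process later finds this service by the same name through the ServiceManager. A minimal sketch of that lookup (my own illustration of standard Binder usage, not code copied from the camera sources; the real lookup lives in CameraBase::getCameraService(), which we meet below):
sp<IBinder> binder = defaultServiceManager()->getService(String16("media.camera"));
// interface_cast<> wraps the raw binder in a BpCameraService proxy, so that calls
// such as cs->connect(...) become Binder transactions handled by CameraService.
sp<ICameraService> cs = interface_cast<ICameraService>(binder);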
II. The Camera Client object
1. The Camera Client definition used by the JNI layer
It is defined in frameworks/av/include/camera/Camera.h as follows:
class Camera :
    public CameraBase<Camera>,
    public BnCameraClient
{
public:
    enum {
        USE_CALLING_UID = ICameraService::USE_CALLING_UID
    };

    // construct a camera client from an existing remote
    static  sp<Camera>  create(const sp<ICamera>& camera);
    static  sp<Camera>  connect(int cameraId,
                                const String16& clientPackageName,
                                int clientUid);
    static  status_t    connectLegacy(int cameraId, int halVersion,
                                      const String16& clientPackageName,
                                      int clientUid, sp<Camera>& camera);

    virtual     ~Camera();

    status_t    reconnect();
    status_t    lock();
    status_t    unlock();

    // pass the buffered IGraphicBufferProducer to the camera service
    status_t    setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer);
    // start preview mode, must call setPreviewTarget first
    status_t    startPreview();
    // stop preview mode
    void        stopPreview();
    // get preview state
    bool        previewEnabled();
    // start recording mode, must call setPreviewTarget first
    status_t    startRecording();
    // stop recording mode
    void        stopRecording();
    // get recording state
    bool        recordingEnabled();
    // release a recording frame
    void        releaseRecordingFrame(const sp<IMemory>& mem);
    // autoFocus - status returned from callback
    status_t    autoFocus();
    // cancel auto focus
    status_t    cancelAutoFocus();
    // take a picture - picture returned from callback
    status_t    takePicture(int msgType);
    // set preview/capture parameters - key/value pairs
    status_t    setParameters(const String8& params);
    // get preview/capture parameters - key/value pairs
    String8     getParameters() const;
    // send command to camera driver
    status_t    sendCommand(int32_t cmd, int32_t arg1, int32_t arg2);
    // tell camera hal to store meta data or real YUV in video buffers.
    status_t    storeMetaDataInBuffers(bool enabled);

    void        setListener(const sp<CameraListener>& listener);
    void        setRecordingProxyListener(const sp<ICameraRecordingProxyListener>& listener);

    // Configure preview callbacks to app. Only one of the older
    // callbacks or the callback surface can be active at the same time;
    // enabling one will disable the other if active. Flags can be
    // disabled by calling it with CAMERA_FRAME_CALLBACK_FLAG_NOOP, and
    // Target by calling it with a NULL interface.
    void        setPreviewCallbackFlags(int preview_callback_flag);
    status_t    setPreviewCallbackTarget(
            const sp<IGraphicBufferProducer>& callbackProducer);

    sp<ICameraRecordingProxy> getRecordingProxy();

    // ICameraClient interface
    virtual void        notifyCallback(int32_t msgType, int32_t ext, int32_t ext2);
    virtual void        dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                                     camera_frame_metadata_t *metadata);
    virtual void        dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType,
                                              const sp<IMemory>& dataPtr);

    class RecordingProxy : public BnCameraRecordingProxy
    {
    public:
        RecordingProxy(const sp<Camera>& camera);

        // ICameraRecordingProxy interface
        virtual status_t startRecording(const sp<ICameraRecordingProxyListener>& listener);
        virtual void stopRecording();
        virtual void releaseRecordingFrame(const sp<IMemory>& mem);

    private:
        sp<Camera>         mCamera;
    };

protected:
    Camera(int cameraId);
    Camera(const Camera&);
    Camera& operator=(const Camera);

    sp<ICameraRecordingProxyListener>  mRecordingProxyListener;

    friend class CameraBase;
};
Compared with the server-side definition, this class is quite simple and clear. In essence it just wraps a C++ interface for the JNI implementation: its methods correspond almost one-to-one with those in the JNI file android_hardware_Camera.cpp.
In frameworks/base/core/jni/android_hardware_Camera.cpp there is the following method table:
static JNINativeMethod camMethods[] = {
{ "getNumberOfCameras",
"()I",
(void *)android_hardware_Camera_getNumberOfCameras },
{ "_getCameraInfo",
"(ILandroid/hardware/Camera$CameraInfo;)V",
(void*)android_hardware_Camera_getCameraInfo },
{ "native_setup",
"(Ljava/lang/Object;IILjava/lang/String;)I",
(void*)android_hardware_Camera_native_setup },
{ "native_release",
"()V",
(void*)android_hardware_Camera_release },
{ "setPreviewSurface",
"(Landroid/view/Surface;)V",
(void *)android_hardware_Camera_setPreviewSurface },
{ "setPreviewTexture",
"(Landroid/graphics/SurfaceTexture;)V",
(void *)android_hardware_Camera_setPreviewTexture },
{ "setPreviewCallbackSurface",
"(Landroid/view/Surface;)V",
(void *)android_hardware_Camera_setPreviewCallbackSurface },
{ "startPreview",
"()V",
(void *)android_hardware_Camera_startPreview },
{ "_stopPreview",
"()V",
(void *)android_hardware_Camera_stopPreview },
{ "previewEnabled",
"()Z",
(void *)android_hardware_Camera_previewEnabled },
{ "setHasPreviewCallback",
"(ZZ)V",
(void *)android_hardware_Camera_setHasPreviewCallback },
{ "_addCallbackBuffer",
"([BI)V",
(void *)android_hardware_Camera_addCallbackBuffer },
{ "native_autoFocus",
"()V",
(void *)android_hardware_Camera_autoFocus },
{ "native_cancelAutoFocus",
"()V",
(void *)android_hardware_Camera_cancelAutoFocus },
{ "native_takePicture",
"(I)V",
(void *)android_hardware_Camera_takePicture },
{ "native_setParameters",
"(Ljava/lang/String;)V",
(void *)android_hardware_Camera_setParameters },
{ "native_getParameters",
"()Ljava/lang/String;",
(void *)android_hardware_Camera_getParameters },
{ "reconnect",
"()V",
(void*)android_hardware_Camera_reconnect },
{ "lock",
"()V",
(void*)android_hardware_Camera_lock },
{ "unlock",
"()V",
(void*)android_hardware_Camera_unlock },
{ "startSmoothZoom",
"(I)V",
(void *)android_hardware_Camera_startSmoothZoom },
{ "stopSmoothZoom",
"()V",
(void *)android_hardware_Camera_stopSmoothZoom },
{ "setDisplayOrientation",
"(I)V",
(void *)android_hardware_Camera_setDisplayOrientation },
{ "_enableShutterSound",
"(Z)Z",
(void *)android_hardware_Camera_enableShutterSound },
{ "_startFaceDetection",
"(I)V",
(void *)android_hardware_Camera_startFaceDetection },
{ "_stopFaceDetection",
"()V",
(void *)android_hardware_Camera_stopFaceDetection},
{ "enableFocusMoveCallback",
"(I)V",
(void *)android_hardware_Camera_enableFocusMoveCallback},
};
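This table maps Java method names and signatures to native functions, and it only takes effect once it is registered with the VM. A rough sketch of that registration step (simplified; treat the surrounding code as an assumption, though the registration call itself is the standard AndroidRuntime helper):
// Called during runtime startup to hook the native methods onto android.hardware.Camera.
int register_android_hardware_Camera(JNIEnv *env)
{
    // Register native functions from the camMethods[] table above.
    return AndroidRuntime::registerNativeMethods(env, "android/hardware/Camera",
                                                 camMethods, NELEM(camMethods));
}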
2. How the Camera Client object is created
(1). The application layer
When the app tries to open the camera, it starts a dedicated thread to do so. In packages/apps/LegacyCamera/src/com/android/camera/Camera.java:
Thread mCameraOpenThread = new Thread(new Runnable() {
    public void run() {
        //...
        mCameraDevice = Util.openCamera(Camera.this, mCameraId); // open camera
        //...
    }
});

public void onCreate(Bundle icicle) {
    mCameraOpenThread.start();
}
(2). The framework Java layer
In frameworks/base/core/java/android/hardware/Camera.java:
public static Camera open(int cameraId) {
    return new Camera(cameraId);
}

/** used by Camera#open, Camera#open(int) */
Camera(int cameraId) {
    int err = cameraInitNormal(cameraId);
    //...
}

private int cameraInitNormal(int cameraId) {
    return cameraInitVersion(cameraId, CAMERA_HAL_API_VERSION_NORMAL_CONNECT);
}

private int cameraInitVersion(int cameraId, int halVersion) {
    //....
    return native_setup(new WeakReference<Camera>(this), cameraId, halVersion, packageName);
}
As the camMethods[] table above shows, native_setup is provided by the JNI layer.
(3). The JNI layer
In jni/android_hardware_Camera.cpp, native_setup corresponds to the JNI function android_hardware_Camera_native_setup():
// connect to camera service
static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
{
    // Convert jstring to String16
    sp<Camera> camera;

    // Default path: hal version is don't care, do normal camera connect.
    camera = Camera::connect(cameraId, clientName,
            Camera::USE_CALLING_UID);

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong((void*)android_hardware_Camera_native_setup);
    camera->setListener(context);

    // save context in opaque field
    env->SetLongField(thiz, fields.context, (jlong)context.get());
    return NO_ERROR;
}
Crudely speaking, the line sp<Camera> camera; simply declares something like a pointer (or reference) that will end up pointing at a Camera object. In reality, sp<> is Android's strong (smart) pointer, which adds reference counting on top of a raw pointer.
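To make the smart-pointer remark concrete, here is a tiny illustration of my own (the package name is made up; this is an assumption about typical usage, not code from the camera sources): an object derived from RefBase is destroyed only when the last sp<> that refers to it goes away.
sp<Camera> a = Camera::connect(0, String16("com.example.app"), Camera::USE_CALLING_UID);
sp<Camera> b = a;   // strong count goes up: both a and b keep the object alive
a.clear();          // strong count goes down: the object still lives on through b
// when b also goes out of scope, the count reaches zero and the Camera object is destroyed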
So the main question here is how Camera::connect() returns such an object pointer (reference).
(4). The framework C++ layer
In frameworks/av/camera/Camera.cpp:
sp<Camera> Camera::connect(int cameraId, const String16& clientPackageName,
int clientUid)
{
return CameraBaseT::connect(cameraId, clientPackageName, clientUid);
}
The Camera class derives from CameraBase<Camera> (CameraBaseT is simply a typedef for that base class). CameraBase is declared in frameworks/av/include/camera/CameraBase.h and its member functions are implemented in frameworks/av/camera/CameraBase.cpp.
Let us look at what CameraBase<TCam, TCamTraits>::connect() does:
template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId,
                                               const String16& clientPackageName,
                                               int clientUid)
{
    sp<TCam> c = new TCam(cameraId);
    sp<TCamCallbacks> cl = c;
    status_t status = NO_ERROR;

    // Get a local reference to CameraService through the ServiceManager. Calling the
    // connect function through it ends up invoking connect() on the CameraService side.
    const sp<ICameraService>& cs = getCameraService();
    if (cs != 0) {
        TCamConnectService fnConnectService = TCamTraits::fnConnectService;
        status = (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid,
                                               /*out*/ c->mCamera);
    }
    //...
}
Getting a local reference to CameraService is the easy part; the interesting question is how the client-side connect() call is handed over to the server side.
Since this is a template, the rule is plain substitution: plug real types into the template parameters.
TCam above is replaced by Camera; but nobody passes in TCamTraits, so what takes its place?
In frameworks/av/include/camera/CameraBase.h:
template <typename TCam, typename TCamTraits = CameraTraits<TCam> >
class CameraBase : public IBinder::DeathRecipient
{
//...
};
In the declaration of the CameraBase template we can see typename TCamTraits = CameraTraits<TCam>; in short, TCamTraits simply defaults to CameraTraits<TCam>.
Substituting Camera for TCam once more turns the line into:
TCamConnectService fnConnectService = TCamTraits::fnConnectService; => TCamConnectService fnConnectService = CameraTraits<Camera>::fnConnectService;
And CameraTraits<Camera>::fnConnectService is spelled out explicitly in frameworks/av/camera/Camera.cpp:
CameraTraits<Camera>::TCamConnectService CameraTraits<Camera>::fnConnectService =
&ICameraService::connect;
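In other words, fnConnectService is a pointer to a member function of ICameraService, and (cs.get()->*fnConnectService)(...) is just an indirect way of calling cs->connect(...). A minimal sketch of the mechanism (my own illustration; the typedef shown here mirrors what the traits provide and is an assumption, not copied from the header):
// Pointer-to-member-function type matching the signature of ICameraService::connect.
typedef status_t (ICameraService::*TCamConnectService)(const sp<ICameraClient>&,
        int, const String16&, int, /*out*/ sp<ICamera>&);

TCamConnectService fn = &ICameraService::connect;   // what the traits store
// (cs.get()->*fn)(cl, cameraId, pkg, uid, device) is therefore equivalent to
// cs->connect(cl, cameraId, pkg, uid, device): a Binder call into the camera service.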
So the client-side connect() is handed over to ICameraService, and ICameraService is related to the server through a chain of inheritance.
Look back at the class declaration of CameraService:
class CameraService :
public BinderService<CameraService>,
public BnCameraService,
public IBinder::DeathRecipient,
public camera_module_callbacks_t
{
//...
};
It inherits from BnCameraService; the definition of BnCameraService is:
class BnCameraService: public BnInterface<ICameraService>
{
//...
};
Follow BnInterface<ICameraService> one level further:
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
//...
};
At this point, once ICameraService is substituted for INTERFACE, the whole chain of relations is clear: CameraService → BnCameraService → BnInterface<ICameraService> → ICameraService (plus BBinder), so little more needs to be said about it. A simplified sketch of the server-side dispatch follows.
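Concretely, when the client's Binder transaction arrives, BBinder::transact() hands the parcel to BnCameraService::onTransact(), which unpacks the arguments and calls the virtual connect() that CameraService overrides. A much-simplified sketch of this dispatch (my own reconstruction of the usual pattern in ICameraService.cpp, not a verbatim copy):
status_t BnCameraService::onTransact(uint32_t code, const Parcel& data,
                                     Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient =
                    interface_cast<ICameraClient>(data.readStrongBinder());
            int32_t cameraId = data.readInt32();
            String16 clientName = data.readString16();
            int32_t clientUid = data.readInt32();
            sp<ICamera> camera;
            // Virtual dispatch: this call runs CameraService::connect().
            status_t status = connect(cameraClient, cameraId,
                                      clientName, clientUid, /*out*/ camera);
            reply->writeNoException();
            reply->writeInt32(status);
            if (camera != NULL) {
                reply->writeInt32(1);
                reply->writeStrongBinder(camera->asBinder());
            } else {
                reply->writeInt32(0);
            }
            return NO_ERROR;
        }
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}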
And so the call finally arrives at the connect() member function of CameraService.
Now let us shift our viewpoint from the client object to the server object.
In frameworks/av/services/camera/libcameraservice/CameraService.cpp:
status_t CameraService::connect(
        const sp<ICameraClient>& cameraClient,
        int cameraId,
        const String16& clientPackageName,
        int clientUid,
        /*out*/
        sp<ICamera>& device) {

    //...
    sp<Client> client;
    {
        //...
        status = connectHelperLocked(/*out*/client,
                                     cameraClient,
                                     cameraId,
                                     clientPackageName,
                                     clientUid,
                                     callingPid);
    }
    // important: release the mutex here so the client can call back
    // into the service from its destructor (can be at the end of the call)

    // Hand the newly created camera client object back to the caller through the
    // out-parameter (reference); from there it travels back across Binder and up to
    // the Java layer. ("Through a pointer" is, of course, only a crude way of putting it.)
    device = client;
    return OK;
}
connectHelperLocked() is the key step here; it is the point in the whole camera framework where the camera client object is finally obtained:
status_t CameraService::connectHelperLocked(
        /*out*/
        sp<Client>& client,
        /*in*/
        const sp<ICameraClient>& cameraClient,
        int cameraId,
        const String16& clientPackageName,
        int clientUid,
        int callingPid,
        int halVersion,
        bool legacyMode) {

    //...
    client = new CameraClient(this, cameraClient,
            clientPackageName, cameraId,
            facing, callingPid, clientUid, getpid(), legacyMode);
    //...
    status_t status = connectFinishUnsafe(client, client->getRemote());
    //...
    mClient[cameraId] = client;
    //...
    return OK;
}
Good: this is where a CameraClient gets new'ed. Its definition is in frameworks/av/services/camera/libcameraservice/api1/CameraClient.h:
class CameraClient : public CameraService::Client
{
public:
// ICamera interface (see ICamera for details)
virtual void disconnect();
virtual status_t connect(const sp<ICameraClient>& client);
virtual status_t lock();
virtual status_t unlock();
virtual status_t setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer);
virtual void setPreviewCallbackFlag(int flag);
virtual status_t setPreviewCallbackTarget(
const sp<IGraphicBufferProducer>& callbackProducer);
virtual status_t startPreview();
virtual void stopPreview();
virtual bool previewEnabled();
virtual status_t storeMetaDataInBuffers(bool enabled);
virtual status_t startRecording();
virtual void stopRecording();
virtual bool recordingEnabled();
virtual void releaseRecordingFrame(const sp<IMemory>& mem);
virtual status_t autoFocus();
virtual status_t cancelAutoFocus();
virtual status_t takePicture(int msgType);
virtual status_t setParameters(const String8& params);
virtual String8 getParameters() const;
    virtual status_t        sendCommand(int32_t cmd, int32_t arg1, int32_t arg2);

    // Interface used by CameraService
    CameraClient(const sp<CameraService>& cameraService,
            const sp<ICameraClient>& cameraClient,
            const String16& clientPackageName,
            int cameraId,
            int cameraFacing,
            int clientPid,
            int clientUid,
            int servicePid,
            bool legacyMode = false);
    ~CameraClient();

    status_t initialize(camera_module_t *module);
    //...

private:
    //...
};
From this definition we can see that CameraClient derives from the inner class CameraService::Client.
Informally, CameraService::Client is the place that CameraService reserves in its heart for the client.
Every so-called remote call from the client eventually lands in CameraService::Client, and through inheritance the actual work naturally falls to a CameraClient instance.
Strictly speaking, we still cannot say at this point that the real camera client object has been fully created, because the camera is ultimately tied to a physical device; after new'ing a CameraClient, some device-level issues still have to be dealt with.
The line below deserves attention, because its result tells the upper layers whether the CameraClient was constructed successfully:
status_t status = connectFinishUnsafe(client, client->getRemote());
status_t CameraService::connectFinishUnsafe(const sp<BasicClient>& client,
                                            const sp<IBinder>& remoteCallback) {
    status_t status = client->initialize(mModule);
    if (status != OK) {
        ALOGE("%s: Could not initialize client from HAL module.", __FUNCTION__);
        return status;
    }
    if (remoteCallback != NULL) {
        remoteCallback->linkToDeath(this);
    }
    return OK;
}
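The linkToDeath(this) call works because CameraService also inherits from IBinder::DeathRecipient (see its class declaration above): if the app process dies, the service gets notified and can release the camera. A minimal sketch of that notification path (my own simplification, not the exact 5.1 implementation):
void CameraService::binderDied(const wp<IBinder>& who) {
    // The remote ICameraClient binder has gone away, i.e. the app process died.
    ALOGV("binderDied: camera client binder died");
    sp<BasicClient> client = getClientByRemote(who);
    if (client != 0) {
        // Tear down the client so the HAL device is freed for the next user.
        client->disconnect();
    }
}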
client->initialize() is implemented in frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp:
status_t CameraClient::initialize(camera_module_t *module) {
    int callingPid = getCallingPid();
    status_t res;

    // Verify ops permissions
    res = startCameraOps();
    if (res != OK) {
        return res;
    }

    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);

    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(&module->common);
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        mHardware.clear();
        return res;
    }

    // register the HAL-layer callbacks
    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)(uintptr_t)mCameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);

    LOG1("CameraClient::initialize X (pid %d, id %d)", callingPid, mCameraId);
    return OK;
}
mHardware->initialize() is implemented in frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h:
class CameraHardwareInterface : public virtual RefBase {
public:
status_t initialize(hw_module_t *module)
{
camera_module_t *cameraModule = reinterpret_cast<camera_module_t *>(module);
//....
}
};
Here the HAL-specific data structure hw_module_t appears; in other words, this is where the HAL layer comes into play. A sketch of what the open call typically looks like is shown below.
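Opening the device follows the standard Android HAL pattern: ask the module to open the device whose name is the camera id string, and keep the resulting camera_device_t handle. A rough sketch of that pattern (my own simplification, not the actual body of CameraHardwareInterface::initialize(); the name string here is an assumption standing in for the id built from mCameraId above):
camera_module_t *cameraModule = reinterpret_cast<camera_module_t *>(module);
camera_device_t *device = NULL;
const char *name = "0";   // camera id as a string, e.g. "0" or "1" (assumption)

// hw_module_t::methods->open() is the generic HAL entry point.
int rc = cameraModule->common.methods->open(&cameraModule->common, name,
                                            (hw_device_t **)&device);
if (rc != 0) {
    ALOGE("Could not open camera HAL device: %d", rc);
}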
So once a CameraClient has been new'ed, initialize() goes down into the HAL layer to handle the device-related work; if nothing goes wrong, the CameraClient object has truly been created successfully, and the status then travels level by level all the way back up to the app.
The app can then move on to the next step after opening the camera: preview.
That, however, is beyond the scope of this article on the two camera objects.
Once the camera client object has been created (the camera opened), all subsequent operations, preview, taking pictures and so on, rely on the camera object reference that was just returned. In android_hardware_Camera.cpp:
sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
{
    sp<Camera> camera;
    Mutex::Autolock _l(sLock);
    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(
            env->GetLongField(thiz, fields.context));
    if (context != NULL) {
        camera = context->getCamera();
    }
    ALOGI("get_native_camera: context=%p, camera=%p", context, camera.get());
    if (camera == 0) {
        jniThrowRuntimeException(env,
                "Camera is being used after Camera.release() was called");
    }

    if (pContext != NULL) *pContext = context;
    return camera;
}
get_native_camera() fetches the already-created Camera object (which is bound to the CameraClient inside the service). For example, taking a picture:
static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, jint msgType)
{
//...
sp<Camera> camera = get_native_camera(env, thiz, &context);
//...
}
Or starting the preview:
static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("startPreview");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;
    //...
}
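From here the C++ Camera proxy simply forwards the call over Binder to the CameraClient held by the service, which in turn drives the HAL. A rough sketch of that forwarding (my own illustration of the pattern used in Camera.cpp, not a verbatim copy):
status_t Camera::startPreview()
{
    ALOGV("startPreview");
    // mCamera is the sp<ICamera> filled in earlier by CameraService::connect();
    // on the server side this call lands in CameraClient::startPreview().
    sp<ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    return c->startPreview();
}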
And so on: the whole series of operations relies on that returned camera object reference, right up until release is called.
(over)
2016-01-1