Summary:
Volume control in the Android audio policy breaks down into the following topics:
1. How the software and hardware volume curves are loaded, and how to modify them
2. How the volume adjustment flow is implemented
3. The mapping between audio stream types and their aliases
1. Loading and modifying the software volume curves
Overview:
When the constructor of AudioPolicyManager runs, it parses the audio policy configuration file audio_policy_configuration.xml and obtains most of the audio policy information, including the hardware modules, input/output devices, and volume curves. This information is stored in AudioPolicyConfig.
Key classes:
Class | Role |
---|---|
AudioPolicyConfig | Stores audio policy information: hardware modules, input/output devices, volume curves, etc. |
IVolumeCurvesCollection | Base class for volume curve collections; defines the basic methods a curve collection must implement |
VolumeCurvesCollection | The working implementation; extends KeyedVector and maps audio_stream_type_t to VolumeCurvesForStream |
VolumeCurvesForStream | Extends KeyedVector; maps device_category to VolumeCurve |
VolumeCurve | Holds the concrete values of one parsed volume curve |
The constructor of AudioPolicyManager executes two methods:
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
: AudioPolicyManager(clientInterface, false /*forTesting*/)
{
loadConfig();
initialize();
}
The loadConfig method first checks the macro USE_XML_AUDIO_POLICY_CONF. If it is defined (it almost always is), the audio policy information is read from the XML configuration file, so audio_policy_configuration.xml gets parsed:
void AudioPolicyManager::loadConfig() {
#ifdef USE_XML_AUDIO_POLICY_CONF
if (deserializeAudioPolicyXmlConfig(getConfig()) != NO_ERROR) {
#else
if ((ConfigParsingUtils::loadConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE, getConfig()) != NO_ERROR)
&& (ConfigParsingUtils::loadConfig(AUDIO_POLICY_CONFIG_FILE, getConfig()) != NO_ERROR)) {
#endif
ALOGE("could not load audio policy configuration file, setting defaults");
getConfig().setDefault();
}
}
deserializeAudioPolicyXmlConfig(getConfig()) parses the configuration file; getConfig() supplies its argument, whose type is AudioPolicyConfig. The class is declared as follows:
class AudioPolicyConfig
{
public:
AudioPolicyConfig(HwModuleCollection &hwModules,
DeviceVector &availableOutputDevices,
DeviceVector &availableInputDevices,
sp<DeviceDescriptor> &defaultOutputDevices,
VolumeCurvesCollection *volumes = nullptr)
: mHwModules(hwModules),
mAvailableOutputDevices(availableOutputDevices),
mAvailableInputDevices(availableInputDevices),
mDefaultOutputDevices(defaultOutputDevices),
mVolumeCurves(volumes),
mIsSpeakerDrcEnabled(false)
......
As you can see, AudioPolicyConfig carries a lot of information; mVolumeCurves holds the volume curves. Next, the implementation of deserializeAudioPolicyXmlConfig:
#ifdef USE_XML_AUDIO_POLICY_CONF
// Treblized audio policy xml config will be located in /odm/etc or /vendor/etc.
static const char *kConfigLocationList[] =
{"/odm/etc", "/vendor/etc", "/system/etc"};
static const int kConfigLocationListSize =
(sizeof(kConfigLocationList) / sizeof(kConfigLocationList[0]));
static status_t deserializeAudioPolicyXmlConfig(AudioPolicyConfig &config) {
char audioPolicyXmlConfigFile[AUDIO_POLICY_XML_CONFIG_FILE_PATH_MAX_LENGTH];
std::vector<const char*> fileNames;
status_t ret;
if (property_get_bool(".a2dp_offload.supported", false) &&
property_get_bool(".a2dp_offload.disabled", false)) {
// A2DP offload supported but disabled: try to use special XML file
fileNames.push_back(AUDIO_POLICY_A2DP_OFFLOAD_DISABLED_XML_CONFIG_FILE_NAME);
}
fileNames.push_back(AUDIO_POLICY_XML_CONFIG_FILE_NAME);
for (const char* fileName : fileNames) {
for (int i = 0; i < kConfigLocationListSize; i++) {
PolicySerializer serializer;
snprintf(audioPolicyXmlConfigFile, sizeof(audioPolicyXmlConfigFile),
"%s/%s", kConfigLocationList[i], fileName);
ret = serializer.deserialize(audioPolicyXmlConfigFile, config);
if (ret == NO_ERROR) {
return ret;
}
}
}
return ret;
}
#endif
This code loops over the three directories "/odm/etc", "/vendor/etc", and "/system/etc", joining each with AUDIO_POLICY_XML_CONFIG_FILE_NAME and trying to parse the resulting file, where AUDIO_POLICY_XML_CONFIG_FILE_NAME is audio_policy_configuration.xml:
#define AUDIO_POLICY_XML_CONFIG_FILE_NAME "audio_policy_configuration.xml"
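The search order above matters: the first directory that contains the file wins, so a config in /odm/etc shadows one in /vendor/etc or /system/etc. A minimal sketch of that lookup (class and method names are mine, not AOSP's; the real code additionally runs a PolicySerializer on each candidate rather than just checking for existence):

```java
import java.io.File;

// Hypothetical sketch of the directory search in deserializeAudioPolicyXmlConfig:
// try each location in priority order and return the first existing file.
class ConfigLocatorSketch {
    static final String[] CONFIG_LOCATIONS = { "/odm/etc", "/vendor/etc", "/system/etc" };

    // Returns the path of the first matching file, or null if none exists
    // (the real code then falls back to AudioPolicyConfig::setDefault()).
    static String findConfig(String[] locations, String fileName) {
        for (String dir : locations) {
            File candidate = new File(dir, fileName);
            if (candidate.isFile()) {
                return candidate.getPath();
            }
        }
        return null;
    }
}
```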
audio_policy_configuration.xml pulls in two other XML files via xi:include:
......
<!-- Volume section -->
<xi:include href="audio_policy_volumes.xml"/>
<xi:include href="default_volume_tables.xml"/>
......
audio_policy_volumes.xml defines the relationship among audio streams, output device categories, and volume curves.
Different streams use different curves, and the same stream uses a different curve depending on the output device, so all three must be tied together. Once parsed, this XML relationship is represented in code by the chain VolumeCurvesCollection - VolumeCurvesForStream - VolumeCurve.
audio_policy_volumes.xml:
<volume stream="(stream type)" deviceCategory="(device category)"
ref="(volume curve)"/>
<volume stream="AUDIO_STREAM_RING" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA"
ref="DEFAULT_DEVICE_CATEGORY_EXT_MEDIA_VOLUME_CURVE"/>
<volume stream="AUDIO_STREAM_RING" deviceCategory="DEVICE_CATEGORY_HEARING_AID"
ref="DEFAULT_HEARING_AID_VOLUME_CURVE"/>
<volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_HEADSET"
ref="DEFAULT_MEDIA_VOLUME_CURVE"/>
<volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_SPEAKER"
ref="DEFAULT_DEVICE_CATEGORY_SPEAKER_VOLUME_CURVE"/>
<volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_EARPIECE"
ref="DEFAULT_MEDIA_VOLUME_CURVE"/>
<volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA"
ref="DEFAULT_MEDIA_VOLUME_CURVE"/>
......
default_volume_tables.xml defines the concrete curve values, e.g. the x/y coordinates of the points on DEFAULT_MEDIA_VOLUME_CURVE:
......
<reference name="DEFAULT_MEDIA_VOLUME_CURVE">
<!-- Default Media reference Volume Curve -->
<point>1,-5800</point>
<point>20,-4000</point>
<point>60,-1700</point>
<point>100,0</point>
</reference>
<reference name="DEFAULT_DEVICE_CATEGORY_HEADSET_VOLUME_CURVE">
<!--Default Volume Curve -->
<point>1,-4950</point>
<point>33,-3350</point>
<point>66,-1700</point>
<point>100,0</point>
</reference>
......
So to modify a software volume curve, editing default_volume_tables.xml is all it takes.
PolicySerializer's deserialize method parses the stream-device-curve relationships in audio_policy_volumes.xml and converts them from XML into code: the VolumeCurvesCollection - VolumeCurvesForStream - VolumeCurve chain.
VolumeCurvesCollection maps stream types to VolumeCurvesForStream objects; its constructor pre-creates an empty VolumeCurvesForStream for every stream type:
class VolumeCurvesCollection : public KeyedVector<audio_stream_type_t, VolumeCurvesForStream>,
public IVolumeCurvesCollection
{
public:
VolumeCurvesCollection()
{
// Create an empty collection of curves
for (ssize_t i = 0 ; i < AUDIO_STREAM_CNT; i++) {
audio_stream_type_t stream = static_cast<audio_stream_type_t>(i);
KeyedVector::add(stream, VolumeCurvesForStream());
}
}
VolumeCurvesForStream stores the curve a given stream uses on each device category; these entries are filled in while audio_policy_volumes.xml is parsed.
VolumeCurve stores the concrete curve points and is filled in while default_volume_tables.xml is parsed.
The deserialize method parses many things; here only the parts related to volume curves matter:
status_t PolicySerializer::deserialize(const char *configFile, AudioPolicyConfig &config)
{
......
// deserialize volume section
VolumeTraits::Collection volumes;
deserializeCollection<VolumeTraits>(doc, cur, volumes, &config);
config.setVolumes(volumes);
......
}
First a VolumeTraits::Collection object named volumes is defined; VolumeTraits contains this typedef:
typedef VolumeCurvesCollection Collection;
So volumes is really a VolumeCurvesCollection. The template deserializeCollection<VolumeTraits> then walks the XML tags one by one and fills volumes, populating the empty VolumeCurvesForStream entries, which in turn populate the VolumeCurve objects they hold. Finally setVolumes(volumes) assigns the result to the mVolumeCurves member of AudioPolicyConfig:
void setVolumes(const VolumeCurvesCollection &volumes)
{
if (mVolumeCurves != nullptr) {
*mVolumeCurves = volumes;
}
}
The actual XML parsing happens inside deserializeCollection<VolumeTraits>(doc, cur, volumes, &config). Its internals are not needed to follow the rest of this article, so they are not covered here; if you want to dig deeper, read the implementation of deserializeCollection in the source.
At this point the software volume curves are fully loaded. When the volume is adjusted later, the stream parameter first selects a VolumeCurvesForStream, the device parameter then selects the concrete VolumeCurve, and the index parameter together with the curve yields the volume in dB.
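The index-to-dB step above can be sketched as a piecewise-linear lookup over the <point>x,y</point> pairs from default_volume_tables.xml, where x is the normalized volume index (0..100) and y is the attenuation in millibels (mB = dB * 100). This is a simplified stand-in for VolumeCurve::volIndexToDb, with made-up class and field names, not the AOSP implementation itself:

```java
// Hypothetical sketch: linear interpolation between curve points,
// roughly what VolumeCurve::volIndexToDb does for a software curve.
class VolumeCurveSketch {
    private final int[] idx; // x values: normalized volume index 0..100
    private final int[] mb;  // y values: attenuation in millibels

    VolumeCurveSketch(int[] idx, int[] mb) {
        this.idx = idx;
        this.mb = mb;
    }

    /** Linearly interpolate the attenuation (in dB) for a given index. */
    float volIndexToDb(int index) {
        if (index <= idx[0]) return mb[0] / 100.0f; // clamp below first point
        for (int i = 1; i < idx.length; i++) {
            if (index <= idx[i]) {
                float t = (index - idx[i - 1]) / (float) (idx[i] - idx[i - 1]);
                return (mb[i - 1] + t * (mb[i] - mb[i - 1])) / 100.0f;
            }
        }
        return mb[mb.length - 1] / 100.0f; // clamp above last point
    }
}
```

With the DEFAULT_MEDIA_VOLUME_CURVE points shown earlier, index 100 maps to 0 dB, index 60 to -17 dB, and an intermediate index like 40 falls on the line between the 20 and 60 points.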
2. The software volume adjustment flow
Overview:
Android exposes two APIs for adjusting stream volume: adjustStreamVolume, which takes a direction, and setStreamVolume, which takes an absolute index. The flow first finds the output device from the stream type, then uses the stream type and device to locate the volume curve and compute the dB value, and finally applies the value to the corresponding mixer thread, PlaybackThread. Note that adjusting the volume down to 0 is called a mute adjust, and Android gives this case some special handling.
Key classes:
Class | Description |
---|---|
VolumeStreamState | Holds all volume-related state for one audio stream type |
Now look at the adjustStreamVolume method in AudioService (only the key parts are shown):
protected void adjustStreamVolume(int streamType, int direction, int flags,
String callingPackage, String caller, int uid) {
......
ensureValidDirection(direction);//validate the arguments
ensureValidStreamType(streamType);
boolean isMuteAdjust = isMuteAdjust(direction);//does this adjustment mute/unmute? if so, extra handling is needed
......
int streamTypeAlias = mStreamVolumeAlias[streamType];//map streamType to its alias; aliases group stream types with similar behavior
VolumeStreamState streamState = mStreamStates[streamTypeAlias];//get the VolumeStreamState for the alias
//Resolve the output device for this alias (the final implementation lives in the native policy code):
//audio_stream_type_t maps to a routing_strategy, which maps to a device; if at least one
//output on that device is currently active for this stream, that device is returned.
final int device = getDeviceForStream(streamTypeAlias);
......
int step;
......
//clear any pending volume commands
synchronized (mSafeMediaVolumeState) {
mPendingVolumeCommand = null;
}
......
//rescaleIndex converts a volume delta from the source stream type to the target stream type.
//Different stream types have different index ranges, so this conversion is required.
//VolumeStreamState stores indices at 10x the real value; they are divided by 10 before reaching the native layer.
step = rescaleIndex(10, streamType, streamTypeAlias);
// ringer mode handling
if (((flags & AudioManager.FLAG_ALLOW_RINGER_MODES) != 0) ||
(streamTypeAlias == getUiSoundsStreamType())) {
int ringerMode = getRingerModeInternal();
......
}
......
//read the current (old) index
int oldIndex = mStreamStates[streamType].getIndex(device);
//finally decide whether to apply the volume change
if (adjustVolume && (direction != AudioManager.ADJUST_SAME)) {
mAudioHandler.removeMessages(MSG_UNMUTE_STREAM);
//special handling for the mute case
if (isMuteAdjust) {
boolean state;
if (direction == AudioManager.ADJUST_TOGGLE_MUTE) {
state = !streamState.mIsMuted;
} else {
state = direction == AudioManager.ADJUST_MUTE;
}
if (streamTypeAlias == AudioSystem.STREAM_MUSIC) {
setSystemAudioMute(state);
}
for (int stream = 0; stream < mStreamStates.length; stream++) {
if (streamTypeAlias == mStreamVolumeAlias[stream]) {
if (!(readCameraSoundForced()
&& (mStreamStates[stream].getStreamType()
== AudioSystem.STREAM_SYSTEM_ENFORCED))) {
//mute handling ends up calling VolumeStreamState's mute method
mStreamStates[stream].mute(state);
}
}
}
//handling of the non-mute case
} else if ((direction == AudioManager.ADJUST_RAISE) &&
!checkSafeMediaVolume(streamTypeAlias, aliasIndex + step, device)) {
Log.e(TAG, "adjustStreamVolume() safe volume index = " + oldIndex);
mVolumeController.postDisplaySafeVolumeWarning(flags);
} else if (streamState.adjustIndex(direction * step, device, caller)//key step: adjustIndex stores the newly computed index in VolumeStreamState
|| streamState.mIsMuted) {
// Post message to set system volume (it in turn will post a
// message to persist).
if (streamState.mIsMuted) {
// Unmute the stream if it was previously muted
if (direction == AudioManager.ADJUST_RAISE) {
// unmute immediately for volume up
streamState.mute(false);
} else if (direction == AudioManager.ADJUST_LOWER) {
if (mIsSingleVolume) {
sendMsg(mAudioHandler, MSG_UNMUTE_STREAM, SENDMSG_QUEUE,
streamTypeAlias, flags, null, UNMUTE_STREAM_DELAY);
}
}
}
//push down to the native layer; note that this streamState object, which holds the index, is passed along
sendMsg(mAudioHandler,
MSG_SET_DEVICE_VOLUME,
SENDMSG_QUEUE,
device,
0,
streamState,
0);
}
int newIndex = mStreamStates[streamType].getIndex(device);
......
int index = mStreamStates[streamType].getIndex(device);
sendVolumeUpdate(streamType, oldIndex, index, flags);
}
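The rescaleIndex conversion in the comments above can be sketched as a proportional mapping between index ranges with round-to-nearest. This is a simplified stand-in with made-up names (the real AudioService method reads the min/max indices from the VolumeStreamState objects), shown only to illustrate why the conversion is needed when stream ranges differ:

```java
// Hypothetical sketch of rescaleIndex: convert a (10x-scaled) volume step
// between two streams whose index ranges differ, rounding to the nearest step.
class RescaleSketch {
    static int rescaleIndex(int index, int srcMaxIndex, int dstMaxIndex) {
        // proportional scaling with round-half-up via the +srcMax/2 term
        return (index * dstMaxIndex + srcMaxIndex / 2) / srcMaxIndex;
    }
}
```

For identical ranges the index passes through unchanged; for a destination range 1.5x the source, an index of 50 becomes 75.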
First look at VolumeStreamState's mute method, which handles muting. It sets the mIsMuted field (the key point), then posts a message, and finally sends a broadcast:
public void mute(boolean state) {
boolean changed = false;
synchronized (VolumeStreamState.class) {
if (state != mIsMuted) {
changed = true;
mIsMuted = state;
sendMsg(mAudioHandler,
MSG_SET_ALL_VOLUMES,
SENDMSG_QUEUE,
0,
0,
this, 0);
}
}
if (changed) {
// Stream mute changed, fire the intent.
Intent intent = new Intent(AudioManager.STREAM_MUTE_CHANGED_ACTION);
intent.putExtra(AudioManager.EXTRA_VOLUME_STREAM_TYPE, mStreamType);
intent.putExtra(AudioManager.EXTRA_STREAM_VOLUME_MUTED, state);
sendBroadcastToAll(intent);
}
}
handleMessage processes this message by calling setAllVolumes:
case MSG_SET_ALL_VOLUMES:
setAllVolumes((VolumeStreamState) msg.obj);
break;
which in turn calls VolumeStreamState's applyAllVolumes:
private void setAllVolumes(VolumeStreamState streamState) {
// Apply volume
streamState.applyAllVolumes();
// Apply change to all streams using this one as alias
int numStreamTypes = AudioSystem.getNumStreamTypes();
for (int streamType = numStreamTypes - 1; streamType >= 0; streamType--) {
if (streamType != streamState.mStreamType &&
mStreamVolumeAlias[streamType] == streamState.mStreamType) {
mStreamStates[streamType].applyAllVolumes();
}
}
}
In applyAllVolumes, if mIsMuted is set the index is forced to 0, and AudioSystem's setStreamVolumeIndex passes the index down to the native layer:
public void applyAllVolumes() {
synchronized (VolumeStreamState.class) {
// apply device specific volumes first
int index;
......
if (mIsMuted) {
index = 0;
} else {
index = (getIndex(AudioSystem.DEVICE_OUT_DEFAULT) + 5)/10;
}
AudioSystem.setStreamVolumeIndex(
mStreamType, index, AudioSystem.DEVICE_OUT_DEFAULT);
}
}
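The index selection above boils down to two rules: a muted stream always pushes 0 down, and otherwise the 10x-scaled internal index is rounded back to the UI scale by (index + 5) / 10. A minimal sketch of just that arithmetic (class and method names are mine):

```java
// Hypothetical sketch of applyAllVolumes' index selection:
// indices are stored at 10x resolution, and mute overrides any stored value.
class ApplyVolumeSketch {
    static int effectiveIndex(boolean isMuted, int internalIndex) {
        if (isMuted) return 0;           // mute wins over any stored index
        return (internalIndex + 5) / 10; // round 10x-scaled index to the UI scale
    }
}
```

The +5 makes the integer division round to the nearest step instead of truncating.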
Now back to the volume-setting step at the end of adjustStreamVolume, which likewise posts a message:
sendMsg(mAudioHandler,
MSG_SET_DEVICE_VOLUME,
SENDMSG_QUEUE,
device,
0,
streamState,
0);
The subsequent call chain is analogous: setDeviceVolume -> mStreamStates[streamType].applyDeviceVolume_syncVSS -> AudioSystem.setStreamVolumeIndex.
So muting and normal volume setting end in the same native call; the only difference is that the mute path sets the mIsMuted flag, and while mIsMuted is true the index pushed down is always 0. This makes VolumeStreamState's mute method important: unless mute(false) clears the muted state, no volume you set will have any effect.
Notice also that setStreamVolumeIndex takes three parameters: stream type, volume index, and output device.
Recalling the software volume curves from section 1, these are exactly the parameters needed to find the right curve when computing the concrete volume. And indeed, AudioPolicyManager uses mVolumeCurves->volIndexToDb to compute volumeDB (the details can be read in the source alongside the explanation above), which is finally applied to the mixer thread, PlaybackThread. The whole call flow can be drawn as a sequence diagram:
The final implementation in AudioFlinger::PlaybackThread simply assigns a member variable:
void AudioFlinger::PlaybackThread::setStreamVolume(audio_stream_type_t stream, float value)
{
Mutex::Autolock _l(mLock);
mStreamTypes[stream].volume = value;
broadcast_l();
}
That assignment alone has no audible effect, so when does the new volume actually take hold?
In PlaybackThread::threadLoop, prepareTracks_l is called to do preparation work before the audio data is processed:
AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(
Vector< sp<Track> > *tracksToRemove)
{
......
float volume = masterVolume
* mStreamTypes[track->streamType()].volume
* vh;
......
}
Here the volume set earlier is read back and combined into a final gain that accompanies the audio down to the hardware. This completes the software volume setting flow.
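The gain combination in prepareTracks_l is purely multiplicative: the master volume, the per-stream volume set above, and the per-track handler gain (vh) are all linear scale factors whose product scales each PCM sample. A small sketch of that idea (class name and the float-sample representation are mine; the real mixer works on its own internal formats):

```java
// Hypothetical sketch of the multiplicative gain combination in prepareTracks_l.
class MixGainSketch {
    // volume = masterVolume * streamVolume * vh, all linear factors in [0, 1]
    static float combinedGain(float masterVolume, float streamVolume, float trackGain) {
        return masterVolume * streamVolume * trackGain;
    }

    // Applying the combined gain means scaling every sample by it.
    static void applyGain(float[] samples, float gain) {
        for (int i = 0; i < samples.length; i++) {
            samples[i] *= gain;
        }
    }
}
```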
3. The hardware volume flow and hardware volume curves
Overview:
A hardware volume curve can be implemented either in the Java car audio service or in the HAL. In CarAudioService, enabling mUseDynamicRouting sends setGroupVolume down a completely different road: setting hardware volume. CarVolumeGroup computes a gain, gainInMillibels, from the value passed to setGroupVolume and the information in audio_policy_configuration.xml. gainInMillibels then travels all the way to the HAL, where the volume is computed once more against the hardware volume curve, and finally the BSP-provided interface applies it.
Key classes:
Class | Description |
---|---|
CarVolumeGroup | Groups several audio context types together and stores the contextNumber-busNumber-CarAudioDeviceInfo mapping for the group |
CarAudioDeviceInfo | Abstraction describing one audio output device |
AudioGain | One attribute of an audio device; CarAudioDeviceInfo holds an instance |
AudioGainConfig | Wraps a gain value together with some related information |
AudioPortConfig | Wraps an AudioGain and an AudioGainConfig; can be passed to the native layer |
CarAudioService exposes an interface named setGroupVolume, which checks the mUseDynamicRouting flag: if false, it adjusts software volume; if true, it switches to adjusting hardware volume:
@Override
public void setGroupVolume(int groupId, int index, int flags) {
synchronized (mImplLock) {
enforcePermission(Car.PERMISSION_CAR_CONTROL_AUDIO_VOLUME);
callbackGroupVolumeChange(groupId, flags);
// For legacy stream type based volume control
if (!mUseDynamicRouting) {
mAudioManager.setStreamVolume(STREAM_TYPES[groupId], index, flags);
return;
}
CarVolumeGroup group = getCarVolumeGroup(groupId);
Log.d(CarLog.TAG_AUDIO, "group " + groupId + " setCurrentGainIndex" + index);
group.setCurrentGainIndex(index);
}
}
The groupId parameter decides which CarVolumeGroup's volume gets set. To explain CarVolumeGroup, some background on CarAudioService is needed:
During initialization, CarAudioService obtains the CarAudioDeviceInfo objects that describe the output devices (again, by parsing audio_policy_configuration.xml). Each CarAudioDeviceInfo has a device number called busNumber, and a table maps each audio context (contextNumber) to a bus; it can be modified to suit your product:
static int sContextToBusMap[] = {
-1, // INVALID
0, // MUSIC_CONTEXT
1, // NAVIGATION_CONTEXT
2, // VOICE_COMMAND_CONTEXT
3, // CALL_RING_CONTEXT
4, // CALL_CONTEXT
5, // ALARM_CONTEXT
6, // NOTIFICATION_CONTEXT
7, // SYSTEM_SOUND_CONTEXT
};
This pins down the contextNumber-busNumber-CarAudioDeviceInfo mapping. For example, MUSIC_CONTEXT maps to busNumber 0, and bus 0 in turn maps to one CarAudioDeviceInfo. To make the audio policy easier to manage and customize, Android further groups the contexts in car_volume_groups.xml:
<volumeGroups xmlns:car="/apk/res-auto">
<group>
<context car:context="music"/>
<context car:context="call_ring"/>
<context car:context="notification"/>
<context car:context="system_sound"/>
</group>
<group>
<context car:context="navigation"/>
<context car:context="voice_command"/>
</group>
<group>
<context car:context="call"/>
</group>
<group>
<context car:context="alarm"/>
</group>
</volumeGroups>
Also during CarAudioService initialization, car_volume_groups.xml is parsed and each group becomes one CarVolumeGroup, which stores the contextNumber-busNumber-CarAudioDeviceInfo mapping for its contexts. The groupId parameter of setGroupVolume thus conveniently selects which audio contexts, and hence which output devices, get their hardware volume adjusted.
setGroupVolume then calls setCurrentGainIndex on the group:
group.setCurrentGainIndex(index);
CarVolumeGroup's setCurrentGainIndex first calls getGainForIndex, which computes gainInMillibels from the minimum gain and the step size; if the step is 1 and the minimum gain is 0 (the usual case), gainInMillibels simply equals the index. It then walks every audio device in the CarVolumeGroup, calling setCurrentGain on each, and finally persists the index. In CarVolumeGroup:
void setCurrentGainIndex(int gainIndex) {
int gainInMillibels = getGainForIndex(gainIndex);
Preconditions.checkArgument(
gainInMillibels >= mMinGain && gainInMillibels <= mMaxGain,
"Gain out of range (" +
mMinGain + ":" +
mMaxGain +") " +
gainInMillibels + "index " +
gainIndex);
for (int i = 0; i < mBusToCarAudioDeviceInfos.size(); i++) {
CarAudioDeviceInfo info = mBusToCarAudioDeviceInfos.valueAt(i);
info.setCurrentGain(gainInMillibels);
}
mCurrentGainIndex = gainIndex;
Settings.Global.putInt(mContentResolver,
CarAudioManager.getVolumeSettingsKeyForGroup(mId), gainIndex);
}
getGainForIndex, which computes the gain:
private int getGainForIndex(int gainIndex) {
return mMinGain + gainIndex * mStepSize;
}
This is the first place where a hardware volume curve can be implemented.
The parameters used here all come from the <gains> element in audio_policy_configuration.xml: mMinGain corresponds to minValueMB (usually 0), the minimum gain; mStepSize corresponds to stepValueMB (usually 1), the step size.
You can replace this linear mapping with your own volume algorithm, or drive it from an XML description of a curve. Modifying getGainForIndex in CarVolumeGroup is enough to introduce a custom hardware volume curve.
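To make the customization point concrete, here is a sketch that puts the original linear mapping next to a hypothetical table-driven replacement. The class name and the table values are made up for illustration; only the linear formula mirrors the getGainForIndex shown above:

```java
// Sketch: the stock linear gain mapping vs. a hand-tuned curve table.
class GainCurveSketch {
    // Original mapping: gainInMillibels = mMinGain + index * mStepSize
    static int linearGain(int minGain, int stepSize, int index) {
        return minGain + index * stepSize;
    }

    // Hypothetical replacement: a per-index millibel table (values illustrative only)
    static final int[] CUSTOM_CURVE_MB = { -6000, -4000, -2500, -1200, 0 };

    static int tableGain(int index) {
        // clamp the index into the table's valid range
        int clamped = Math.max(0, Math.min(index, CUSTOM_CURVE_MB.length - 1));
        return CUSTOM_CURVE_MB[clamped];
    }
}
```

Whatever mapping you choose, the result must stay inside [mMinGain, mMaxGain], since setCurrentGainIndex checks that range.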
Continuing down the path, setCurrentGain is then called on every audio device in the CarVolumeGroup.
CarAudioDeviceInfo's setCurrentGain first validates the argument, then fetches the AudioGain and wraps it together with the gain value into an AudioGainConfig, obtains an AudioDevicePort via getAudioDevicePort(), and finally calls AudioManager's setAudioPortGain, where the two are wrapped into an AudioPortConfig, a class that can be passed through to the native layer:
void setCurrentGain(int gainInMillibels) {
// Clamp the incoming value to our valid range. Out of range values ARE legal input
if (gainInMillibels < mMinGain) {
gainInMillibels = mMinGain;
} else if (gainInMillibels > mMaxGain) {
gainInMillibels = mMaxGain;
}
// Push the new gain value down to our underlying port which will cause it to show up
// at the HAL.
AudioGain audioGain = getAudioGain();
if (audioGain == null) {
Log.e(CarLog.TAG_AUDIO, "getAudioGain() returned null.");
return;
}
// size of gain values is 1 in MODE_JOINT
AudioGainConfig audioGainConfig = audioGain.buildConfig(
AudioGain.MODE_JOINT,
audioGain.channelMask(),
new int[] { gainInMillibels },
0);
if (audioGainConfig == null) {
Log.e(CarLog.TAG_AUDIO, "Failed to construct AudioGainConfig");
return;
}
Log.d(CarLog.TAG_AUDIO, "setAudioPortGain: " + getAudioDevicePort() + audioGainConfig);
int r = AudioManager.setAudioPortGain(getAudioDevicePort(), audioGainConfig);
if (r == AudioManager.SUCCESS) {
// Since we can't query for the gain on a device port later,
// we have to remember what we asked for
mCurrentGain = gainInMillibels;
} else {
Log.e(CarLog.TAG_AUDIO, "Failed to setAudioPortGain: " + r);
}
}
So the call ends in AudioManager's setAudioPortGain, which wraps the AudioPort and the AudioGainConfig into an AudioPortConfig and calls AudioSystem.setAudioPortConfig(config):
public static int setAudioPortGain(AudioPort port, AudioGainConfig gain) {
if (port == null || gain == null) {
return ERROR_BAD_VALUE;
}
AudioPortConfig activeConfig = port.activeConfig();
AudioPortConfig config = new AudioPortConfig(port, activeConfig.samplingRate(),
activeConfig.channelMask(), activeConfig.format(), gain);
config.mConfigMask = AudioPortConfig.GAIN;
return AudioSystem.setAudioPortConfig(config);
}
From here on it is a familiar story: the usual suspects of the audio system all appear (AudioSystem, AudioPolicyService, AudioPolicyManager, AudioFlinger, and so on), and via JNI and HIDL the call finally reaches the BSP-provided interface in the HAL.
A few points worth noting along the way:
1. Once the parameters are handed over, the logic actually becomes simpler: most of the policy decisions were already made, and the remaining layers mostly just pass the parameters along.
2. After AudioSystem, the call passes through android_media_AudioSystem.cpp (not shown in the diagram), part of the JNI layer, where the Java AudioGainConfig is converted into the C++ struct audio_port_config by convertAudioPortConfigToNative, a very common pattern in JNI code.
3. Just before entering the HAL, the hw module representing the device is extracted from the audio_port_config struct, the AudioHwDevice obtained at initialization is looked up in AudioFlinger by that module, and the device's setAudioPortConfig is called:
/* Set audio port configuration */
status_t AudioFlinger::PatchPanel::setAudioPortConfig(const struct audio_port_config *config)
{
ALOGV("setAudioPortConfig");
sp<AudioFlinger> audioflinger = mAudioFlinger.promote();
if (audioflinger == 0) {
return NO_INIT;
}
audio_module_handle_t module;
if (config->type == AUDIO_PORT_TYPE_DEVICE) {
module = config->ext.device.hw_module;
} else {
module = config->ext.mix.hw_module;
}
ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(module);
if (index < 0) {
ALOGW("setAudioPortConfig() bad hw module %d", module);
return BAD_VALUE;
}
AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
return audioHwDevice->hwDevice()->setAudioPortConfig(config);
}
We finally reach the HAL's adev_set_audio_port_config. Each platform implements it differently; the stock HAL implementation lives in audio_hw.c:
static int adev_set_audio_port_config(struct audio_hw_device *dev,
const struct audio_port_config *config) {
int ret = 0;
struct generic_audio_device *adev = (struct generic_audio_device *)dev;
const char *bus_address = config->ext.device.address;
struct generic_stream_out *out = hashmapGet(adev->out_bus_stream_map, bus_address);
if (out) {
pthread_mutex_lock(&out->lock);
int gainIndex = (config->gain.values[0] - out->gain_stage.min_value) /
out->gain_stage.step_value;
int totalSteps = (out->gain_stage.max_value - out->gain_stage.min_value) /
out->gain_stage.step_value;
int minDb = out->gain_stage.min_value / 100;
int maxDb = out->gain_stage.max_value / 100;
// curve: 10^((minDb + (maxDb - minDb) * gainIndex / totalSteps) / 20)
out->amplitude_ratio = pow(10,
(minDb + (maxDb - minDb) * (gainIndex / (float)totalSteps)) / 20);
pthread_mutex_unlock(&out->lock);
ALOGD("%s: set audio gain: %f on %s",
__func__, out->amplitude_ratio, bus_address);
} else {
ALOGE("%s: can not find output stream by bus_address:%s", __func__, bus_address);
ret = -EINVAL;
}
return ret;
}
Notice that the volume is computed here once more:
int gainIndex = (config->gain.values[0] - out->gain_stage.min_value) /
out->gain_stage.step_value;
This is the second place where a hardware volume curve can be implemented: modify adev_set_audio_port_config to apply a curve of your own.
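The arithmetic in the stock HAL above is worth restating: the incoming millibel gain is converted back to an index against the gain stage, mapped linearly onto the dB range, and then turned into a linear amplitude ratio via 10^(dB/20). A sketch of just that math, transliterated from the C code (the class and method names are mine):

```java
// Sketch of the gain math in adev_set_audio_port_config:
// millibel gain -> index -> dB on a straight line -> linear amplitude ratio.
class HalGainSketch {
    static double amplitudeRatio(int gainMb, int minMb, int maxMb, int stepMb) {
        int gainIndex = (gainMb - minMb) / stepMb;        // recover the index
        int totalSteps = (maxMb - minMb) / stepMb;        // size of the index range
        double minDb = minMb / 100.0;                     // millibels -> dB
        double maxDb = maxMb / 100.0;
        // curve: 10^((minDb + (maxDb - minDb) * gainIndex / totalSteps) / 20)
        return Math.pow(10, (minDb + (maxDb - minDb) * gainIndex / (double) totalSteps) / 20);
    }
}
```

At the top of the range the ratio is 1.0 (no attenuation); at the bottom of a -60 dB range it is 0.001. Replacing this straight line in dB with your own mapping is exactly what a custom HAL-side curve amounts to.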
4. Audio stream types and their aliases
Audio stream types are represented by constants defined in AudioSystem, e.g. AudioSystem.STREAM_MUSIC. Since Android 5.0, playback APIs are encouraged to use audio attributes (AudioAttributes) instead of stream types, because AudioAttributes carries more information: content type, usage, flags, and so on. Usage describes the playback scenario, resembles the old stream type definitions, and maps onto them. The mapping is implemented in AudioAttributes:
private static int toVolumeStreamType(boolean fromGetVolumeControlStream, AudioAttributes aa) {
// flags to stream type mapping
if ((aa.getFlags() & FLAG_AUDIBILITY_ENFORCED) == FLAG_AUDIBILITY_ENFORCED) {
return fromGetVolumeControlStream ?
AudioSystem.STREAM_SYSTEM : AudioSystem.STREAM_SYSTEM_ENFORCED;
}
if ((aa.getFlags() & FLAG_SCO) == FLAG_SCO) {
return fromGetVolumeControlStream ?
AudioSystem.STREAM_VOICE_CALL : AudioSystem.STREAM_BLUETOOTH_SCO;
}
// usage to stream type mapping
switch (aa.getUsage()) {
case USAGE_MEDIA:
case USAGE_GAME:
return AudioSystem.STREAM_MUSIC;
// this is navi now by SongJie@2020-10-21
case USAGE_ASSISTANCE_NAVIGATION_GUIDANCE:
return AudioSystem.STREAM_SYSTEM_ENFORCED;
case USAGE_ASSISTANCE_SONIFICATION:
return AudioSystem.STREAM_SYSTEM;
case USAGE_VOICE_COMMUNICATION:
return AudioSystem.STREAM_VOICE_CALL;
case USAGE_VOICE_COMMUNICATION_SIGNALLING:
return fromGetVolumeControlStream ?
AudioSystem.STREAM_VOICE_CALL : AudioSystem.STREAM_DTMF;
case USAGE_ALARM:
return AudioSystem.STREAM_ALARM;
case USAGE_NOTIFICATION_RINGTONE:
return AudioSystem.STREAM_RING;
case USAGE_NOTIFICATION:
case USAGE_NOTIFICATION_COMMUNICATION_REQUEST:
case USAGE_NOTIFICATION_COMMUNICATION_INSTANT:
case USAGE_NOTIFICATION_COMMUNICATION_DELAYED:
case USAGE_NOTIFICATION_EVENT:
return AudioSystem.STREAM_NOTIFICATION;
case USAGE_ASSISTANT:
case USAGE_ASSISTANCE_ACCESSIBILITY:
return AudioSystem.STREAM_ACCESSIBILITY;
case USAGE_UNKNOWN:
return AudioSystem.STREAM_MUSIC;
default:
if (fromGetVolumeControlStream) {
throw new IllegalArgumentException("Unknown usage value " + aa.getUsage() +
" in audio attributes");
} else {
return AudioSystem.STREAM_MUSIC;
}
}
}
At the same time, the stream type constants defined on the Java side correspond one-to-one with the native audio_stream_type_t enum, so JNI code frequently casts between them directly, for example:
AudioSystem::setStreamVolumeIndex(static_cast <audio_stream_type_t>(stream),
index,
(audio_devices_t)device));
That is all. Corrections and discussion are welcome!