Virtual Mic & Loopback in the Qualcomm 8650 AudioReach Architecture
Background
A vehicle OEM project integrates a tablet into the cockpit, i.e. it is used as an in-car tablet. The tablet's battery was removed and an MCU was added; its built-in mic and speaker were also removed in favor of the cockpit's mic and speaker. Power is the tablet's only physical connection to the cockpit; mic and speaker data travel wirelessly. In addition, a USB Type-C port was added to the tablet specifically for DP in. Conventional wired screen casting sends audio and video from the tablet to an external display, whereas DP in makes the tablet act as the display, casting audio and video onto the tablet.
Virtual Microphone
Q: How do we deliver the cockpit microphone's recording to a third-party app on the tablet, such as a conferencing app? This question targets the scenario where no headset is connected.
A: Any app that captures audio ultimately goes through Android's standard APIs down to AudioFlinger, which reads the recording from the HAL. We can therefore create an IPC in AudioFlinger and add a read interface. When AudioFlinger reads microphone data from the HAL, it distinguishes the scenario and calls this IPC read interface instead. The interface has to be implemented in an app, so the tablet runs an app that stays connected to the cockpit over the LAN and holds sufficient privileges; when the IPC read is invoked, that app fetches the recording from the cockpit and returns it. Reference code below:
(Untested)
// Note: `sock` is assumed to be an int member added to RecordThread,
// initialized to -1 (declaration not shown here).
status_t AudioFlinger::RecordThread::createAudioPatch_l(const struct audio_patch *patch,
                                                        audio_patch_handle_t *handle)
{
    status_t status = NO_ERROR;
    ... ...
    // First time the built-in mic is patched in, connect to the peer app
    // and send the buffer size as a handshake.
    if (inDeviceType() == AUDIO_DEVICE_IN_BUILTIN_MIC && sock <= 0) {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(struct sockaddr_in));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = inet_addr("10.106.246.70");
        addr.sin_port = htons(2333);
        sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            sendto(sock, &mBufferSize, sizeof(mBufferSize), 0, NULL, 0);
        } else {
            ALOGE("%d, error: %s.\n", __LINE__, strerror(errno));
        }
    }
    return status;
}
bool AudioFlinger::RecordThread::threadLoop()
{
    ... ...
    ATRACE_BEGIN("read");
    size_t bytesRead;
    status_t result;
    ALOGE("%p, %s, inDeviceType() == AUDIO_DEVICE_IN_BUILTIN_MIC %d, sock %d, mBufferSize %zu",
          this, __func__, inDeviceType() == AUDIO_DEVICE_IN_BUILTIN_MIC, sock, mBufferSize);
    // Built-in mic with a live socket: pull the recording from the peer app
    // instead of the local HAL source.
    if (inDeviceType() == AUDIO_DEVICE_IN_BUILTIN_MIC && sock > 0) {
        bytesRead = recvfrom(sock, (uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, 0, NULL, 0);
        result = bytesRead;
    } else {
        result = mSource->read((uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, &bytesRead);
    }
    ATRACE_END();
}
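For completeness, here is a minimal sketch of the tablet-side peer that serves this IPC read: it receives the buffer-size handshake sent from createAudioPatch_l, then keeps answering with cockpit audio over the same UDP socket. The get_cockpit_audio() helper is an assumption for illustration, as are the address and port above.
// Hypothetical peer for the AudioFlinger UDP hook above (untested sketch).
// get_cockpit_audio() stands in for whatever fetches audio from the cockpit
// over the wireless link.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

extern void get_cockpit_audio(uint8_t *buf, size_t len); /* hypothetical */

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(2333);
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    /* The first datagram from AudioFlinger carries mBufferSize; remember the
     * sender so replies reach RecordThread's socket. */
    size_t buf_size = 0;
    struct sockaddr_in peer;
    socklen_t peer_len = sizeof(peer);
    recvfrom(sock, &buf_size, sizeof(buf_size), 0, (struct sockaddr *)&peer, &peer_len);
    uint8_t *buf = malloc(buf_size);
    while (1) {
        get_cockpit_audio(buf, buf_size);
        sendto(sock, buf, buf_size, 0, (struct sockaddr *)&peer, peer_len);
    }
    return 0;
}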
The IPC could use another mechanism instead, such as Binder, or AudioControl directly. And if a network socket is used anyway, could we skip the app entirely and connect straight to the cockpit, removing one hop of relay?
A side note: AudioControl is a standalone process; the following example shows how to talk to it:
// Note: the stock AudioControl 2.0 HAL has no read() method; this example
// assumes a custom read() was added to the interface for this project.
#include <android/hardware/automotive/audiocontrol/2.0/IAudioControl.h>
#include <string.h>

using namespace android::hardware::automotive::audiocontrol::V2_0;

android::sp<IAudioControl> audiocontrol = IAudioControl::getService();

int main(int argc, char** argv) {
    uint8_t buf[3840];
    while (true) {
        // Pull one buffer of audio from the AudioControl process.
        audiocontrol->read([&](const auto &_hidl_out_data) {
            memcpy(buf, _hidl_out_data.data(), sizeof(buf));
        });
    }
    return 0;
}
Android.mk
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := audiocontrol_test
LOCAL_SRC_FILES := test.cpp
LOCAL_SHARED_LIBRARIES := \
    libutils \
    libhidlbase \
    android.hardware.automotive.audiocontrol@2.0
LOCAL_PROPRIETARY_MODULE := true
include $(BUILD_EXECUTABLE)
During the actual solution review, some suggested putting the IPC in the HAL instead. If the IPC goes into the HAL, the place to distinguish scenarios is wherever pcm_read is called.
In the final implementation we used an AF_UNIX socket in the HAL, because the platform does not allow network-based sockets.
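Below is a minimal sketch of such a hook, assuming the HAL's capture path calls tinyalsa's pcm_read() directly; the abstract socket name "cockpit_mic" and the wrapper itself are assumptions for illustration.
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <tinyalsa/asoundlib.h>
#include <unistd.h>

static int cockpit_sock = -1;

/* Connect once to the peer app's abstract AF_UNIX socket; "cockpit_mic"
 * is a made-up name for this sketch. */
static int cockpit_connect(void)
{
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    /* Leading NUL byte selects the Linux abstract namespace. */
    strncpy(addr.sun_path + 1, "cockpit_mic", sizeof(addr.sun_path) - 2);
    cockpit_sock = socket(AF_UNIX, SOCK_SEQPACKET, 0);
    if (cockpit_sock < 0)
        return -errno;
    if (connect(cockpit_sock, (struct sockaddr *)&addr,
                offsetof(struct sockaddr_un, sun_path) + 1 + strlen("cockpit_mic")) < 0) {
        close(cockpit_sock);
        cockpit_sock = -1;
        return -errno;
    }
    return 0;
}

/* Replacement for the HAL's direct pcm_read() call: built-in-mic capture is
 * served from the cockpit over the socket, everything else still reads the
 * local PCM. One SEQPACKET message is expected to carry exactly one buffer. */
static int hal_pcm_read(struct pcm *pcm, void *data, unsigned int count,
                        bool from_builtin_mic)
{
    if (from_builtin_mic) {
        if (cockpit_sock < 0 && cockpit_connect() < 0)
            return -EIO;
        ssize_t n = recv(cockpit_sock, data, count, 0);
        return (n == (ssize_t)count) ? 0 : -EIO;
    }
    return pcm_read(pcm, data, count);
}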
DP in
The normal design would have the DP chip feed the decoded audio to the AMP. However, since the tablet's own speaker was removed, the tablet has to use the cockpit speakers in the DP in scenario. The hardware design therefore routes the DP chip's decoded audio into MI2S, turning it into a recording device. Once an app on the tablet captures that recording, it has the cockpit speakers play it via the tablet-to-cockpit connection.
As mentioned above, DP in is used as a recording device. The driver-level work is mainly MI2S bring-up; see the document 80-40939-54_REV_AC_SM8650_External_Mi2S_Interface.pdf for details. Basic functionality was first verified with the agmcap command. The main challenge was adding DP in support across the PAL -> HAL -> Framework stack, since the current platform does not support this device.
tinycap (agmcap on Qualcomm platforms) is an excellent tool for debugging audio drivers: it captures audio directly through the PCM node and its code is short and easy to follow. We only need to make sure the app has sufficient access to the PCM node, then port tinycap into the app. Since DP in is a special recording device dedicated to one specific app and does not need to be exposed to other third-party apps, this approach is viable. A sketch of the permission side follows.
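Under assumptions only: if the capture node shows up as card 100 / device 101 (the numbers used by the agmcap commands later in this article), a ueventd.rc rule along these lines could relax the node's ownership; the app's sepolicy domain would still need a matching allow rule, and the node path is hypothetical.
# ueventd.rc sketch -- node path and card/device numbers are assumptions
/dev/snd/pcmC100D101c 0660 system audio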
vendor/qcom/opensource/agm/plugins/tinyalsa/test/agmcap.c
unsigned int capture_sample(FILE *file, unsigned int card, unsigned int device,
                            unsigned int channels, unsigned int rate,
                            enum pcm_format format, unsigned int period_size,
                            unsigned int period_count, unsigned int cap_time,
                            struct device_config *dev_config, unsigned int stream_kv,
                            unsigned int device_kv, unsigned int instance_kv, unsigned int devicepp_kv)
{
    struct pcm_config config;
    struct pcm *pcm;
    struct mixer *mixer;
    char *buffer;
    char *intf_name = dev_config->name;
    unsigned int size;
    unsigned int bytes_read = 0;
    unsigned int frames = 0;
    struct timespec end;
    struct timespec now;
    uint32_t miid = 0;
    int ret = 0;

    stream_kv = stream_kv ? stream_kv : PCM_RECORD;

    memset(&config, 0, sizeof(config));
    config.channels = channels;
    config.rate = rate;
    config.period_size = period_size;
    config.period_count = period_count;
    config.format = format;
    config.start_threshold = 0;
    config.stop_threshold = 0;
    config.silence_threshold = 0;

    mixer = mixer_open(card);
    if (!mixer) {
        printf("Failed to open mixer\n");
        return 0;
    }

    /* set device/audio_intf media config mixer control */
    if (set_agm_device_media_config(mixer, intf_name, dev_config)) {
        printf("Failed to set device media config\n");
        goto err_close_mixer;
    }

    /* set audio interface metadata mixer control */
    if (set_agm_audio_intf_metadata(mixer, intf_name, device_kv, CAPTURE,
                                    dev_config->rate, dev_config->bits, stream_kv)) {
        printf("Failed to set device metadata\n");
        goto err_close_mixer;
    }

    /* set stream metadata mixer control */
    if (set_agm_capture_stream_metadata(mixer, device, stream_kv, CAPTURE, STREAM_PCM,
                                        instance_kv)) {
        printf("Failed to set pcm metadata\n");
        goto err_close_mixer;
    }

    if (devicepp_kv != 0) {
        if (set_agm_streamdevice_metadata(mixer, device, stream_kv, CAPTURE, STREAM_PCM,
                                          intf_name, devicepp_kv)) {
            printf("Failed to set pcm metadata\n");
            goto err_close_mixer;
        }
    }

    ret = agm_mixer_get_miid(mixer, device, intf_name, STREAM_PCM, TAG_STREAM_MFC, &miid);
    if (ret) {
        printf("MFC not present for this graph\n");
    } else {
        if (configure_mfc(mixer, device, intf_name, TAG_STREAM_MFC,
                          STREAM_PCM, rate, channels, get_tinyalsa_pcm_bit_width(format), miid)) {
            printf("Failed to configure stream mfc\n");
            goto err_close_mixer;
        }
    }

    /* connect pcm stream to audio intf */
    if (connect_agm_audio_intf_to_stream(mixer, device, intf_name, STREAM_PCM, true)) {
        printf("Failed to connect pcm to audio interface\n");
        goto err_close_mixer;
    }

    pcm = pcm_open(card, device, PCM_IN, &config);
    if (!pcm || !pcm_is_ready(pcm)) {
        printf("Unable to open PCM device (%s)\n",
               pcm_get_error(pcm));
        goto err_close_mixer;
    }

    size = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
    buffer = malloc(size);
    if (!buffer) {
        printf("Unable to allocate %u bytes\n", size);
        goto err_close_pcm;
    }

    printf("Capturing sample: %u ch, %u hz, %u bit\n", channels, rate,
           pcm_format_to_bits(format));

    if (pcm_start(pcm) < 0) {
        printf("start error\n");
        goto err_close_pcm;
    }

    clock_gettime(CLOCK_MONOTONIC, &now);
    end.tv_sec = now.tv_sec + cap_time;
    end.tv_nsec = now.tv_nsec;

    while (capturing && !pcm_read(pcm, buffer, size)) {
        if (fwrite(buffer, 1, size, file) != size) {
            printf("Error capturing sample\n");
            break;
        }
        bytes_read += size;
        if (cap_time) {
            clock_gettime(CLOCK_MONOTONIC, &now);
            if (now.tv_sec > end.tv_sec ||
                (now.tv_sec == end.tv_sec && now.tv_nsec >= end.tv_nsec))
                break;
        }
    }

    frames = pcm_bytes_to_frames(pcm, bytes_read);
    free(buffer);
    pcm_stop(pcm);

err_close_pcm:
    connect_agm_audio_intf_to_stream(mixer, device, intf_name, STREAM_PCM, false);
    pcm_close(pcm);
err_close_mixer:
    mixer_close(mixer);
    return frames;
}
End
What follows is the old solution: the plan was to pass the data layer by layer through PAL -> HAL -> Framework. Everything except the Framework part was finished, but the approach was ultimately dropped.
In the code we used PAL_DEVICE_IN_AUX_DIGITAL as the device identifier for DP in.
adb shell agmcap /data/test.wav -D 100 -d 101 -i MI2S-LPAIF_VA-TX-PRIMARY -dkv 0xA3000001 -dppkv 0xAD000017 -skv 0xB1000011
adb shell agmcap /data/test.wav -D 100 -d 101 -i MI2S-LPAIF_VA-TX-PRIMARY -dkv 0xA3000001 -dppkv 0xAD000017 -skv 0xB1000011 -r 48000   # speaker-mic + voice_recognition_record
adb shell agmcap /data/test.wav -D 100 -d 101 -i MI2S-LPAIF_VA-TX-PRIMARY -dkv 0xA3000010 -dppkv 0xAD000017 -skv 0xB1000011 -r 48000   # hdmi-tx + voice_recognition_record
adb shell agmcap /data/test.wav -D 100 -d 101 -i MI2S-LPAIF_VA-TX-PRIMARY -dkv 0xA3000010 -dppkv 0xAD000017 -skv 0xB1000001 -r 48000   # hdmi-tx + pcm_record
adb shell agmcap /data/test.wav -D 100 -d 101 -i MI2S-LPAIF_VA-TX-PRIMARY -dkv 0xA3000001 -dppkv 0xAD000017 -skv 0xB1000001 -r 48000   # speaker-mic + pcm_record
1. resourcemanager_kalama_mtp.xml
<in-device>
    <id>PAL_DEVICE_IN_AUX_DIGITAL</id>
    <back_end_name>MI2S-LPAIF_VA-TX-PRIMARY</back_end_name>
    <snd_device_name>display-port</snd_device_name>
    <fractional_sr>1</fractional_sr>
    <max_channels>4</max_channels>
    <channels>1</channels>
    <samplerate>48000</samplerate>
    <!-- <snd_device_name>handset-mic</snd_device_name> -->
    <ec_enable>0</ec_enable>
    <usecase>
        <name>PAL_STREAM_VOICE_CALL</name>
        <priority>1</priority>
    </usecase>
    <usecase>
        <name>PAL_STREAM_DEEP_BUFFER</name>
        <ec_enable>1</ec_enable>
        <custom-config key="unprocessed-hdr-mic-landscape">
            <channels>4</channels>
            <snd_device_name>unprocessed-hdr-mic-landscape</snd_device_name>
        </custom-config>
        <custom-config key="unprocessed-hdr-mic-portrait">
            <channels>4</channels>
            <snd_device_name>unprocessed-hdr-mic-portrait</snd_device_name>
        </custom-config>
        <custom-config key="unprocessed-hdr-mic-inverted-landscape">
            <channels>4</channels>
            <snd_device_name>unprocessed-hdr-mic-inverted-landscape</snd_device_name>
        </custom-config>
        <custom-config key="unprocessed-hdr-mic-inverted-portrait">
            <channels>4</channels>
            <snd_device_name>unprocessed-hdr-mic-inverted-portrait</snd_device_name>
        </custom-config>
    </usecase>
</in-device>
2. usecaseKvManager.xml
<device id="PAL_DEVICE_IN_AUX_DIGITAL">
    <keys_and_values>
        <!-- DEVICETX - HANDSETMIC -->
        <graph_kv key="0xA3000000" value="TODO"/>
    </keys_and_values>
    <keys_and_values SidetoneMode="SW">
        <!-- SW_SIDETONE - SW_SIDETONE_ON -->
        <graph_kv key="0xBA000000" value="0xBA000001"/>
    </keys_and_values>
</device>
<devicepp id="PAL_DEVICE_IN_AUX_DIGITAL">
    <keys_and_values StreamType="PAL_STREAM_DEEP_BUFFER,PAL_STREAM_COMPRESSED">
        <!-- DEVICETX - HANDSETMIC -->
        <graph_kv key="0xA3000000" value="0xA3000001"/>
        <!-- DEVICEPP_TX - DEVICEPP_TX_AUDIO_FLUENCE_SMECNS -->
        <graph_kv key="0xAD000000" value="0xAD000017"/>
    </keys_and_values>
</devicepp>
3. AudioDevice.cpp
void AudioDevice::FillAndroidDeviceMap() {
    android_device_map_.insert(std::make_pair(AUDIO_DEVICE_IN_AUX_DIGITAL, PAL_DEVICE_IN_AUX_DIGITAL));
}
4. kvh2xml.h
enum Key_DeviceTX {
    HDMI_TX = 0xA3000011, /**< @h2xmle_name {HDMI_Tx}*/
};
5. GKV
6. pal/device/src/Device.cpp
std::shared_ptr<Device> Device::getInstance(struct pal_device *device,
                                            std::shared_ptr<ResourceManager> Rm)
{
    switch (device->id) {
    case PAL_DEVICE_IN_AUX_DIGITAL:
        ...
    }
}

std::shared_ptr<Device> Device::getObject(pal_device_id_t dev_id)
{
    switch (dev_id) {
    case PAL_DEVICE_IN_AUX_DIGITAL:
        ...
    }
}

int DisplayPort::start() {
    if (objRx) //TODO
        status = configureDpEndpoint();
    ...
}
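A possible completion of the elided getInstance case, assuming DP in reuses the existing DisplayPort device class (which the objRx check in start() hints at); it follows the factory pattern of the other PAL devices and is unverified:
// Sketch only -- assumes DisplayPort exposes the same static getInstance
// factory as the other PAL device classes.
case PAL_DEVICE_IN_AUX_DIGITAL:
    dev = DisplayPort::getInstance(device, Rm);
    break;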
Done