Analyzing PeerConnectionFactory.createPeerConnectionFactory: AudioDeviceModule (Android-RTC-4)

The figure above is the bottom-up structure diagram of PeerConnectionFactory that came out of the brief analysis in Android-RTC-1. Today we continue from PeerConnectionFactory, picking up where Android-RTC-3 left off, and analyze createPeerConnectionFactory.

PeerConnectionFactory.createPeerConnectionFactory

From the figure above we can roughly see that PeerConnectionFactory has three main modules: AudioDeviceModule, VideoEncoderFactory/VideoDecoderFactory, and NetworkOptions. Let's first look at the parameters the demo passes in.

// AppRTCDemo.PeerConnectionClient.java
private void createPeerConnectionFactoryInternal(PeerConnectionFactory.Options options) {
    // ... ...
    final AudioDeviceModule adm = createJavaAudioDevice();
    final boolean enableH264HighProfile =
            VIDEO_CODEC_H264_HIGH.equals(peerConnectionParameters.videoCodec);
    final VideoEncoderFactory encoderFactory;
    final VideoDecoderFactory decoderFactory;
    if (peerConnectionParameters.videoCodecHwAcceleration) {
        encoderFactory = new DefaultVideoEncoderFactory(
                rootEglBase.getEglBaseContext(), true, enableH264HighProfile);
        decoderFactory = new DefaultVideoDecoderFactory(rootEglBase.getEglBaseContext());
    } else {
        encoderFactory = new SoftwareVideoEncoderFactory();
        decoderFactory = new SoftwareVideoDecoderFactory();
    }
    factory = PeerConnectionFactory.builder()
            .setOptions(options)
            .setAudioDeviceModule(adm)
            .setVideoEncoderFactory(encoderFactory)
            .setVideoDecoderFactory(decoderFactory)
            .createPeerConnectionFactory();
    Log.d(TAG, "Peer connection factory created.");
    adm.release();
}

public static class Options {
    static final int ADAPTER_TYPE_UNKNOWN = 0;
    static final int ADAPTER_TYPE_ETHERNET = 1;
    static final int ADAPTER_TYPE_WIFI = 2;
    static final int ADAPTER_TYPE_CELLULAR = 4;
    static final int ADAPTER_TYPE_VPN = 8;
    static final int ADAPTER_TYPE_LOOPBACK = 16;
    static final int ADAPTER_TYPE_ANY = 32;
    public int networkIgnoreMask;
    public boolean disableEncryption;
    public boolean disableNetworkMonitor;

    public Options() { }
    @CalledByNative("Options")
    int getNetworkIgnoreMask() {
        return this.networkIgnoreMask;
    }
    @CalledByNative("Options")
    boolean getDisableEncryption() {
        return this.disableEncryption;
    }
    @CalledByNative("Options")
    boolean getDisableNetworkMonitor() {
        return this.disableNetworkMonitor;
    }
}

PeerConnectionFactory.Options is simply created with its default constructor; the real focus is on AudioDeviceModule, VideoEncoderFactory, and VideoDecoderFactory.
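
For completeness, here is a minimal sketch (hypothetical, not from the demo) of what tweaking Options could look like. Options is a plain data holder, so configuration is just field assignment; this assumes the ADAPTER_TYPE_* constants are visible from the call site.

// Hypothetical configuration sketch; the demo just keeps the defaults.
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
// Bitmask of adapter types that ICE should ignore: VPN (8) | loopback (16).
options.networkIgnoreMask = PeerConnectionFactory.Options.ADAPTER_TYPE_VPN
        | PeerConnectionFactory.Options.ADAPTER_TYPE_LOOPBACK;
options.disableEncryption = false;     // keep SRTP/DTLS on (the default)
options.disableNetworkMonitor = false; // keep the platform network monitor (the default)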

1. JavaAudioDeviceModule

// PeerConnectionClient.java in the AppRTC demo
AudioDeviceModule createJavaAudioDevice() {
    if (!peerConnectionParameters.useOpenSLES) {
        Log.w(TAG, "External OpenSLES ADM not implemented yet.");
        // TODO: Add support for external OpenSLES ADM.
    }
    // Set audio record error callbacks.
    JavaAudioDeviceModule.AudioRecordErrorCallback audioRecordErrorCallback;
    // Set audio track error callbacks.
    JavaAudioDeviceModule.AudioTrackErrorCallback audioTrackErrorCallback;
    // Set audio record state callbacks.
    JavaAudioDeviceModule.AudioRecordStateCallback audioRecordStateCallback;
    // Set audio track state callbacks.
    JavaAudioDeviceModule.AudioTrackStateCallback audioTrackStateCallback;
    // The callback implementations are omitted here for brevity.
    return JavaAudioDeviceModule.builder(appContext)
            .setSamplesReadyCallback(saveRecordedAudioToFile)
            .setUseHardwareAcousticEchoCanceler(!peerConnectionParameters.disableBuiltInAEC)
            .setUseHardwareNoiseSuppressor(!peerConnectionParameters.disableBuiltInNS)
            .setAudioRecordErrorCallback(audioRecordErrorCallback)
            .setAudioTrackErrorCallback(audioTrackErrorCallback)
            .setAudioRecordStateCallback(audioRecordStateCallback)
            .setAudioTrackStateCallback(audioTrackStateCallback)
            .createAudioDeviceModule();
}
// org.webrtc.audio.JavaAudioDeviceModule.java
public static class Builder {
    private final Context context;
    private final AudioManager audioManager;
    private int inputSampleRate;
    private int outputSampleRate;
    private int audioSource = WebRtcAudioRecord.DEFAULT_AUDIO_SOURCE;
    private int audioFormat = WebRtcAudioRecord.DEFAULT_AUDIO_FORMAT;
    private AudioTrackErrorCallback audioTrackErrorCallback;
    private AudioRecordErrorCallback audioRecordErrorCallback;
    private SamplesReadyCallback samplesReadyCallback;
    private AudioTrackStateCallback audioTrackStateCallback;
    private AudioRecordStateCallback audioRecordStateCallback;
    private boolean useHardwareAcousticEchoCanceler = isBuiltInAcousticEchoCancelerSupported();
    private boolean useHardwareNoiseSuppressor = isBuiltInNoiseSuppressorSupported();
    private boolean useStereoInput;
    private boolean useStereoOutput;

    private Builder(Context context) {
        this.context = context;
        this.audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        this.inputSampleRate = WebRtcAudioManager.getSampleRate(audioManager);
        this.outputSampleRate = WebRtcAudioManager.getSampleRate(audioManager);
    }

    public AudioDeviceModule createAudioDeviceModule() {
        if (useHardwareNoiseSuppressor) {
            Logging.d(TAG, "HW NS will be used.");
        } else {
            if (isBuiltInNoiseSuppressorSupported()) {
                Logging.d(TAG, "Overriding default behavior; now using WebRTC NS!");
            }
            Logging.d(TAG, "HW NS will not be used.");
        }
        if (useHardwareAcousticEchoCanceler) {
            Logging.d(TAG, "HW AEC will be used.");
        } else {
            if (isBuiltInAcousticEchoCancelerSupported()) {
                Logging.d(TAG, "Overriding default behavior; now using WebRTC AEC!");
            }
            Logging.d(TAG, "HW AEC will not be used.");
        }
        final WebRtcAudioRecord audioInput = new WebRtcAudioRecord(context, audioManager, audioSource, audioFormat, audioRecordErrorCallback, audioRecordStateCallback, samplesReadyCallback, useHardwareAcousticEchoCanceler, useHardwareNoiseSuppressor);
        final WebRtcAudioTrack audioOutput = new WebRtcAudioTrack(context, audioManager, audioTrackErrorCallback, audioTrackStateCallback);
        return new JavaAudioDeviceModule(context, audioManager, audioInput, audioOutput, inputSampleRate, outputSampleRate, useStereoInput, useStereoOutput);
    }
}

1. Let's start with AudioDeviceModule and analyze the Java-layer code. The audio module involves acoustic echo cancellation (AEC) and noise suppression (NS). Stock Android already exposes abstract implementations of these effects, but how well they work depends on the hardware. Tracing into the two static methods isBuiltInAcousticEchoCancelerSupported / isBuiltInNoiseSuppressorSupported shows that they deliberately blacklist the software AEC/NS implementations in AOSP, so that the hardware implementations are used whenever possible.

// UUID that identifies the AEC effect type, effectively a recognition code.
public static final UUID EFFECT_TYPE_AEC =
        UUID.fromString("7b491460-8d4d-11e0-bd61-0002a5d5c51b");
// UUIDs for Software Audio Effects that we want to avoid using.
// This is the UUID of the AOSP software AEC implementation; we avoid the software version.
private static final UUID AOSP_ACOUSTIC_ECHO_CANCELER =
        UUID.fromString("bb392ec0-8d4d-11e0-a896-0002a5d5c51b");
public static boolean isAcousticEchoCancelerSupported() {
    if (Build.VERSION.SDK_INT < 18) {
        return false;
    }
    return isEffectTypeAvailable(AudioEffect.EFFECT_TYPE_AEC, AOSP_ACOUSTIC_ECHO_CANCELER);
}
// Returns true if an effect of the specified type is available. Functionally
// equivalent to (NoiseSuppressor|AutomaticGainControl|...).isAvailable(), but
// faster as it avoids the expensive OS call to enumerate effects.
private static boolean isEffectTypeAvailable(UUID effectType, UUID blackListedUuid) {
    Descriptor[] effects = AudioEffect.queryEffects();
    if (effects == null) {
        return false;
    }
    for (Descriptor d : effects) {
        if (d.type.equals(effectType)) {
            return !d.uuid.equals(blackListedUuid);
        }
    }
    return false;
}
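
Both checks are exposed as public statics on JavaAudioDeviceModule, so an app can consult them directly. A minimal sketch of a hypothetical call site (this simply mirrors the Builder defaults shown earlier):

// Hypothetical call site, not from the demo.
boolean hwAec = JavaAudioDeviceModule.isBuiltInAcousticEchoCancelerSupported();
boolean hwNs = JavaAudioDeviceModule.isBuiltInNoiseSuppressorSupported();
AudioDeviceModule adm = JavaAudioDeviceModule.builder(appContext)
        .setUseHardwareAcousticEchoCanceler(hwAec) // falls back to WebRTC software AEC if false
        .setUseHardwareNoiseSuppressor(hwNs)       // falls back to WebRTC software NS if false
        .createAudioDeviceModule();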

2. Back to the audioInput (WebRtcAudioRecord) and audioOutput (WebRtcAudioTrack) created inside createAudioDeviceModule.

WebRtcAudioRecord, whose structure is shown in the diagram above, is by and large a wrapper around the native Android API AudioRecord. Many of its methods, however, carry the custom @CalledByNative annotation, meaning they are invoked from the native layer; the concrete logic only unfolds once we dive deeper into the AudioDeviceModule.

Likewise, WebRtcAudioTrack is a wrapper around the native Android API AudioTrack. The key functions are already marked.

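    // org.webrtc.audio.JavaAudioDeviceModule.java (member fields and private constructor)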
    private final Context context;
    private final AudioManager audioManager;
    private final WebRtcAudioRecord audioInput;
    private final WebRtcAudioTrack audioOutput;
    private final int inputSampleRate;
    private final int outputSampleRate;
    private final boolean useStereoInput;
    private final boolean useStereoOutput;

    private final Object nativeLock = new Object();
    private long nativeAudioDeviceModule;

    private JavaAudioDeviceModule(Context context, AudioManager audioManager,
                                  WebRtcAudioRecord audioInput, WebRtcAudioTrack audioOutput, int inputSampleRate,
                                  int outputSampleRate, boolean useStereoInput, boolean useStereoOutput) {
        this.context = context;
        this.audioManager = audioManager;
        this.audioInput = audioInput;
        this.audioOutput = audioOutput;
        this.inputSampleRate = inputSampleRate;
        this.outputSampleRate = outputSampleRate;
        this.useStereoInput = useStereoInput;
        this.useStereoOutput = useStereoOutput;
    }

Finally, back in JavaAudioDeviceModule.Builder.createAudioDeviceModule, the member fields of JavaAudioDeviceModule are plain to see: essentially all of them are native Android audio APIs. nativeAudioDeviceModule stores the address of the native-layer object, effectively a C++ object pointer. This trick indirectly binds a Java object to its C++ counterpart and is very common in Android native open-source projects.
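
To make the idiom concrete, here is a generic sketch (my own illustration, not WebRTC code): the Java wrapper lazily creates its C++ peer over JNI, keeps the returned address in a long, and zeroes it on release, with a lock guarding both paths.

// Generic illustration of the Java-object-to-C++-object binding pattern.
final class NativePeer {
    private final Object nativeLock = new Object();
    private long nativePtr; // address of the C++ peer; 0 means not created yet

    long getNativePointer() {
        synchronized (nativeLock) {
            if (nativePtr == 0) {
                nativePtr = nativeCreate(); // JNI call that allocates the C++ object
            }
            return nativePtr;
        }
    }

    void release() {
        synchronized (nativeLock) {
            if (nativePtr != 0) {
                nativeRelease(nativePtr); // JNI call that deletes the C++ object
                nativePtr = 0;
            }
        }
    }

    private static native long nativeCreate();
    private static native void nativeRelease(long nativePtr);
}

JavaAudioDeviceModule.getNativeAudioDeviceModulePointer, shown in the next section, follows exactly this shape.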

2. The AudioDeviceModule at the JNI layer

With the Java-side business code covered, let's follow the thread down into the native layer, starting back at the Java entry function.

//org.webrtc.PeerConnectionFactory.Builder
public PeerConnectionFactory createPeerConnectionFactory() {
    PeerConnectionFactory.checkInitializeHasBeenCalled();
    if (this.audioDeviceModule == null) {
        this.audioDeviceModule = JavaAudioDeviceModule.builder(ContextUtils.getApplicationContext()).createAudioDeviceModule();
    }
    return PeerConnectionFactory.nativeCreatePeerConnectionFactory(
            ContextUtils.getApplicationContext(),
            this.options,
            this.audioDeviceModule.getNativeAudioDeviceModulePointer(),
            this.audioEncoderFactoryFactory.createNativeAudioEncoderFactory(),
            this.audioDecoderFactoryFactory.createNativeAudioDecoderFactory(),
            this.videoEncoderFactory,
            this.videoDecoderFactory,
            this.audioProcessingFactory == null ? 0L : this.audioProcessingFactory.createNative(),
            this.fecControllerFactoryFactory == null ? 0L : this.fecControllerFactoryFactory.createNative(),
            this.networkControllerFactoryFactory == null ? 0L : this.networkControllerFactoryFactory.createNativeNetworkControllerFactory(),
            this.networkStatePredictorFactoryFactory == null ? 0L : this.networkStatePredictorFactoryFactory.createNativeNetworkStatePredictorFactory(),
            this.neteqFactoryFactory == null ? 0L : this.neteqFactoryFactory.createNativeNetEqFactory());
}

The above is PeerConnectionFactory.createPeerConnectionFactory, the function that creates the concrete factory. We will run into this entry function repeatedly later on, since it carries so much; for now we focus on the AudioDeviceModule, i.e. JavaAudioDeviceModule.getNativeAudioDeviceModulePointer.

//org.webrtc.audio.JavaAudioDeviceModule
public long getNativeAudioDeviceModulePointer() {
    synchronized(this.nativeLock) {
        if (this.nativeAudioDeviceModule == 0L) {
            this.nativeAudioDeviceModule = nativeCreateAudioDeviceModule(
                                            this.context, this.audioManager, 
                                            this.audioInput, this.audioOutput, 
                                            this.inputSampleRate, this.outputSampleRate, 
                                            this.useStereoInput, this.useStereoOutput);
        }
        return this.nativeAudioDeviceModule;
    }
}
// out\arm64-v8a\gen\sdk\android\generated_java_audio_jni\JavaAudioDeviceModule_jni.h
JNI_GENERATOR_EXPORT jlong
    Java_org_webrtc_audio_JavaAudioDeviceModule_nativeCreateAudioDeviceModule(
    JNIEnv* env,
    jclass jcaller,
    jobject context,
    jobject audioManager,
    jobject audioInput,
    jobject audioOutput,
    jint inputSampleRate,
    jint outputSampleRate,
    jboolean useStereoInput,
    jboolean useStereoOutput) {
  return JNI_JavaAudioDeviceModule_CreateAudioDeviceModule(
      env, base::android::JavaParamRef<jobject>(env, context),
      base::android::JavaParamRef<jobject>(env, audioManager),
      base::android::JavaParamRef<jobject>(env, audioInput),
      base::android::JavaParamRef<jobject>(env, audioOutput),
      inputSampleRate, outputSampleRate, useStereoInput, useStereoOutput);
}
// sdk\android\src\jni\audio_device\java_audio_device_module.cc
static jlong JNI_JavaAudioDeviceModule_CreateAudioDeviceModule(
    JNIEnv* env,
    const JavaParamRef<jobject>& j_context,
    const JavaParamRef<jobject>& j_audio_manager,
    const JavaParamRef<jobject>& j_webrtc_audio_record,
    const JavaParamRef<jobject>& j_webrtc_audio_track,
    int input_sample_rate,
    int output_sample_rate,
    jboolean j_use_stereo_input,
    jboolean j_use_stereo_output) {
  AudioParameters input_parameters;
  AudioParameters output_parameters;
  GetAudioParameters(env, j_context, j_audio_manager, input_sample_rate,
                     output_sample_rate, j_use_stereo_input,
                     j_use_stereo_output, &input_parameters,
                     &output_parameters);
  auto audio_input = std::make_unique<AudioRecordJni>(
      env, input_parameters, kHighLatencyModeDelayEstimateInMilliseconds,
      j_webrtc_audio_record);
  auto audio_output = std::make_unique<AudioTrackJni>(env, output_parameters,
                                                      j_webrtc_audio_track);
  return jlongFromPointer(CreateAudioDeviceModuleFromInputAndOutput(
                              AudioDeviceModule::kAndroidJavaAudio,
                              j_use_stereo_input, j_use_stereo_output,
                              kHighLatencyModeDelayEstimateInMilliseconds,
                              std::move(audio_input), std::move(audio_output))
                              .release());
}

The call chain above is wrapped layer upon layer and looks a bit complicated, but don't panic; grab the essentials: two AudioParameters objects, input_parameters and output_parameters, and two smart-pointer objects, AudioRecordJni (audio_input) and AudioTrackJni (audio_output). Then move on to the two key functions, GetAudioParameters and CreateAudioDeviceModuleFromInputAndOutput.

// sdk\android\src\jni\audio_device\audio_device_module.cc
void GetAudioParameters(JNIEnv* env,
                        const JavaRef<jobject>& j_context,
                        const JavaRef<jobject>& j_audio_manager,
                        int input_sample_rate,
                        int output_sample_rate,
                        bool use_stereo_input,
                        bool use_stereo_output,
                        AudioParameters* input_parameters,
                        AudioParameters* output_parameters) {
  const int output_channels = use_stereo_output ? 2 : 1;
  const int input_channels = use_stereo_input ? 2 : 1;
  const size_t output_buffer_size = Java_WebRtcAudioManager_getOutputBufferSize(
      env, j_context, j_audio_manager, output_sample_rate, output_channels);
  const size_t input_buffer_size = Java_WebRtcAudioManager_getInputBufferSize(
      env, j_context, j_audio_manager, input_sample_rate, input_channels);
  output_parameters->reset(output_sample_rate,
                           static_cast<size_t>(output_channels),
                           static_cast<size_t>(output_buffer_size));
  input_parameters->reset(input_sample_rate,
                          static_cast<size_t>(input_channels),
                          static_cast<size_t>(input_buffer_size));
  RTC_CHECK(input_parameters->is_valid());
  RTC_CHECK(output_parameters->is_valid());
}
// org.webrtc.audio.WebRtcAudioManager.java
@CalledByNative
static int getOutputBufferSize(
        Context context, AudioManager audioManager, int sampleRate, int numberOfOutputChannels) {
    return isLowLatencyOutputSupported(context)
            ? getLowLatencyFramesPerBuffer(audioManager)
            : getMinOutputFrameSize(sampleRate, numberOfOutputChannels);
}
private static boolean isLowLatencyOutputSupported(Context context) {
    return context.getPackageManager().hasSystemFeature(PackageManager.FEATURE_AUDIO_LOW_LATENCY);
}
private static int getLowLatencyFramesPerBuffer(AudioManager audioManager) {
    if (Build.VERSION.SDK_INT < 17) {
        return DEFAULT_FRAME_PER_BUFFER;
    }
    String framesPerBuffer =
            audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
    return framesPerBuffer == null ? DEFAULT_FRAME_PER_BUFFER : Integer.parseInt(framesPerBuffer);
}
private static int getMinOutputFrameSize(int sampleRateInHz, int numChannels) {
    final int bytesPerFrame = numChannels * (BITS_PER_SAMPLE / 8);
    final int channelConfig =
            (numChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO);
    return AudioTrack.getMinBufferSize(
            sampleRateInHz, channelConfig, AudioFormat.ENCODING_PCM_16BIT)
            / bytesPerFrame;
}

As you can see, GetAudioParameters calls back into two Java-layer methods of WebRtcAudioManager. At this point, a quick summary of how sample rate, bits per sample, channel count, and bit rate relate to each other:

1. The sample rate is the number of samples taken per second.
2. Bits per sample is how many bits each sample is stored in, commonly 8 or 16.
3. Common channel layouts are mono and stereo: mono captures one channel per sample, stereo captures two distinct channels.
4. bit rate = sample rate × bits per sample × channel count (bytes per second = bit rate / 8), purely in theory.
5. In practice the system applies its own optimizations; on Android you can call AudioRecord.getMinBufferSize and AudioTrack.getMinBufferSize to get the minimum supported size, as in the sketch below.
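
To make points 4 and 5 concrete, here is a small sketch with hypothetical values; AudioTrack.getMinBufferSize is the real Android API, the rest is illustrative arithmetic.

// Point 4 (theory): 48 kHz * 16 bit * mono = 768000 bit/s, i.e. 96000 bytes per second.
int sampleRate = 48000;   // samples per second
int bitsPerSample = 16;   // PCM 16-bit
int channels = 1;         // mono
int theoreticalBytesPerSecond = sampleRate * bitsPerSample * channels / 8; // 96000

// Point 5 (practice): ask the platform for its actual minimum, which is device dependent.
int minBufferSize = AudioTrack.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);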

Next up is CreateAudioDeviceModuleFromInputAndOutput. It is very simple: it returns a pointer to an AndroidAudioDeviceModule object.

// sdk\android\src\jni\audio_device\audio_device_module.cc
rtc::scoped_refptr<AudioDeviceModule> CreateAudioDeviceModuleFromInputAndOutput(
    AudioDeviceModule::AudioLayer audio_layer,
    bool is_stereo_playout_supported,
    bool is_stereo_record_supported,
    uint16_t playout_delay_ms,
    std::unique_ptr<AudioInput> audio_input,
    std::unique_ptr<AudioOutput> audio_output) {
  RTC_DLOG(INFO) << __FUNCTION__;
  return rtc::make_ref_counted<AndroidAudioDeviceModule>(
      audio_layer, is_stereo_playout_supported, is_stereo_record_supported,
      playout_delay_ms, std::move(audio_input), std::move(audio_output));
}

At this point we are ready to open up the native-layer AudioDeviceModule, AndroidAudioDeviceModule. Before doing so, the analysis so far can be summed up in the following diagram.

AudioRecordJni corresponds to WebRtcAudioRecord, AudioTrackJni corresponds to WebRtcAudioTrack, and nativeAudioDeviceModule holds the address of the AndroidAudioDeviceModule object pointer.

3. AndroidAudioDeviceModule

Due to length and code complexity, this section will not paste code exhaustively. Instead, here is a diagram summarizing the internal state of AndroidAudioDeviceModule.

By now the overall composition of the AudioDeviceModule should be clear. Next, a brief look at some key code of AudioRecordJni / WebRtcAudioRecord. AudioTrackJni / WebRtcAudioTrack is set aside for now: we have not analyzed the transport module yet, so no external audio data is flowing in.

// sdk\android\src\jni\audio_device\audio_record_jni.cc
int32_t AudioRecordJni::InitRecording() {
  RTC_DCHECK(thread_checker_.IsCurrent());
  if (initialized_) {
    // Already initialized.
    return 0;
  }

  int frames_per_buffer = Java_WebRtcAudioRecord_initRecording(
      env_, j_audio_record_, audio_parameters_.sample_rate(),
      static_cast<int>(audio_parameters_.channels()));
  if (frames_per_buffer < 0) {
    direct_buffer_address_ = nullptr;
    return -1;
  }
  frames_per_buffer_ = static_cast<size_t>(frames_per_buffer);

  const size_t bytes_per_frame = audio_parameters_.channels() * sizeof(int16_t);

  initialized_ = true;
  return 0;
}
// org.webrtc.audio.WebRtcAudioRecord.java
@CalledByNative
private int initRecording(int sampleRate, int channels) {
    if (audioRecord != null) {
        return -1;
    }
    final int bytesPerFrame = channels * getBytesPerSample(audioFormat);
    final int framesPerBuffer = sampleRate / BUFFERS_PER_SECOND;
    byteBuffer = ByteBuffer.allocateDirect(bytesPerFrame * framesPerBuffer);
    if (!(byteBuffer.hasArray())) {
        return -1;
    }
    emptyBytes = new byte[byteBuffer.capacity()];
	
    // Rather than passing the ByteBuffer on every callback (which needs the
    // heavyweight GetDirectBufferAddress call), we simply pass the buffer
    // address down to the native layer once and cache it there.
    nativeCacheDirectBufferAddress(nativeAudioRecord, byteBuffer);
	
    final int channelConfig = channelCountToConfiguration(channels);
    // Note that this buffer size does not guarantee smooth recording under load.
    int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
    if (minBufferSize == AudioRecord.ERROR || minBufferSize == AudioRecord.ERROR_BAD_VALUE) {
        return -1;
    }
    // Use a buffer size larger than the required minimum to ensure smooth
    // recording under load. It has been verified that this does not increase
    // the actual recording latency. (BUFFER_SIZE_FACTOR = 2)
    int bufferSizeInBytes = Math.max(BUFFER_SIZE_FACTOR * minBufferSize, byteBuffer.capacity());
    try {
        if (Build.VERSION.SDK_INT >= 23) {
            this.audioRecord = createAudioRecordOnMOrHigher(this.audioSource, sampleRate, channelConfig, this.audioFormat, bufferSizeInBytes);
            if (this.preferredDevice != null) {
                this.setPreferredDevice(this.preferredDevice);
            }
        } else {
            this.audioRecord = createAudioRecordOnLowerThanM(this.audioSource, sampleRate, channelConfig, this.audioFormat, bufferSizeInBytes);
        }
    } catch (IllegalArgumentException e) {
        releaseAudioResources();
        return -1;
    }

    if (this.audioRecord != null && this.audioRecord.getState() == 1 /* STATE_INITIALIZED */) {
        // Enable the hardware noise-suppression / echo-cancellation effects.
        this.effects.enable(this.audioRecord.getAudioSessionId());
        this.logMainParameters();
        this.logMainParametersExtended();
        return framesPerBuffer;
    } else {
        this.releaseAudioResources();
        return -1;
    }
}

Take the initRecording initialization above as an example; the other methods map across in the same one-to-one way. You may wonder when init / start / stop and the like are actually invoked. Don't worry: we are still at the construction stage, and that will become clear when we analyze the functional flow later.

Chapter summary: the Android-RTC AudioDeviceModule is structured as shown in the diagram below; I hope it helps.

(Figure: structure diagram of the Android-RTC AudioDeviceModule.)
