Agora (声网) is a company providing real-time voice and video communication services; its SDK is largely built on the open-source WebRTC project with a number of optimizations and modifications on top. iFlytek (讯飞) speech recognition hardly needs an introduction: Luo Yonghao already covered it in plenty of detail at his launch event.

Now for today's topic: using Agora and iFlytek recognition at the same time. If you haven't run into this requirement before, let me first explain what goes wrong and why changes are needed. The reason is simple: real-time communication obviously needs the microphone and the speaker, and speech recognition needs the microphone as well. So the question becomes: when two components both want the microphone, which one does Android give it to? In our testing this is a non-issue on Android 5.0 and 5.1, which appear to abstract and wrap the microphone so that every caller actually receives its own copy of the real audio stream. On other versions, however, using both at once reliably produces an AudioRecord -38 error, and it is always iFlytek that reports it, because Agora grabs the microphone every time a call starts. Given that, we had to find another way.

After some thought, and since iFlytek supports a custom audio source, we decided to change where iFlytek gets its audio from. But an Agora call can be joined or left at any time, and if every switch required reconfiguring iFlytek, the two would be far too tightly coupled; if audio sources beyond the native AudioRecord and Agora were ever added, iFlytek would have to be modified yet again, which goes against basic software engineering principles. So we settled on a publisher/subscriber design. First, a manager keeps track of all subscribers and the current publisher; the relationship between publisher and subscribers is clearly one-to-many, so the subscribers form a list while the publisher is a single member field. We then define interfaces for both sides: the publisher interface covers starting and stopping recording, and the subscriber interface is even simpler, just a callback notifying that audio has arrived. Enough talk, here is the code.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class XLAudioRecordManager {
    private final static String TAG = XLAudioRecordManager.class.getSimpleName();
    private static volatile XLAudioRecordManager instance = null;
    // CopyOnWriteArrayList: writeAudio() iterates on the recording thread while
    // subscribe()/unSubscribe() may be called from other threads.
    private final List<XLAudioRecordSubscriberCallback> subscribers = new CopyOnWriteArrayList<>();
    private final XLAudioRecord internalAudioPublisher; // built-in audio provider (native AudioRecord)
    private XLAudioRecordPublisherCallback curPublisher; // only one publisher at a time

    public void setCurPublisher(XLAudioRecordPublisherCallback curPublisher) {
        this.curPublisher = curPublisher;
    }

    public void initCurPublisher() {
        curPublisher = internalAudioPublisher;
    }

    private XLAudioRecordManager() {
        internalAudioPublisher = new XLAudioRecord();
        initCurPublisher();
    }

    public static XLAudioRecordManager getInstance() {
        if (instance == null) {
            synchronized (XLAudioRecordManager.class) {
                if (instance == null) {
                    instance = new XLAudioRecordManager();
                }
            }
        }
        return instance;
    }

    // Called by the current publisher; fans the audio out to every subscriber.
    public void writeAudio(byte[] audioBuffer, int offset, int length) {
        for (XLAudioRecordSubscriberCallback callback : subscribers) {
            callback.onAudio(audioBuffer, offset, length);
        }
    }

    public void subscribe(XLAudioRecordSubscriberCallback callback) {
        this.subscribers.add(callback);
    }

    public void unSubscribe(XLAudioRecordSubscriberCallback callback) {
        this.subscribers.remove(callback);
    }

    // Subscriber interface
    public interface XLAudioRecordSubscriberCallback {
        void onAudio(byte[] audioData, int offset, int length);
    }

    // Publisher interface
    public interface XLAudioRecordPublisherCallback {
        void onStartRecording();
        void onStopRecording();
    }

    public void startRecording() {
        // Tell the current publisher to start capturing audio
        curPublisher.onStartRecording();
    }

    public void stopRecording() {
        // Tell the current publisher to stop capturing audio
        curPublisher.onStopRecording();
    }
}
As you can see from the code above, the manager also maintains an internal audio publisher, which is simply the native AudioRecord; that way the outside world doesn't need to know where the audio stream comes from when Agora is not involved. OK, next let's look at XLAudioRecord to see how a publisher is implemented.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.util.Log;

public class XLAudioRecord implements XLAudioRecordManager.XLAudioRecordPublisherCallback {
    private final static String TAG = XLAudioRecord.class.getSimpleName();
    public static final int SAMPLE_RATE = 16000;
    public static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
    public static final int AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
    private AudioRecord mAudioRecord = null;
    // volatile: set from the caller's thread, read from the recording thread
    private volatile boolean isRecording = false;
    private int mMinbuffer = 1000;
//    private AcousticEchoCanceler acousticEchoCanceler;

    public XLAudioRecord() {
        mMinbuffer = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, AUDIO_FORMAT);
        if (mMinbuffer != AudioRecord.ERROR_BAD_VALUE) {
//            initAudioRecord();
//            acousticEchoCanceler = AcousticEchoCanceler.create(mAudioRecord.getAudioSessionId());
//            acousticEchoCanceler.setEnabled(true);
        } else {
            Log.e(TAG, "AudioRecord getMinBuffer error");
        }
    }

    private void initAudioRecord() {
        int tryTimes = 0;
        while (true) {
            try {
                mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                        CHANNEL_CONFIG, AUDIO_FORMAT, mMinbuffer);
                // getState() (not getRecordingState()) reports whether construction succeeded
                if (mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED) {
                    break;
                }
                mAudioRecord.release(); // free the failed instance before retrying
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (tryTimes >= 5) {
                Log.e(TAG, "AudioRecord initialize error");
                break;
            }
            tryTimes++;
        }
    }

    @Override
    public void onStartRecording() {
        isRecording = true;
        new Thread(new Runnable() {
            @Override
            public void run() {
                initAudioRecord();
                mAudioRecord.startRecording();
                while (isRecording) {
                    byte[] audioData = new byte[mMinbuffer];
                    // read() blocks until the buffer is filled, so it paces the loop;
                    // an extra sleep here would risk overrunning the hardware buffer
                    int bufferSize = mAudioRecord.read(audioData, 0, mMinbuffer);
                    if (bufferSize > 0) { // read() returns a negative error code on failure
                        XLAudioRecordManager.getInstance().writeAudio(audioData, 0, bufferSize);
                    }
                }
                if (mAudioRecord != null
                        && mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                    mAudioRecord.stop();
                    mAudioRecord.release();
                    mAudioRecord = null;
                }
            }
        }).start();
    }

    @Override
    public void onStopRecording() {
        isRecording = false;
    }
}
Note that onStopRecording must not stop the AudioRecord directly; it only clears the flag that drives the recording loop, so each pass of the loop behaves as an atomic unit and the AudioRecord is stopped and released by the recording thread itself.
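Before moving on to Agora, here is a minimal sketch of what any other subscriber can look like. This DumpToFileSubscriber is hypothetical (not part of the original project); it simply appends the raw PCM to a file, which is handy for verifying that the audio bus actually delivers data:

import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical subscriber for debugging: dumps the raw 16 kHz mono PCM to a file.
public class DumpToFileSubscriber implements XLAudioRecordManager.XLAudioRecordSubscriberCallback {
    private final FileOutputStream out;

    public DumpToFileSubscriber(String path) throws IOException {
        out = new FileOutputStream(path);
        XLAudioRecordManager.getInstance().subscribe(this);
    }

    @Override
    public void onAudio(byte[] audioData, int offset, int length) {
        try {
            out.write(audioData, offset, length);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void close() throws IOException {
        XLAudioRecordManager.getInstance().unSubscribe(this);
        out.close();
    }
}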

Next, let's see how the Agora publisher plugs in. We need to configure the RtcEngine's recording AudioFrame parameters and register an audio frame observer.

mRtcEngine.setRecordingAudioFrameParameters(SAMPLE_RATE, 1, 0, 1024);
mRtcEngine.registerAudioFrameObserver(audioFrameObserver);
The audioFrameObserver is defined as follows:
    private IAudioFrameObserver audioFrameObserver = new IAudioFrameObserver() {
        // Parameter names follow the meanings documented for the Agora SDK callback
        @Override
        public boolean onRecordFrame(byte[] samples, int numOfSamples, int bytesPerSample,
                                     int channels, int samplesPerSec) {
            // Forward the captured frame onto the audio bus while we are listening
            if (isListening) {
                XLAudioRecordManager.getInstance().writeAudio(samples, 0, samples.length);
            }
            return true;
        }

        @Override
        public boolean onPlaybackFrame(byte[] samples, int numOfSamples, int bytesPerSample,
                                       int channels, int samplesPerSec) {
            return false;
        }
    };
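One thing worth keeping in mind: onRecordFrame is invoked from one of the SDK's internal audio threads rather than the UI thread, which is why writeAudio, and the subscriber list behind it, needs to be thread-safe (hence the CopyOnWriteArrayList in the manager).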

And the publisher interface is implemented like this:

@Override
public void onStartRecording() {
    isListening = true;
}

@Override
public void onStopRecording() {
    isListening = false;
}
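One piece the snippets above don't show is when setCurPublisher and initCurPublisher actually get called. A reasonable sketch of the glue code (the method names onJoinChannel/onLeaveChannel and the agoraPublisher field are my own, not from the original project) is to switch publishers whenever the Agora call state changes:

// Hypothetical glue code: route the audio bus through Agora while in a call,
// and fall back to the native AudioRecord publisher otherwise.
public void onJoinChannel() {
    // agoraPublisher is the object implementing XLAudioRecordPublisherCallback
    // with the isListening flag shown above
    XLAudioRecordManager.getInstance().setCurPublisher(agoraPublisher);
}

public void onLeaveChannel() {
    // Restore the internal AudioRecord-based publisher
    XLAudioRecordManager.getInstance().initCurPublisher();
}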


Finally, let's look at the iFlytek subscriber implementation:

import android.content.Context;

import com.iflytek.cloud.SpeechConstant;
import com.iflytek.cloud.SpeechRecognizer;

public class IFlyRecognizer extends RecognizerAdapter implements XLAudioRecordManager.XLAudioRecordSubscriberCallback {

    private com.iflytek.cloud.RecognizerListener recognizerListener;
    private SpeechRecognizer speechRecognizer;
    private String userAudioPath = null;
    private Context mContext; // non-static: a static Context reference would leak the Activity

    public IFlyRecognizer(Context context) {
        mContext = context;
        XLAudioRecordManager.getInstance().subscribe(this);
        speechRecognizer = SpeechRecognizer.createRecognizer(context, null);
        // 2. Configure the dictation parameters
        // Clear any previous parameters first (must come before the other setters)
        speechRecognizer.setParameter(SpeechConstant.PARAMS, null);
        speechRecognizer.setParameter(SpeechConstant.DOMAIN, "iat");
        speechRecognizer.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
        speechRecognizer.setParameter(SpeechConstant.ACCENT, "mandarin");
        speechRecognizer.setParameter(SpeechConstant.SAMPLE_RATE, "16000");
        // Return multiple candidate results
        speechRecognizer.setParameter(SpeechConstant.ASR_NBEST, "5");
        // Leading-silence timeout: how long the user may stay silent before we give up
        speechRecognizer.setParameter(SpeechConstant.VAD_BOS, "8000");
        // Trailing-silence timeout: how long after the user stops speaking the input
        // is considered finished and recording stops automatically
        speechRecognizer.setParameter(SpeechConstant.VAD_EOS, "1000");
        speechRecognizer.setParameter(SpeechConstant.ASR_PTT, "0");
        // -1 selects the external custom audio source, fed via writeAudio()
        speechRecognizer.setParameter(SpeechConstant.AUDIO_SOURCE, "-1");
    }

    @Override
    public void setRecognizerListener(RecognizerListener listener) {
        this.recognizerListener = new IFlyRecognizerListener(listener);
    }

    @Override
    public void startRecognize() {
        // If fetching the user ID failed, don't save the audio file
//        if (!Config.USER_ID.equals("")) {
//            speechRecognizer.setParameter(SpeechConstant.AUDIO_FORMAT, "wav");
//            speechRecognizer.setParameter(SpeechConstant.ASR_AUDIO_PATH, getAudioPathName());
//        }
        XLAudioRecordManager.getInstance().startRecording();
        speechRecognizer.startListening(recognizerListener);
        if (recognizerListener != null) {
            recognizerListener.onBeginOfSpeech();
        }
    }

    @Override
    public void stopRecognize() {
        speechRecognizer.stopListening();
        XLAudioRecordManager.getInstance().stopRecording();
    }

    @Override
    public void cancelRecognize() {
        speechRecognizer.cancel();
        XLAudioRecordManager.getInstance().stopRecording();
    }

    @Override
    public void onAudio(byte[] audioData, int offset, int length) {
        // Feed the PCM from the audio bus straight into the recognizer
        int res = speechRecognizer.writeAudio(audioData, offset, length);
//        if (res == ErrorCode.SUCCESS) {
//            Log.e("IFlyRecognizer", "write succeeded");
//        } else {
//            Log.e("IFlyRecognizer", "write failed");
//        }
    }
}
Note that AUDIO_SOURCE must be set to -1 at initialization; that is what tells the iFlytek SDK to accept an external audio stream, so that onAudio can push data into the recognizer via writeAudio.
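To tie it all together, calling code might look roughly like this (a sketch only; myListener stands in for whatever implements the project's RecognizerListener adapter):

// Hypothetical calling code: the recognizer subscribes itself to the audio bus
// in its constructor, so starting and stopping is all the caller needs to do.
IFlyRecognizer recognizer = new IFlyRecognizer(context);
recognizer.setRecognizerListener(myListener);

recognizer.startRecognize(); // starts the current publisher and the iFlytek session
// ... speech arrives via onAudio() regardless of whether the source is the
// native AudioRecord or the Agora onRecordFrame callback ...
recognizer.stopRecognize();  // stops listening and stops the publisher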

That about covers integrating Agora with iFlytek. I genuinely feel that the design patterns I studied back then have quietly shaped how I write code today. I hope this helps.

