iOS Aliyun Speech Recognition & Speech Synthesis

Published 2023-05-25 15:02:18 by R1cardo

Aliyun Speech Recognition & Speech Synthesis

Importing the SDK

Add nuisdk.framework from the ZIP package to your project, then add nuisdk.framework under Build Phases > Link Binary With Libraries. Make sure nuisdk.framework is set to Embed & Sign under General > Frameworks, Libraries, and Embedded Content.
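Since the SDK records from the microphone, Info.plist must also declare a microphone usage description, or iOS will terminate the app on the first capture attempt:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is used for speech recognition.</string>
```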


Usage Steps

  • Initialize the SDK and the recorder instance: set the delegate and initialize the recorder.
// Initialize speech-to-text
instance?.nui_initialize(initParams.utf8String, logLevel: LOG_LEVEL_ERROR, saveLog: true)
instance?.nui_set_params(sttParams.utf8String)
instance?.delegate = self

voiceRecorder = NlsVoiceRecorder()
voiceRecorder?.delegate = self
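The SDK's data callback (onNuiNeedAudioData, shown later) drains audio that the recorder delegate is expected to have appended to recordedVoiceData. That producer/consumer handoff can be sketched in pure Swift; VoiceBuffer and its method names are illustrative, not SDK API:

```swift
import Foundation

/// Minimal thread-safe PCM buffer illustrating the handoff between the
/// recorder delegate (producer) and the SDK's data callback (consumer).
/// The real code guards an NSMutableData with @synchronized; this is a
/// pure-Swift equivalent using NSLock.
final class VoiceBuffer {
    private var storage = Data()
    private let lock = NSLock()

    /// Called from the recorder delegate with each captured frame.
    func append(_ frame: Data) {
        lock.lock(); defer { lock.unlock() }
        storage.append(frame)
    }

    /// Called from the SDK's data callback: drains up to `maxLength` bytes.
    func drain(maxLength: Int) -> Data {
        lock.lock(); defer { lock.unlock() }
        let count = min(maxLength, storage.count)
        let chunk = storage.prefix(count)
        storage.removeFirst(count)
        return Data(chunk)
    }
}
```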
  • Configure parameters for your business needs. The initialization parameters are the same for both synthesis and recognition.
/// SDK initialization parameters
    internal var initParams: NSString {
        let bundle = Bundle.main.path(forResource: "Resources", ofType: "bundle")!
        let bundlePath = Bundle(path: bundle)!.resourcePath
        let idString = ASIdentifierManager.shared().advertisingIdentifier.uuidString
        
        voicePath = debugPath()
        
        var dict: [String: String] = [:]
        dict["workspace"] = bundlePath // required
        dict["debug_path"] = voicePath
        dict["device_id"] = idString // required
        dict["save_wav"] = "true"
        // Obtain the appkey and token from Aliyun for speech-service access
        dict["app_key"] = appkey
        if token.isEmpty {
            HGToast("Need token")
            return ""
        }
        dict["token"] = token
        dict["url"] = "wss://nls-gateway.cn-shanghai.aliyuncs.com:443/ws/v1"
        
        // FullMix = 0   // enables local features; requires license registration
        // FullCloud = 1 // choose this for online real-time speech recognition
        // FullLocal = 2 // enables local features; requires license registration
        // AsrMix = 3    // enables local features; requires license registration
        // AsrCloud = 4  // choose this for online one-sentence recognition
        // AsrLocal = 5  // enables local features; requires license registration
        dict["service_mode"] = "1" // required
        
        var jsonStr = ""
        do {
            let data = try JSONSerialization.data(withJSONObject: dict, options: .prettyPrinted)
            jsonStr = String(data: data, encoding: .utf8) ?? ""
            
        } catch {
            print("error genInitParams:\(error.localizedDescription)")
        }
        
        return jsonStr as NSString
    }
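The sttParams string passed to nui_set_params earlier is not shown. A minimal sketch of building it, assuming the nls_config keys used in the Aliyun NUI demo (enable_intermediate_result, sample_rate, sr_format); verify them against your SDK version:

```swift
import Foundation

/// Build the recognition-parameter JSON (the `sttParams` string passed to
/// `nui_set_params`). The keys under `nls_config` are taken from the Aliyun
/// NUI demo and may differ between SDK versions.
func makeSttParams() -> String {
    var nlsConfig: [String: Any] = [:]
    nlsConfig["enable_intermediate_result"] = true // stream partial results
    nlsConfig["sample_rate"] = 16000               // PCM sample rate in Hz
    nlsConfig["sr_format"] = "opus"                // format sent to the server

    let params: [String: Any] = ["nls_config": nlsConfig]
    guard let data = try? JSONSerialization.data(withJSONObject: params),
          let json = String(data: data, encoding: .utf8) else {
        return ""
    }
    return json
}
```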
  • Call nui_dialog_start to start recognition.
instance?.nui_dialog_start(MODE_P2T, dialogParam: NSString(string: "").utf8String)
  • Start or stop the recorder according to the audio-state callback (audio_state_changed_callback).
func onNuiAudioStateChanged(_ state: NuiAudioState) {
        print("onNuiAudioStateChanged state=\(state.rawValue)")
        if state == STATE_CLOSE || state == STATE_PAUSE {
            voiceRecorder?.stop(true)
        } else if state == STATE_OPEN {
            recordedVoiceData = NSMutableData()
            voiceRecorder?.start()
        }
    }
  • Provide the recorded audio data in the user_data_callback callback.
- (int)onNuiNeedAudioData:(char *)audioData length:(int)len data:(NSMutableData *)recordedVoiceData {
    static int emptyCount = 0;
    @autoreleasepool {
        @synchronized (recordedVoiceData) {
            if (recordedVoiceData.length > 0) {
                // Copy at most `len` bytes of buffered audio into the SDK's buffer.
                int recorderLen = (int)MIN(recordedVoiceData.length, (NSUInteger)len);
                NSData *tempData = [recordedVoiceData subdataWithRange:NSMakeRange(0, recorderLen)];
                [tempData getBytes:audioData length:recorderLen];
                // Keep the bytes that did not fit for the next callback.
                NSRange remainRange = NSMakeRange(recorderLen, recordedVoiceData.length - recorderLen);
                [recordedVoiceData setData:[recordedVoiceData subdataWithRange:remainRange]];
                emptyCount = 0;
                return recorderLen;
            } else {
                // No buffered data yet; reset the counter after ~50 empty polls.
                if (emptyCount++ >= 50) {
                    emptyCount = 0;
                }
                return 0;
            }
        }
    }
    return 0;
}
  • Retrieve the recognition result in the EVENT_ASR_PARTIAL_RESULT and EVENT_ASR_RESULT (or EVENT_SENTENCE_END) event callbacks, and convert it to the String you need.
func onNuiEventCallback(_ nuiEvent: NuiCallbackEvent, dialog: Int, kwsResult wuw: UnsafePointer<CChar>!, asrResult asr_result: UnsafePointer<CChar>!, ifFinish finish: Bool, retCode code: Int32) {
        print("onNuiEventCallback event: \(nuiEvent) finish: \(finish)")
        var result = NSString()
        if nuiEvent == EVENT_ASR_PARTIAL_RESULT || nuiEvent == EVENT_ASR_RESULT {
            result = NSString(utf8String: asr_result) ?? NSString()
            print("RESULT: \(result) finish: \(finish)")
            // The result is a JSON string; the recognized text is at payload.result.
            if let jsonData = String(result).data(using: .utf8) {
                do {
                    let jsonObject = try JSONSerialization.jsonObject(with: jsonData, options: [])
                    if let jsonDict = jsonObject as? [String: Any],
                       let payload = jsonDict["payload"] as? [String: Any],
                       let res = payload["result"] as? String {
                        DispatchQueue.main.async {
                            self.onDone?(res)
                        }
                    }
                } catch {
                    print("Error parsing JSON: \(error.localizedDescription)")
                }
            }
            
        } else if nuiEvent == EVENT_ASR_ERROR {
            print("EVENT_ASR_ERROR error: \(code)")
            return
        } else if nuiEvent == EVENT_MIC_ERROR {
            print("MIC ERROR")
            voiceRecorder?.stop(true)
            voiceRecorder?.start()
            return
        }
        // finish == true (whether from an error or from completed recognition) marks the end of one task lifecycle; a new recognition can then be started
        if finish {
            print("STT result: \(String(result))")
        }
    }
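For reference, the asr_result string parsed above is JSON; based on the fields the code reads, its shape is roughly as follows (field names other than payload.result, and all values, are illustrative):

```json
{
  "header": { "name": "TranscriptionResultChanged" },
  "payload": { "result": "recognized text so far" }
}
```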
  • Call nui_dialog_cancel to stop recognition.
instance?.nui_dialog_cancel(false)
  • When finished, release SDK resources with the nui_release interface.
// Release resources
    deinit {
        instance?.nui_release()
        tts?.nui_tts_release()
    }

The TTS steps are similar.

The main difference is the callback that delivers the synthesized data:

// Synthesized audio data is ready
    func onNuiTtsUserdataCallback(_ info: UnsafeMutablePointer<CChar>!, infoLen info_len: Int32, buffer: UnsafeMutablePointer<CChar>!, len: Int32, taskId task_id: UnsafeMutablePointer<CChar>!) {
        if info_len > 0 {
            print("onNuiTtsUserdataCallback info: \(String(cString: info)). length: \(info_len)")
        }
        if len > 0 {
            voicePlayer.write(buffer, length: len)
            print("TTS audio chunk length: \(len)")
        }
    }

The data is then played back through voicePlayer.
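If you need the synthesized bytes as Data instead (for writing to a file or feeding another player API), the callback's C buffer can be bridged directly; audioChunk is a hypothetical helper, not part of the SDK:

```swift
import Foundation

/// The TTS userdata callback delivers audio as an UnsafeMutablePointer<CChar>
/// plus a length. Copy `length` bytes out of the buffer into a Data value.
func audioChunk(from buffer: UnsafeMutablePointer<CChar>, length: Int32) -> Data {
    // Data(bytes:count:) reinterprets the signed-char pointer as raw bytes
    // and copies them, so the Data stays valid after the callback returns.
    return Data(bytes: buffer, count: Int(length))
}
```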