When logging events, a user who finds it inconvenient to write by hand can use voice input instead, converting speech into text, which is both convenient and smart. Several good open platforms offer speech recognition today, notably the iFlytek (科大讯飞) platform and the Baidu speech platform. iFlytek's strength is its high accuracy on long passages of text, and this post focuses on using the iFlytek speech SDK. Let's take a detailed look at it below.
1. iFlytek Open Platform
http://www.xfyun.cn
2. iFlytek iOS API
Step 1: Apply for an account ID
Create a new application (this gives you the appid used later and lets you enable services).
Log in to the iFlytek open platform (third-party login is also supported) and create an application from the user menu, filling in the relevant information on the creation page. You will then be given an SDK download link; if you don't see one, simply download the SDK from the SDK section instead.
Step 2: Import the iFlytek SDK framework
After unzipping, the downloaded SDK contains three folders. The doc folder is, of course, the developer documentation. The two that matter are the other ones: lib, which holds the iFlytek SDK library (this is what we import), and sample, which contains the iFlytek demo project.
Now create a project, copy "iflyMSC.framework" from the lib folder into the project directory, and then add the required dependent libraries to the target.
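For reference (this list is based on the iFlytek documentation for this generation of the SDK, so verify it against the doc folder in the package you downloaded), the libraries typically linked under Build Phases → Link Binary With Libraries are: libz.tbd, libc++.tbd, AVFoundation.framework, SystemConfiguration.framework, Foundation.framework, CoreTelephony.framework, AudioToolbox.framework, UIKit.framework, CoreLocation.framework, Contacts.framework, AddressBook.framework, QuartzCore.framework, and CoreGraphics.framework, in addition to iflyMSC.framework itself.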
Step 3: Perform speech recognition
Speech recognition comes in two flavors for different scenarios: one with a built-in UI prompt and one without. We'll start with the UI-based version.
3.1 Import the header files
#import "iflyMSC/IFlyMSC.h"
- 1
- 1
#import "IFlyContact.h" #import "IFlyDataUploader.h" #import "IFlyDebugLog.h" #import "IFlyISVDelegate.h" #import "IFlyISVRecognizer.h" #import "IFlyRecognizerView.h" #import "IFlyRecognizerViewDelegate.h" #import "IFlyResourceUtil.h" #import "IFlySetting.h" #import "IFlySpeechConstant.h" #import "IFlySpeechError.h" #import "IFlySpeechEvaluator.h" #import "IFlySpeechEvaluatorDelegate.h" #import "IFlySpeechEvent.h" #import "IFlySpeechRecognizer.h" #import "IFlySpeechRecognizerDelegate.h" #import "IFlySpeechSynthesizer.h" #import "IFlySpeechSynthesizerDelegate.h" #import "IFlySpeechUnderstander.h" #import "IFlySpeechUtility.h" #import "IFlyTextUnderstander.h" #import "IFlyUserWords.h" #import "IFlyPcmRecorder.h" #import "IFlySpeechEvaluator.h" #import "IFlySpeechEvaluatorDelegate.h" #import "IFlyVoiceWakeuper.h" #import "IFlyVoiceWakeuperDelegate.h"
3.2 Log in to the iFlytek server
Before using iFlytek's speech services, you need to authenticate, i.e. log in to the iFlytek server, which identifies your client by your application's APPID before accepting the connection. The code is as follows:
```objc
// Log in to the speech platform
NSString *initString = [[NSString alloc] initWithFormat:@"appid=%@", @"57e08eb8"];
[IFlySpeechUtility createUtility:initString];
```
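The SDK only needs to be initialized once, so a natural place for this call is the app delegate's launch method. A minimal sketch, assuming a standard AppDelegate (use the appid issued for your own application):

```objc
// AppDelegate.m
#import <iflyMSC/iflyMSC.h>

- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Log in to the iFlytek server before any recognizer is used.
    // Replace "57e08eb8" with the appid of your own application.
    NSString *initString = [NSString stringWithFormat:@"appid=%@", @"57e08eb8"];
    [IFlySpeechUtility createUtility:initString];
    return YES;
}
```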
3.3 Create the UI-based recognizer object
```objc
//  Speech-JiKe
//
//  Created by rimi on 16/9/22.
//  Copyright © 2016 LucioSui. All rights reserved.
//

#import <UIKit/UIKit.h>
#import "iflyMSC/iflyMSC.h"

@class IFlySpeechRecognizer;

@interface ViewController : UIViewController <IFlySpeechRecognizerDelegate, IFlyRecognizerViewDelegate>

@property (nonatomic, strong) NSString *filePath;                         // audio file path
@property (nonatomic, strong) IFlySpeechRecognizer *iFlySpeechRecognizer; // recognizer without UI
@property (nonatomic, strong) IFlyRecognizerView *iflyRecognizerView;     // recognizer with UI
@property (nonatomic, strong) NSString *result;
@property (nonatomic, assign) BOOL isCanceled;

@end
```
3.4 Initialize the recognizer object (with or without UI)
```objc
// Configure the recognition parameters
- (void)initRecognizer {
    NSLog(@"%s", __func__);
    if ([IATConfig sharedInstance].haveView == NO) { // without UI
        // Singleton: the instance without UI
        if (_iFlySpeechRecognizer == nil) {
            _iFlySpeechRecognizer = [IFlySpeechRecognizer sharedInstance];
            [_iFlySpeechRecognizer setParameter:@"" forKey:[IFlySpeechConstant PARAMS]];
            // Set dictation mode
            [_iFlySpeechRecognizer setParameter:@"iat" forKey:[IFlySpeechConstant IFLY_DOMAIN]];
        }
        _iFlySpeechRecognizer.delegate = self;

        if (_iFlySpeechRecognizer != nil) {
            IATConfig *instance = [IATConfig sharedInstance];
            // Maximum recording duration
            [_iFlySpeechRecognizer setParameter:instance.speechTimeout forKey:[IFlySpeechConstant SPEECH_TIMEOUT]];
            // Trailing-silence endpoint
            [_iFlySpeechRecognizer setParameter:instance.vadEos forKey:[IFlySpeechConstant VAD_EOS]];
            // Leading-silence endpoint
            [_iFlySpeechRecognizer setParameter:instance.vadBos forKey:[IFlySpeechConstant VAD_BOS]];
            // Network timeout
            [_iFlySpeechRecognizer setParameter:@"20000" forKey:[IFlySpeechConstant NET_TIMEOUT]];
            // Sample rate; 16K is recommended
            [_iFlySpeechRecognizer setParameter:instance.sampleRate forKey:[IFlySpeechConstant SAMPLE_RATE]];
            if ([instance.language isEqualToString:[IATConfig chinese]]) {
                // Language
                [_iFlySpeechRecognizer setParameter:instance.language forKey:[IFlySpeechConstant LANGUAGE]];
                // Accent/dialect
                [_iFlySpeechRecognizer setParameter:instance.accent forKey:[IFlySpeechConstant ACCENT]];
            } else if ([instance.language isEqualToString:[IATConfig english]]) {
                [_iFlySpeechRecognizer setParameter:instance.language forKey:[IFlySpeechConstant LANGUAGE]];
            }
            // Whether to return punctuation
            [_iFlySpeechRecognizer setParameter:instance.dot forKey:[IFlySpeechConstant ASR_PTT]];
        }
    } else { // with UI
        // Singleton: the instance with UI
        if (_iflyRecognizerView == nil) {
            // Center the UI on screen
            _iflyRecognizerView = [[IFlyRecognizerView alloc] initWithCenter:self.view.center];
            [_iflyRecognizerView setParameter:@"" forKey:[IFlySpeechConstant PARAMS]];
            // Set dictation mode
            [_iflyRecognizerView setParameter:@"iat" forKey:[IFlySpeechConstant IFLY_DOMAIN]];
        }
        _iflyRecognizerView.delegate = self;

        if (_iflyRecognizerView != nil) {
            IATConfig *instance = [IATConfig sharedInstance];
            // Maximum recording duration
            [_iflyRecognizerView setParameter:instance.speechTimeout forKey:[IFlySpeechConstant SPEECH_TIMEOUT]];
            // Trailing-silence endpoint
            [_iflyRecognizerView setParameter:instance.vadEos forKey:[IFlySpeechConstant VAD_EOS]];
            // Leading-silence endpoint
            [_iflyRecognizerView setParameter:instance.vadBos forKey:[IFlySpeechConstant VAD_BOS]];
            // Network timeout
            [_iflyRecognizerView setParameter:@"20000" forKey:[IFlySpeechConstant NET_TIMEOUT]];
            // Sample rate; 16K is recommended
            [_iflyRecognizerView setParameter:instance.sampleRate forKey:[IFlySpeechConstant SAMPLE_RATE]];
            if ([instance.language isEqualToString:[IATConfig chinese]]) {
                // Language
                [_iflyRecognizerView setParameter:instance.language forKey:[IFlySpeechConstant LANGUAGE]];
                // Accent/dialect
                [_iflyRecognizerView setParameter:instance.accent forKey:[IFlySpeechConstant ACCENT]];
            } else if ([instance.language isEqualToString:[IATConfig english]]) {
                // Language
                [_iflyRecognizerView setParameter:instance.language forKey:[IFlySpeechConstant LANGUAGE]];
            }
            // Whether to return punctuation
            [_iflyRecognizerView setParameter:instance.dot forKey:[IFlySpeechConstant ASR_PTT]];
        }
    }
}
```
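Note that IATConfig is not part of the SDK; it is the small settings singleton that ships with the iFlytek sample project. If you are not building on top of the sample, here is a hedged sketch of its shape (property names follow the code above; the default values shown are only illustrative):

```objc
// IATConfig.h — a stand-in for the sample project's settings singleton.
#import <Foundation/Foundation.h>

@interface IATConfig : NSObject
@property (nonatomic, strong) NSString *speechTimeout; // max recording time in ms, e.g. @"30000"
@property (nonatomic, strong) NSString *vadEos;        // trailing-silence endpoint in ms, e.g. @"3000"
@property (nonatomic, strong) NSString *vadBos;        // leading-silence endpoint in ms, e.g. @"3000"
@property (nonatomic, strong) NSString *language;      // [IATConfig chinese] or [IATConfig english]
@property (nonatomic, strong) NSString *accent;        // dialect, e.g. @"mandarin"
@property (nonatomic, strong) NSString *dot;           // @"1" = return punctuation, @"0" = don't
@property (nonatomic, strong) NSString *sampleRate;    // @"16000" recommended
@property (nonatomic, assign) BOOL haveView;           // YES = use the UI-based recognizer
+ (IATConfig *)sharedInstance;
+ (NSString *)chinese;   // @"zh_cn"
+ (NSString *)english;   // @"en_us"
@end
```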
3.5 Implement the delegate methods
```objc
#pragma mark - Error callback
- (void)onError:(IFlySpeechError *)error {
    NSLog(@"%s", __func__);
    if ([IATConfig sharedInstance].haveView == NO) {
        NSString *text;
        if (self.isCanceled) {
            text = @"recognition cancelled";
        } else if (error.errorCode == 0) {
            if (_result.length == 0) {
                text = @"no recognition result";
            } else {
                text = @"recognition succeeded";
            }
        } else {
            text = [NSString stringWithFormat:@"error occurred: %d %@", error.errorCode, error.errorDesc];
            NSLog(@"%@", text);
        }
    } else {
        NSLog(@"errorCode:%d", [error errorCode]);
    }
}

// Without UI: dictation result callback
// results: the recognition results
// isLast:  whether this is the final result
- (void)onResults:(NSArray *)results isLast:(BOOL)isLast {
    _volumLabel.alpha = 0.0;
    NSMutableString *resultString = [[NSMutableString alloc] init];
    NSDictionary *dic = results[0];
    for (NSString *key in dic) {
        [resultString appendFormat:@"%@", key];
    }
    _result = [NSString stringWithFormat:@"%@", resultString];
    NSString *resultFromJson = [ISRDataHelper stringFromJson:resultString];
    _textLabel.text = [NSString stringWithFormat:@"%@%@", _textLabel.text, resultFromJson];
    if (isLast) {
        NSLog(@"dictation result (json): %@", self.result);
    }
    NSLog(@"_result=%@", _result);
}

// With UI: dictation result callback
// resultArray: the recognition results
// isLast:      whether this is the final result
- (void)onResult:(NSArray *)resultArray isLast:(BOOL)isLast {
    _volumLabel.alpha = 0.0;
    NSMutableString *result = [[NSMutableString alloc] init];
    NSDictionary *dic = [resultArray objectAtIndex:0];
    for (NSString *key in dic) {
        [result appendFormat:@"%@", key];
    }
    _textLabel.text = [NSString stringWithFormat:@"%@", result];
}
```
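ISRDataHelper is also borrowed from the iFlytek sample project; it flattens the JSON the server returns into plain text. A minimal sketch of such a parser, assuming the usual dictation payload shape {"ws":[{"cw":[{"w":"..."}]}]}:

```objc
// A stand-in for ISRDataHelper's stringFromJson:, concatenating the "w"
// fields of a dictation result into plain text.
+ (NSString *)stringFromJson:(NSString *)params {
    if (params == nil) {
        return nil;
    }
    NSMutableString *text = [[NSMutableString alloc] init];
    NSData *data = [params dataUsingEncoding:NSUTF8StringEncoding];
    NSDictionary *resultDic = [NSJSONSerialization JSONObjectWithData:data
                                                              options:kNilOptions
                                                                error:nil];
    // Each element of "ws" is a word slot; each slot offers candidates in "cw".
    for (NSDictionary *wordDic in resultDic[@"ws"]) {
        NSDictionary *firstCandidate = [wordDic[@"cw"] firstObject];
        [text appendFormat:@"%@", firstCandidate[@"w"]];
    }
    return text;
}
```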
3.6 Start recognizing speech
The audio file save path:
```objc
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
NSString *cachePath = [paths objectAtIndex:0];
_filePath = [[NSString alloc] initWithFormat:@"%@", [cachePath stringByAppendingPathComponent:@"asr.pcm"]];
```
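The excerpt stops before the call that actually starts a session. Based on the SDK interfaces imported earlier, a hedged sketch of a start action could look like the following (ASR_AUDIO_PATH tells the SDK where to write the recorded PCM; the button action itself is hypothetical):

```objc
// Hypothetical button action: kick off a dictation session.
- (IBAction)startRecognize:(id)sender {
    self.isCanceled = NO;
    self.result = nil;
    if ([IATConfig sharedInstance].haveView == NO) {
        // Without UI: also persist the recorded audio to the path built above.
        [_iFlySpeechRecognizer setParameter:_filePath
                                     forKey:[IFlySpeechConstant ASR_AUDIO_PATH]];
        [_iFlySpeechRecognizer startListening];
    } else {
        // With UI: the recognizer view presents its own recording dialog.
        [_iflyRecognizerView start];
    }
}
```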
Only part of the code has been extracted here; this post will be updated continuously, and a demo will be posted later.