Alibaba Open-Sources Its Acoustic Modeling Technology for Speech Recognition

Abstract: This post introduces Alibaba's new acoustic modeling technology for speech recognition: the Deep Feedforward Sequential Memory Network (DFSMN). Speech recognition systems based on DFSMN have already been successfully deployed in many scenarios, including courtroom trial transcription, intelligent customer service, video content review, real-time subtitle transcription, speaker verification, and the Internet of Things.

Editor's note: This article was written by Shiliang Zhang, a senior algorithm engineer at Alibaba's Machine Intelligence Technology Lab. It introduces Alibaba's new acoustic modeling technology for speech recognition, the Deep Feedforward Sequential Memory Network (DFSMN). Speech recognition systems based on DFSMN have already been successfully deployed in many scenarios, including courtroom trial transcription, intelligent customer service, video content review, real-time subtitle transcription, speaker verification, and the Internet of Things. We are now open-sourcing an implementation of DFSMN based on the Kaldi speech recognition toolkit, together with the corresponding training scripts. With the released code and training recipe, state-of-the-art performance can be obtained on the public English dataset LibriSpeech.

This post presents DFSMN, an improved Feedforward Sequential Memory Network (FSMN) architecture for large vocabulary continuous speech recognition. We release the source code and training recipes of DFSMN based on the popular Kaldi speech recognition toolkit, and demonstrate that DFSMN achieves the best reported performance on the LibriSpeech speech recognition task.

Acoustic Modeling in Speech Recognition

Deep neural networks have become the dominant acoustic models in large vocabulary continuous speech recognition systems. Depending on how their units are connected, neural networks come in various architectures, such as feedforward fully-connected neural networks (FNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).

For acoustic modeling, it is crucial to exploit the long-term dependencies within a speech signal. Recurrent neural networks (RNNs) are designed to capture such dependencies through a simple mechanism of recurrent feedback: they store memory in their recurrent connections and can thereby carry out rather complicated transformations of sequential data over extended periods of time. Unlike FNNs, which can only learn to map a fixed-size input to a fixed-size output, RNNs can in principle learn to map one variable-length sequence to another. RNNs, especially long short-term memory (LSTM) networks, have therefore become the most popular choice for acoustic modeling in speech recognition.
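To make the recurrence concrete, here is a minimal NumPy sketch of a vanilla recurrent layer (purely illustrative; the function name and shapes are our own, not part of the released code). Its first-order feedback on the previous hidden state is exactly what the next section views as a first-order IIR filter:

import numpy as np

def vanilla_rnn(x, W, U, b):
    """Minimal recurrent layer: each output depends on the previous
    hidden state, i.e. a first-order recurrence (conceptually an IIR filter).

    x: input sequence, shape (T, D_in)
    W: input weights, shape (D_hid, D_in)
    U: recurrent weights, shape (D_hid, D_hid)
    b: bias, shape (D_hid,)
    """
    T = x.shape[0]
    h = np.zeros((T, W.shape[0]))
    prev = np.zeros(W.shape[0])
    for t in range(T):
        prev = np.tanh(W @ x[t] + U @ prev + b)  # recurrent feedback
        h[t] = prev
    return h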

In our previous work, we proposed a novel non-recurrent architecture, the feedforward sequential memory network (FSMN), which can effectively model long-term dependencies in sequential data without any recurrent feedback. FSMN is inspired by filter design in digital signal processing: any infinite impulse response (IIR) filter can be well approximated by a high-order finite impulse response (FIR) filter. Since the recurrent layer of an RNN can be conceptually viewed as a first-order IIR filter, it should be possible to approximate it precisely with a high-order FIR filter. We therefore extend the standard feedforward fully connected neural network by augmenting its hidden layers with memory blocks that adopt a tapped-delay line structure, as in FIR filters. Fig. 1 (a) shows an FSMN with one memory block added to its ℓ-th hidden layer, and Fig. 1 (b) shows the FIR-filter-like memory block itself. The overall FSMN remains a pure feedforward structure, so it can be trained in a much more efficient and stable way than RNNs. The learnable FIR-like memory blocks encode long context information into a fixed-size representation, which helps the model capture long-term dependencies. Experimental results on the English Switchboard task show that FSMN outperforms the popular BLSTM while being faster to train.


Fig. 1. Illustration of FSMN and its tapped-delay memory block
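To illustrate the tapped-delay line, here is a minimal NumPy sketch of the FIR-like memory block (our own simplified rendering, assuming a vectorized FSMN with lookback order N1 and lookahead order N2; the released Kaldi components are implemented differently):

import numpy as np

def fsmn_memory_block(h, a, c):
    """FIR-like memory block of a (vectorized) FSMN.

    h: hidden activations, shape (T, D)
    a: lookback filter taps, shape (N1 + 1, D)  (tap 0 covers the current frame)
    c: lookahead filter taps, shape (N2, D)
    Returns m, shape (T, D), with
        m[t] = sum_i a[i] * h[t - i] + sum_j c[j] * h[t + 1 + j],
    where out-of-range frames are treated as zeros.
    """
    T, D = h.shape
    m = np.zeros_like(h)
    for t in range(T):
        for i in range(a.shape[0]):      # past taps (tapped-delay line)
            if t - i >= 0:
                m[t] += a[i] * h[t - i]
        for j in range(c.shape[0]):      # future taps (lookahead)
            if t + 1 + j < T:
                m[t] += c[j] * h[t + 1 + j]
    return m

Because m[t] is a fixed-size weighted sum of a finite window of hidden activations, the whole network stays feedforward while still summarizing long context.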

DFSMN Open Source


Fig. 2. Illustration of Deep-FSMN (DFSMN) with skip connection

In this work, building on our previous FSMN work and on recent work on very deep neural architectures, we present an improved FSMN structure, the Deep-FSMN (DFSMN), shown in Fig. 2, which introduces skip connections between the memory blocks of adjacent layers. These skip connections let information flow across layers and thereby alleviate the vanishing-gradient problem when building very deep structures. As a result, we can successfully train DFSMNs with dozens of layers that significantly outperform the previous FSMN.
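The sketch below shows, schematically, how such a skip connection can be added to the memory block (again our own simplified NumPy rendering; the strides and linear projections of the actual DFSMN layer are omitted here):

import numpy as np

def dfsmn_block(p, p_prev_memory, a, c):
    """Schematic DFSMN memory block with an identity skip connection.

    p:              projected activations of this layer, shape (T, D)
    p_prev_memory:  memory-block output of the previous layer, shape (T, D)
    a, c:           lookback / lookahead taps, shapes (N1 + 1, D) and (N2, D)
    """
    T, D = p.shape
    m = p_prev_memory.copy()             # skip connection across layers:
    for t in range(T):                   # gradients flow directly to lower layers
        for i in range(a.shape[0]):      # past taps
            if t - i >= 0:
                m[t] += a[i] * p[t - i]
        for j in range(c.shape[0]):      # future taps
            if t + 1 + j < T:
                m[t] += c[j] * p[t + 1 + j]
    return m

Because the skip connection is an identity mapping, stacking dozens of such blocks does not attenuate the gradient the way a long chain of transformations would.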

We implemented DFSMN on top of the popular Kaldi speech recognition toolkit and released the source code at https://github.com/tramphero/kaldi. DFSMN is embedded into Kaldi's nnet1 framework by adding DFSMN-related components and CUDA kernel functions. We use mini-batch based training instead of multi-stream training, which is more stable and efficient.

Improving the State of the Art

We trained DFSMN on the LibriSpeech corpus, a large (1000-hour) corpus of English read speech derived from audiobooks of the LibriVox project, sampled at 16 kHz. We trained DFSMN under two official settings using the Kaldi recipes: 1) a model trained on the "cleaned" data (the 960-hour setting); 2) a model trained on the speed-perturbed and volume-perturbed "cleaned" data (the 3000-hour setting).

For the plain 960-hour setting, the best model in the previous official Kaldi release is a BLSTM trained with the cross-entropy criterion. For comparison, we trained DFSMN with the same front-end processing and decoding configuration as the official BLSTM, also using the cross-entropy criterion; the results are shown in Table 1. For the augmented 3000-hour setting, the previous best result is achieved by a TDNN trained with lattice-free MMI followed by sMBR-based discriminative training. In comparison, we trained DFSMN with cross-entropy followed by one epoch of sMBR-based discriminative training; the results are shown in Table 2. In both settings, DFSMN achieves significant performance improvements over the previous best results.

Table 1. Performance (WER in %) of BLSTM and DFSMN trained on cleaned data.

Model            Small LM    Large LM
Official-BLSTM   6.85        5.22
DFSMN            4.73        4.36
Relative Gain    +30.95%     +16.48%

Table 2. Performance (WER in %) of TDNN and DFSMN trained on speed-perturbed and volume-perturbed cleaned data.

Model            Small LM    Large LM
TDNN             6.15        4.31
DFSMN            5.10        3.96
Relative Gain    +17.07%     +8.12%

How to Get Our Implementation and Reproduce Our Results

We provide two ways to get the implementation and reproduce our results: 1) a GitHub project based on Kaldi; 2) a PATCH file containing the DFSMN-related code and example scripts.

  • Get the GitHub project

git clone https://github.com/tramphero/kaldi

  • Apply the PATCH

The PATCH is built against the Kaldi speech recognition toolkit at commit "04b1f7d6658bc035df93d53cb424edc127fab819". You can apply it to your own Kaldi branch with the following commands:

#Take a look at what changes are in the patch
git apply --stat Alibaba_MIT_Speech_DFSMN.patch

#Test the patch before you actually apply it
git apply --check Alibaba_MIT_Speech_DFSMN.patch

#If you don't get any errors, the patch can be applied cleanly
git am --signoff < Alibaba_MIT_Speech_DFSMN.patch

The training scripts and experimental results for the LibriSpeech task are available at https://github.com/tramphero/kaldi/tree/master/egs/librispeech/s5. There are three DFSMN configurations with different model sizes: DFSMN_S, DFSMN_M and DFSMN_L.

**********************************************************************************

# ## Training FSMN models on the cleaned-up data

# ## Three configurations of DFSMN with different model size: DFSMN_S, DFSMN_M, DFSMN_L

local/nnet/run_fsmn_ivector.sh DFSMN_S

local/nnet/run_fsmn_ivector.sh DFSMN_M

local/nnet/run_fsmn_ivector.sh DFSMN_L

**********************************************************************************

DFSMN_S is a small DFSMN with six DFSMN components, while DFSMN_L is a large DFSMN consisting of ten DFSMN components. For the 960-hour setting, it takes about 2-3 days to train DFSMN_S using only one M40 GPU. Detailed experimental results are listed in the RESULTS file.

For more details, take a look at our paper and the open-source project.
