
Computer Engineering & Science ›› 2023, Vol. 45 ›› Issue (07): 1292-1299.

• Artificial Intelligence and Data Mining •

A low-resource Lao text normalization task based on BiLSTM

WANG Jian1, JIANG Lin1,2, WANG Lin-qin1,2, YU Zheng-tao1,2, ZHANG Song1,2, GAO Sheng-xiang1,2

  1. (1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China;
    2. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China)

  • Received: 2021-11-24  Revised: 2022-03-11  Accepted: 2023-07-25  Online: 2023-07-25  Published: 2023-07-11
  • Funding:
    National Natural Science Foundation of China (61732005, U21B2027, 61761026, 61972186, 61762056); National Key Research and Development Program of China (2019QY1802, 2019QY1801, 2019QY1800); Yunnan Provincial High-Tech Talent Project (201606, 202105AC160018); Yunnan Provincial Major Science and Technology Special Program (202002AD080001-5, 202103AA080015); Yunnan Provincial Basic Research Program (202001AS070014, 2018FB104)

Abstract: Text normalization (TN) is an indispensable step in text front-end analysis for speech synthesis. For Lao, text normalization converts non-standard words (NSW) in Lao text into spoken-form words (SFW). The task has not previously been carried out for Lao, and it faces the problems of hard-to-acquire training data, diverse language expression, and ambiguity in some non-standard words. To address these problems, a corpus for the Lao text normalization task is constructed and the task is treated as sequence labeling: a neural network predicts ambiguous non-standard words from their surrounding context, a self-attention mechanism is added to deepen the relationships between characters in the sequence, and different strategies for introducing a pre-trained language model are explored. The BiLSTM model fused with self-attention achieves an accuracy of 67.59% on the test set.

Key words: Lao, text normalization, neural network, self-attention mechanism
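
For readers unfamiliar with the architecture, the sketch below shows how a character-level sequence tagger combining a BiLSTM encoder with a self-attention layer can be assembled in PyTorch. This is a minimal illustration of the model family named in the abstract, not the authors' code: the vocabulary size, tag set, hidden sizes, and the residual combination of attention and LSTM states are all assumptions made for the example.

    import torch
    import torch.nn as nn

    class BiLSTMSelfAttnTagger(nn.Module):
        """Character-level tagger: BiLSTM encoder + self-attention + per-character classifier."""
        def __init__(self, vocab_size, tag_size, emb_dim=128, hidden_dim=256, num_heads=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            # The BiLSTM encodes each character with its left and right context.
            self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
            # Self-attention over the BiLSTM states lets every character attend to
            # every other character, deepening long-range relations in the sequence.
            self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads, batch_first=True)
            self.classifier = nn.Linear(2 * hidden_dim, tag_size)

        def forward(self, char_ids):
            x = self.embed(char_ids)        # (batch, seq_len, emb_dim)
            h, _ = self.bilstm(x)           # (batch, seq_len, 2*hidden_dim)
            a, _ = self.attn(h, h, h)       # query = key = value = BiLSTM states
            return self.classifier(a + h)   # residual fusion (an assumption); per-character tag logits

    # Toy usage: 2 sequences of 10 character ids, 20 hypothetical verbalization tags.
    model = BiLSTMSelfAttnTagger(vocab_size=1000, tag_size=20)
    logits = model(torch.randint(1, 1000, (2, 10)))   # -> shape (2, 10, 20)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 20), torch.randint(0, 20, (20,)))

In this framing, each predicted tag would select how an ambiguous NSW span is verbalized, reducing text normalization to per-character classification, which matches the accuracy metric reported above.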