• A journal of the China Computer Federation (CCF)
  • China science and technology core journal
  • Chinese core journal

Computer Engineering & Science ›› 2023, Vol. 45 ›› Issue (12): 2226-2236.

• Artificial Intelligence and Data Mining •


Bi-modal music genre classification model MGTN based on convolutional attention mechanism

JIAO Jia-hui1,2,MA Si-yuan1,2,SONG Yu2,SONG Wei1   

  1. Henan Academy of Big Data, Zhengzhou University, Zhengzhou 450052, China;
    2. School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
  • Received: 2022-08-12  Revised: 2022-11-14  Accepted: 2023-12-25  Online: 2023-12-25  Published: 2023-12-14


Abstract: In the field of music information retrieval (MIR), classification by music genre is a challenging task. Traditional audio feature engineering methods require manually selecting and extracting music signal features, resulting in a complex feature extraction process, unstable model performance, and poor generalization. Methods that combine deep learning with spectrograms also face problems such as data that is ill-suited to the model and difficulty in extracting global features. This paper proposes MGTN, a music genre classification model based on a convolutional attention mechanism. MGTN fuses two music genre classification approaches, taking spectrograms as input and constructing audio time-series data from extracted audio signal features, which greatly improves the model's feature extraction ability and generalization and offers a new approach to music genre classification. Experimental results on the GTZAN and Ballroom datasets show that MGTN effectively fuses input data from the two modalities and holds a clear advantage over dozens of baseline models.
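The bi-modal fusion idea in the abstract can be illustrated with a toy NumPy sketch: a pooled spectrogram branch and a self-attention branch over an audio-feature sequence are embedded separately, concatenated, and classified. This is a minimal illustration only, not the authors' MGTN implementation; all shapes, the 3x3 mean-pool "convolution", and the single-head attention are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectrogram_branch(spec):
    """Toy CNN-style branch: non-overlapping 3x3 mean pooling, flattened."""
    h, w = spec.shape
    pooled = np.array([spec[i:i + 3, j:j + 3].mean()
                       for i in range(0, h - 2, 3)
                       for j in range(0, w - 2, 3)])
    return pooled  # fixed-length embedding of the spectrogram

def attention_branch(feats):
    """Toy self-attention over a (T, d) audio-feature time series."""
    T, d = feats.shape
    weights = softmax(feats @ feats.T / np.sqrt(d))  # (T, T) attention map
    attended = weights @ feats                       # context-mixed frames
    return attended.mean(axis=0)                     # (d,) pooled embedding

# hypothetical inputs: a 12x12 spectrogram patch and 8 frames of 6 features
spec = rng.normal(size=(12, 12))
feats = rng.normal(size=(8, 6))

# fuse the two modality embeddings, then classify into 10 genres (as in GTZAN)
fused = np.concatenate([spectrogram_branch(spec), attention_branch(feats)])
logits = fused @ rng.normal(size=(fused.size, 10))
probs = softmax(logits)  # one probability per genre
```

In MGTN itself the branches are learned (convolutional and Transformer-based), but the overall shape of the computation, per-modality embedding followed by fusion and classification, is the same.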


Key words: music genre classification, Transformer model, spectrogram, audio feature engineering, attention mechanism