
Computer Engineering & Science, 2023, Vol. 45, Issue (01): 28-36.

• High Performance Computing •

Deep hierarchical attention matrix factorization

LI Jian-hong1, SU Xiao-qian2, WU Cai-hong1

  (1. School of Artificial Intelligence, Anhui University of Science and Technology, Huainan 232001, China;
   2. School of Safety Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China)
  • Received: 2022-08-31  Revised: 2022-10-23  Accepted: 2023-01-25  Online: 2023-01-25  Published: 2023-01-25

Abstract: Matrix factorization is widely used in personalized recommendation because of its strong rating-prediction ability, and many models based on matrix factorization have been designed to improve recommendation performance. However, the limited ability of these models to mine users' latent preference information leads to unsatisfactory recommendation results. In order to mine user preferences more fully and obtain a better recommendation effect, a Deep Hierarchical Attention Matrix Factorization method (DeepHAMF) is proposed. Firstly, the original data are fed into a multi-layer perceptron, and a self-attention mechanism is also used to encode the input to the multi-layer perceptron in order to capture the original preference information; this part is called the self-attention layer. Secondly, the original matrix factorization results and the matrix factorization results after the attention operation are each fused with the output of the multi-layer perceptron through an attention mechanism, so that users' preference information can be fully mined; this part is called the hierarchical attention layer. Finally, the outputs of the self-attention layer and the hierarchical attention layer are fitted by a residual network module. Experimental results on public rating datasets show that DeepHAMF outperforms existing rating-prediction algorithms.

Key words: hierarchical attention, self-attention network, residual fusion, matrix factorization
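
The architecture outlined in the abstract can be illustrated with a small sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the class name DeepHAMFSketch, the embedding size, the number of attention heads, the MLP depth, and the fusion details (fusion_attn, the residual combination) are assumptions made only to show how a matrix-factorization branch, a self-attention/MLP branch, attention-based fusion, and a residual combination could fit together.

# Hypothetical sketch of a DeepHAMF-style model; sizes and fusion details are assumptions.
import torch
import torch.nn as nn

class DeepHAMFSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Self-attention layer: encodes the (user, item) pair before the MLP.
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Multi-layer perceptron over the attended representation.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )
        # Hierarchical attention: weighs the MF branch against the MLP branch.
        self.fusion_attn = nn.Linear(dim, 1)
        self.out = nn.Linear(dim, 1)

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)                # (B, dim)
        v = self.item_emb(item_ids)                # (B, dim)

        # Matrix-factorization branch: element-wise interaction of latent factors.
        mf = u * v                                 # (B, dim)

        # Self-attention branch: attend over the user/item tokens, then apply the MLP.
        tokens = torch.stack([u, v], dim=1)        # (B, 2, dim)
        attended, _ = self.self_attn(tokens, tokens, tokens)
        deep = self.mlp(attended.reshape(attended.size(0), -1))   # (B, dim)

        # Hierarchical attention fusion of the MF and deep branches.
        branches = torch.stack([mf, deep], dim=1)  # (B, 2, dim)
        weights = torch.softmax(self.fusion_attn(branches), dim=1)
        fused = (weights * branches).sum(dim=1)    # (B, dim)

        # Residual combination of the fused signal with the MF branch, then predict the rating.
        return self.out(fused + mf).squeeze(-1)

# Usage: predict ratings for a small batch of (user, item) pairs.
model = DeepHAMFSketch(n_users=1000, n_items=500)
ratings = model(torch.tensor([1, 2, 3]), torch.tensor([10, 20, 30]))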