Computer Engineering & Science ›› 2023, Vol. 45 ›› Issue (05): 849-858.
• Software Engineering •
LIU Nan-yan,WEI Hong-fei,MA Sheng-xiang
Abstract: Facial expressions are one of the most important ways humans express emotion. Because expression changes arise from the coordinated movement of multiple facial organs and muscles, and because parts of the face may be occluded, a simple and effective deep learning network that fuses local dynamic features is proposed to extract local dynamic features effectively and to handle partial occlusion. An attention network, guided by detected facial key points, directs the model to focus on unoccluded facial regions. From temporally ordered key frames, dynamic and spatiotemporal information of key regions such as the eyes and mouth is extracted to strengthen the connections among different expression features, yielding effective local dynamic features. Finally, these local dynamic features are added to the overall network as a complement. The fusion network achieves accuracies of 98.08%, 90.59%, 86.02% and 61.28% on the CK+, Oulu-CASIA, RAF-DB and AffectNet datasets, respectively, outperforming the compared methods.
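The abstract's core mechanism (key-point-guided attention that down-weights occluded regions, followed by weighted fusion of local dynamic features into the global features) can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the region names, the landmark-visibility interface, and the fusion weight `alpha` are all assumptions for the sake of the example.

```python
import math

def region_attention(visibility):
    """Softmax attention over facial regions based on landmark visibility.

    visibility: dict mapping region name -> fraction of that region's
    detected key points (1.0 = fully visible, 0.0 = fully occluded).
    Occluded regions receive lower attention weight. Hypothetical interface.
    """
    exps = {r: math.exp(v) for r, v in visibility.items()}
    total = sum(exps.values())
    return {r: e / total for r, e in exps.items()}

def fuse_features(global_feat, local_dynamic_feats, attn, alpha=0.5):
    """Add attention-weighted local dynamic features to the global features.

    global_feat: list of floats (overall-network features).
    local_dynamic_feats: dict region -> feature list of the same length,
    e.g. extracted from eye/mouth regions across temporally ordered key frames.
    alpha: assumed scalar controlling how strongly the local branch
    supplements the global branch.
    """
    fused = list(global_feat)
    for region, feat in local_dynamic_feats.items():
        w = alpha * attn[region]
        for i, x in enumerate(feat):
            fused[i] += w * x
    return fused

# Example: the mouth is half occluded, so the eyes dominate the attention.
attn = region_attention({"eyes": 1.0, "mouth": 0.5})
fused = fuse_features([0.2, 0.8],
                      {"eyes": [0.4, 0.1], "mouth": [0.9, 0.3]},
                      attn)
```

In the paper's full network the features would be tensors produced by convolutional branches, but the pattern is the same: attention steers the fusion toward the unobstructed regions, so an occluded mouth contributes less than visible eyes.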
Key words: dynamic features, facial occlusion, guided attention network, time series
LIU Nan-yan, WEI Hong-fei, MA Sheng-xiang. Facial expression recognition fusing local dynamic features[J]. Computer Engineering & Science, 2023, 45(05): 849-858.
URL: http://joces.nudt.edu.cn/EN/Y2023/V45/I05/849