• Journal of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science


Uncorrelated linear discriminant analysis with L2,1-norm
regularization and its application in face recognition

FU Jun-peng, CHEN Xiu-hong, GE Xiao-qian   

  1. (School of Digital Media, Jiangnan University, Wuxi 214122, China)
  • Received: 2015-12-07  Revised: 2016-01-29  Online: 2017-02-25  Published: 2017-02-25

Abstract:

In high-dimensional data reduction, selecting effective features is important for classification. To address the high-dimensional, small-sample-size problem in face recognition, we combine feature selection with subspace learning and propose a new uncorrelated linear discriminant analysis method based on L2,1-norm regularization. To incorporate the L2,1-norm penalty into the objective function, the algorithm first decomposes the sample matrix by singular value decomposition (SVD). It then applies a series of transformations that convert the nonlinear Fisher criterion into a linear form. Finally, it adds the L2,1-norm penalty term to the linear model and solves the resulting regularization problem to obtain a set of optimal discriminant vectors. Training and testing samples are each projected onto the low-dimensional subspace, and a nearest-neighbor classifier with Euclidean distance assigns labels to the testing samples. Because the L2,1-norm can perform feature selection and subspace learning simultaneously, recognition performance is greatly improved. Experiments on three standard face databases (ORL, YaleB and PIE) verify the performance of the algorithm, demonstrating effective dimensionality reduction and improved discriminant ability.
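The abstract's pipeline (learn a projection under an L2,1-norm penalty, project both sample sets, then classify by nearest Euclidean distance) can be illustrated with a generic sketch. This is not the authors' exact derivation: instead of the SVD-linearized Fisher criterion, it uses a common stand-in, an L2,1-regularized regression onto class indicators solved by iteratively reweighted least squares, which shares the key property that rows of the projection matrix belonging to uninformative features are driven toward zero. All function names, the synthetic data, and the parameter `lam` are illustrative assumptions.

```python
import numpy as np

def l21_discriminant(X, y, lam=0.1, n_iter=30, eps=1e-8):
    """Sketch: min_W ||X W - Y||_F^2 + lam * ||W||_{2,1},
    where Y is a one-hot class-indicator matrix, solved by
    iteratively reweighted least squares (IRLS). Rows of W
    with small L2 norm correspond to discarded features."""
    n, d = X.shape
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets
    D = np.ones(d)                                      # IRLS reweighting diagonal
    for _ in range(n_iter):
        # Closed-form update: W = (X^T X + lam * D)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + lam * np.diag(D), X.T @ Y)
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = 1.0 / (2.0 * row_norms)                     # reweight by 1/(2||w_i||)
    return W

def nearest_neighbor_predict(Xtr, ytr, Xte, W):
    """Project both sample sets with W, then label each test point
    by its nearest training sample in Euclidean distance (1-NN),
    matching the classifier described in the abstract."""
    Ztr, Zte = Xtr @ W, Xte @ W
    d2 = ((Zte[:, None, :] - Ztr[None, :, :]) ** 2).sum(axis=-1)
    return ytr[d2.argmin(axis=1)]
```

On synthetic two-class data where only a few of the input dimensions carry class information, the learned W concentrates its row norms on those informative dimensions, so the projection performs feature selection and dimensionality reduction in one step, which is the motivation for the L2,1 penalty in the paper.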

Key words: