For high-dimensional data, selecting effective features is important for classification. To address the high-dimensional, small-sample-size problem in face recognition, we propose a new uncorrelated linear discriminant analysis method based on L2,1-norm regularization, which combines feature selection with subspace learning. To incorporate the L2,1-norm penalty into the objective function, the algorithm first decomposes the sample matrix by singular value decomposition (SVD). It then applies a series of transformations that convert the nonlinear Fisher criterion into a linear form. Finally, it adds the L2,1-norm penalty term to the linear model and solves the resulting regularization problem to obtain a set of optimal discriminant vectors. Training and testing samples are projected onto the resulting low-dimensional subspace, and a nearest-neighbor classifier based on Euclidean distance assigns labels to the testing samples. Because the L2,1-norm performs feature selection and subspace learning simultaneously, recognition performance is greatly improved. Experiments on three standard face databases (ORL, YaleB, and PIE) verify the performance of the algorithm, demonstrating both effective dimensionality reduction and improved discriminant ability.
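As a rough sketch of the kind of objective described above (the exact formulation is not given here; $W$, $\lambda$, $S_b$, and $S_t$ are illustrative symbols for the projection matrix, regularization weight, between-class scatter, and total scatter), an L2,1-regularized uncorrelated discriminant model can be written as
\[
\min_{W}\; -\operatorname{tr}\!\left(W^{\top} S_b W\right) + \lambda \lVert W \rVert_{2,1}
\quad \text{s.t. } W^{\top} S_t W = I,
\qquad
\lVert W \rVert_{2,1} = \sum_{i=1}^{d} \sqrt{\sum_{j=1}^{k} W_{ij}^{2}} .
\]
The constraint $W^{\top} S_t W = I$ enforces uncorrelated discriminant vectors, while the row-wise L2,1 penalty drives whole rows of $W$ toward zero, so discarded rows act as feature selection and the surviving rows span the learned subspace.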