Computer Engineering & Science ›› 2022, Vol. 44 ›› Issue (05): 894-900.
• Artificial Intelligence and Data Mining •
Sparse autoencoder based on earth mover distance
FAN Yun
Abstract: KL divergence is widely used in machine learning to measure the distance between distributions in model loss functions. In the sparse autoencoder, KL divergence serves as the penalty term of the loss function: it measures the distance between each neuron's mean activation and the sparsity parameter, and driving the activation toward that parameter suppresses neuron firing and yields a sparse code. In WGAN, the Wasserstein distance is used to overcome the gradient vanishing and mode collapse problems of GAN, making GAN training more stable. Motivated by this, this paper replaces the KL penalty in the sparse autoencoder with the earth mover distance (EMD, i.e., the Wasserstein distance). Experimental results show that, compared with sparse autoencoders using KL divergence or JS divergence as the penalty term, the sparse autoencoder using EMD reduces the reconstruction error between real and reconstructed samples, and the code becomes sparser as the penalty parameter increases.
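To make the two penalty terms concrete, a minimal sketch follows (PyTorch is assumed; the sparsity target rho = 0.05, the penalty weight 0.1, and the layer sizes are illustrative choices, not the paper's settings). For Bernoulli distributions with means rho and rho_hat_j, the 1-Wasserstein (EMD) distance reduces to |rho - rho_hat_j|, which stays finite where the KL term diverges as rho_hat_j approaches 0 or 1:

import torch
import torch.nn as nn
import torch.nn.functional as F

def kl_sparsity_penalty(rho_hat, rho=0.05, eps=1e-8):
    # KL(Bernoulli(rho) || Bernoulli(rho_hat_j)), summed over hidden units.
    # rho_hat: per-unit mean activation over a batch, values in (0, 1).
    rho_hat = rho_hat.clamp(eps, 1 - eps)
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

def emd_sparsity_penalty(rho_hat, rho=0.05):
    # 1-Wasserstein (EMD) distance between Bernoulli(rho) and
    # Bernoulli(rho_hat_j), which reduces to |rho - rho_hat_j|;
    # summed over hidden units.
    return (rho_hat - rho).abs().sum()

class SparseAutoencoder(nn.Module):
    # Illustrative one-hidden-layer autoencoder; sizes are assumptions.
    def __init__(self, n_in=784, n_hidden=128):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))   # activations in (0, 1)
        return torch.sigmoid(self.dec(h)), h

# Loss = reconstruction error + penalty-weighted sparsity term.
model = SparseAutoencoder()
x = torch.rand(64, 784)                  # toy batch
x_hat, h = model(x)
rho_hat = h.mean(dim=0)                  # per-unit mean activation
loss = F.mse_loss(x_hat, x) + 0.1 * emd_sparsity_penalty(rho_hat)
loss.backward()

Swapping emd_sparsity_penalty for kl_sparsity_penalty in the loss reproduces the KL-regularized baseline the paper compares against.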
Key words: sparse autoencoder, regularization, earth mover distance, KL divergence
FAN Yun. Sparse autoencoder based on earth mover distance[J]. Computer Engineering & Science, 2022, 44(05): 894-900.
URL: http://joces.nudt.edu.cn/EN/Y2022/V44/I05/894