Convergent autoencoder approximation of low bending and low distortion manifold embeddings

ESAIM: Mathematical Modelling and Numerical Analysis (2024)

Abstract
Autoencoders are widely used in machine learning for dimension reduction of high-dimensional data. The encoder embeds the input data manifold into a lower-dimensional latent space, while the decoder represents the inverse map, providing a parametrization of the data manifold by its image in latent space. We propose and analyze a novel regularization for learning the encoder component of an autoencoder: a loss functional that prefers isometric, extrinsically flat embeddings and allows the encoder to be trained on its own. To perform the training, it is assumed that the local Riemannian distance and the local Riemannian average can be evaluated for pairs of nearby points on the input manifold. The loss functional is computed via Monte Carlo integration. Our main theorem identifies a geometric loss functional of the embedding map as the Gamma-limit of the sampling-dependent loss functionals. Numerical tests, using image data that encodes different explicitly given data manifolds, show that smooth manifold embeddings into latent space are obtained. Furthermore, due to the promotion of extrinsic flatness, interpolation between sufficiently close points on the manifold is well approximated by linear interpolation in latent space.
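The abstract describes a sampling-based loss that compares latent-space distances with local Riemannian distances on the input manifold. A minimal illustrative sketch of such a Monte Carlo isometry term is given below; the function and variable names (`isometry_loss`, `encoder`) are assumptions for illustration, not the paper's actual formulation, and the extrinsic-flatness term involving Riemannian averages is omitted.

```python
import numpy as np

def isometry_loss(encoder, pairs, manifold_dists):
    """Monte Carlo estimate of an isometry-promoting loss:
    penalize the squared mismatch between the Euclidean distance
    of encoded points and the given local Riemannian distance."""
    errs = []
    for (x, y), d in zip(pairs, manifold_dists):
        zx, zy = encoder(x), encoder(y)
        errs.append((np.linalg.norm(zx - zy) - d) ** 2)
    return float(np.mean(errs))

# Toy check: for a flat 2-D manifold embedded linearly in R^5,
# projecting back with the orthonormal basis is an isometry,
# so the sampled loss vanishes.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # orthonormal columns
encoder = lambda x: Q.T @ x                        # distance-preserving on span(Q)

base = rng.standard_normal((10, 2))
points = [Q @ b for b in base]                     # samples on the manifold
pairs = [(points[i], points[i + 1]) for i in range(9)]
dists = [np.linalg.norm(p - q) for p, q in pairs]  # here geodesic = Euclidean
loss = isometry_loss(encoder, pairs, dists)        # ≈ 0 for an isometric embedding
```

In the paper's setting the pairs would be nearby samples on the data manifold and `manifold_dists` their local Riemannian distances; the Gamma-convergence result then relates this sampled functional to a geometric limit functional as the sampling is refined.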
Keywords
Manifold learning, manifold embedding, autoencoder, latent space, Monte Carlo sampling, Gamma-convergence