TY - GEN

T1 - Towards Evaluating the Representation Learned by Variational AutoEncoders

AU - Ueda, Tatsuya

AU - Vargas, Danilo Vasconcellos

N1 - Funding Information:
This work was supported by JST, ACT-I Grant Number JP-50243 and JSPS KAKENHI Grant Number JP20241216.
Publisher Copyright:
© 2021 The Society of Instrument and Control Engineers - SICE.

PY - 2021/9/8

Y1 - 2021/9/8

N2 - At the heart of a deep neural network is representation learning with complex latent variables. This representation learning has been improved by disentangled representations and the idea of regularization terms. However, adversarial examples show that tasks performed with DNNs can easily fail due to slight perturbations or transformations of the input. A Variational AutoEncoder (VAE) learns P(z|x), the distribution of the latent variable z, rather than P(y|x), the distribution of the output y given the input x. Therefore, the VAE is considered a good model for learning representations from input data. In other words, x is mapped not directly to y, but to the latent variable z. In this paper, we propose an evaluation method to characterize the latent variables that a VAE learns. Specifically, latent variables extracted from VAEs trained on two well-known datasets are analyzed by the k-nearest neighbor method (kNN). In doing so, we propose an interpretation of what kind of representation the VAE learns, and share clues about the hyperdimensional space to which the latent variables are mapped.

AB - At the heart of a deep neural network is representation learning with complex latent variables. This representation learning has been improved by disentangled representations and the idea of regularization terms. However, adversarial examples show that tasks performed with DNNs can easily fail due to slight perturbations or transformations of the input. A Variational AutoEncoder (VAE) learns P(z|x), the distribution of the latent variable z, rather than P(y|x), the distribution of the output y given the input x. Therefore, the VAE is considered a good model for learning representations from input data. In other words, x is mapped not directly to y, but to the latent variable z. In this paper, we propose an evaluation method to characterize the latent variables that a VAE learns. Specifically, latent variables extracted from VAEs trained on two well-known datasets are analyzed by the k-nearest neighbor method (kNN). In doing so, we propose an interpretation of what kind of representation the VAE learns, and share clues about the hyperdimensional space to which the latent variables are mapped.

UR - http://www.scopus.com/inward/record.url?scp=85117698611&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85117698611&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85117698611

T3 - 2021 60th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2021

SP - 591

EP - 594

BT - 2021 60th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2021

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 60th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2021

Y2 - 8 September 2021 through 10 September 2021

ER -