A new generative model that combines the Variational Autoencoder network with the Transformer architecture is developed. The proposed model, the Variational Autoencoding Transformer (VAT), is applied to the task of generating molecules, showing that, with proper training, the VAT can not only produce molecules similar to the input ones with high accuracy but also generate new molecules from a predefined prior almost perfectly. A desirable aspect of our VAT is that no heuristic settings are required for optimal performance, which suggests that the model can readily be applied to a variety of datasets. As practical directions toward materials/drug discovery, two strategies are demonstrated: a fine-tuning method for directed molecular generation and a method of mixing molecules in the latent space.
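
The abstract gives no implementation details, but a common way to realize a "VAE + Transformer" generative model is to pool a Transformer encoder's states into a Gaussian latent code and condition a Transformer decoder on the sampled code. The PyTorch sketch below illustrates that general pattern only; all class names, dimensions, and design choices (mean pooling, a single latent vector as decoder memory, the loss weighting) are hypothetical and are not the authors' VAT implementation.

# Minimal sketch of a Transformer-based VAE over token sequences (e.g. SMILES strings).
# Everything here is illustrative: names, sizes, and architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VATSketch(nn.Module):
    def __init__(self, vocab_size=64, d_model=128, latent_dim=32,
                 nhead=4, num_layers=2, max_len=120):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.from_z = nn.Linear(latent_dim, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens) + self.pos[:, :tokens.size(1)]
        h = self.encoder(x).mean(dim=1)                  # pool encoder states into one vector
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        memory = self.from_z(z).unsqueeze(1)             # latent code conditions the decoder
        logits = self.out(self.decoder(x, memory))       # reconstruction (causal mask omitted)
        return logits, mu, logvar

def vae_loss(logits, tokens, mu, logvar, beta=1.0):
    # Standard VAE objective: token reconstruction loss plus KL term against N(0, I).
    recon = F.cross_entropy(logits.transpose(1, 2), tokens)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

Under a sketch like this, the two strategies named in the abstract map naturally onto the latent space: directed generation would fine-tune the model on a property-filtered subset, and mixing two molecules would amount to decoding an interpolated latent code such as z_mix = 0.5 * z_a + 0.5 * z_b.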
               