Randomized SMILES strings improve the quality of molecular generative models

Cited by: 223
Authors
Arus-Pous, Josep [1,3]
Johansson, Simon Viet [1]
Prykhodko, Oleksii [1]
Bjerrum, Esben Jannik [1]
Tyrchan, Christian [2]
Reymond, Jean-Louis [3]
Chen, Hongming [1]
Engkvist, Ola [1]
Affiliations
[1] AstraZeneca Gothenburg, R&D, Discovery Sci, Hit Discovery, Molndal, Sweden
[2] AstraZeneca Gothenburg, R&D, Med Chem, BioPharmaceut Early RIA, Molndal, Sweden
[3] Univ Bern, Dept Chem & Biochem, Freiestr 3, CH-3012 Bern, Switzerland
Funding
EU Horizon 2020
Keywords
Deep learning; Generative models; SMILES; Randomized SMILES; Recurrent Neural Networks; Chemical databases
Keywords Plus: DATABASE; ALGORITHM
DOI
10.1186/s13321-019-0393-0
Chinese Library Classification (CLC)
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Recurrent Neural Networks (RNNs) trained on sets of molecules represented as unique (canonical) SMILES strings have shown the capacity to create large chemical spaces of valid and meaningful structures. Herein we perform an extensive benchmark on models trained with subsets of GDB-13 of different sizes (1 million, 10,000 and 1000 molecules), with different SMILES variants (canonical, randomized and DeepSMILES), with two different recurrent cell types (LSTM and GRU) and with different hyperparameter combinations. To guide the benchmarks, new metrics were developed that quantify how well a model has generalized the training set. The generated chemical space is evaluated with respect to its uniformity, closedness and completeness. Results show that models using LSTM cells and trained with 1 million randomized SMILES, a non-unique molecular string representation, generalize to larger chemical spaces than the other approaches and represent the target chemical space more accurately. Specifically, a model trained with randomized SMILES was able to generate almost all molecules from GDB-13 with a quasi-uniform probability. Models trained with the smaller samples show an even bigger improvement when trained with randomized SMILES. Additionally, models were trained on molecules obtained from ChEMBL and illustrate again that training with randomized SMILES leads to models with a better representation of drug-like chemical space: the model trained with randomized SMILES was able to generate at least double the number of unique molecules, with the same distribution of properties, compared to one trained with canonical SMILES.
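Illustrative note: a randomized SMILES string can be produced by shuffling a molecule's atom ordering before writing a non-canonical SMILES. The following is a minimal sketch in Python, assuming RDKit is installed; the function name randomized_smiles and the example molecule are illustrative choices of ours, not taken from the authors' code.

    # Minimal sketch of randomized SMILES generation (assumes RDKit).
    # Shuffling the atom order before writing a non-canonical SMILES
    # yields a different, equally valid string for the same molecule.
    import random

    from rdkit import Chem

    def randomized_smiles(smiles: str, rng: random.Random) -> str:
        """Return one randomized (non-canonical) SMILES for `smiles`."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"Could not parse SMILES: {smiles}")
        # Shuffle the atom indices and rebuild the molecule in that order.
        order = list(range(mol.GetNumAtoms()))
        rng.shuffle(order)
        shuffled = Chem.RenumberAtoms(mol, order)
        # canonical=False preserves the shuffled atom ordering in the output.
        return Chem.MolToSmiles(shuffled, canonical=False)

    if __name__ == "__main__":
        rng = random.Random(42)
        aspirin = "CC(=O)Oc1ccccc1C(=O)O"
        # Each call can emit a different string for the same molecule.
        for _ in range(3):
            print(randomized_smiles(aspirin, rng))

Because each call can return a different string for the same molecule, a training set can be re-enumerated on the fly, which is consistent with the abstract's description of randomized SMILES as a non-unique representation that augments what the model sees during training.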
Pages: 13