Kwak Bumju, Park Jiwon, Kang Taewon, Jo Jeonghee, Lee Byunghan, Yoon Sungroh
Recommendation Team, Kakao Corporation, Gyeonggi 13529, Republic of Korea.
LG Chem, Seoul 07795, Republic of Korea.
ACS Omega. 2023 Oct 9;8(42):39759-39769. doi: 10.1021/acsomega.3c05753. eCollection 2023 Oct 24.
In recent years, molecular representation learning has emerged as a key area of focus in various chemical tasks. However, many existing models fail to fully consider the geometric information of molecular structures, resulting in less intuitive representations. Moreover, the widely used message passing mechanism is limited in its ability to provide chemical interpretations of experimental results. To address these challenges, we introduce a novel transformer-based framework for molecular representation learning, named the geometry-aware transformer (GeoT). The GeoT learns molecular graph structures through attention-based mechanisms specifically designed to offer reliable interpretability as well as molecular property prediction. Consequently, the GeoT can generate attention maps of the interatomic relationships associated with training objectives. In addition, the GeoT demonstrates performance comparable to that of MPNN-based models while achieving reduced computational complexity. Our comprehensive experiments, including an empirical simulation, reveal that the GeoT effectively learns chemical insights into molecular structures, bridging the gap between artificial intelligence and molecular sciences.
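The abstract describes attention over interatomic relationships that incorporates 3D geometry. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual GeoT formulation: standard scaled dot-product attention whose scores are biased by a Gaussian function of interatomic distance, so that nearby atoms attend to each other more strongly and the resulting attention map can be read as an interatomic relationship map. All function and parameter names here (e.g., `geometry_aware_attention`, `gamma`) are assumptions for illustration.

```python
import numpy as np

def geometry_aware_attention(h, coords, Wq, Wk, Wv, gamma=1.0):
    """Toy single-head, distance-biased attention (illustrative sketch;
    not the published GeoT architecture).

    h:          (n_atoms, d) atom feature matrix
    coords:     (n_atoms, 3) 3D atomic positions
    Wq/Wk/Wv:   (d, d) learned projection matrices
    gamma:      assumed decay rate of the Gaussian distance bias
    """
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    d = h.shape[1]
    # Standard scaled dot-product attention scores.
    scores = q @ k.T / np.sqrt(d)
    # Pairwise Euclidean distances between atoms.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Subtract a Gaussian distance penalty so spatially close atoms
    # receive larger attention weights.
    scores = scores - gamma * dist**2
    # Row-wise softmax yields an interpretable attention map.
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = attn / attn.sum(axis=-1, keepdims=True)
    return attn @ v, attn
```

Returning the attention matrix alongside the updated features mirrors how such models can expose interatomic attention maps for interpretation, as the abstract claims for GeoT.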