IEEE Trans Nanobioscience. 2023 Oct;22(4):734-743. doi: 10.1109/TNB.2023.3274640. Epub 2023 Oct 3.
Protein-ligand interactions (PLIs) are essential for cellular activities and drug discovery. Because experimental methods are complex and costly, there is great demand for computational approaches, such as protein-ligand docking, to decipher PLI patterns. One of the most challenging aspects of protein-ligand docking is identifying near-native conformations from a set of poses, yet traditional scoring functions still have limited accuracy; new scoring methods are therefore urgently needed for both methodological and practical reasons. We present ViTScore, a novel deep learning-based scoring function for ranking protein-ligand docking poses built on the Vision Transformer (ViT). To recognize near-native poses from a set of candidates, ViTScore voxelizes the protein-ligand interaction pocket into a 3D grid labeled by the occupancy contributions of atoms in different physicochemical classes. This allows ViTScore to capture the subtle differences between spatially and energetically favorable near-native poses and unfavorable non-native poses without requiring extra information. ViTScore then outputs a prediction of the root mean square deviation (rmsd) of a docking pose with respect to the native binding pose. ViTScore is extensively evaluated on diverse test sets, including PDBbind2019 and CASF2016, and obtains significant improvements over existing methods in terms of RMSE, R, and docking power. These results demonstrate that ViTScore is a powerful scoring function for protein-ligand docking that can accurately identify near-native poses from a set of poses. Additionally, ViTScore can be used to identify potential drug targets and to support the design of new drugs with improved efficacy and safety.
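The pocket voxelization described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact method: the grid size, resolution, atom-class channels, and the Gaussian occupancy function are all assumptions chosen for clarity.

```python
import numpy as np

# Illustrative physicochemical classes mapped to grid channels
# (assumed; the paper's exact channel definitions may differ).
CLASSES = {"C": 0, "N": 1, "O": 2, "S": 3}

def voxelize(coords, elements, center, grid_dim=24, resolution=1.0):
    """Map atoms into a (channels, D, D, D) occupancy grid.

    Each atom contributes a Gaussian-like occupancy to nearby voxels
    in the channel of its physicochemical class.
    """
    n_ch = len(CLASSES)
    grid = np.zeros((n_ch, grid_dim, grid_dim, grid_dim), dtype=np.float32)
    origin = np.asarray(center, dtype=float) - (grid_dim * resolution) / 2.0
    radius = 1.5  # assumed van der Waals-like radius, in angstroms

    for xyz, elem in zip(coords, elements):
        if elem not in CLASSES:
            continue
        ch = CLASSES[elem]
        xyz = np.asarray(xyz, dtype=float)
        idx = np.floor((xyz - origin) / resolution).astype(int)
        # Spread occupancy over the 3x3x3 voxel neighborhood of the atom.
        for di in range(-1, 2):
            for dj in range(-1, 2):
                for dk in range(-1, 2):
                    v = idx + np.array([di, dj, dk])
                    if np.any(v < 0) or np.any(v >= grid_dim):
                        continue
                    voxel_center = origin + (v + 0.5) * resolution
                    d = np.linalg.norm(xyz - voxel_center)
                    occ = np.exp(-(d / radius) ** 2)  # Gaussian falloff
                    grid[ch][tuple(v)] = max(grid[ch][tuple(v)], occ)
    return grid
```

A grid like this (one channel per atom class) would be the input tensor to a ViT-style 3D model, which regresses the rmsd of the pose; the downstream network itself is not shown here.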