Departamento de Química, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, RJ 22453-900, Brazil.
ACS Chem Neurosci. 2024 Jun 5;15(11):2144-2159. doi: 10.1021/acschemneuro.3c00840. Epub 2024 May 9.
The local interpretable model-agnostic explanation (LIME) method was used to interpret machine learning models of compound penetration across the blood-brain barrier. The classification models Random Forest, ExtraTrees, and Deep Residual Network were trained and validated on a blood-brain barrier penetration dataset that labels each compound's ability to cross the barrier. LIME generated explanations for these predictions, highlighting the molecular substructures that most strongly affect a drug's penetration of the barrier. The simple, intuitive outputs demonstrate the applicability of this explainable approach to interpreting compound permeability across the blood-brain barrier in terms of molecular features. LIME explanations were filtered to those with a weight of 0.1 or greater, retaining only the most relevant ones. The results revealed several substructures that are important for blood-brain barrier penetration; in general, compounds containing nitrogenous substructures were found to be more likely to permeate the barrier. Applying these structural explanations may help the pharmaceutical industry and drug synthesis research groups design active molecules more rationally.
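The weight-based filtering step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes LIME explanations arrive as (feature, weight) pairs, the format returned by lime's `Explanation.as_list()`, and the example substructure labels and weights are hypothetical, not results from the paper.

```python
# Minimal sketch of filtering LIME explanations by weight.
# Assumes explanations are (feature, weight) pairs, the format
# returned by lime's Explanation.as_list(). The example data below
# is hypothetical, not taken from the paper.

def filter_explanations(explanations, threshold=0.1):
    """Keep only explanations whose absolute weight is >= threshold."""
    return [(feat, w) for feat, w in explanations if abs(w) >= threshold]

# Hypothetical LIME output for one compound (substructure, weight):
example = [
    ("tertiary amine present", 0.23),    # hypothetical: favors penetration
    ("carboxylic acid present", -0.18),  # hypothetical: disfavors penetration
    ("aromatic ring count", 0.04),       # below threshold, dropped
]

relevant = filter_explanations(example)
print(relevant)
```

Thresholding on the explanation weight in this way is what reduces a full LIME output to the handful of substructures the study reports as most relevant.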