Control of Gene Regulatory Networks Using Bayesian Inverse Reinforcement Learning.

Publication

IEEE/ACM Trans Comput Biol Bioinform. 2019 Jul-Aug;16(4):1250-1261. doi: 10.1109/TCBB.2018.2830357. Epub 2018 Apr 26.

DOI: 10.1109/TCBB.2018.2830357
PMID: 29993697
Abstract

Control of gene regulatory networks (GRNs) to shift gene expression from undesirable states to desirable ones has received much attention in recent years. Most of the existing methods assume that the cost of intervention at each state and time point, referred to as the immediate cost function, is fully known. In this paper, we employ the Partially-Observed Boolean Dynamical System (POBDS) signal model for a time sequence of noisy expression measurement from a Boolean GRN and develop a Bayesian Inverse Reinforcement Learning (BIRL) approach to address the realistic case in which the only available knowledge regarding the immediate cost function is provided by the sequence of measurements and interventions recorded in an experimental setting by an expert. The Boolean Kalman Smoother (BKS) algorithm is used for optimally mapping the available gene-expression data into a sequence of Boolean states, and then the BIRL method is efficiently combined with the Q-learning algorithm for quantification of the immediate cost function. The performance of the proposed methodology is investigated by applying a state-feedback controller to two GRN models: a melanoma WNT5A Boolean network and a p53-MDM2 negative feedback loop Boolean network, when the cost of the undesirable states, and thus the identity of the undesirable genes, is learned using the proposed methodology.
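The pipeline the abstract describes — a Boolean state space, an immediate cost over states and interventions, and Q-learning for a state-feedback control policy — can be illustrated with a toy sketch. Everything concrete below (the 3-gene update rules, the hand-fixed undesirable state, the cost values) is an illustrative assumption, not the paper's networks or its learned cost: in the paper the immediate cost is recovered from expert demonstrations via BIRL, and Boolean states are estimated from noisy measurements with the Boolean Kalman Smoother.

```python
import itertools
import random

# Toy stand-in for a Boolean GRN: 3 genes with made-up update rules.
# Action a in {0, 1, 2} flips gene a before the network updates
# (an "intervention"); a == N_GENES means "do nothing".
N_GENES = 3
ACTIONS = range(N_GENES + 1)
STATES = list(itertools.product((0, 1), repeat=N_GENES))

def step(state, action):
    """Apply an optional gene flip, then the Boolean update rules."""
    s = list(state)
    if action < N_GENES:
        s[action] ^= 1
    return (s[1] & s[2], s[0] | s[2], s[0] ^ s[1])

# In the paper this cost is *unknown* and quantified by BIRL from expert
# data; here we simply hard-code one undesirable (costly) state.
UNDESIRABLE = (1, 1, 1)

def cost(state, action):
    c = 5.0 if state == UNDESIRABLE else 0.0
    if action < N_GENES:
        c += 1.0  # interventions are not free
    return c

def q_learning(episodes=3000, horizon=20, alpha=0.2, gamma=0.9,
               eps=0.2, seed=0):
    """Tabular Q-learning that minimizes expected discounted cost."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(horizon):
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(N_GENES + 1)
            else:
                a = min(ACTIONS, key=lambda x: Q[(s, x)])
            s2 = step(s, a)
            target = cost(s, a) + gamma * min(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

def policy(Q, s):
    """Greedy (cost-minimizing) state-feedback controller."""
    return min(ACTIONS, key=lambda a: Q[(s, a)])
```

With these made-up rules, evaluating `policy(Q, s)` over all states gives a controller that pays the small intervention cost one step before the trajectory would enter the undesirable state, and does nothing inside cost-free cycles — the qualitative behavior the paper's state-feedback controller exhibits once the cost function has been learned.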


Similar Articles

1. Control of Gene Regulatory Networks Using Bayesian Inverse Reinforcement Learning.
IEEE/ACM Trans Comput Biol Bioinform. 2019 Jul-Aug;16(4):1250-1261. doi: 10.1109/TCBB.2018.2830357. Epub 2018 Apr 26.
2. Intervention in a family of Boolean networks.
Bioinformatics. 2006 Jan 15;22(2):226-32. doi: 10.1093/bioinformatics/bti765. Epub 2005 Nov 12.
3. Scalable optimal Bayesian classification of single-cell trajectories under regulatory model uncertainty.
BMC Genomics. 2019 Jun 13;20(Suppl 6):435. doi: 10.1186/s12864-019-5720-3.
4. Employing decomposable partially observable Markov decision processes to control gene regulatory networks.
Artif Intell Med. 2017 Nov;83:14-34. doi: 10.1016/j.artmed.2017.06.007. Epub 2017 Jul 18.
5. BoolFilter: an R package for estimation and identification of partially-observed Boolean dynamical systems.
BMC Bioinformatics. 2017 Nov 25;18(1):519. doi: 10.1186/s12859-017-1886-3.
6. Intervention in context-sensitive probabilistic Boolean networks.
Bioinformatics. 2005 Apr 1;21(7):1211-8. doi: 10.1093/bioinformatics/bti131. Epub 2004 Nov 5.
7. Modeling Defensive Response of Cells to Therapies: Equilibrium Interventions for Regulatory Networks.
IEEE/ACM Trans Comput Biol Bioinform. 2024 Sep-Oct;21(5):1322-1334. doi: 10.1109/TCBB.2024.3383814. Epub 2024 Oct 9.
8. Gene perturbation and intervention in context-sensitive stochastic Boolean networks.
BMC Syst Biol. 2014 May 21;8:60. doi: 10.1186/1752-0509-8-60.
9. BTR: training asynchronous Boolean models using single-cell expression data.
BMC Bioinformatics. 2016 Sep 6;17(1):355. doi: 10.1186/s12859-016-1235-y.
10. Mapping multivalued onto Boolean dynamics.
J Theor Biol. 2011 Feb 7;270(1):177-84. doi: 10.1016/j.jtbi.2010.09.017. Epub 2010 Sep 22.

Cited By

1. Integrating inverse reinforcement learning into data-driven mechanistic computational models: a novel paradigm to decode cancer cell heterogeneity.
Front Syst Biol. 2024 Mar 8;4:1333760. doi: 10.3389/fsysb.2024.1333760. eCollection 2024.
2. Dynamic Intervention in Gene Regulatory Networks: A Partially Observed Zero-Sum Markov Game.
Control Technol Appl. 2024 Aug;2024:774-781. doi: 10.1109/ccta60707.2024.10666558. Epub 2024 Sep 11.
3. An optimal Bayesian intervention policy in response to unknown dynamic cell stimuli.
Inf Sci (N Y). 2024 May;666. doi: 10.1016/j.ins.2024.120440. Epub 2024 Mar 7.
4. Structure-Based Inverse Reinforcement Learning for Quantification of Biological Knowledge.
2023 IEEE Conf Artif Intell (2023). 2023 Jun;2023:282-284. doi: 10.1109/cai54212.2023.00126. Epub 2023 Aug 2.
5. Learning to Fight Against Cell Stimuli: A Game Theoretic Perspective.
2023 IEEE Conf Artif Intell (2023). 2023 Jun;2023:285-287. doi: 10.1109/cai54212.2023.00127. Epub 2023 Aug 2.
6. Optimal Recursive Expert-Enabled Inference in Regulatory Networks.
IEEE Control Syst Lett. 2023;7:1027-1032. doi: 10.1109/lcsys.2022.3229054. Epub 2022 Dec 14.
7. Improving candidate Biosynthetic Gene Clusters in fungi through reinforcement learning.
Bioinformatics. 2022 Aug 10;38(16):3984-3991. doi: 10.1093/bioinformatics/btac420.
8. A primer on machine learning techniques for genomic applications.
Comput Struct Biotechnol J. 2021 Jul 31;19:4345-4359. doi: 10.1016/j.csbj.2021.07.021. eCollection 2021.
9. Using optimal control to understand complex metabolic pathways.
BMC Bioinformatics. 2020 Oct 21;21(1):472. doi: 10.1186/s12859-020-03808-8.
10. Identifying GPCR-drug interaction based on wordbook learning from sequences.
BMC Bioinformatics. 2020 Apr 20;21(1):150. doi: 10.1186/s12859-020-3488-8.