An optimal Bayesian intervention policy in response to unknown dynamic cell stimuli.

Authors

Hosseini Seyed Hamid, Imani Mahdi

Affiliation

Northeastern University, 360 Huntington Ave, Boston, MA, 02115, United States of America.

Publication

Inf Sci (N Y). 2024 May;666. doi: 10.1016/j.ins.2024.120440. Epub 2024 Mar 7.

DOI: 10.1016/j.ins.2024.120440
PMID: 39464381
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11507472/
Abstract

Interventions in gene regulatory networks (GRNs) aim to restore normal functions of cells experiencing abnormal behavior, such as uncontrolled cell proliferation. The dynamic, uncertain, and complex nature of cellular processes poses significant challenges in determining the best interventions. Most existing intervention methods assume that cells are unresponsive to therapies, resulting in stationary and deterministic intervention solutions. However, cells in unhealthy conditions can dynamically respond to therapies through internal stimuli, leading to the recurrence of undesirable conditions. This paper proposes a Bayesian intervention policy that adaptively responds to cell dynamic responses according to the latest available information. The GRNs are modeled using a Boolean network with perturbation (BNp), and the fight between the cell and intervention is modeled as a two-player zero-sum game. Assuming an incomplete knowledge of cell stimuli, a recursive approach is developed to keep track of the posterior distribution of cell responses. The proposed Bayesian intervention policy takes action according to the posterior distribution and a set of Nash equilibrium policies associated with all possible cell responses. Analytical results demonstrate the superiority of the proposed intervention policy against several existing intervention techniques. Meanwhile, the performance of the proposed policy is investigated through comprehensive numerical experiments using the p53-MDM2 negative feedback loop regulatory network and melanoma network. The results demonstrate the empirical convergence of the proposed policy to the optimal Nash equilibrium policy.
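The mechanism the abstract describes — a Boolean network with perturbation (BNp) whose transitions are shaped by an unknown cell stimulus, with a recursive Bayesian update tracking the posterior over candidate stimuli — can be illustrated with a small sketch. This is not the paper's implementation: the 3-gene regulatory functions, the "each player toggles one gene" action model, and the MAP-based action choice (in place of the paper's posterior-weighted Nash equilibrium policies) are simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GENES = 3
FLIP_P = 0.05  # BNp perturbation: each gene flips independently with this probability

def regulatory_update(x):
    """Toy Boolean regulatory functions for a 3-gene network (illustrative only)."""
    x0, x1, x2 = x
    return np.array([x1 ^ x2, x0 and not x2, x0 or x1], dtype=int)

def step(x, stimulus_gene, action_gene):
    """One BNp transition: regulation, then the cell's (unknown) stimulus and
    our intervention each toggle their target gene, then random perturbation."""
    y = regulatory_update(x)
    y[stimulus_gene] ^= 1
    y[action_gene] ^= 1
    noise = rng.random(N_GENES) < FLIP_P
    return y ^ noise.astype(int)

def transition_prob(x, a, k, y):
    """P(y | x, a, stimulus model k) under the BNp noise model."""
    mean = regulatory_update(x)
    mean[k] ^= 1
    mean[a] ^= 1
    probs = np.where(mean == y, 1 - FLIP_P, FLIP_P)
    return probs.prod()

# Recursive Bayesian tracking of the unknown stimulus model: after each
# observed transition, reweight each candidate by its likelihood.
posterior = np.ones(N_GENES) / N_GENES
x = np.array([1, 0, 1])
true_stimulus = 2  # hidden from the intervention policy
for t in range(50):
    a = int(posterior.argmax())  # act against the most probable stimulus model
    y = step(x, true_stimulus, a)
    likes = np.array([transition_prob(x, a, k, y) for k in range(N_GENES)])
    posterior = likes * posterior
    posterior /= posterior.sum()
    x = y

print(posterior)  # posterior mass concentrates on the true stimulus model
```

Running the loop shows the empirical convergence the abstract refers to in miniature: as transitions accumulate, the posterior over candidate cell responses concentrates on the true one, so the adaptive policy increasingly behaves like the policy matched to the actual stimulus.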

Similar articles

1
An optimal Bayesian intervention policy in response to unknown dynamic cell stimuli.
Inf Sci (N Y). 2024 May;666. doi: 10.1016/j.ins.2024.120440. Epub 2024 Mar 7.
2
Modeling Defensive Response of Cells to Therapies: Equilibrium Interventions for Regulatory Networks.
IEEE/ACM Trans Comput Biol Bioinform. 2024 Sep-Oct;21(5):1322-1334. doi: 10.1109/TCBB.2024.3383814. Epub 2024 Oct 9.
3
An experimental design framework for Markovian gene regulatory networks under stationary control policy.
BMC Syst Biol. 2018 Dec 21;12(Suppl 8):137. doi: 10.1186/s12918-018-0649-8.
4
Reinforcement Learning Data-Acquiring for Causal Inference of Regulatory Networks.
Proc Am Control Conf. 2023 May-Jun;2023:3957-3964. doi: 10.23919/acc55779.2023.10155867. Epub 2023 Jul 3.
5
Control of Gene Regulatory Networks Using Bayesian Inverse Reinforcement Learning.
IEEE/ACM Trans Comput Biol Bioinform. 2019 Jul-Aug;16(4):1250-1261. doi: 10.1109/TCBB.2018.2830357. Epub 2018 Apr 26.
6
Bayesian Lookahead Perturbation Policy for Inference of Regulatory Networks.
IEEE/ACM Trans Comput Biol Bioinform. 2024 Sep-Oct;21(5):1504-1517. doi: 10.1109/TCBB.2024.3402220. Epub 2024 Oct 9.
7
Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1228-1241. doi: 10.1109/TNNLS.2020.3041469. Epub 2022 Feb 28.
8
Stochastic Boolean networks: an efficient approach to modeling gene regulatory networks.
BMC Syst Biol. 2012 Aug 28;6:113. doi: 10.1186/1752-0509-6-113.
9
Intervention in gene regulatory networks via greedy control policies based on long-run behavior.
BMC Syst Biol. 2009 Jun 15;3:61. doi: 10.1186/1752-0509-3-61.
10
Intervention in a family of Boolean networks.
Bioinformatics. 2006 Jan 15;22(2):226-32. doi: 10.1093/bioinformatics/bti765. Epub 2005 Nov 12.

Cited by

1
Deep Reinforcement Learning Data Collection for Bayesian Inference of Hidden Markov Models.
IEEE Trans Artif Intell. 2025 May;6(5):1217-1232. doi: 10.1109/tai.2024.3515939. Epub 2024 Dec 12.
2
Dynamic Intervention in Gene Regulatory Networks: A Partially Observed Zero-Sum Markov Game.
Control Technol Appl. 2024 Aug;2024:774-781. doi: 10.1109/ccta60707.2024.10666558. Epub 2024 Sep 11.
3
Kernel-Based Particle Filtering for Scalable Inference in Partially Observed Boolean Dynamical Systems.
IFAC Pap OnLine. 2024;58(15):1-6. doi: 10.1016/j.ifacol.2024.08.495. Epub 2024 Sep 19.

References

1
Structure-Based Inverse Reinforcement Learning for Quantification of Biological Knowledge.
2023 IEEE Conf Artif Intell (2023). 2023 Jun;2023:282-284. doi: 10.1109/cai54212.2023.00126. Epub 2023 Aug 2.
2
Learning to Fight Against Cell Stimuli: A Game Theoretic Perspective.
2023 IEEE Conf Artif Intell (2023). 2023 Jun;2023:285-287. doi: 10.1109/cai54212.2023.00127. Epub 2023 Aug 2.
3
Reinforcement Learning Data-Acquiring for Causal Inference of Regulatory Networks.
Proc Am Control Conf. 2023 May-Jun;2023:3957-3964. doi: 10.23919/acc55779.2023.10155867. Epub 2023 Jul 3.
4
Optimal Recursive Expert-Enabled Inference in Regulatory Networks.
IEEE Control Syst Lett. 2023;7:1027-1032. doi: 10.1109/lcsys.2022.3229054. Epub 2022 Dec 14.
5
Inference of regulatory networks through temporally sparse data.
Front Control Eng. 2022;3. doi: 10.3389/fcteg.2022.1017256. Epub 2022 Dec 13.
6
Review and assessment of Boolean approaches for inference of gene regulatory networks.
Heliyon. 2022 Aug 9;8(8):e10222. doi: 10.1016/j.heliyon.2022.e10222. eCollection 2022 Aug.
7
The MITF regulatory network in melanoma.
Pigment Cell Melanoma Res. 2022 Sep;35(5):517-533. doi: 10.1111/pcmr.13053. Epub 2022 Jul 9.
8
Signal pathways of melanoma and targeted therapy.
Signal Transduct Target Ther. 2021 Dec 20;6(1):424. doi: 10.1038/s41392-021-00827-6.
9
CABEAN: a software for the control of asynchronous Boolean networks.
Bioinformatics. 2021 May 5;37(6):879-881. doi: 10.1093/bioinformatics/btaa752.
10
Gene Regulatory Network Inference: Connecting Plant Biology and Mathematical Modeling.
Front Genet. 2020 May 25;11:457. doi: 10.3389/fgene.2020.00457. eCollection 2020.