Berlow Noah, Pal Ranadip
Electrical and Computer Engineering Department, Texas Tech University, Lubbock, TX 79409, USA.
Annu Int Conf IEEE Eng Med Biol Soc. 2011;2011:7610-3. doi: 10.1109/IEMBS.2011.6091875.
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov chains, which provide the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov chain from noisy and limited experimental data is ill-posed and often yields multiple candidate models rather than a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, since the steady states are considered representative of the phenotypes. We seek robust stationary control policies with the best expected behavior. The extreme computational complexity of searching for robust stationary control policies is mitigated by generating control policies sequentially and by using computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
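The rank-one update mentioned in the abstract can be made concrete. The following is a minimal Python sketch (not from the paper; the function names and the synthetic chain are illustrative assumptions): replacing a single row of a transition matrix P is a rank-one perturbation, and the new stationary distribution can then be obtained in O(n^2) from the precomputed fundamental matrix Z = (I - P + 1*pi^T)^(-1) via the standard Sherman-Morrison identity, instead of re-solving for pi from scratch.

import numpy as np

def stationary(P):
    # Stationary distribution of an ergodic chain: the left eigenvector
    # of P for eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def update_after_row_change(P, pi, Z, i, new_row):
    # Replacing row i of P with new_row is the rank-one perturbation
    # P' = P + e_i d^T with d = new_row - P[i]. With Z = inv(I - P + 1 pi^T),
    # Sherman-Morrison gives the exact updated stationary vector:
    #   pi' = pi + pi[i] / (1 - (d^T Z)_i) * (d^T Z).
    d = new_row - P[i]          # d sums to zero (both rows are stochastic)
    dZ = d @ Z                  # O(n^2): one matrix-vector product, no inversion
    return pi + (pi[i] / (1.0 - dZ[i])) * dZ

# Demo on a synthetic 8-state chain (illustrative only).
rng = np.random.default_rng(0)
n = 8
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
pi = stationary(P)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

new_row = rng.random(n)
new_row /= new_row.sum()
pi_fast = update_after_row_change(P, pi, Z, i=3, new_row=new_row)

P_new = P.copy()
P_new[3] = new_row
assert np.allclose(pi_fast, stationary(P_new))  # matches direct recomputation

In the control setting, a candidate action typically modifies the transition probabilities out of a single state, so each step of a sequential policy search reduces to exactly this kind of row (rank-one) update rather than a full recomputation of the stationary distribution.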