Institute of Mathematical Sciences, Imperial College London, London, UK.
Bioinformatics. 2011 Mar 15;27(6):874-6. doi: 10.1093/bioinformatics/btr015. Epub 2011 Jan 11.
Mathematical modelling is central to systems and synthetic biology. Using simulations to calculate statistics or to explore parameter space is a common means of analysing these models and can be computationally intensive. In many cases, however, the simulations are easily parallelizable. Graphics processing units (GPUs) are capable of efficiently running highly parallel programs and outperform CPUs in terms of raw computing power. Despite these computational advantages, adoption of GPUs by the systems biology community has been relatively slow, since differences in hardware architecture between GPUs and CPUs complicate the porting of existing code.
We present a Python package, cuda-sim, that provides highly parallelized algorithms for the repeated simulation of biochemical network models on NVIDIA CUDA GPUs. Algorithms are implemented for three popular model formalisms: the LSODA algorithm for ODE integration, the Euler-Maruyama algorithm for SDE simulation and the Gillespie algorithm for MJP simulation. No knowledge of GPU computing is required of the user. Models can be specified in SBML format or provided as CUDA code. When running large numbers of simulations in parallel, a decrease in simulation runtime of up to 360-fold is attained compared with single-CPU implementations.
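To illustrate why such simulations parallelize well, the sketch below implements the Euler-Maruyama scheme named above for a simple birth-death model in chemical Langevin form, dX = (k1 − k2·X) dt + √(k1 + k2·X) dW. It is a minimal CPU/NumPy sketch, not cuda-sim's API: the model, parameter values, and function names are illustrative assumptions, but it vectorizes the update across many independent trajectories in the same way a GPU kernel would assign one trajectory per thread.

```python
import numpy as np

def euler_maruyama(x0, k1, k2, t_end, dt, n_paths, seed=0):
    """Simulate n_paths independent trajectories of the birth-death SDE
    dX = (k1 - k2*X) dt + sqrt(k1 + k2*X) dW  (chemical Langevin form).

    Illustrative sketch only; cuda-sim runs the analogous update as a
    CUDA kernel with one thread per trajectory.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_end / dt))
    x = np.full(n_paths, float(x0))
    sqrt_dt = np.sqrt(dt)
    for _ in range(n_steps):
        drift = k1 - k2 * x
        # Clip the variance at zero so the square root stays real
        # when noise drives a trajectory slightly negative.
        diffusion = np.sqrt(np.maximum(k1 + k2 * x, 0.0))
        # All trajectories advance in one vectorized step: this is the
        # embarrassingly parallel structure the abstract refers to.
        x += drift * dt + diffusion * sqrt_dt * rng.standard_normal(n_paths)
    return x

# Hypothetical parameters: stationary mean of the drift is k1/k2 = 10.
final = euler_maruyama(x0=0.0, k1=10.0, k2=1.0,
                       t_end=10.0, dt=0.01, n_paths=10_000)
```

Because each trajectory's update depends only on its own state, the inner loop maps directly onto one GPU thread per simulation, which is the source of the large speed-ups reported above.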