

Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

Affiliation

The Mina and Everard Goodman Faculty of Life Sciences and the Leslie and Susan Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 52900, Israel.

Publication

J Neurosci Methods. 2012;206(2):183-94. doi: 10.1016/j.jneumeth.2012.02.024. Epub 2012 Mar 8.

Abstract

We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is computationally demanding: optimization on a high-performance Linux cluster typically lasts several days. To remove this bottleneck, we ported our optimization algorithm to the graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on an NVIDIA Fermi graphics computing engine increased speed ∼180-fold over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application lets users optimize models of ion channel kinetics on a single, inexpensive desktop "supercomputer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point at which the algorithm is parallelized is crucial to its performance: we substantially reduced computing time by solving the ordinary differential equations (ODEs) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs.
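The core idea the abstract describes — a population-based stochastic search whose fitness evaluation is an iterative ODE integration, arranged so that the whole population is evaluated in one batch rather than shuttled back and forth per individual — can be sketched in a few lines. The following is a hypothetical illustration, not the authors' CUDA implementation: it fits the two rate constants of a first-order gating ODE, dm/dt = α(1−m) − βm, to a synthetic target trace with a toy genetic algorithm. The names (`simulate`, the population size, mutation scale) and the forward-Euler scheme are all assumptions for the sketch; the batched integration stands in for the paper's on-device ODE solution.

```python
import numpy as np

rng = np.random.default_rng(0)
T, DT = 200, 0.05                       # number of Euler steps, step size (ms)

def simulate(alpha, beta):
    """Forward-Euler integration of dm/dt = alpha*(1-m) - beta*m for a
    whole population at once (alpha, beta: arrays of shape (pop,))."""
    m = np.zeros_like(alpha)
    trace = np.empty((alpha.size, T))
    for t in range(T):                  # the only serial loop is over time
        m = m + DT * (alpha * (1.0 - m) - beta * m)
        trace[:, t] = m
    return trace

# Synthetic "measured" trace generated from known rate constants.
target = simulate(np.array([2.0]), np.array([0.5]))[0]

pop = rng.uniform(0.1, 5.0, size=(64, 2))   # 64 individuals, 2 genes (alpha, beta)
best_costs = []
for gen in range(40):
    # Fitness of the entire population in one vectorized batch: the ODE
    # solution never leaves the array library (on a GPU: never leaves the
    # device), so there are no per-individual memory transfers.
    cost = np.mean((simulate(pop[:, 0], pop[:, 1]) - target) ** 2, axis=1)
    best_costs.append(cost.min())
    elite = pop[np.argsort(cost)[:16]]      # keep the best quarter
    children = np.clip(elite.repeat(3, axis=0)
                       + rng.normal(0.0, 0.1, size=(48, 2)), 1e-3, None)
    pop = np.concatenate([elite, children]) # elitism: the best is never lost

best = pop[0]   # best individual of the last evaluated generation
```

Because the elite are carried over unchanged, the best cost is non-increasing across generations; on a GPU the same structure maps each individual's integration to a thread or block, which is exactly where the choice of parallelization point determines how much data crosses the host–device boundary.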

