Papadopoulos Agathoklis, Kostoglou Kyriaki, Mitsis Georgios D, Theocharides Theocharis
Annu Int Conf IEEE Eng Med Biol Soc. 2015;2015:3283-6. doi: 10.1109/EMBC.2015.7319093.
The use of the GPGPU programming paradigm (running CUDA-enabled algorithms on GPU cards) in biomedical engineering and biology-related applications has shown promising results. GPU acceleration can be used to speed up computation-intensive models, such as the mathematical modeling of biological systems, which often requires nonlinear modeling approaches with a large number of free parameters. In this context, we developed a CUDA-enabled version of a model that implements a nonlinear identification approach combining basis expansions and polynomial-type networks, termed Laguerre-Volterra networks, which can be used in diverse biological applications. The proposed software implementation uses the GPGPU programming paradigm to exploit the inherent parallelism of this modeling approach by executing the calculations on the GPU card of the host computer system. The initial results of the GPU-based model presented in this work show performance improvements over the original MATLAB model.
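The abstract does not include source code, but the kind of parallelism a CUDA port of a Laguerre-Volterra network can exploit is illustrated by the minimal sketch below: the network output at each time sample depends only on the (precomputed) Laguerre filter outputs at that sample, so evaluation can be mapped to one GPU thread per sample. All identifiers, dimensions, and the data layout (lvn_forward, v, w, c, y0, L, K, Q) are hypothetical and not taken from the paper.

```cuda
// Hypothetical sketch of a parallel Laguerre-Volterra network (LVN) forward pass.
// Assumptions: the Laguerre filter-bank outputs v (serial recursion in time) are
// precomputed on the host; the GPU evaluates the polynomial hidden units and the
// network output for all N time samples in parallel, one thread per sample.

#include <cuda_runtime.h>

// N: time samples, L: Laguerre basis functions,
// K: hidden polynomial units, Q: polynomial order.
__global__ void lvn_forward(const float *v,   // [L x N] Laguerre outputs, row-major
                            const float *w,   // [K x L] input weights
                            const float *c,   // [K x Q] polynomial coefficients
                            float y0,         // constant output offset
                            float *y,         // [N] model prediction
                            int N, int L, int K, int Q)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= N) return;

    float out = y0;
    for (int k = 0; k < K; ++k) {
        // Linear combination of Laguerre outputs feeding hidden unit k.
        float u = 0.0f;
        for (int j = 0; j < L; ++j)
            u += w[k * L + j] * v[j * N + n];

        // Static polynomial nonlinearity of unit k (powers u^1 .. u^Q).
        float p = u;
        for (int q = 0; q < Q; ++q) {
            out += c[k * Q + q] * p;
            p *= u;
        }
    }
    y[n] = out;
}
```

In such a sketch the kernel would be launched with roughly N/256 blocks of 256 threads (e.g. `lvn_forward<<<(N + 255) / 256, 256>>>(...)`), and the same per-sample decomposition could be reused inside an iterative parameter-estimation loop, which is where the speedup over a serial MATLAB implementation would be expected to come from.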