Kolehmainen V, Arridge S R, Kaipio J P, Schweiger M, Somersalo E, Tarvainen T, Vauhkonen M
Dept. of Phys., Univ. Kuopio, Finland.
Conf Proc IEEE Eng Med Biol Soc. 2006;2006:2659-62. doi: 10.1109/IEMBS.2006.260738.
Model reduction is often required in optical diffusion tomography (ODT), typically because of limits on available computation time or computer memory. In practice, this often means that we are bound to use sparse meshes in the forward model. On the other hand, as measurements become more and more accurate, we have to employ increasingly accurate forward solvers in order to exploit the information in those measurements. In this paper we apply approximation error theory to ODT. We show that if the approximation errors are estimated and accounted for, it is possible to use mesh densities that would be unacceptable with a conventional measurement model.
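The approximation error approach mentioned in the abstract can be sketched numerically: sample the unknown from a prior, evaluate both an accurate ("fine-mesh") and a reduced ("coarse-mesh") forward model, estimate the mean and covariance of their difference, and fold that statistic into the measurement noise model. The sketch below uses a small linear toy problem standing in for the ODT forward map; the grid sizes, kernel, prior, and regularization weight are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the ODT forward map: a smooth linear operator
# discretized on a fine grid (n_fine) and on a sparse grid (n_coarse).
n_fine, n_coarse, m = 100, 20, 15

def forward_matrix(n, m):
    # smoothing integration-type operator on an n-point grid
    s = np.linspace(0, 1, m)[:, None]
    t = np.linspace(0, 1, n)[None, :]
    return np.exp(-50.0 * (s - t) ** 2) / n

A_fine = forward_matrix(n_fine, m)
A_coarse = forward_matrix(n_coarse, m)

# Restriction from the fine grid to the coarse grid (block averaging).
r = n_fine // n_coarse
P = np.zeros((n_coarse, n_fine))
for i in range(n_coarse):
    P[i, i * r:(i + 1) * r] = 1.0 / r

# Draw prior samples on the fine grid (smooth Gaussian prior) and
# collect realizations of the approximation error
#   eps = A_fine x - A_coarse (P x).
C_prior = np.exp(-np.abs(np.subtract.outer(np.arange(n_fine),
                                           np.arange(n_fine))) / 10.0)
Lp = np.linalg.cholesky(C_prior)
X = rng.standard_normal((2000, n_fine)) @ Lp.T
E = X @ A_fine.T - (X @ P.T) @ A_coarse.T
eps_mean = E.mean(axis=0)
eps_cov = np.cov(E, rowvar=False)

# Enhanced error model: y = A_coarse z + eps + noise, so the effective
# noise has mean eps_mean and covariance Gamma_noise + eps_cov.
sigma = 1e-3
Gamma_total = sigma**2 * np.eye(m) + eps_cov

# MAP/Tikhonov estimate on the coarse grid with the augmented covariance.
x_true = X[0]
y = A_fine @ x_true + sigma * rng.standard_normal(m)
Lw = np.linalg.cholesky(np.linalg.inv(Gamma_total))  # whitening factor
Aw, yw = Lw.T @ A_coarse, Lw.T @ (y - eps_mean)
z_hat = np.linalg.solve(Aw.T @ Aw + 1e-2 * np.eye(n_coarse), Aw.T @ yw)
```

Without the `eps_mean`/`eps_cov` correction, the coarse model's discretization error would be treated as signal and bias the reconstruction; the enhanced model instead absorbs it into the noise statistics, which is what allows the sparse mesh to remain usable.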