Lee Ho Hin, Tang Yucheng, Bao Shunxing, Yang Qi, Xu Xin, Schey Kevin L, Spraggins Jeffrey M, Huo Yuankai, Landman Bennett A
Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212.
Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12464. doi: 10.1117/12.2653753. Epub 2023 Apr 3.
Owing to the confounding effects of demographics across large-scale imaging surveys, the volumetric structure of the orbit and eye exhibits substantial anthropometric variation. Such variability makes it more difficult to localize the anatomical features of the eye organs for population-level analysis. To accommodate this variability with stable registration transfer, we propose an unbiased eye atlas template together with a hierarchical coarse-to-fine registration approach that provides generalized eye organ context across populations. We retrieved volumetric scans from 1842 healthy patients to generate an eye atlas template with minimal bias. Briefly, we select 20 subject scans and use an iterative approach to generate an initial unbiased template. We then perform metric-based registration of the remaining samples to the unbiased template to produce coarsely registered outputs. These coarse outputs are used to train a deep probabilistic network that refines the organ deformations in an unsupervised setting. Computed tomography (CT) scans of 100 de-identified subjects are used to generate and evaluate the unbiased atlas template with the hierarchical pipeline. The refined registration yields stable transfer of the eye organs, which are well localized in the high-resolution (0.5 ) atlas space, with a significant improvement of 2.37% Dice in inverse label transfer performance. Subject-wise qualitative surface renderings demonstrate the details of the transferred organ context and show the applicability of the approach for generalizing morphological variation across patients.
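The iterative unbiased-template construction described above (register every subject to the running template, re-average, repeat) can be illustrated with a minimal 1-D toy sketch. The integer-shift alignment below is only a stand-in for the paper's metric-based volumetric registration, and all function names and parameters are illustrative:

```python
import numpy as np

def best_shift(moving, fixed, max_shift=20):
    """Toy 'registration': integer shift minimizing SSD between moving and fixed."""
    shifts = range(-max_shift, max_shift + 1)
    errs = [np.sum((np.roll(moving, s) - fixed) ** 2) for s in shifts]
    return list(shifts)[int(np.argmin(errs))]

def build_unbiased_template(subjects, n_iters=5):
    """Iteratively align all subjects to the running mean and re-average."""
    template = np.mean(subjects, axis=0)
    for _ in range(n_iters):
        aligned = [np.roll(img, best_shift(img, template)) for img in subjects]
        template = np.mean(aligned, axis=0)
    return template

# Synthetic 1-D "subjects": the same Gaussian bump at random integer offsets,
# mimicking anatomical variation in position across a cohort.
rng = np.random.default_rng(0)
x = np.arange(128)
subjects = [np.exp(-0.5 * ((x - 64 - rng.integers(-8, 9)) / 6.0) ** 2)
            for _ in range(20)]
template = build_unbiased_template(subjects)
```

Because the template is re-estimated from the aligned cohort rather than fixed to any single subject, it is not biased toward one individual's anatomy, which is the motivation for the iterative scheme.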
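The inverse label transfer performance is reported as a Dice improvement; the Dice similarity coefficient on binary masks can be computed as below (a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two overlapping 1-D "segmentations".
pred = np.zeros(10); pred[2:6] = 1  # transferred label, 4 voxels
ref = np.zeros(10); ref[4:8] = 1    # reference label, 4 voxels
score = dice(pred, ref)             # overlap = 2 voxels -> 2*2/(4+4) = 0.5
```

A Dice of 1.0 indicates perfect overlap between the transferred and reference labels, so the reported 2.37% gain reflects tighter organ localization after the deep refinement stage.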