School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China.
School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, People's Republic of China.
Phys Med Biol. 2024 Aug 14;69(17). doi: 10.1088/1361-6560/ad67a6.
Objective. To enable the registration network to be trained only once, achieving fast regularization hyperparameter selection during the inference phase, and to improve registration accuracy and deformation field regularity.
Approach. Hyperparameter tuning is an essential step in deep learning deformable image registration (DLDIR). Most DLDIR methods run a large number of independent experiments to select appropriate regularization hyperparameters, which is time- and resource-consuming. To address this issue, we propose a novel dynamic hyperparameter block comprising a distributed mapping network, a dynamic convolution, an attention feature extraction layer, and an instance normalization layer. The dynamic hyperparameter block encodes the input feature vectors and the regularization hyperparameters into learnable feature variables and dynamic convolution parameters, respectively, which modulate the feature statistics of the high-dimensional feature layers. In addition, the proposed method replaces the single-level residual blocks in LapIRN with a hierarchical multi-level architecture built around the dynamic hyperparameter block to improve registration performance.
Main results. On the OASIS dataset, the proposed method reduced the percentage of |Jϕ| ⩽ 0 by 28.01% and 9.78% and improved the Dice similarity coefficient by 1.17% and 1.17%, compared with LapIRN and CIR, respectively. On the DIR-Lab dataset, the proposed method reduced the percentage of |Jϕ| ⩽ 0 by 10.00% and 5.70% and reduced the target registration error by 10.84% and 10.05%, compared with LapIRN and CIR, respectively.
Significance. The proposed method can quickly produce the deformation field corresponding to an arbitrary hyperparameter value during the inference phase. Extensive experiments demonstrate that it reduces training time compared with DLDIR using fixed regularization hyperparameters, while outperforming state-of-the-art registration methods in registration accuracy and deformation smoothness on the brain dataset OASIS and the lung dataset DIR-Lab.
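Below is a minimal sketch of how a hyperparameter-conditioned block of this kind might be implemented, assuming a PyTorch-style 3-D registration network. The class name DynamicHyperparameterBlock, the mapping-MLP width, the kernel size, and the conditional instance-normalization formulation are illustrative assumptions rather than the authors' released code, and the attention feature extraction layer is omitted for brevity.

```python
# Hypothetical sketch (assumption: PyTorch-style code; names and sizes are illustrative,
# not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicHyperparameterBlock(nn.Module):
    """Conditions a 3-D feature map on a scalar regularization weight lambda."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # Distributed mapping network: embeds the scalar hyperparameter.
        self.mapping = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
        )
        # Heads that turn the embedding into (a) per-channel scale/shift for
        # instance normalization and (b) weights of a dynamic 3x3x3 convolution.
        self.to_scale_shift = nn.Linear(hidden, 2 * channels)
        self.to_kernel = nn.Linear(hidden, channels * channels * 27)
        self.norm = nn.InstanceNorm3d(channels, affine=False)
        self.channels = channels

    def forward(self, x: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) feature map; lam: (B, 1) regularization weight per sample.
        b, c = x.shape[:2]
        emb = self.mapping(lam)                              # (B, hidden)
        scale, shift = self.to_scale_shift(emb).chunk(2, 1)  # (B, C) each
        # Conditional instance normalization: the hyperparameter shifts feature statistics.
        h = self.norm(x) * (1 + scale.view(b, c, 1, 1, 1)) + shift.view(b, c, 1, 1, 1)
        # Dynamic convolution: kernel weights are generated from the embedding,
        # applied per sample via a grouped convolution.
        kernel = self.to_kernel(emb).view(b * c, c, 3, 3, 3)
        h = h.reshape(1, b * c, *h.shape[2:])
        h = F.conv3d(h, kernel, padding=1, groups=b)
        return F.leaky_relu(h.reshape(b, c, *x.shape[2:]), 0.2)


# Usage idea: one network is trained over sampled lambda values, then queried with
# any lambda at inference time, which is the fast hyperparameter-selection setting
# described in the abstract.
if __name__ == "__main__":
    block = DynamicHyperparameterBlock(channels=8)
    feats = torch.randn(2, 8, 16, 16, 16)
    lam = torch.rand(2, 1)          # e.g. smoothness weight sampled per batch
    out = block(feats, lam)
    print(out.shape)                # torch.Size([2, 8, 16, 16, 16])
```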