Cai Ruxin, Liu Yanzhen, Sun Zhibin, Wang Yuneng, Wang Yu, Li Facheng, Jiang Haiyue
Beihang University, School of Biological Science and Medical Engineering, Beijing, China.
Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China.
Int J Med Robot. 2023 Dec;19(6):e2548. doi: 10.1002/rcs.2548. Epub 2023 Jul 14.
To develop an automatic and reliable ultrasonic visual system for robot- or computer-assisted liposuction, we examined the use of deep learning for the segmentation of adipose ultrasound images in clinical and educational settings.
To segment the adipose layers, we propose an Attention Skip-Convolutions ResU-Net (Attention SCResU-Net) that combines SC residual blocks, attention gates, and the U-Net architecture. Transfer learning is used to compensate for the scarcity of clinical data: the model is pre-trained on a Bama pig adipose ultrasound image dataset and fine-tuned on a clinical human dataset.
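The attention gates named above are not specified in the abstract; the sketch below shows the standard additive attention gate used in Attention U-Net-style decoders, implemented with plain NumPy for clarity. All weight shapes and the 1x1-convolution helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1x1(feat, w):
    # A 1x1 convolution is a per-pixel linear map over channels.
    # feat: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, feat)

def attention_gate(x, g, w_x, w_g, w_psi):
    """Additive attention gate (Attention U-Net style; illustrative sketch).
    x: skip-connection features (C_x, H, W)
    g: decoder gating signal   (C_g, H, W), already resized to match x
    Returns x reweighted by a spatial attention map in (0, 1)."""
    a = np.maximum(conv1x1(x, w_x) + conv1x1(g, w_g), 0.0)  # ReLU
    alpha = sigmoid(conv1x1(a, w_psi))                       # (1, H, W) map
    return x * alpha                 # suppress irrelevant skip features

# Usage with hypothetical channel sizes (8 -> intermediate 4):
x = rng.standard_normal((8, 16, 16))
g = rng.standard_normal((8, 16, 16))
w_x = rng.standard_normal((4, 8))
w_g = rng.standard_normal((4, 8))
w_psi = rng.standard_normal((1, 4))
gated = attention_gate(x, g, w_x, w_g, w_psi)
```

The gate passes skip-connection features through a learned spatial mask, so the decoder can down-weight regions of the encoder feature map that are irrelevant to the adipose layer boundary.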
The final model achieves a Dice score of 99.06 ± 0.95% and an average surface distance (ASD) of 0.19 ± 0.18 mm on the clinical dataset, outperforming the other methods evaluated. Fine-tuning only the eight deepest layers yields accurate and stable segmentation results.
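Both reported metrics are standard for segmentation evaluation. A minimal sketch of how they are typically computed on binary masks is given below; the exact ASD variant and pixel spacing used in the paper are not stated, so the symmetric formulation and the `spacing` value here are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient: 2|P∩G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def surface(mask):
    # Boundary pixels: the mask minus its morphological erosion.
    return mask & ~binary_erosion(mask)

def asd(pred, gt, spacing=1.0):
    """Average symmetric surface distance, in the units of `spacing`
    (e.g. mm per pixel; the value is illustrative, not from the paper)."""
    sp = surface(pred.astype(bool))
    sg = surface(gt.astype(bool))
    # Distance from every pixel to the nearest surface pixel of the other mask.
    d_to_g = distance_transform_edt(~sg, sampling=spacing)
    d_to_p = distance_transform_edt(~sp, sampling=spacing)
    dists = np.concatenate([d_to_g[sp], d_to_p[sg]])
    return dists.mean()
```

Dice measures region overlap (1.0 is perfect), while ASD measures how far the predicted boundary strays from the ground-truth boundary, which is the quantity that matters for keeping a liposuction cannula inside the adipose layer.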
The new deep-learning method achieves accurate, automatic, real-time segmentation of adipose ultrasound images, thereby enhancing the safety of liposuction and enabling novice surgeons to better control the cannula.