Neural radiance fields-based multi-view endoscopic scene reconstruction for surgical simulation.

Affiliations

Yunnan Key Laboratory of Opto-electronic Information Technology, Yunnan Normal University, Kunming, 650500, China.

Department of Thoracic Surgery, The First People's Hospital of Yunnan Province, Kunming, 650500, China.

Publication information

Int J Comput Assist Radiol Surg. 2024 May;19(5):951-960. doi: 10.1007/s11548-024-03080-8. Epub 2024 Feb 27.

DOI: 10.1007/s11548-024-03080-8
PMID: 38413491
Abstract

PURPOSE

In virtual surgery, the appearance of 3D models constructed from CT images lacks realism, which can mislead residents. It is therefore crucial to reconstruct realistic endoscopic scenes from multi-view images captured by an endoscope.

METHODS

We propose an Endoscope-NeRF network for implicit radiance field reconstruction of endoscopic scenes under a non-fixed light source, and synthesize novel views using volume rendering. Endoscope-NeRF combines multiple MLP networks with a ray transformer network and represents the endoscopic scene as an implicit field function that maps continuous 5D vectors (3D position and 2D viewing direction) to color and volume density. The final synthesized image is obtained by aggregating all sampling points along each ray of the target camera using volume rendering. Our method also accounts for the effect of the distance between the light source and each sampling point on the scene radiance.
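
As a rough illustration of the volume-rendering aggregation described above, the NumPy sketch below composites color and density samples along a single camera ray and scales the radiance by distance to the light source. The function name, the array shapes, and in particular the inverse-square attenuation model are assumptions made for illustration; the abstract only states that the light-source distance is taken into account.

```python
import numpy as np

def render_ray(ts, sigmas, colors, points, light_pos):
    """Composite samples along one camera ray (standard NeRF volume rendering),
    with a placeholder attenuation term for the distance to the light source.

    ts        : (N,)   sample depths along the ray
    sigmas    : (N,)   volume densities predicted by the network
    colors    : (N, 3) RGB radiance predicted by the network
    points    : (N, 3) 3D positions of the samples
    light_pos : (3,)   position of the (non-fixed) endoscope light source
    """
    # Spacing between consecutive samples; the last interval is treated as open-ended.
    deltas = np.append(np.diff(ts), 1e10)

    # Hypothetical inverse-square falloff with distance to the light source;
    # the actual attenuation model used by Endoscope-NeRF is not given in the abstract.
    d = np.linalg.norm(points - light_pos, axis=-1)
    attenuated = colors / np.maximum(d[:, None] ** 2, 1e-6)

    # Alpha compositing: per-sample opacity and transmittance up to each sample.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1] + 1e-10))
    weights = alphas * trans

    # Final pixel color is the transmittance-weighted sum over all samples.
    return (weights[:, None] * attenuated).sum(axis=0)
```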

RESULTS

Our network is validated on pig lung, liver, kidney, and heart data collected with our device. The results show that the novel views of endoscopic scenes synthesized by our method outperform existing methods (NeRF and IBRNet) in terms of PSNR, SSIM, and LPIPS.
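
For reference, PSNR (one of the three reported metrics) is conventionally computed as in the sketch below; this is the standard definition rather than the authors' evaluation code, and SSIM and LPIPS require their own library implementations.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a rendered view and the ground-truth
    image (float arrays scaled to [0, max_val]); higher is better."""
    diff = np.asarray(pred, dtype=np.float64) - np.asarray(gt, dtype=np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)  # assumes mse > 0
```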

CONCLUSION

Our network can effectively learn a radiance field function with generalization ability. Fine-tuning the pre-trained model on a new endoscopic scene further optimizes the scene's neural radiance field and can provide more realistic, high-resolution rendered images for surgical simulation.
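
The per-scene fine-tuning mentioned in the conclusion could look roughly like the PyTorch sketch below: starting from the pre-trained generalizable model, the photometric error on rays sampled from the new endoscopic scene is minimized for a small number of steps. The model interface, loss, and hyper-parameters are placeholders, since the abstract gives no training details.

```python
import torch

def finetune_on_new_scene(model, rays, target_rgb, steps=2000, lr=5e-4):
    """Fine-tune a pre-trained radiance-field model on rays sampled from a new
    endoscopic scene (hypothetical interface; not the authors' training code)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        pred_rgb = model(rays)                            # render colors for a batch of rays
        loss = torch.mean((pred_rgb - target_rgb) ** 2)   # photometric MSE against captured images
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```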


Similar articles

1. Neural radiance fields-based multi-view endoscopic scene reconstruction for surgical simulation.
Int J Comput Assist Radiol Surg. 2024 May;19(5):951-960. doi: 10.1007/s11548-024-03080-8. Epub 2024 Feb 27.
2. ACnerf: enhancement of neural radiance field by alignment and correction of pose to reconstruct new views from a single x-ray.
Phys Med Biol. 2024 Feb 8;69(4). doi: 10.1088/1361-6560/ad1d6c.
3. NeRF-OR: neural radiance fields for operating room scene reconstruction from sparse-view RGB-D videos.
Int J Comput Assist Radiol Surg. 2025 Jan;20(1):147-156. doi: 10.1007/s11548-024-03261-5. Epub 2024 Sep 13.
4. Enhancing endoscopic scene reconstruction with color-aware inverse rendering through neural SDF and radiance fields.
Biomed Opt Express. 2024 May 23;15(6):3914-3931. doi: 10.1364/BOE.521612. eCollection 2024 Jun 1.
5. Neural Radiance Fields From Sparse RGB-D Images for High-Quality View Synthesis.
IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8713-8728. doi: 10.1109/TPAMI.2022.3232502. Epub 2023 Jun 5.
6. UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene.
IEEE Trans Vis Comput Graph. 2025 Apr;31(4):2045-2057. doi: 10.1109/TVCG.2024.3378692. Epub 2025 Feb 27.
7. Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis.
IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2758-2769. doi: 10.1109/TPAMI.2023.3335311. Epub 2024 Apr 3.
8. Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion.
Sensors (Basel). 2024 Mar 16;24(6):1919. doi: 10.3390/s24061919.
9. Learning Spherical Radiance Field for Efficient 360° Unbounded Novel View Synthesis.
IEEE Trans Image Process. 2024;33:3722-3734. doi: 10.1109/TIP.2024.3409052. Epub 2024 Jun 13.
10. The Adaption of Recent New Concepts in Neural Radiance Fields and Their Role for High-Fidelity Volume Reconstruction in Medical Images.
Sensors (Basel). 2024 Sep 12;24(18):5923. doi: 10.3390/s24185923.

Cited by

1. Neural Radiance Fields (NeRF) for 3D Reconstruction of Monocular Endoscopic Video in Sinus Surgery.
Otolaryngol Head Neck Surg. 2025 Apr;172(4):1435-1441. doi: 10.1002/ohn.1105. Epub 2025 Jan 10.

References

1. Modeling the irradiation pattern of LEDs at short distances.
Opt Express. 2021 Mar 1;29(5):6845-6853. doi: 10.1364/OE.419428.
2. Using virtual reality simulation to assess competence in video-assisted thoracoscopic surgery (VATS) lobectomy.
Surg Endosc. 2017 Jun;31(6):2520-2528. doi: 10.1007/s00464-016-5254-6. Epub 2016 Sep 21.
3. Advances in natural language processing.
Science. 2015 Jul 17;349(6245):261-6. doi: 10.1126/science.aaa8685.
4. Image denoising by sparse 3-D transform-domain collaborative filtering.
IEEE Trans Image Process. 2007 Aug;16(8):2080-95. doi: 10.1109/tip.2007.901238.