

Multi-stage refinement network for point cloud completion based on geodesic attention.

Authors

Chang Yuchen, Wang Kaiping

Affiliation

Department of Computer Science, Xi'an University of Architecture and Technology, Xi'an 710055, Shaanxi Province, China.

Publication

Sci Rep. 2025 Jan 28;15(1):3570. doi: 10.1038/s41598-025-86704-6.

DOI: 10.1038/s41598-025-86704-6
PMID: 39875477
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11775121/
Abstract

The attention mechanism has significantly progressed in various point cloud tasks. Benefiting from its significant competence in capturing long-range dependencies, research in point cloud completion has achieved promising results. However, the typically disordered point cloud data features complicated non-Euclidean geometric structures and exhibits unpredictable behavior. Most current attention modules are based on Euclidean or local geometry, which fails to accurately represent the intrinsic non-Euclidean characteristics of point cloud data. Thus, we propose a novel geodesic attention-based multi-stage refinement transformer network, which enables the alignment of feature dimensions among query, key, and value, and long-range geometric dependencies are captured on the manifold. Then, a novel Position Feature Extractor is designed to enhance geometric features and explicitly capture graph-based non-Euclidean properties of point cloud objects. A Recurrent Information Aggregation Unit is further applied to aggregate historical information from the previous stages and current geometric features to guide the network in the current stage. The proposed method exhibits strong competitiveness when compared to current state-of-the-art methods.
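The core idea of replacing Euclidean with geodesic structure can be illustrated with a small sketch. This is not the authors' implementation: it simply builds a k-NN graph over the points, takes graph shortest-path lengths as a stand-in for geodesic (on-manifold) distances, and lets attention weights decay with that distance instead of the straight-line one. The function name and parameters are illustrative only.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def geodesic_attention(points, features, k=2, tau=1.0):
    """Toy geodesic attention: logits decay with graph shortest-path
    distance on a k-NN graph, approximating distance on the manifold."""
    n = points.shape[0]
    euclid = cdist(points, points)
    # Build a k-NN graph; np.inf marks "no edge" for csgraph.
    graph = np.full((n, n), np.inf)
    for i in range(n):
        nearest = np.argsort(euclid[i])[1:k + 1]  # skip self at index 0
        graph[i, nearest] = euclid[i, nearest]
    # Approximate geodesic distance = shortest path on the graph.
    geo = shortest_path(graph, method="D", directed=False)
    # Softmax over negative geodesic distance; disconnected pairs get ~0.
    logits = np.where(np.isinf(geo), -1e9, -geo / tau)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = w / w.sum(axis=1, keepdims=True)
    return attn @ features, attn
```

On a bent or partial shape, two points can be Euclidean-close yet geodesic-far (e.g. opposite rims of a cup), which is exactly the distinction this weighting preserves and a plain Euclidean kernel loses.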

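The multi-stage refinement with recurrent aggregation can likewise be sketched in miniature. The following is an assumption-laden toy, not the paper's Recurrent Information Aggregation Unit: each stage predicts a per-point offset, while a GRU-style gated hidden state carries information from earlier stages into the current one. All names and dimensions here are invented for illustration.

```python
import numpy as np

def refine_multistage(coarse, num_stages=3, hidden_dim=8, seed=0):
    """Toy multi-stage refinement: a gated recurrent state aggregates
    history across stages and drives each stage's point offsets."""
    rng = np.random.default_rng(seed)
    n, d = coarse.shape
    h = np.zeros((n, hidden_dim))            # recurrent state, one row per point
    W_z = rng.standard_normal((d + hidden_dim, hidden_dim)) * 0.1
    W_h = rng.standard_normal((d + hidden_dim, hidden_dim)) * 0.1
    W_o = rng.standard_normal((hidden_dim, d)) * 0.1
    points = coarse.copy()
    for _ in range(num_stages):
        x = np.concatenate([points, h], axis=1)
        z = 1.0 / (1.0 + np.exp(-x @ W_z))   # update gate
        h_cand = np.tanh(x @ W_h)            # candidate state from current stage
        h = (1.0 - z) * h + z * h_cand       # blend history with current features
        points = points + h @ W_o            # stage-wise refinement offset
    return points
```

The design point mirrored here is that later stages do not start from scratch: the gate decides how much of the accumulated history to keep versus overwrite at each refinement step.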

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7263/11775121/8762c57abe41/41598_2025_86704_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7263/11775121/f51bd0c356a8/41598_2025_86704_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7263/11775121/7f03879af54e/41598_2025_86704_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7263/11775121/52c0b534f43b/41598_2025_86704_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7263/11775121/d6388c369476/41598_2025_86704_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7263/11775121/e889adb35ae0/41598_2025_86704_Figa_HTML.jpg

Similar Articles

1. Multi-stage refinement network for point cloud completion based on geodesic attention. Sci Rep. 2025 Jan 28;15(1):3570. doi: 10.1038/s41598-025-86704-6.
2. EGNet: 3D Semantic Segmentation Through Point-Voxel-Mesh Data for Euclidean-Geodesic Feature Fusion. Sensors (Basel). 2024 Dec 22;24(24):8196. doi: 10.3390/s24248196.
3. Point Cloud Completion Via Skeleton-Detail Transformer. IEEE Trans Vis Comput Graph. 2023 Oct;29(10):4229-4242. doi: 10.1109/TVCG.2022.3185247. Epub 2023 Sep 1.
4. DNet: Dynamic Neighborhood Feature Learning in Point Cloud. Sensors (Basel). 2021 Mar 26;21(7):2327. doi: 10.3390/s21072327.
5. Full Transformer Framework for Robust Point Cloud Registration With Deep Information Interaction. IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13368-13382. doi: 10.1109/TNNLS.2023.3267333. Epub 2024 Oct 7.
6. RotInv-PCT: Rotation-Invariant Point Cloud Transformer via feature separation and aggregation. Neural Netw. 2025 May;185:107223. doi: 10.1016/j.neunet.2025.107223. Epub 2025 Feb 4.
7. CSDN: Cross-Modal Shape-Transfer Dual-Refinement Network for Point Cloud Completion. IEEE Trans Vis Comput Graph. 2024 Jul;30(7):3545-3563. doi: 10.1109/TVCG.2023.3236061. Epub 2024 Jun 27.
8. An attention-based bilateral feature fusion network for 3D point cloud. Rev Sci Instrum. 2024 Jun 1;95(6). doi: 10.1063/5.0189991.
9. MASPC_Transform: A Plant Point Cloud Segmentation Network Based on Multi-Head Attention Separation and Position Code. Sensors (Basel). 2022 Nov 27;22(23):9225. doi: 10.3390/s22239225.
10. PCDNF: Revisiting Learning-Based Point Cloud Denoising via Joint Normal Filtering. IEEE Trans Vis Comput Graph. 2024 Aug;30(8):5419-5436. doi: 10.1109/TVCG.2023.3292464. Epub 2024 Jul 1.

References Cited in This Article

1. PMP-Net++: Point Cloud Completion by Transformer-Enhanced Multi-Step Point Moving Paths. IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):852-867. doi: 10.1109/TPAMI.2022.3159003. Epub 2022 Dec 5.
2. An End-to-End Shape-Preserving Point Completion Network. IEEE Comput Graph Appl. 2021 May-Jun;41(3):20-33. doi: 10.1109/MCG.2021.3065533. Epub 2021 May 7.