

Spatial deformable transformer for 3D point cloud registration

Authors

Xiong Fengguang, Kong Yu, Xie Shuaikang, Kuang Liqun, Han Xie

Affiliations

Shanxi Provincial Key Laboratory of Machine Vision and Virtual Reality, Taiyuan, 030051, China.

School of Computer Science and Technology, North University of China, Taiyuan, 030051, China.

Publication

Sci Rep. 2024 Mar 6;14(1):5560. doi: 10.1038/s41598-024-56217-9.

DOI: 10.1038/s41598-024-56217-9
PMID: 38448683
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10917764/
Abstract

Deformable attention focuses on only a small set of key sample points around a reference point, enabling it to dynamically capture local features of the input feature map regardless of the feature map's size. Introducing it into point cloud registration makes extracting local geometric features from a point cloud faster and easier than with standard attention. We therefore propose a point cloud registration method based on a Spatial Deformable Transformer (SDT). SDT consists of a deformable self-attention module, which enhances local geometric feature representation, and a cross-attention module, which enhances the discriminative capability of spatial correspondence features. Experimental results show that, compared to state-of-the-art registration methods, SDT achieves better matching recall, inlier ratio, and registration recall on the 3DMatch and 3DLoMatch scenes, and better generalization ability and time efficiency on the ModelNet40 and ModelLoNet40 scenes.
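The core idea in the abstract — attending to a few predicted sample locations around a reference point rather than to all points — can be sketched in a few lines. This is a minimal toy illustration of the deformable-attention concept, not the authors' SDT implementation: the offset prediction is stood in for by random offsets, and all names (`deformable_point_attention`, etc.) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def deformable_point_attention(feats, coords, ref_idx, k=4, rng=None):
    """Toy deformable attention over a point cloud.

    feats:  (N, C) per-point features; coords: (N, 3) positions.
    For one reference point, "predict" k coordinate offsets (random here,
    standing in for a learned offset head), snap each sampled location to
    its nearest actual point, and aggregate those k key features with
    attention weights from query-key dot products.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N, C = feats.shape
    ref = coords[ref_idx]                          # (3,) reference position
    offsets = rng.normal(scale=0.1, size=(k, 3))   # (k, 3) sampling offsets
    sample_locs = ref + offsets                    # (k, 3) sampled locations
    # Nearest actual point to each sampled location.
    d = np.linalg.norm(coords[None, :, :] - sample_locs[:, None, :], axis=-1)
    nn = d.argmin(axis=1)                          # (k,) key-point indices
    key_feats = feats[nn]                          # (k, C) key features
    # Attention over only the k sampled keys, not all N points.
    attn = softmax(key_feats @ feats[ref_idx] / np.sqrt(C))  # (k,)
    return attn @ key_feats                        # (C,) aggregated feature

# Usage: 100 points with 16-dim features; attend around point 0.
rng = np.random.default_rng(1)
coords = rng.uniform(size=(100, 3))
feats = rng.normal(size=(100, 16))
out = deformable_point_attention(feats, coords, ref_idx=0, rng=rng)
print(out.shape)  # (16,)
```

The cost per query is O(k) rather than O(N), which is the property the abstract credits for making local geometric feature extraction faster than standard attention when the point cloud is large.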


Figures 1–9 (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/31c961391582/41598_2024_56217_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/1957130ec277/41598_2024_56217_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/ae2b190e50ce/41598_2024_56217_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/f8ccb0a46f2a/41598_2024_56217_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/b37860a13fbb/41598_2024_56217_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/017947931f82/41598_2024_56217_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/9b4dbc70d6b5/41598_2024_56217_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/365e7c59ed1a/41598_2024_56217_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf99/10917764/7f8b5c715fd6/41598_2024_56217_Fig9_HTML.jpg

Similar articles

1. Spatial deformable transformer for 3D point cloud registration.
Sci Rep. 2024 Mar 6;14(1):5560. doi: 10.1038/s41598-024-56217-9.
2. RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention.
Sensors (Basel). 2023 Dec 6;23(24):9651. doi: 10.3390/s23249651.
3. EGST: Enhanced Geometric Structure Transformer for Point Cloud Registration.
IEEE Trans Vis Comput Graph. 2024 Sep;30(9):6222-6234. doi: 10.1109/TVCG.2023.3329578. Epub 2024 Jul 31.
4. Full Transformer Framework for Robust Point Cloud Registration With Deep Information Interaction.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13368-13382. doi: 10.1109/TNNLS.2023.3267333. Epub 2024 Oct 7.
5. TIF-Reg: Point Cloud Registration with Transform-Invariant Features in SE(3).
Sensors (Basel). 2021 Aug 27;21(17):5778. doi: 10.3390/s21175778.
6. NrtNet: An Unsupervised Method for 3D Non-Rigid Point Cloud Registration Based on Transformer.
Sensors (Basel). 2022 Jul 8;22(14):5128. doi: 10.3390/s22145128.
7. GeoTransformer: Fast and Robust Point Cloud Registration With Geometric Transformer.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9806-9821. doi: 10.1109/TPAMI.2023.3259038. Epub 2023 Jun 30.
8. Robust Point Cloud Registration Framework Based on Deep Graph Matching.
IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):6183-6195. doi: 10.1109/TPAMI.2022.3204713. Epub 2023 Apr 3.
9. RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud Registration.
IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):3796-3812. doi: 10.1109/TPAMI.2023.3349199. Epub 2024 Apr 3.
10. An efficient point cloud semantic segmentation network with multiscale super-patch transformer.
Sci Rep. 2024 Jun 25;14(1):14581. doi: 10.1038/s41598-024-63451-8.
