

Directional intensified feature description using tertiary filtering for augmented reality tracking.

Author information

S Indhumathi, Clement J Christopher

Affiliations

School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.

Publication information

Sci Rep. 2023 Nov 20;13(1):20311. doi: 10.1038/s41598-023-46643-6.

DOI:10.1038/s41598-023-46643-6
PMID:37985678
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10662146/
Abstract

Augmented Reality (AR) is applied in almost every field, including, but not limited to, engineering, medicine, gaming and the Internet of Things. Image tracking is used across all of these fields. AR uses image tracking to localize and register the position of the user/AR device so that a virtual image can be superimposed onto the real world. In general terms, tracking the image enhances the user's experience. However, in image tracking applications, establishing the interface between the virtual realm and the physical world has many shortcomings: many tracking systems are available, but they lack robustness and efficiency. Achieving robustness in the tracking algorithm is the challenging part of the implementation. This study aims to enhance the user's experience in AR by describing an image using Directionally Intensified Features with Tertiary Filtering. Describing the features in this way improves robustness, which is desired in image tracking. A feature descriptor is robust in the sense that it is not compromised when the image undergoes various transformations. This article describes features based on Directional Intensification using Tertiary Filtering (DITF). The robustness of the algorithm is improved by the inherent design of the Tri-ocular, Bi-ocular and Dia-ocular filters, which can intensify the features in all required directions. The algorithm's robustness is verified with respect to various image transformations. The Oxford dataset is used for performance analysis and validation. The DITF model achieves repeatability scores for illumination variation, blur change and viewpoint variation of 100%, 100% and 99%, respectively. A comparative analysis has been performed in terms of precision and recall. DITF outperforms the state-of-the-art descriptors BEBLID, BOOST, HOG, LBP, BRISK and AKAZE.
An implementation of DITF is available in the following GitHub repository: github.com/Johnchristopherclement/Directional-Intensified-Feature-Descriptor.
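To illustrate the idea of intensifying features in particular directions with small filters, the following sketch convolves an image patch with a bank of directional kernels and reports the dominant orientation. The 3x3 kernels here are generic horizontal/vertical/diagonal edge kernels chosen for illustration; they are not the paper's actual Tri-ocular, Bi-ocular or Dia-ocular filters, whose design is given in the full text.

```python
import numpy as np

def directional_responses(patch, kernels):
    """Convolve a grayscale patch with each directional kernel (valid mode)
    and return the per-kernel maximum absolute response."""
    responses = []
    kh, kw = kernels[0].shape
    ph, pw = patch.shape
    for k in kernels:
        out = np.zeros((ph - kh + 1, pw - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * k)
        responses.append(np.abs(out).max())
    return np.array(responses)

# Illustrative 3x3 kernels emphasizing horizontal, vertical and diagonal structure.
kernels = [
    np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]], float),  # horizontal
    np.array([[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]], float),  # vertical
    np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]], float),  # diagonal
]

# A patch with a strong horizontal edge should fire the horizontal kernel hardest.
patch = np.zeros((8, 8))
patch[4:, :] = 1.0
r = directional_responses(patch, kernels)
print(int(np.argmax(r)))  # → 0 (the horizontal kernel dominates)
```

A descriptor built this way can record, per pixel neighborhood, which directions respond strongly, which is what makes the description stable under the transformations the abstract lists.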

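The precision and recall comparison mentioned in the abstract can be sketched as follows. Given a set of descriptor matches between two images and a set of ground-truth correspondences (as provided by the Oxford dataset's homographies), precision is the fraction of reported matches that are correct, and recall is the fraction of true correspondences that were found. The helper below is a generic illustration, not the paper's evaluation code.

```python
def match_precision_recall(matches, ground_truth):
    """Compute precision and recall for descriptor matching.
    `matches` and `ground_truth` are iterables of (query_idx, train_idx) pairs."""
    matches, ground_truth = set(matches), set(ground_truth)
    true_pos = len(matches & ground_truth)
    precision = true_pos / len(matches) if matches else 0.0
    recall = true_pos / len(ground_truth) if ground_truth else 0.0
    return precision, recall

gt = {(0, 0), (1, 1), (2, 2), (3, 3)}       # true correspondences
found = {(0, 0), (1, 1), (2, 5)}            # two correct, one false match
p, r = match_precision_recall(found, gt)    # precision 2/3, recall 2/4
```

Repeatability, the other metric the abstract reports, is computed analogously over detected keypoint locations rather than descriptor matches.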

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/95e85640d35b/41598_2023_46643_Fig20_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/852f1aea3c72/41598_2023_46643_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/dd5f2eb3484c/41598_2023_46643_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/3c5dc8afa88e/41598_2023_46643_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/faadf9a20743/41598_2023_46643_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/88b0a300cb45/41598_2023_46643_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/7cc9eeea5735/41598_2023_46643_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/61c5fb28d14a/41598_2023_46643_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/1e66266be9c7/41598_2023_46643_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/ae0e93afad88/41598_2023_46643_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/536a41985a25/41598_2023_46643_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/9f7c5c4dbfb8/41598_2023_46643_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/59a2fd6f6de5/41598_2023_46643_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/d5feddef375c/41598_2023_46643_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/e8d4c090b684/41598_2023_46643_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/ca19ddf84a43/41598_2023_46643_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/ee4f7c411583/41598_2023_46643_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/7f1aaa186687/41598_2023_46643_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/2ee8e332c155/41598_2023_46643_Figb_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/9626f1be8498/41598_2023_46643_Fig17_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/6a56e85cdd2c/41598_2023_46643_Fig18_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b240/10662146/6af113229888/41598_2023_46643_Fig19_HTML.jpg

Similar articles

1
Directional intensified feature description using tertiary filtering for augmented reality tracking.
Sci Rep. 2023 Nov 20;13(1):20311. doi: 10.1038/s41598-023-46643-6.
2
Convex-based lightweight feature descriptor for Augmented Reality Tracking.
PLoS One. 2024 Jul 18;19(7):e0305199. doi: 10.1371/journal.pone.0305199. eCollection 2024.
3
Augmented reality registration algorithm based on T-AKAZE features.
Appl Opt. 2021 Dec 10;60(35):10901-10913. doi: 10.1364/AO.440738.
4
Study on Virtual Experience Marketing Model Based on Augmented Reality: Museum Marketing (Example).
Comput Intell Neurosci. 2022 May 19;2022:2485460. doi: 10.1155/2022/2485460. eCollection 2022.
5
Distinctive accuracy measurement of binary descriptors in mobile augmented reality.
PLoS One. 2019 Jan 3;14(1):e0207191. doi: 10.1371/journal.pone.0207191. eCollection 2019.
6
Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.
Sensors (Basel). 2017 May 5;17(5):1037. doi: 10.3390/s17051037.
7
Multiple object localization and tracking based on boosted efficient binary local image descriptor.
MethodsX. 2023 Sep 4;11:102354. doi: 10.1016/j.mex.2023.102354. eCollection 2023 Dec.
8
Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.
IEEE Trans Vis Comput Graph. 2014 Jun;20(6):852-65. doi: 10.1109/TVCG.2013.260.
9
Compressive Binary Patterns: Designing a Robust Binary Face Descriptor with Random-Field Eigenfilters.
IEEE Trans Pattern Anal Mach Intell. 2019 Mar;41(3):758-767. doi: 10.1109/TPAMI.2018.2800008. Epub 2018 Jan 31.
10
Improved Camshift object tracking algorithm in occluded scenes based on AKAZE and Kalman.
Multimed Tools Appl. 2022;81(2):2145-2159. doi: 10.1007/s11042-021-11673-7. Epub 2021 Oct 20.

Cited by

1
Convex-based lightweight feature descriptor for Augmented Reality Tracking.
PLoS One. 2024 Jul 18;19(7):e0305199. doi: 10.1371/journal.pone.0305199. eCollection 2024.

References

1
SAR image matching based on rotation-invariant description.
Sci Rep. 2023 Sep 4;13(1):14510. doi: 10.1038/s41598-023-41592-6.
2
Accurate 3D hand mesh recovery from a single RGB image.
Sci Rep. 2022 Jun 30;12(1):11043. doi: 10.1038/s41598-022-14380-x.
3
Fast and Robust Exudate Detection in Retinal Fundus Images Using Extreme Learning Machine Autoencoders and Modified KAZE Features.
J Digit Imaging. 2022 Jun;35(3):496-513. doi: 10.1007/s10278-022-00587-x. Epub 2022 Feb 9.