Advanced object detection for smart accessibility: a Yolov10 with marine predator algorithm to aid visually challenged people.

Author information

Adam Mahir Mohammed Sharif, AlEisa Hussah Nasser, Zanin Samah Al, Marzouk Radwa

Affiliations

Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia.

Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Publication information

Sci Rep. 2025 Jul 1;15(1):20759. doi: 10.1038/s41598-025-04959-5.


DOI: 10.1038/s41598-025-04959-5
PMID: 40595928
Abstract

A significant challenge for many visually impaired people is that they cannot be entirely independent and are restricted by their vision. They face problems with everyday actions, and object detection should be an essential feature they can rely on on a regular basis. Object detection is applied to discover real-world objects in an image of a scene, such as chairs, bicycles, tables, or doors, which commonly appear in a blind person's surroundings at particular locations. Computer vision (CV) involves the automated extraction, understanding, and analysis of valuable information from a sequence of images or a single image. Machine learning (ML) and deep learning (DL) are significant and robust learning architectures that are broadly established, especially for CV applications. This study proposes a novel Advanced Object Detection for Smart Accessibility using the Marine Predator Algorithm to aid visually challenged people (AODSA-MPAVCP) model. The main intention of the AODSA-MPAVCP model is to enhance object detection techniques using advanced models for disabled people. Initially, the image pre-processing stage applies adaptive bilateral filtering (ABF) to eliminate unwanted noise from the input image data. The proposed AODSA-MPAVCP model then utilizes the YOLOv10 model for object detection. Next, the feature extraction process employs the VGG19 method to transform raw data into meaningful and informative features. The deep belief network (DBN) technique is used for the classification process. Finally, a marine predator algorithm (MPA)-based hyperparameter selection process is performed to optimize the classification results of the DBN technique. The experimental evaluation of the AODSA-MPAVCP approach is conducted on the Indoor Object Detection dataset. The performance validation of the AODSA-MPAVCP approach showed a superior accuracy of 99.63% over existing models.
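
The abstract describes a five-stage pipeline: ABF denoising, YOLOv10 detection, VGG19 feature extraction, DBN classification, and MPA-based hyperparameter selection. The sketch below is only an illustrative reconstruction of how such a pipeline could be wired together; the library choices (OpenCV, Ultralytics, torchvision, scikit-learn), the checkpoint name yolov10n.pt, the plain bilateral filter standing in for ABF, the MLPClassifier standing in for the DBN, and the random search standing in for the marine predator algorithm are all assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch of the pipeline stages named in the abstract.
# All library and model choices below are assumptions, not the authors' implementation.
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from ultralytics import YOLO
from sklearn.neural_network import MLPClassifier          # stand-in for the paper's DBN
from sklearn.model_selection import train_test_split

detector = YOLO("yolov10n.pt")  # assumed YOLOv10 checkpoint name
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
prep = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def denoise(img_bgr):
    # Stage 1: plain bilateral filter as a stand-in for adaptive bilateral filtering (ABF).
    return cv2.bilateralFilter(img_bgr, d=9, sigmaColor=75, sigmaSpace=75)

def detect_crops(img_bgr):
    # Stage 2: YOLO detection; return the cropped object regions for downstream features.
    boxes = detector(img_bgr, verbose=False)[0].boxes.xyxy.cpu().numpy().astype(int)
    return [img_bgr[y1:y2, x1:x2] for x1, y1, x2, y2 in boxes]

def vgg19_features(crop_bgr):
    # Stage 3: flattened VGG19 convolutional features for one detected object.
    rgb = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        return vgg(prep(rgb).unsqueeze(0)).flatten().numpy()

def tune_classifier(X, y, n_trials=20, seed=0):
    # Stages 4-5: random search stands in for the MPA-based hyperparameter selection;
    # sample candidate settings and keep the classifier with the best validation accuracy.
    rng = np.random.default_rng(seed)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=seed)
    best, best_acc = None, -1.0
    for _ in range(n_trials):
        params = {
            "hidden_layer_sizes": (int(rng.integers(64, 513)),),
            "learning_rate_init": float(10 ** rng.uniform(-4, -2)),
        }
        clf = MLPClassifier(max_iter=300, **params).fit(X_tr, y_tr)
        acc = clf.score(X_va, y_va)
        if acc > best_acc:
            best, best_acc = clf, acc
    return best, best_acc
```

In the paper, the MPA drives the hyperparameter search and a DBN performs the final classification; the random search and MLP above only mark where those components would plug in.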

Similar articles

[1] Advanced object detection for smart accessibility: a Yolov10 with marine predator algorithm to aid visually challenged people. Sci Rep. 2025 Jul 1
[2] Gesture recognition for hearing impaired people using an ensemble of deep learning models with improving beluga whale optimization-based hyperparameter tuning. Sci Rep. 2025 Jul 1
[3] An advanced fire detection system for assisting visually challenged people using recurrent neural network and sea-horse optimizer algorithm. Sci Rep. 2025 Jul 1
[4] A deep dive into artificial intelligence with enhanced optimization-based security breach detection in internet of health things enabled smart city environment. Sci Rep. 2025 Jul 2
[5] VIIDA and InViDe: computational approaches for generating and evaluating inclusive image paragraphs for the visually impaired. Disabil Rehabil Assist Technol. 2025 Jul
[6] An efficient privacy-preserving multilevel fusion-based feature engineering framework for UAV-enabled land cover classification using remote sensing images. Sci Rep. 2025 Jul 3
[7] SODU2-NET: a novel deep learning-based approach for salient object detection utilizing U-NET. PeerJ Comput Sci. 2025 May 19
[8] Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19. Cochrane Database Syst Rev. 2022 May 20
[9] Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices. Front Oncol. 2025 Jun 18
[10] A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases. Br J Dermatol. 2024 Jul 16

