


On the Illumination Influence for Object Learning on Robot Companions.

Authors

Keller Ingo, Lohan Katrin S

Affiliations

Department of Mathematical and Computer Science, Heriot-Watt University, Edinburgh, United Kingdom.

EMS Institute for Development of Mechatronic Systems, NTB University of Applied Sciences in Technology, Buchs, Switzerland.

Publication

Front Robot AI. 2020 Jan 21;6:154. doi: 10.3389/frobt.2019.00154. eCollection 2019.

DOI: 10.3389/frobt.2019.00154
PMID: 33501169
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805833/
Abstract

Most collaborative tasks require interaction with everyday objects (e.g., utensils while cooking). Thus, robots must perceive everyday objects in an effective and efficient way. This highlights the necessity of understanding environmental factors and their impact on visual perception, such as illumination changes throughout the day on robotic systems in the real world. In object recognition, two of these factors are changes due to illumination of the scene and differences in the sensors capturing it. In this paper, we will present data augmentations for object recognition that enhance a deep learning architecture. We will show how simple linear and non-linear illumination models and feature concatenation can be used to improve deep learning-based approaches. The aim of this work is to allow for more realistic Human-Robot Interaction scenarios with a small amount of training data in combination with incremental interactive object learning. This will benefit the interaction with the robot to maximize object learning for long-term and location-independent learning in unshaped environments. With our model-based analysis, we showed that changes in illumination affect recognition approaches that use Deep Convolutional Neural Network to encode features for object recognition. Using data augmentation, we were able to show that such a system can be modified toward a more robust recognition without retraining the network. Additionally, we have shown that using simple brightness change models can help to improve the recognition across all training set sizes.
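The augmentation idea described in the abstract — applying simple linear and non-linear brightness models to training images so the recognizer sees illumination variants without retraining the feature network — can be sketched as follows. The function names, gain/gamma values, and the use of a gamma curve as the non-linear model are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def linear_brightness(image, gain=1.2, bias=10.0):
    """Linear illumination model: I' = gain * I + bias, clipped to [0, 255]."""
    out = gain * image.astype(np.float64) + bias
    return np.clip(out, 0, 255).astype(np.uint8)

def gamma_brightness(image, gamma=0.8):
    """Non-linear illumination model (gamma curve): I' = 255 * (I / 255)**gamma."""
    out = 255.0 * (image.astype(np.float64) / 255.0) ** gamma
    return np.clip(out, 0, 255).astype(np.uint8)

def augment(images, gains=(0.8, 1.0, 1.2), gammas=(0.7, 1.0, 1.4)):
    """Expand a small training set with brightness-shifted copies of each image.

    Each input image yields len(gains) + len(gammas) augmented variants,
    which can then be fed through a fixed (pre-trained) CNN feature encoder.
    """
    out = []
    for img in images:
        for g in gains:
            out.append(linear_brightness(img, gain=g, bias=0.0))
        for gm in gammas:
            out.append(gamma_brightness(img, gamma=gm))
    return out
```

Applied offline, this multiplies a small interactively collected training set before feature extraction, which matches the paper's goal of improving robustness without retraining the network itself.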


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/5982aae02db4/frobt-06-00154-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/18552f1aba0f/frobt-06-00154-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/3011899a8c56/frobt-06-00154-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/bbe2d9227d0b/frobt-06-00154-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/39f396f1a1d2/frobt-06-00154-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/0367e9368912/frobt-06-00154-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/8942a7984700/frobt-06-00154-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/67e210e3fef1/frobt-06-00154-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4797/7805833/5b65f73dabf2/frobt-06-00154-g0009.jpg

Similar articles

1. On the Illumination Influence for Object Learning on Robot Companions.
Front Robot AI. 2020 Jan 21;6:154. doi: 10.3389/frobt.2019.00154. eCollection 2019.
2. Unknown Object Detection Using a One-Class Support Vector Machine for a Cloud-Robot System.
Sensors (Basel). 2022 Feb 10;22(4):1352. doi: 10.3390/s22041352.
3. A Robot Object Recognition Method Based on Scene Text Reading in Home Environments.
Sensors (Basel). 2021 Mar 9;21(5):1919. doi: 10.3390/s21051919.
4. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots.
Sensors (Basel). 2017 Feb 11;17(2):353. doi: 10.3390/s17020353.
5. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human-Robot Collaboration.
Front Robot AI. 2018 Apr 4;5:25. doi: 10.3389/frobt.2018.00025. eCollection 2018.
6. Robust tactile object recognition in open-set scenarios using Gaussian prototype learning.
Front Neurosci. 2022 Dec 28;16:1070645. doi: 10.3389/fnins.2022.1070645. eCollection 2022.
7. 3D Recognition Based on Sensor Modalities for Robotic Systems: A Survey.
Sensors (Basel). 2021 Oct 27;21(21):7120. doi: 10.3390/s21217120.
8. Action Generation Adapted to Low-Level and High-Level Robot-Object Interaction States.
Front Neurorobot. 2019 Jul 24;13:56. doi: 10.3389/fnbot.2019.00056. eCollection 2019.
9. Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning.
Sensors (Basel). 2020 Aug 27;20(17):4836. doi: 10.3390/s20174836.
10. Learning efficient haptic shape exploration with a rigid tactile sensor array.
PLoS One. 2020 Jan 2;15(1):e0226880. doi: 10.1371/journal.pone.0226880. eCollection 2020.

Cited by

1. CCMT: Dataset for crop pest and disease detection.
Data Brief. 2023 Jun 12;49:109306. doi: 10.1016/j.dib.2023.109306. eCollection 2023 Aug.
2. GC3558: An open-source annotated dataset of Ghana currency images for classification modeling.
Data Brief. 2022 Sep 17;45:108616. doi: 10.1016/j.dib.2022.108616. eCollection 2022 Dec.
