
A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments.

Affiliations

State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China.

School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China.

Publication information

Nat Commun. 2022 Jan 10;13(1):79. doi: 10.1038/s41467-021-27672-z.

Abstract

Object recognition is among the basic survival skills of human beings and other animals. To date, artificial intelligence (AI) assisted high-performance object recognition has been primarily visual-based, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nosed mole, that permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm, essentially mimicking the biological fusion procedures in the neural system of the star-nosed mole. Aiming to achieve human identification during rescue missions in challenging environments such as dark or buried scenarios, our tactile-olfactory intelligent sensing system could classify 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The tactile-olfactory bionic sensing system required no visual input and showed superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short.
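The abstract describes fusing tactile features (topography, stiffness) with olfactory features (gas-sensor responses) before classification. As a minimal sketch of that idea (not the paper's actual algorithm: the feature dimensions, synthetic data, and nearest-centroid rule below are all illustrative assumptions), an early-fusion classifier can simply concatenate the two modalities' feature vectors per object:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two modalities: each sampled "object"
# yields a tactile feature vector (topography + stiffness readings) and
# an olfactory feature vector (gas-sensor responses).
N_CLASSES, N_PER_CLASS = 3, 30
TACTILE_DIM, OLFACTORY_DIM = 8, 4

def make_modality(dim):
    """Generate class-separable synthetic features: one cluster per class."""
    centers = rng.normal(0.0, 5.0, size=(N_CLASSES, dim))
    X = np.vstack([centers[c] + rng.normal(0.0, 1.0, size=(N_PER_CLASS, dim))
                   for c in range(N_CLASSES)])
    y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)
    return X, y

X_tactile, y = make_modality(TACTILE_DIM)
X_olfactory, _ = make_modality(OLFACTORY_DIM)   # class order matches y

# Early fusion: concatenate per-object tactile and olfactory features,
# then classify with a nearest-centroid rule.
X_fused = np.hstack([X_tactile, X_olfactory])
centroids = np.stack([X_fused[y == c].mean(axis=0) for c in range(N_CLASSES)])

def predict(x):
    """Assign the class whose fused-feature centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([predict(x) for x in X_fused])
accuracy = float((preds == y).mean())
print(f"fusion accuracy on synthetic data: {accuracy:.2f}")
```

The reported system instead uses a bioinspired olfactory-tactile associated machine-learning algorithm; the sketch only illustrates why combining complementary modalities can separate classes that either modality alone might confuse.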


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b5c0/8748716/af827864e6c3/41467_2021_27672_Fig1_HTML.jpg
