
Hybrid FPGA-CPU-Based Architecture for Object Recognition in Visual Servoing of Arm Prosthesis.

Author Information

Fejér Attila, Nagy Zoltán, Benois-Pineau Jenny, Szolgay Péter, de Rugy Aymar, Domenger Jean-Philippe

Affiliations

Laboratoire Bordelais de Recherche en Informatique, University of Bordeaux, CEDEX, 33405 Talence, France.

Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary.

Publication Information

J Imaging. 2022 Feb 12;8(2):44. doi: 10.3390/jimaging8020044.

Abstract

The present paper proposes an implementation of a hybrid hardware-software system for the visual servoing of prosthetic arms. We focus on the most critical part of the system: vision analysis. The prosthetic system comprises a glasses-worn eye tracker and a video camera, and the task is to recognize the object to grasp. The lightweight architecture for gaze-driven object recognition has to be implemented as a wearable device with low power consumption (less than 5.6 W). The algorithmic chain comprises gaze-fixation estimation and filtering, generation of candidates, and recognition with two backbone convolutional neural networks (CNNs). The time-consuming parts of the system, such as the SIFT (Scale-Invariant Feature Transform) detector and the backbone CNN feature extractor, are implemented on an FPGA, and a new reduction layer is introduced in the object-recognition CNN to reduce the computational burden. The proposed implementation is compatible with the real-time control of the prosthetic arm.
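The abstract does not specify how the reduction layer works; a common way to cut a CNN's computational load is a 1×1 convolution that projects a feature map's channels down to a smaller number before the recognition head. The sketch below (all names, shapes, and weights are illustrative assumptions, not the paper's implementation) shows this channel-reduction idea in NumPy:

```python
import numpy as np

def reduction_layer(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Project C input channels down to K output channels with a 1x1 convolution.

    features: feature map of shape (C, H, W)
    weights:  projection matrix of shape (K, C), with K < C
    returns:  reduced feature map of shape (K, H, W)
    """
    c, h, w = features.shape
    k = weights.shape[0]
    # A 1x1 convolution is a per-pixel linear map over channels,
    # i.e. a matrix multiply after flattening the spatial dimensions.
    return (weights @ features.reshape(c, h * w)).reshape(k, h, w)

# Toy example: reduce a 64-channel backbone output to 8 channels.
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 14, 14))
w = rng.standard_normal((8, 64)) / 64.0
reduced = reduction_layer(feat, w)
print(reduced.shape)  # (8, 14, 14)
```

Reducing 64 channels to 8 here shrinks the downstream layer's input (and hence its multiply-accumulate count) by 8×, which is the kind of saving that matters under a sub-5.6 W wearable power budget.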
