

Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

Affiliation

Department of Electrical Engineering and Center for Optics and Photonics, Universidad de Concepción, Concepción, Chile.

Publication

Neural Netw. 2014 Jul;55:72-82. doi: 10.1016/j.neunet.2014.03.011. Epub 2014 Apr 2.

Abstract

Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology for compact, low-power implementations of these computationally intensive tasks in portable embedded devices. However, device mismatch limits the resolution of circuits fabricated with this technology. Traditional layout techniques to reduce mismatch aim to increase resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply targeted mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and the Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of the analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models of the circuits needed to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers on standard databases of faces and handwritten digits. Formal analysis and experiments show how adaptive structures and properties of the input space can be exploited to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results extend directly to other application domains that use linear subspace methods.
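The paper's contribution is circuit-level modeling of these learning rules; as a plain-software reference for readers unfamiliar with them, the sketch below shows one update step of each of the two on-chip learning rules named in the abstract. This is a minimal textbook formulation, not the paper's analog model; the function names `lms_step` and `gha_step` are my own labels.

```python
import numpy as np

def lms_step(w, x, d, mu=0.05):
    """One Least Mean Square (LMS) update of an adaptive linear combiner:
    error e = d - w.x, then w <- w + mu * e * x."""
    e = d - w @ x
    return w + mu * e * x

def gha_step(W, x, eta=0.01):
    """One Generalized Hebbian Algorithm (Sanger's rule) update:
    y = W x;  W <- W + eta * (y x^T - lower_tri(y y^T) W).
    Rows of W converge to the leading principal components of the input,
    which is what makes GHA usable for linear-subspace feature extraction."""
    y = W @ x
    return W + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
```

Run in a loop over input samples, these updates converge without any explicit matrix decomposition, which is why both rules map naturally onto compact on-chip learning hardware.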
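Similarly, the two classifiers studied in the paper can be sketched in a few lines, together with a crude stand-in for device mismatch as a random per-element gain error. This is only an illustration of the kind of perturbation the paper analyzes, not its actual mismatch model; all function names here are hypothetical.

```python
import numpy as np

def nn_classify(x, prototypes, labels):
    """Nearest Neighbor: return the label of the prototype at minimum
    Euclidean distance from the input feature vector x."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[np.argmin(d)]

def rbf_classify(x, centers, labels, sigma=1.0):
    """RBF network with one Gaussian unit per center; the label of the
    strongest activation wins."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return labels[np.argmax(np.exp(-d2 / (2.0 * sigma ** 2)))]

def mismatched(v, rng, std=0.05):
    """Toy emulation of device mismatch: a per-element multiplicative
    gain error with relative standard deviation `std`."""
    return v * (1.0 + std * rng.standard_normal(np.shape(v)))
```

For well-separated classes, perturbing the inputs (or weights) with `mismatched` leaves the decision unchanged, which is the intuition behind compensating mismatch at the application level rather than at the transistor level.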

