Wheeldon Adrian, Serb Alexander
Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Edinburgh, United Kingdom.
Front Neuroinform. 2023 Feb 16;17:1074653. doi: 10.3389/fninf.2023.1074653. eCollection 2023.
Latent representations are a necessary component of cognitive artificial intelligence (AI) systems. Here, we investigate the performance of various sequential clustering algorithms on latent representations generated by autoencoder and convolutional neural network (CNN) models. We also introduce a new algorithm, called Collage, which brings views and concepts into sequential clustering to bridge the gap with cognitive AI. The algorithm is designed to reduce memory requirements and the number of operations (which translates into hardware clock cycles), thereby improving the energy, speed and area performance of an accelerator running it. Results show that plain autoencoders produce latent representations with large inter-cluster overlaps. CNNs are shown to solve this problem; however, they introduce problems of their own in the context of generalized cognitive pipelines.
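As an illustrative sketch (not the paper's Collage algorithm), sequential clustering processes latent vectors one at a time with bounded memory: each incoming vector is assigned to the nearest existing centroid if it falls within a distance threshold, otherwise it seeds a new cluster. The function name `sequential_cluster` and the `threshold` parameter below are assumptions for illustration only.

```python
import math

def sequential_cluster(points, threshold):
    """Leader-style sequential clustering (illustrative sketch).

    Assigns each incoming latent vector to the nearest existing
    centroid if within `threshold`, otherwise starts a new cluster.
    Centroids are maintained as running means, so only one pass over
    the data and O(#clusters) memory are required.
    """
    centroids = []  # list of (mean_vector, count) pairs
    labels = []
    for p in points:
        best, best_d = None, None
        for i, (c, _) in enumerate(centroids):
            d = math.dist(p, c)
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= threshold:
            # Update the running mean of the matched cluster.
            c, n = centroids[best]
            new_c = [ci + (pi - ci) / (n + 1) for ci, pi in zip(c, p)]
            centroids[best] = (new_c, n + 1)
            labels.append(best)
        else:
            # Seed a new cluster with this vector.
            centroids.append((list(p), 1))
            labels.append(len(centroids) - 1)
    return labels, [c for c, _ in centroids]

# Toy "latent vectors": two well-separated groups.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9), (0.05, 0.1)]
labels, cents = sequential_cluster(pts, threshold=1.0)
# Vectors near the origin share one label; those near (5, 5) share another.
```

Because clusters are updated incrementally rather than recomputed over the full dataset, this style of algorithm maps naturally onto low-memory hardware accelerators of the kind the abstract targets.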