Khan Fatima Hameed, Pasha Muhammad Adeel, Masud Shahid
Department of Electrical Engineering, Lahore University of Management Sciences (LUMS), Lahore, Punjab 54792, Pakistan.
Micromachines (Basel). 2021 Jun 6;12(6):665. doi: 10.3390/mi12060665.
Artificial intelligence (AI) has successfully made its way into contemporary industrial sectors such as automobiles, defense, industrial automation (Industry 4.0), healthcare technologies, agriculture, and many other domains because of its ability to act autonomously without continuous human intervention. However, this capability requires processing huge amounts of learning data to extract useful information in real time. The buzz around AI is not new; the term has been widely known for the past half century. In the 1960s, scientists began to think about machines acting more like humans, which led to the development of the first natural language processing computers. This laid the foundation for AI, but only a handful of applications emerged before the 1990s because of limitations in the processing speed, memory, and computational power then available. Since the 1990s, advancements in computer architecture and memory organization have enabled microprocessors to deliver much higher performance. Simultaneously, improvements in the understanding and mathematical representation of AI gave birth to its subset, referred to as machine learning (ML). ML encompasses a range of algorithms for independent learning, the most promising of which are brain-inspired techniques classified as artificial neural networks (ANNs). ANNs have since evolved into deeper and larger structures, commonly characterized as deep neural networks (DNNs) and convolutional neural networks (CNNs). In tandem with the emergence of multicore processors, ML techniques began to be embedded in a wide range of scenarios and applications. More recently, support for application-specific instruction-set extensions targeting AI applications has also been added to different microprocessors. Thus, continuous improvement in microprocessor capabilities has reached a stage where complex real-time intelligent applications such as computer vision, object identification, speech recognition, data security, and spectrum sensing can now be implemented. This paper presents an overview of the evolution of AI and of how the increasing capabilities of microprocessors have fueled the adoption of AI across a plethora of application domains. The paper also discusses upcoming trends in microprocessor architectures and how they will further propel the assimilation of AI into our daily lives.