Hirsch Matías, Mateos Cristian, Majchrzak Tim A
ISISTAN (UNICEN-CONICET), Tandil 7000, Buenos Aires, Argentina.
Faculty of Computer Science, Ruhr University, 44801 Bochum, Germany.
Sensors (Basel). 2025 May 2;25(9):2875. doi: 10.3390/s25092875.
The increasing availability of lightweight pre-trained models and AI execution frameworks is making edge AI ubiquitous. In particular, deep learning (DL) models are being used in computer vision (CV) to perform object recognition and image classification tasks in various application domains requiring prompt inferences. Among edge AI task execution platforms, some approaches depend heavily on cloud resources to complement the computing power offered by local nodes. Other approaches distribute workload horizontally, i.e., by harnessing the power of nearby edge nodes. Many of these efforts experiment with real settings comprising SBC (Single-Board Computer)-like edge nodes only, but few consider nomadic hardware such as smartphones. Given the huge popularity of smartphones worldwide and the countless scenarios where smartphone clusters could be exploited to provide computing power, this paper sheds light on the following question: Is smartphone-based edge AI a competitive approach for real-time CV inferences? To answer this empirically, we use three pre-trained DL models and eight heterogeneous edge nodes, including five low/mid-end smartphones and three SBCs, and compare the performance achieved using workloads from three image stream processing scenarios. Experiments were run with the help of a toolset designed for reproducing battery-driven edge computing tests. We compared the latency and energy efficiency achieved by using either several smartphone cluster testbeds or SBCs only. Additionally, for battery-driven settings, we include metrics to measure how workload execution impacts smartphone battery levels. Based on the computing capability shown in our experiments, we conclude that edge AI based on smartphone clusters can provide valuable resources that contribute to the expansion of edge AI in application scenarios requiring real-time performance.