Hwang Igeon, Oh Taeyeon
Seoul AI School, aSSIST University, 46 Ewhayeodae-gil, Seodaemun-gu, Seoul, 03767, Republic of Korea.
Sci Rep. 2025 Apr 21;15(1):13724. doi: 10.1038/s41598-025-98545-4.
This study develops a neural style transfer (NST) model optimized for real-time execution on mobile devices through on-device AI, eliminating reliance on cloud servers. By embedding AI models directly into mobile hardware, this approach reduces operational costs and enhances user privacy. However, designing deep learning models for mobile deployment presents a trade-off between computational efficiency and visual quality, as reducing model size often degrades performance. To address this challenge, we propose a set of lightweight NST models incorporating depthwise separable convolutions, residual bottlenecks, and optimized upsampling techniques inspired by the MobileNet and ResNet architectures. Five model variants are designed and evaluated on parameter count, floating-point operations, memory usage, and image transformation quality. Experimental results demonstrate that our optimized models achieve a balance between efficiency and performance, enabling high-quality real-time style transfer in resource-constrained mobile environments. These findings highlight the feasibility of deploying NST applications on mobile devices, paving the way for advancements in real-time artistic image processing in mobile photography, augmented reality, and creative applications.
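The abstract credits depthwise separable convolutions, as popularized by MobileNet, for much of the parameter reduction. A minimal sketch of the arithmetic behind that saving (the kernel and channel sizes below are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    # Depthwise separable convolution: a per-channel k x k depthwise filter,
    # followed by a 1 x 1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Example layer: 3 x 3 kernel, 64 input channels, 128 output channels.
std = conv_params(3, 64, 128)     # 73,728 parameters
ds = ds_conv_params(3, 64, 128)   # 8,768 parameters
print(f"standard: {std}, separable: {ds}, reduction: {std / ds:.1f}x")
```

For this illustrative layer the separable variant uses roughly 8x fewer parameters (and proportionally fewer floating-point operations), which is the efficiency lever the proposed lightweight NST models exploit.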