İşgüder Egemen, Durmaz İncel Özlem
Faculty of EEMCS, Pervasive Systems Research Group, University of Twente, Enschede, The Netherlands.
J Comput Biol. 2025 Jun;32(6):558-572. doi: 10.1089/cmb.2024.0631. Epub 2025 Apr 23.
Wearable and mobile devices equipped with motion sensors offer important insights into user behavior. Machine learning and, more recently, deep learning techniques have been applied to analyze sensor data. Typically, the focus is on a single task, such as human activity recognition (HAR), and the data is processed centrally on a server or in the cloud. However, the same sensor data can be leveraged for multiple tasks, and distributed machine learning methods can be employed without the need for transmitting data to a central location. In this study, we introduce the FedOpenHAR framework, which explores federated transfer learning in a multitask setting for both sensor-based HAR and device position identification tasks. This approach utilizes transfer learning by training task-specific and personalized layers in a federated manner. The OpenHAR framework, which includes ten smaller datasets, is used for training the models. The main challenge is developing robust models that are applicable to both tasks across different datasets, which may contain only a subset of label types. Multiple experiments are conducted in the Flower federated learning environment using the DeepConvLSTM architecture. Results are presented for both federated and centralized training under various parameters and constraints. By employing transfer learning and training task-specific and personalized federated models, we achieve a higher accuracy (72.4%) compared to a fully centralized training approach (64.5%), and similar accuracy to a scenario where each client performs individual training in isolation (72.6%). However, the advantage of FedOpenHAR over individual training is that, when a new client joins with a new label type (representing a new task), it can begin training from the already existing common layer. Furthermore, if a new client wants to classify a new class in one of the existing tasks, FedOpenHAR allows training to begin directly from the task-specific layers.
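The core mechanism described above — averaging only the shared ("common") layers across clients while each client keeps its own task-specific and personalized layers — can be illustrated with a minimal sketch. This is not the authors' implementation (which uses the Flower framework and a DeepConvLSTM model); the weight dictionaries, key names, and toy values below are purely hypothetical, chosen to show how a FedAvg-style round would touch only the common layers:

```python
import numpy as np

def fedavg_common(client_weights, shared_keys=("common",)):
    """One FedAvg-style aggregation round over the shared layers only.

    client_weights: list of dicts mapping layer names to weight arrays.
    Only keys in shared_keys are averaged across clients; any other
    entries (task-specific or personalized heads) remain local.
    """
    n = len(client_weights)
    avg = {k: sum(w[k] for w in client_weights) / n for k in shared_keys}
    # Broadcast the averaged common layers back; local heads are untouched.
    return [{**w, **avg} for w in client_weights]

# Hypothetical toy round: one client holds an activity-recognition head,
# the other a device-position head, yet both share the common layer.
clients = [
    {"common": np.array([1.0, 3.0]), "head_har": np.array([0.5])},
    {"common": np.array([3.0, 1.0]), "head_pos": np.array([0.9])},
]
updated = fedavg_common(clients)
```

After the round, both clients hold the averaged common layer `[2.0, 2.0]`, while `head_har` and `head_pos` are unchanged — which is also why a newly joining client with a new label type can start from the existing common layer rather than from scratch.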