Petar Radanliev
Department of Computer Science, University of Oxford, Oxford, United Kingdom.
Alan Turing Institute, London, United Kingdom.
Front Digit Health. 2025 Jun 17;7:1431246. doi: 10.3389/fdgth.2025.1431246. eCollection 2025.
The integration of artificial intelligence (AI) and machine learning (ML) into wearable sensor technologies has substantially advanced health data science, enabling continuous monitoring, personalised interventions, and predictive analytics. However, the rapid advancement of these technologies has raised critical ethical and regulatory concerns, particularly around data privacy, algorithmic bias, informed consent, and the opacity of automated decision-making. This study undertakes a systematic examination of these challenges, highlighting the risks posed by unregulated data aggregation, biased model training, and inadequate transparency in AI-powered health applications. Through an analysis of current privacy frameworks and an empirical assessment of publicly available datasets, the study identifies significant disparities in model performance across demographic groups and exposes vulnerabilities in both technical design and ethical governance. To address these issues, this article introduces a data-driven methodological framework that embeds transparency, accountability, and regulatory alignment across all stages of AI development. The framework operationalises ethical principles through concrete mechanisms, including explainable AI, bias mitigation techniques, and consent-aware data processing pipelines, while aligning with legal standards such as the GDPR, the UK Data Protection Act, and the EU AI Act. By incorporating transparency as a structural and procedural requirement, the framework presented in this article offers a replicable model for the responsible development of AI systems in wearable healthcare. In doing so, the study advocates for a regulatory paradigm that balances technological innovation with the protection of individual rights, fostering fair, secure, and trustworthy AI-driven health monitoring.
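The abstract reports disparities in model performance across demographic groups found through empirical assessment of public datasets. The sketch below illustrates one plausible form such a per-group audit could take; it is not the paper's actual code, and all names (the dataframe, feature columns, label, and grouping column) are hypothetical placeholders.

```python
# Minimal sketch of a per-group performance audit for a wearable-health
# classifier. All identifiers (df, features, "event", "age_band") are
# hypothetical; the study's actual datasets and models are not reproduced.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

def audit_by_group(df: pd.DataFrame, features: list[str],
                   label: str, group: str) -> pd.DataFrame:
    """Train once, then report accuracy and recall per demographic group."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[features + [group]], df[label], test_size=0.3, random_state=0)
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train[features], y_train)

    rows = []
    for g, idx in X_test.groupby(group).groups.items():
        preds = model.predict(X_test.loc[idx, features])
        rows.append({"group": g,
                     "n": len(idx),
                     "accuracy": accuracy_score(y_test.loc[idx], preds),
                     "recall": recall_score(y_test.loc[idx], preds,
                                            zero_division=0)})
    return pd.DataFrame(rows)

# Example (hypothetical columns):
#   audit_by_group(df, ["hr_mean", "steps"], "event", "age_band")
# Large gaps in recall between groups surface the kind of disparity the
# study reports, before any bias-mitigation step is applied.
```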