A critique of current approaches to privacy in machine learning.

Author Information

Florian van Daalen, Marine Jacquemin, Johan van Soest, Nina Stahl, David Townend, Andre Dekker, Inigo Bermejo

Affiliations

Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht University Medical Centre, Maastricht, Netherlands.

Department of Health Promotion, Care and Public Health Research Institute (CAPHRI), Maastricht University, Maastricht, Netherlands.

Publication Information

Ethics Inf Technol. 2025;27(3):32. doi: 10.1007/s10676-025-09843-4. Epub 2025 Jun 20.

Abstract

Access to large datasets, the rise of the Internet of Things (IoT), and the ease of collecting personal data have led to significant breakthroughs in machine learning. However, they have also raised new concerns about privacy and data protection. Controversies such as the Facebook-Cambridge Analytica scandal highlight unethical practices in today's digital landscape. Historical privacy incidents have led to the development of technical and legal solutions to protect data subjects' right to privacy. Within machine learning, however, these problems have largely been approached from a mathematical point of view, ignoring the larger context in which privacy is relevant. This technical approach has benefited data controllers and failed to protect individuals adequately. Moreover, it has aligned with the interests of Big Tech organizations and allowed them to push the discussion further in a direction favorable to those interests. This paper reflects on current privacy approaches in machine learning, explores how various large organizations guide the public discourse, and examines how this harms data subjects. It also critiques current data protection regulations, which allow superficial compliance without addressing deeper ethical issues. Finally, it argues that redefining privacy to focus on harm to data subjects rather than on data breaches would benefit data subjects as well as society at large.
