Gao Runyang, Yu Danghui, Gao Biao, Hua Heng, Hui Zhaoyang, Gao Jingquan, Yin Cha
Faculty of Law and Justice, University of New South Wales, Sydney, NSW, Australia.
Teaching and Research Support Center, Naval Medical University, Shanghai, China.
Front Artif Intell. 2025 Apr 7;8:1546064. doi: 10.3389/frai.2025.1546064. eCollection 2025.
The widespread application of artificial intelligence in academic writing has triggered a series of pressing legal challenges.
This study systematically examines critical issues, including copyright protection and academic integrity, using comparative research methods. We establish a risk assessment matrix to quantitatively analyze the risks of AI-assisted academic writing along three dimensions: impact, probability, and mitigation cost, thereby identifying high-risk factors.
The findings reveal that AI-assisted writing challenges fundamental principles of traditional copyright law, with judicial practice tending to position AI as a creative tool while emphasizing human agency. Regarding academic integrity, new risks such as "credibility illusion" and "implicit plagiarism" have become prominent in AI-generated content, necessitating adaptive regulatory mechanisms. Research data protection and personal information security face dual data-security challenges that demand both technological and institutional innovation.
Based on these findings, we propose a three-dimensional regulatory framework of "transparency, accountability, and technical support" and present systematic policy recommendations covering institutional design, organizational structure, and international cooperation. The results deepen understanding of the legal attributes of AI-generated works, promote theoretical innovation in copyright law and academic ethics in the digital era, and provide practical guidance for academic institutions formulating AI usage policies.