

: enhancing the transferability of large language models for depression detection using free-text explanations.

Authors

Priyadarshana Y H P P, Liang Zilu, Piumarta Ian

Affiliation

Kyoto University of Advanced Science (KUAS), Kyoto, Japan.

Publication

Front Artif Intell. 2025 May 21;8:1564828. doi: 10.3389/frai.2025.1564828. eCollection 2025.

DOI:10.3389/frai.2025.1564828
PMID:40469073
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12133835/
Abstract

Few-shot prompting in large language models (LLMs) significantly improves performance across various tasks, including both in-domain and previously unseen natural language tasks, by learning from limited in-context examples. How these examples enhance transferability and contribute to achieving state-of-the-art (SOTA) performance in downstream tasks remains unclear. To address this, we propose , a novel LLM transferability framework designed to clarify the selection of the most relevant examples using synthetic free-text explanations. Our novel hybrid method ranks LLM-generated explanations by selecting the most semantically relevant examples closest to the input query while balancing diversity. The top-ranked explanations, along with few-shot examples, are then used to enhance LLMs' knowledge transfer in multi-party conversational modeling for previously unseen depression detection tasks. Evaluations using the IMHI corpus demonstrate that consistently produces high-quality free-text explanations. Extensive experiments on depression detection tasks, including depressed utterance classification (DUC) and depressed speaker identification (DSI), show that achieves SOTA performance. The evaluation results indicate significant improvements, with up to 20.59% in recall for DUC and 21.58% in F1 scores for DSI, using 5-shot examples with top-ranked explanations in the RSDD and eRisk 18 T2 corpora. These findings underscore 's potential as an effective screening tool for digital mental health applications.
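The abstract describes ranking LLM-generated explanations by semantic relevance to the input query while balancing diversity. The paper's exact hybrid method is not given here; a common way to trade relevance against redundancy is Maximal Marginal Relevance (MMR), and the sketch below illustrates that idea over embedding vectors. The function name `mmr_select`, the toy vectors, and the trade-off weight `lam` are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def mmr_select(query, candidates, k=5, lam=0.7):
    """Pick k candidate indices, MMR-style: each step chooses the candidate
    maximizing lam * relevance-to-query - (1 - lam) * similarity-to-already-selected.

    query: 1-D np.ndarray embedding of the input query.
    candidates: list of 1-D np.ndarray embeddings (e.g. of explanations).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    relevance = [cos(query, c) for c in candidates]
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            # Redundancy = similarity to the closest already-selected item.
            redundancy = max((cos(candidates[i], candidates[j]) for j in selected),
                             default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a low `lam` (diversity-heavy), a near-duplicate of an already-selected example loses to a less similar but still relevant one, which is the behavior the abstract attributes to its example selection.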


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/f7bde7410297/frai-08-1564828-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/69131f1aebf5/frai-08-1564828-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/b9b57b48e702/frai-08-1564828-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/a90d2edae71f/frai-08-1564828-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/a4de9dbb114d/frai-08-1564828-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/30af3d4468d3/frai-08-1564828-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/cbef8c6ec194/frai-08-1564828-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/1ccf312c5113/frai-08-1564828-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c9f/12133835/5cb03874fc67/frai-08-1564828-g009.jpg

Similar articles

1
: enhancing the transferability of large language models for depression detection using free-text explanations.
Front Artif Intell. 2025 May 21;8:1564828. doi: 10.3389/frai.2025.1564828. eCollection 2025.
2
Dynamic few-shot prompting for clinical note section classification using lightweight, open-source large language models.
J Am Med Inform Assoc. 2025 Jul 1;32(7):1164-1173. doi: 10.1093/jamia/ocaf084.
3
An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study.
JMIR Med Inform. 2024 Apr 8;12:e55318. doi: 10.2196/55318.
4
Evaluating large language models for health-related text classification tasks with public social media data.
J Am Med Inform Assoc. 2024 Oct 1;31(10):2181-2189. doi: 10.1093/jamia/ocae210.
5
Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data.
Proc ACM Interact Mob Wearable Ubiquitous Technol. 2024 Mar;8(1). doi: 10.1145/3643540. Epub 2024 Mar 6.
6
Enhancing semantical text understanding with fine-tuned large language models: A case study on Quora Question Pair duplicate identification.
PLoS One. 2025 Jan 10;20(1):e0317042. doi: 10.1371/journal.pone.0317042. eCollection 2025.
7
RT: a Retrieving and Chain-of-Thought framework for few-shot medical named entity recognition.
J Am Med Inform Assoc. 2024 Sep 1;31(9):1929-1938. doi: 10.1093/jamia/ocae095.
8
A comprehensive evaluation of large language models on benchmark biomedical text processing tasks.
Comput Biol Med. 2024 Mar;171:108189. doi: 10.1016/j.compbiomed.2024.108189. Epub 2024 Feb 20.
9
Generative Large Language Models in Electronic Health Records for Patient Care Since 2023: A Systematic Review.
medRxiv. 2024 Aug 19:2024.08.11.24311828. doi: 10.1101/2024.08.11.24311828.
10
Leveraging Medical Knowledge Graphs Into Large Language Models for Diagnosis Prediction: Design and Application Study.
JMIR AI. 2025 Feb 24;4:e58670. doi: 10.2196/58670.

References cited in this article

1
Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data.
Proc ACM Interact Mob Wearable Ubiquitous Technol. 2024 Mar;8(1). doi: 10.1145/3643540. Epub 2024 Mar 6.
2
Explainable depression symptom detection in social media.
Health Inf Sci Syst. 2024 Sep 6;12(1):47. doi: 10.1007/s13755-024-00303-9. eCollection 2024 Dec.
3
A lexicon-based approach to examine depression detection in social media: the case of Twitter and university community.
Humanit Soc Sci Commun. 2022;9(1):325. doi: 10.1057/s41599-022-01313-2. Epub 2022 Sep 21.
4
The DSM-5: Classification and criteria changes.
World Psychiatry. 2013 Jun;12(2):92-8. doi: 10.1002/wps.20050.