

How good are clinical MEDLINE searches? A comparative study of clinical end-user and librarian searches.

Author information

McKibbon K A, Haynes R B, Dilks C J, Ramsden M F, Ryan N C, Baker L, Flemming T, Fitzgerald D

Affiliation

Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada.

Publication information

Comput Biomed Res. 1990 Dec;23(6):583-93. doi: 10.1016/0010-4809(90)90042-b.

Abstract

The objective of this study was to determine the quality of MEDLINE searches done by physicians, physician trainees, and expert searchers (clinicians and librarians). The design was an analytic survey with independent replication, set in self-service online searching from medical wards, an intensive care unit, a coronary care unit, an emergency room, and an ambulatory clinic in a 300-bed teaching hospital. Participants were all M.D. clinical clerks, house staff, and attending staff responsible for patients in these settings. The intervention for all participants consisted of a 2-h small-group class and a 1-h practice session on MEDLINE searching (GRATEFUL MED) before free access to MEDLINE. The search questions from 104 randomly selected novice searches were each given to 1 of 13 clinicians with prior search experience and 1 of 3 librarians, who ran independent searches ("triplicated" searches). The unique citations retrieved by the triplicated searches were sent to expert clinicians to rate for relevance (7-point scale). Recall (number of relevant citations retrieved by an individual search divided by the total number of relevant citations from all searches on the same topic) and precision (proportion of citations retrieved in each search that were relevant) were calculated. Librarians were significantly better than novices on both measures. Librarians had recall equivalent to, and precision better than, experienced end-users. Unexpectedly, only 20% of relevant citations were retrieved by more than one search in a set of three. The study concluded that novice searchers on MEDLINE via GRATEFUL MED, after brief training, have relatively low recall and precision; recall improves with experience, but precision remains suboptimal. Further research is needed to determine the "learning curve," evaluate training interventions, and explore the non-overlapping retrieval of relevant citations by different searchers.
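Restating the abstract's definitions, the recall and precision of each search in a triplicated set can be written as below. The symbols S_i and R_i are introduced here only for illustration and do not appear in the original paper:

$$ \mathrm{recall}_i = \frac{|R_i|}{\left|R_1 \cup R_2 \cup R_3\right|}, \qquad \mathrm{precision}_i = \frac{|R_i|}{|S_i|} $$

where S_i is the set of citations retrieved by search i of a triplicated set and R_i ⊆ S_i is the subset rated relevant by the expert clinicians. As a hypothetical worked example (numbers invented for illustration, not taken from the study): if one of the three searches on a topic retrieves 10 citations of which 4 are rated relevant, and the three searches together yield 12 distinct relevant citations, that search's recall is 4/12 ≈ 0.33 and its precision is 4/10 = 0.40.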
