Hsu Elizabeth R, Williams Duane E, Dijoseph Leo G, Schnell Joshua D, Finstad Samantha L, Lee Jerry S H, Greenspan Emily J, Corrigan James G
Office of Science Planning and Assessment, National Cancer Institute, Bethesda, MD 20892, USA; Thomson Reuters, Rockville, MD 20850, USA; and Center for Strategic Scientific Initiatives, National Cancer Institute, Bethesda, MD 20892, USA.
Res Eval. 2013 Dec;22(5):272-284. doi: 10.1093/reseval/rvt024.
Funders of biomedical research are often challenged to understand how a new funding initiative fits within the agency's portfolio and the larger research community. While traditional assessment relies on retrospective review by subject matter experts, it is now feasible to design portfolio assessment and gap analysis tools that leverage administrative and grant application data and can be used for early and continued analysis. We piloted such methods on the National Cancer Institute's Provocative Questions (PQ) initiative to address key questions regarding the diversity of applicants; whether applicants were proposing new avenues of research; and whether grant applications were filling portfolio gaps. For the latter two questions, we defined measures called focus shift and relevance, respectively, based on text similarity scoring. We demonstrate that two types of applicants were attracted by the PQs at rates greater than or on par with those of the general National Cancer Institute applicant pool: those with clinical degrees and new investigators. Focus shift scores tended to be relatively low, with applicants not straying far from their previous research, but the majority of applications were found to be relevant to the PQ each application was addressing. Sensitivity to the choice of comparison text and an inability to distinguish subtle scientific nuances are the primary limitations of our automated, text-similarity-based approaches, potentially biasing the relevance and focus shift measurements. We also discuss potential uses of the relevance and focus shift measures, including the design of outcome evaluations, though further experimentation and refinement are needed for a fuller understanding of these measures before broad application.