Campbell JL, Abel G.
University of Exeter Medical School, Exeter, UK.
Primary Care Unit, University of Cambridge, Cambridge, UK.
BMJ Open. 2016 Jun 2;6(6):e011958. doi: 10.1136/bmjopen-2016-011958.
To inform the rational deployment of assessor resource in the evaluation of applications to the UK Advisory Committee on Clinical Excellence Awards (ACCEA).
ACCEA is responsible for a scheme that financially rewards senior doctors in England and Wales who are assessed to be working over and above the standard expected of their role.
Anonymised applications of consultants and senior academic GPs for awards were considered by members of 14 regional subcommittees and 2 national assessing committees during the 2014-2015 round of applications.
The study involved secondary analysis of a complete anonymised national dataset.
We analysed scores for each of 1916 applications for a clinical excellence award across 4 levels of award. Scores were provided by members of 16 subcommittees. We assessed the reliability of the resulting assessments and described the sources of variance in assessors' scores.
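The abstract does not specify the estimation method, but one standard way to decompose scores into applicant, assessor, and residual components is a two-way random-effects (generalizability-style) analysis. The sketch below illustrates the idea on simulated data, assuming a balanced, fully crossed design in which every assessor scores every applicant; the real committee data would be unbalanced and would need something like a cross-classified mixed model. All variance values here are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative variance components (assumed, not the paper's estimates):
# applicant (the "signal"), assessor (leniency/severity), and residual.
sd_applicant, sd_assessor, sd_residual = 1.0, 1.2, 1.3
n_applicants, n_assessors = 200, 10

# Simulate a balanced, fully crossed design:
# score = mean + applicant effect + assessor effect + noise.
a = rng.normal(0, sd_applicant, size=(n_applicants, 1))
b = rng.normal(0, sd_assessor, size=(1, n_assessors))
scores = 5.0 + a + b + rng.normal(0, sd_residual, size=(n_applicants, n_assessors))

# Method-of-moments (two-way ANOVA) variance-component estimates.
grand = scores.mean()
row_means = scores.mean(axis=1, keepdims=True)   # per-applicant means
col_means = scores.mean(axis=0, keepdims=True)   # per-assessor means

ms_applicant = n_assessors * ((row_means - grand) ** 2).sum() / (n_applicants - 1)
ms_assessor = n_applicants * ((col_means - grand) ** 2).sum() / (n_assessors - 1)
resid = scores - row_means - col_means + grand
ms_resid = (resid ** 2).sum() / ((n_applicants - 1) * (n_assessors - 1))

var_resid = ms_resid
var_applicant = (ms_applicant - ms_resid) / n_assessors
var_assessor = (ms_assessor - ms_resid) / n_applicants

total = var_applicant + var_assessor + var_resid
print(f"applicant fraction of variance: {var_applicant / total:.2f}")
print(f"assessor fraction of variance:  {var_assessor / total:.2f}")
print(f"residual fraction of variance:  {var_resid / total:.2f}")
```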
Members of regional subcommittees assessed 1529 new applications and 387 renewal applications. Average scores increased with the level of award applied for. On average, each application was assessed by 9.5 assessors. The largest contributions to the variance in individual assessors' scores were attributable to the assessors themselves or to residual variance. The applicant accounted for around a quarter of the variance in scores for new bronze applications, a proportion that decreased at higher award levels. Reliability in excess of 0.7 can be attained when 4 assessors score bronze applications; roughly twice as many assessors are required for higher levels of application.
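The reliability figures follow from how averaging over assessors shrinks error variance. On one common generalizability-theory reading, for relative (rank-ordering) decisions the assessor main effect cancels out, and the reliability of a mean score from n assessors is var_applicant / (var_applicant + var_residual / n). A small sketch with illustrative variance fractions, chosen only to roughly match the "quarter of the variance" figure above (the exact components are not given in the abstract):

```python
def reliability(var_applicant: float, var_error: float, n_assessors: int) -> float:
    """G-coefficient for the mean of n assessor scores (relative decisions):
    averaging over assessors divides the error variance by n."""
    return var_applicant / (var_applicant + var_error / n_assessors)

# Illustrative fractions of total score variance (assumed, not the paper's
# estimates): applicant ~0.25, assessor main effect ~0.35, residual ~0.40.
# For relative decisions the assessor main effect does not disturb the
# rank ordering, so only the residual contributes to error here.
var_applicant, var_residual = 0.25, 0.40

for n in (1, 2, 4, 8):
    r = reliability(var_applicant, var_residual, n)
    print(f"{n:2d} assessors -> reliability {r:.2f}")
# With these components, 4 assessors already exceed 0.7; a smaller applicant
# share of variance (as at higher award levels) would require more assessors.
```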
Assessment processes used in the competitive allocation of public funds need to be credible and efficient. The present arrangements for assessing and scoring applications are defensible, depending on the level of reliability judged necessary in the assessment process. Some relatively minor reconfiguration of the approach to scoring might usefully be considered in future rounds of assessment.