Research Centre for Palliative Care, Death and Dying, College of Nursing and Health Sciences, Flinders University, Bedford Park, Adelaide, SA 5042, Australia.
School of Health, Medical and Applied Sciences, CQUniversity Australia, Wayville, Adelaide, SA 5034, Australia.
Int J Environ Res Public Health. 2023 Mar 5;20(5):4608. doi: 10.3390/ijerph20054608.
User-based evaluation by end users is an essential step in designing useful interfaces. Inspection methods can offer an alternative approach when end-user recruitment is problematic. Learning Designers' usability scholarship could offer usability evaluation expertise adjunct to multidisciplinary teams in academic settings. This study assesses the feasibility of Learning Designers as 'expert evaluators'. Two groups, healthcare professionals and Learning Designers, applied a hybrid evaluation method to generate usability feedback on a palliative care toolkit prototype. Expert data were compared with end-user errors detected through usability testing. Interface errors were categorised, meta-aggregated, and their severity calculated. The analysis found that reviewers detected n = 333 errors, of which n = 167 occurred uniquely within the interface. Learning Designers identified errors at greater frequencies (60.66% of total interface errors, mean (M) = 28.86 per expert) than the other evaluator groups (healthcare professionals 23.12%, M = 19.25; end users 16.22%, M = 9.0). Patterns in severity and error types were also observed between reviewer groups. The findings suggest that Learning Designers are skilled at detecting interface errors, which benefits developers assessing usability when access to end users is limited. Whilst not offering the rich narrative feedback generated by user-based evaluations, Learning Designers complement healthcare professionals' content-specific knowledge as a 'composite expert reviewer' able to generate meaningful feedback to shape digital health interfaces.