Division of Digital Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA.
BMJ Open. 2021 Mar 19;11(3):e047001. doi: 10.1136/bmjopen-2020-047001.
Despite an estimated 300 000 mobile health apps on the market, there remains no consensus on how to help patients and clinicians select safe and effective apps. In 2018, our team drew on existing evaluation frameworks to identify salient categories and create a new framework endorsed by the American Psychiatric Association (APA). We have since created a more expanded and operational framework, the M-health Index and Navigation Database (MIND), which aligns with the APA categories but includes 105 objective and auditable questions. We sought to survey the existing space by conducting a review of all mobile health app evaluation frameworks published since 2018, and to demonstrate the comprehensiveness of this new model by comparing it with existing and emerging frameworks.
We conducted a scoping review of mobile health app evaluation frameworks.
References were identified through searches of PubMed, EMBASE and PsycINFO with publication dates between January 2018 and October 2020.
Papers were selected for inclusion if they met the predetermined eligibility criteria: presenting an evaluation framework for mobile health apps with patient-facing, clinician-facing or end user-facing questions.
Two reviewers screened the literature separately and applied the inclusion criteria. The data extracted from the papers included: author and date of publication, source affiliation, country of origin, name of framework, study design, description of framework, intended audience/user and framework scoring system. We then compiled a collection of more than 1701 questions across 79 frameworks. We compared and grouped these questions using the MIND framework as a reference. We sought to identify the most common domains of evaluation while assessing the comprehensiveness and flexibility, as well as any potential gaps, of MIND.
New app evaluation frameworks continue to emerge and expand. Since our 2019 review of the app evaluation framework space, more frameworks include questions around privacy (43 frameworks) and clinical foundation (57 frameworks), reflecting an increased focus on issues of app security and evidence base. The majority of mapped frameworks overlapped with at least half of the MIND categories. The results of this search have informed a database (apps.digitalpsych.org) that users can access today.
As the number of app evaluation frameworks continues to rise, it is becoming difficult for users both to select an appropriate evaluation tool and to find an appropriate health app. This review provides a comparison of what different app evaluation frameworks offer, where the field is converging and new priorities for improving clinical guidance.