Rossander Anna, Karlsson Daniel
Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
Swedish eHealth Agency, Stockholm, Sweden.
JMIR Med Inform. 2023 Jul 31;11:e46477. doi: 10.2196/46477.
There is a plethora of health care information models but no consensus on which to use. This leads to poor information sharing and duplicated modeling work. The number and types of differences between models have, to our knowledge, not been evaluated.
This work aims to explore how information structured with various information models differs in practice. Our hypothesis is that the differences between information models are overestimated. This work also assesses the usability of competency questions as a method for evaluating information models in health care.
In this study, 4 information standards, 2 standards for secondary use, and 2 electronic health record systems were included as material. Competency questions were developed for a random selection of recommendations from a clinical guideline. The information needed to answer the competency questions was modeled according to each included information model, and the results were analyzed. Differences in structure and terminology were quantified for each pairwise combination of standards.
In this study, 36 competency questions were developed and answered. In general, the similarities between the included information models were greater than the differences. The demarcation between information model and terminology was broadly similar; on average, 45% of the included structures were identical between models. Choices of terminology differed within and between models; on average, only 11% of the terminology was usable in interactions between models. The information models included in this study were able to represent most of the information required to answer the competency questions.
Different, yet the same: in practice, different information models structure much information in a similar fashion. To increase interoperability within and between systems, it is more important to move toward structuring information with any information model than to find or develop a perfect one. Competency questions are a feasible way of evaluating how information models perform in practice.
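The pairwise quantification of structural overlap described in the methods can be sketched in code. This is a minimal illustration, not the study's actual procedure or data: the model names, the placeholder structure sets, and the use of the Jaccard index as the overlap measure are all assumptions made for the example.

```python
# Hypothetical sketch of pairwise structural comparison between
# information models. Model names and structure labels are invented
# for illustration; they are not the study's data.
from itertools import combinations

# Each model maps to the set of structures it uses to answer the
# competency questions (placeholder labels).
models = {
    "standard_a": {"observation", "condition", "medication", "procedure"},
    "standard_b": {"observation", "condition", "medication", "encounter"},
    "ehr_x": {"observation", "condition", "note", "encounter"},
}

def structural_overlap(a: set, b: set) -> float:
    """Fraction of structures shared between two models (Jaccard index)."""
    return len(a & b) / len(a | b)

# Quantify overlap for every pairwise combination of models.
overlaps = {
    (m1, m2): structural_overlap(models[m1], models[m2])
    for m1, m2 in combinations(models, 2)
}
average = sum(overlaps.values()) / len(overlaps)
print(f"average pairwise overlap: {average:.2f}")
```

In this toy example the average pairwise overlap summarizes how alike the models are overall, analogous to the study's report that, on average, 45% of included structures were identical between models.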