Souvignet Julien, Asfari Hadyl, Lardon Jérémy, Del Tedesco Emilie, Declerck Gunnar, Bousquet Cédric
a INSERM, U1142, LIMICS, F-75006, Paris, France; Sorbonne Universités, UPMC Univ Paris 06, UMR_S 1142, LIMICS, F-75006, Paris, France; Université Paris 13, Sorbonne Paris Cité, LIMICS (UMR_S 1142), F-93430, Villetaneuse, France.
b Department of Public Health and Medical Informatics, CHU University of Saint-Etienne, Saint-Etienne, France.
Expert Opin Drug Saf. 2016 Sep;15(9):1153-61. doi: 10.1080/14740338.2016.1206075. Epub 2016 Jul 15.
To propose a method for building customized sets of MedDRA terms that describe a medical condition. We illustrate this method with upper gastrointestinal bleeding (UGIB).
We created a broad list of MedDRA terms related to UGIB and defined a gold standard with the help of experts. MedDRA terms were formally described in a semantic resource named OntoADR. We report the use of two semantic queries that automatically select candidate terms for UGIB. Query 1 is a combination of two SNOMED CT concepts describing both the morphology 'Hemorrhage' and the finding site 'Upper digestive tract structure'. Query 2 complements Query 1 by also taking into account MedDRA terms associated with SNOMED CT concepts describing the clinical manifestations 'Melena' or 'Hematemesis'.
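As a rough illustration only, the selection logic behind the two queries could be sketched as follows; the term records, attribute names, and concept labels are hypothetical simplifications standing in for OntoADR and SNOMED CT, not the actual identifiers or query language used in the study.

```python
# Minimal sketch, assuming a simplified in-memory view of OntoADR in which
# each MedDRA term carries sets of SNOMED CT attributes (illustrative data).
terms = [
    {"meddra": "Haematemesis",
     "morphology": {"Hemorrhage"},
     "finding_site": {"Upper digestive tract structure"},
     "manifestation": {"Hematemesis"}},
    {"meddra": "Melaena",
     "morphology": set(),
     "finding_site": set(),
     "manifestation": {"Melena"}},
    {"meddra": "Duodenal ulcer haemorrhage",
     "morphology": {"Hemorrhage"},
     "finding_site": {"Upper digestive tract structure"},
     "manifestation": set()},
]

def query_1(term):
    """Morphology 'Hemorrhage' AND finding site 'Upper digestive tract structure'."""
    return ("Hemorrhage" in term["morphology"]
            and "Upper digestive tract structure" in term["finding_site"])

def query_2(term):
    """Query 1 OR a clinical manifestation of 'Melena' / 'Hematemesis'."""
    return query_1(term) or bool(term["manifestation"] & {"Melena", "Hematemesis"})

print([t["meddra"] for t in terms if query_1(t)])  # Query 1 candidates
print([t["meddra"] for t in terms if query_2(t)])  # Query 2 adds 'Melaena'
```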
We compared the terms returned by each query with our gold standard, achieving a recall of 71.0% and a precision of 81.4% for Query 1 (F1 score 0.76), and a recall of 96.7% and a precision of 77.0% for Query 2 (F1 score 0.86).
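The reported F1 scores follow from the usual harmonic mean of precision and recall; a quick check using the published figures:

```python
# Recompute the F1 scores from the reported precision/recall values.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.814, 0.710), 2))  # 0.76 (Query 1)
print(round(f1(0.770, 0.967), 2))  # 0.86 (Query 2)
```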
Our results demonstrate the feasibility of applying knowledge engineering techniques to build customized sets of MedDRA terms. Additional work is needed to improve precision and recall, and to confirm the value of the proposed strategy.