Iwauchi Kota, Tanaka Hiroki, Okazaki Kosuke, Matsuda Yasuhiro, Uratani Mitsuhiro, Morimoto Tsubasa, Nakamura Satoshi
Augmented Human Communication Laboratory, Nara Institute of Science and Technology, Ikoma, Nara, Japan.
Department of Psychiatry, Nara Medical University School of Medicine, Kashihara, Nara, Japan.
Front Digit Health. 2023 Feb 16;5:952433. doi: 10.3389/fdgth.2023.952433. eCollection 2023.
Experienced psychiatrists identify people with autism spectrum disorder (ASD) and schizophrenia (Sz) through interviews based on diagnostic criteria, patients' responses, and various neuropsychological tests. To improve the clinical diagnosis of neurodevelopmental disorders such as ASD and Sz, it is important to discover disorder-specific biomarkers and behavioral indicators with sufficient sensitivity. In recent years, machine learning has been applied to make more accurate predictions. Among the various indicators, eye movement, which can be obtained easily, has attracted much attention, and many studies have examined it in ASD and Sz. Eye-movement specificity during facial expression recognition has been studied extensively, but modeling that accounts for differences in specificity among facial expressions has not. In this paper, we propose a method to detect ASD or Sz from eye movement during the Facial Emotion Identification Test (FEIT) while considering differences in eye movement caused by the facial expressions presented. We also confirm that weighting based on these differences improves classification accuracy. Our data set consisted of 15 adults with ASD and Sz, 16 controls, and 15 children with ASD and 17 controls. A random forest was used to weight each test and to classify participants as control, ASD, or Sz. The most successful approach applied convolutional neural networks (CNNs) to heat maps of gaze retention. This method classified Sz in adults with 64.5% accuracy, ASD in adults with up to 71.0% accuracy, and ASD in children with 66.7% accuracy. The ASD classification results differed significantly from the chance rate (p < .05, binomial test). The results show improvements in accuracy of 10% and 16.7%, respectively, compared with a model that does not take facial expressions into account. For ASD, this indicates that modeling that weights the output for each presented image is effective.
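To make the pipeline concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the two ingredients the abstract describes: turning fixation points into a coarse gaze heat map, and combining per-expression classifier outputs with expression-specific weights. All function names, the grid size, scores, and weights are illustrative assumptions.

```python
# Hypothetical sketch of the abstract's two ingredients (illustrative only):
# (1) accumulate gaze fixations into a heat map; (2) combine per-expression
# classifier scores with expression-specific weights.

def gaze_heatmap(fixations, grid=(8, 8), screen=(640, 480)):
    """Accumulate fixation durations into a coarse grid (a simple heat map).

    fixations: iterable of (x, y, duration) tuples in screen coordinates.
    Returns a rows x cols grid normalized to sum to 1.
    """
    rows, cols = grid
    w, h = screen
    heat = [[0.0] * cols for _ in range(rows)]
    for x, y, duration in fixations:
        r = min(int(y / h * rows), rows - 1)
        c = min(int(x / w * cols), cols - 1)
        heat[r][c] += duration
    total = sum(sum(row) for row in heat) or 1.0
    return [[v / total for v in row] for row in heat]

def weighted_decision(scores, weights):
    """Combine per-expression scores with per-expression weights.

    In the paper's approach, a random forest supplies the weighting; here a
    weighted mean stands in as a stand-alone illustration.
    """
    num = sum(w * s for w, s in zip(weights, scores))
    den = sum(weights) or 1.0
    return num / den

# Example: three fixations on one stimulus image (x, y, seconds).
fixations = [(100, 120, 0.3), (320, 240, 0.5), (500, 400, 0.2)]
heat = gaze_heatmap(fixations)

# Suppose three expression stimuli yield these per-image scores (made up),
# and the first expression is weighted more because eye movements differ
# most between groups on it.
scores = [0.8, 0.4, 0.6]
weights = [2.0, 1.0, 1.0]
print(round(weighted_decision(scores, weights), 3))  # → 0.65
```

In the actual study, each heat map would be fed to a CNN, and the per-image outputs would then be weighted and aggregated; the sketch only shows why weighting the output of each image can change the final decision relative to a uniform average (here, 0.65 vs. 0.6).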