Villatoro-Tello Esaú, Parida Shantipriya, Kumar Sajit, Motlicek Petr
Universidad Autónoma Metropolitana, Unidad Cuajimalpa, Mexico City, Mexico.
Idiap Research Institute, Rue Marconi 19, 1920, Martigny, Switzerland.
Cognit Comput. 2021;13(5):1154-1171. doi: 10.1007/s12559-021-09901-1. Epub 2021 Jul 17.
According to the psychological literature, implicit motives allow for the characterization of behavior, subsequent success, and long-term development. In contrast to personality traits, implicit motives are often considered rather stable personality characteristics. Normally, implicit motives are assessed through operant motives, unconscious intrinsic desires measured by the Operant Motive Test (OMT). The OMT requires participants to write free-form descriptions associated with a set of provided images and questions. In this work, we explore several recent machine learning techniques and various text representation techniques to address the OMT classification task. We focused on advanced language representations (e.g., BERT, XLM, and DistilBERT) and deep supervised autoencoders for solving the OMT task. We performed an exhaustive analysis and compared their performance against fully connected neural networks and traditional support vector classifiers. Our comparative study highlights the importance of BERT, which outperforms the traditional machine learning techniques by a relative improvement of 7.9%. In addition, we performed an analysis of how the BERT attention mechanism is modified. Our findings indicate that writing-style features acquire higher importance when accurately identifying the different OMT categories. This is the first study to determine the performance of different transformer-based architectures on the OMT task. Similarly, our work proposes, for the first time, the use of deep supervised autoencoders for the OMT classification task. Our experiments demonstrate that transformer-based methods exhibit the best empirical results, obtaining a relative improvement of 7.9% over the competitive baseline suggested as part of the GermEval 2020 challenge. Additionally, we show that features associated with writing style are more important than content-based words.
Some of these findings show strong connections to previously reported behavioral research on the implicit psychometrics theory.
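The deep supervised autoencoder mentioned above combines an unsupervised reconstruction objective with a supervised classification objective on a shared latent representation. The abstract does not give the architecture's details, so the following is only a minimal numpy sketch of the general technique, with hypothetical dimensions, randomly initialized weights, and an illustrative weighting hyperparameter `lam`; a real implementation would train all weights jointly by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 300-d document features, 5 target classes.
n, d_in, d_hid, n_cls = 8, 300, 64, 5
X = rng.normal(size=(n, d_in))          # document feature vectors
y = rng.integers(0, n_cls, size=n)      # class labels

# Randomly initialized weights (the training loop is omitted).
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))
W_clf = rng.normal(scale=0.1, size=(d_hid, n_cls))

H = relu(X @ W_enc)      # shared latent representation (encoder)
X_rec = H @ W_dec        # reconstruction branch (decoder)
P = softmax(H @ W_clf)   # classification branch (supervised head)

# Joint objective: reconstruction error plus weighted cross-entropy.
lam = 0.5  # illustrative trade-off between the two terms
rec_loss = np.mean((X - X_rec) ** 2)
ce_loss = -np.mean(np.log(P[np.arange(n), y] + 1e-12))
joint_loss = rec_loss + lam * ce_loss
```

The key design point is that both branches share the encoder, so minimizing the joint loss pushes the latent code to retain enough information to reconstruct the input while remaining discriminative for the labels.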