Research Scholar, Faculty of Information and Communication Engineering, UCE-BIT Campus, Tiruchirappalli, Anna University, Chennai, Tamil Nadu, India.
Assistant Professor, Department of Computer Applications, UCE-BIT Campus, Tiruchirappalli, Anna University, Chennai, Tamil Nadu, India.
Sci Rep. 2024 Sep 27;14(1):22270. doi: 10.1038/s41598-024-73452-2.
In the rapidly evolving field of artificial intelligence, the importance of multimodal sentiment analysis has never been more evident, especially amid the ongoing COVID-19 pandemic. Our research addresses the critical need to understand public sentiment across the many dimensions of this crisis by integrating data from multiple modalities, such as text, images, audio, and video sourced from platforms like Twitter. Conventional methods, which focus primarily on text analysis, often fall short in capturing the nuanced intricacies of emotional states, necessitating a more comprehensive approach. To tackle this challenge, our proposed framework introduces a novel hybrid model, IChOA-CNN-LSTM, which leverages Convolutional Neural Networks (CNNs) for precise image feature extraction, Long Short-Term Memory (LSTM) networks for sequential data analysis, and an Improved Chimp Optimization Algorithm (IChOA) for effective feature fusion. Our model achieves an accuracy of 97.8%, outperforming existing approaches in the field. Additionally, by integrating the GeoCoV19 dataset, we facilitate an analysis that spans linguistic and geographical boundaries, enriching our understanding of global pandemic discourse and providing critical insights for informed decision-making in public health crises. Through this holistic approach, our research advances multimodal sentiment analysis, offering a robust framework for deciphering the complex interplay of emotions during unprecedented global challenges like the COVID-19 pandemic.
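To make the fusion idea concrete, the following is a minimal, self-contained sketch of weighted feature fusion with the per-modality weights tuned by a simplified population-based search. It stands in for, but does not reproduce, the paper's IChOA; the feature vectors, the scoring function, and all names here are synthetic illustrations, with precomputed stand-ins for the CNN image features and LSTM text features.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(img_feat, txt_feat, w):
    """Weighted concatenation of the two modality feature vectors."""
    return np.concatenate([w[0] * img_feat, w[1] * txt_feat])

def fitness(w, img_feat, txt_feat, target):
    """Toy objective: negative distance between the fused vector and a
    target fused representation (higher is better)."""
    return -np.linalg.norm(fuse(img_feat, txt_feat, w) - target)

# Synthetic stand-ins for CNN image features and LSTM text features.
img_feat = rng.normal(size=4)
txt_feat = rng.normal(size=4)
# Hypothetical "ideal" fusion: image weighted 0.7, text weighted 0.3.
target = np.concatenate([0.7 * img_feat, 0.3 * txt_feat])

# Simplified population-based search over fusion weights in [0, 1]^2,
# shrinking the search radius around the best candidate each generation.
best = rng.uniform(0, 1, size=2)
best_score = fitness(best, img_feat, txt_feat, target)
radius = 0.3
for _ in range(60):
    cand = np.clip(best + radius * rng.normal(size=(20, 2)), 0.0, 1.0)
    scores = np.array([fitness(w, img_feat, txt_feat, target) for w in cand])
    i = scores.argmax()
    if scores[i] > best_score:          # elitism: keep the best seen so far
        best, best_score = cand[i], scores[i]
    radius *= 0.93                      # gradually narrow the search

print(best)  # converges toward the [0.7, 0.3] weighting used in `target`
```

The real IChOA update rules (driver/attacker/barrier/chaser roles and chaotic maps) are richer than this decaying random search, but the overall loop, scoring candidate fusion weights and contracting the population around the best performer, follows the same pattern.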