You Guibing, Guo Kelei, Gao Jie, Feng Hanjie, Zou Wei
School of Physical Education and Health, Zhaoqing University, Zhaoqing, China.
Financial Department of Zhaoqing University, Zhaoqing, China.
PLoS One. 2025 Jul 16;20(7):e0327459. doi: 10.1371/journal.pone.0327459. eCollection 2025.
Sports event revenue prediction is a complex, multimodal task that requires effective integration of diverse data sources. Traditional models struggle to combine real-time data streams with historical time-series data, resulting in limited prediction accuracy. To address this challenge, we propose F-TransR, a Transformer-based multimodal revenue prediction model. F-TransR introduces key innovations, including a real-time data stream processing module, a historical time-series modeling module, a novel multimodal fusion mechanism, and a cross-modal interaction modeling module. These modules enable the model to effectively integrate and capture dynamic interactions between multimodal features and temporal dependencies, which previous models fail to handle efficiently. Experimental results demonstrate that F-TransR significantly outperforms state-of-the-art models, including Informer, Autoformer, FEDformer, MTNet, and CrossFormer, on the Kaggle Sports Analytics and Reddit Comments datasets. On the Kaggle dataset, MSE and MAPE are reduced by 6.4% and 2.9%, respectively, while the coefficient of determination (R²) increases to 0.938. On the Reddit dataset, MSE and MAPE decrease by 6.6% and 5.3%, respectively, and R² improves to 0.854. Compared to existing methods, F-TransR not only improves the interaction efficiency of multimodal features but also demonstrates strong robustness and scalability, providing substantial support for multimodal revenue prediction in real-world applications.
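The abstract does not give the internals of the cross-modal interaction module, but cross-modal Transformer fusion is typically built on scaled dot-product attention, where queries from one modality attend over another modality's features. A minimal NumPy sketch of that general mechanism (all dimensions and the `cross_modal_attention` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys_values):
    """Let one modality's features (queries) attend over another's.

    queries:     (n_q, d) features from modality A (e.g. real-time stream)
    keys_values: (n_k, d) features from modality B (e.g. historical series)
    Returns fused (n_q, d) features: a weighted mix of modality B's
    features, weighted by similarity to each modality-A query.
    """
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (n_q, n_k) similarities
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ keys_values                    # (n_q, d) fused output

# Toy example: 8 real-time steps attend over 32 historical steps.
rng = np.random.default_rng(0)
realtime = rng.normal(size=(8, 16))
history = rng.normal(size=(32, 16))
fused = cross_modal_attention(realtime, history)
```

In a full model this would be wrapped with learned query/key/value projections, multiple heads, and residual connections; the sketch shows only the interaction step that lets the two data streams exchange information.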
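The evaluation metrics reported above have standard definitions; a plain-Python sketch, assuming the usual formulas for MSE, MAPE (as a percentage), and R²:

```python
def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (requires y_true != 0)."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy revenue series (illustrative numbers, not from the paper):
y_true = [100.0, 200.0, 300.0]
y_pred = [110.0, 190.0, 310.0]
```

Lower MSE and MAPE indicate smaller errors, while R² closer to 1 indicates that more of the variance in revenue is explained, which is the direction of the improvements the abstract reports.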