Fan Zunguan, Feng Yifan, Wang Kang, Li Xiaoli
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China.
School of Software, Tsinghua University, Beijing 100084, China.
Entropy (Basel). 2024 Mar 8;26(3):239. doi: 10.3390/e26030239.
Efficient flotation beneficiation relies heavily on accurate flotation condition recognition based on monitored froth video. However, recognition accuracy is hindered by the difficulty of extracting temporal features from froth videos and of establishing correlations among complex, multi-modal, high-order data. To address the problems of inadequate temporal feature extraction, inaccurate online condition detection, and inefficient flotation process operation, this paper proposes a novel flotation condition recognition method named the multi-modal temporal hypergraph neural network (MTHGNN) to extract and fuse multi-modal temporal features. To extract abundant dynamic texture features from froth images, the MTHGNN employs an enhanced version of the local binary patterns from three orthogonal planes (LBP-TOP) algorithm and incorporates additional features from the three-dimensional space as a supplement. Furthermore, a novel multi-view temporal feature aggregation network (MVResNet) is introduced to extract temporally aggregated features from the froth image sequence. By constructing a multi-modal temporal hypergraph neural network, we encode complex high-order temporal features, establish robust associations between data structures, and flexibly model the features of the froth image sequence, thus enabling accurate flotation condition identification through the fusion of multi-modal temporal features. The experimental results validate the effectiveness of the proposed method for flotation condition recognition, providing a foundation for optimizing flotation operations.
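The abstract gives no implementation details, so the following is a minimal sketch of the classical LBP-TOP idea the method builds on: basic LBP codes are computed on the XY, XT, and YT planes of a froth image volume, and their normalized histograms are concatenated into one dynamic-texture descriptor. The function names (lbp_image, lbp_top_histogram) and the toy video are illustrative assumptions, not the authors' code, and the paper's enhancements and supplementary 3-D features are omitted.

```python
import numpy as np

def lbp_image(plane):
    """Basic 3x3 local binary pattern codes for a single 2-D plane."""
    p = plane.astype(np.float32)
    center = p[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # 8 neighbours in clockwise order, each contributing one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = p[1 + dy: p.shape[0] - 1 + dy, 1 + dx: p.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

def lbp_top_histogram(volume, bins=256):
    """Concatenate LBP code histograms over the XY, XT and YT planes of a T x H x W volume."""
    T, H, W = volume.shape
    plane_stacks = [
        [volume[t] for t in range(T)],        # XY planes (spatial texture)
        [volume[:, y, :] for y in range(H)],  # XT planes (horizontal motion texture)
        [volume[:, :, x] for x in range(W)],  # YT planes (vertical motion texture)
    ]
    hists = []
    for planes in plane_stacks:
        hist = np.zeros(bins)
        for plane in planes:
            h, _ = np.histogram(lbp_image(plane), bins=bins, range=(0, bins))
            hist += h
        hists.append(hist / max(hist.sum(), 1.0))  # normalised per-orientation histogram
    return np.concatenate(hists)                   # 3 * bins dynamic-texture descriptor

# Toy froth image sequence (hypothetical data): 16 frames of 64 x 64 pixels
video = np.random.randint(0, 256, size=(16, 64, 64), dtype=np.uint8)
feature = lbp_top_histogram(video)
print(feature.shape)  # (768,)
```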
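The hypergraph side of the method can likewise be illustrated with a single hypergraph convolution layer in the spirit of HGNN, propagating node features as X' = sigma(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta), where H is the node-hyperedge incidence matrix. This is a hedged sketch with identity edge weights; the class name HypergraphConv, the feature dimensions, and the random incidence matrix are assumptions for illustration and do not reproduce the MTHGNN architecture or its multi-modal hyperedge construction.

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """One hypergraph convolution layer (HGNN-style), assuming identity edge weights W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, incidence):
        # x: (N, in_dim) node features; incidence: (N, E) binary node-hyperedge matrix
        dv = incidence.sum(dim=1).clamp(min=1)         # node degrees D_v
        de = incidence.sum(dim=0).clamp(min=1)         # hyperedge degrees D_e
        h_norm = dv.pow(-0.5).unsqueeze(1) * incidence # D_v^{-1/2} H
        msg = h_norm.t() @ x / de.unsqueeze(1)         # aggregate nodes into hyperedges
        out = h_norm @ msg                             # push hyperedge messages back to nodes
        return torch.relu(self.theta(out))

# Toy example (hypothetical sizes): 8 sequence nodes with 32-dim fused features,
# connected by 5 hyperedges that could encode temporal or per-modality similarity.
num_nodes, num_edges, feat_dim = 8, 5, 32
x = torch.randn(num_nodes, feat_dim)
incidence = (torch.rand(num_nodes, num_edges) > 0.6).float()
layer = HypergraphConv(feat_dim, 16)
print(layer(x, incidence).shape)  # torch.Size([8, 16])
```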