Yang Qian, Zhu Jinsen, Du Hongyu, Xia Haoran, Shao Sisi, Yuan Chen, Wang Wengtao, Ji Yimu
School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, 210023, China.
Institute of High Performance Computing and Bigdata, Nanjing University of Posts and Telecommunications, Nanjing, 210023, China.
Sci Rep. 2025 Jul 2;15(1):23420. doi: 10.1038/s41598-025-06234-z.
Aspect extraction is a critical step in constructing knowledge graphs and involves extracting aspect information from unstructured text. Current methods typically employ attention-based techniques such as global or local attention mechanisms, each with significant limitations. Global mechanisms are prone to introducing noise, while local mechanisms face challenges in determining the optimal window size. To address these issues, we propose a novel aspect extraction approach utilizing a multi-scale local attention mechanism (MLA). This method leverages a pre-trained model to convert text into vector representations. Feature extraction is then performed with gated recurrent units, followed by representation learning at various window sizes through the MLA. Features are selected using max pooling and decoded by a fully connected neural network combined with a conditional random field to generate precise aspect labels. Experimental validation on the Zhejiang Cup e-commerce review mining dataset demonstrates that our proposed method outperforms existing models in aspect extraction performance.
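The core idea of the paper, running local attention at several window sizes and max-pooling the results across scales, can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the window sizes (3, 5, 7), scaled dot-product scoring, and the `multi_scale_local_attention` function name are assumptions for demonstration only.

```python
import math

def local_attention(seq, window):
    """Scaled dot-product attention restricted to a local window around each token.

    seq: list of token feature vectors (lists of floats), e.g. GRU outputs.
    window: odd window size; token i attends to its window//2 neighbours on each side.
    """
    n, d = len(seq), len(seq[0])
    out = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        # similarity between token i and each token in its window
        scores = [sum(a * b for a, b in zip(seq[i], seq[j])) / math.sqrt(d)
                  for j in range(lo, hi)]
        # numerically stable softmax over the window
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of the window's vectors
        out.append([sum(w * seq[j][k] for w, j in zip(weights, range(lo, hi)))
                    for k in range(d)])
    return out

def multi_scale_local_attention(seq, windows=(3, 5, 7)):
    """Run local attention at several window sizes, then max-pool across scales
    (hypothetical window sizes; the paper's actual scales may differ)."""
    scale_outputs = [local_attention(seq, w) for w in windows]
    return [[max(o[i][k] for o in scale_outputs) for k in range(len(seq[0]))]
            for i in range(len(seq))]
```

Because each scale's output is a convex combination of window vectors, the pooled representation stays in the range of the input features while avoiding a commitment to any single window size; the resulting per-token vectors would then feed the fully connected layer and CRF decoder described above.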