Yang Xinting, Zhang Zehua
Library, Lanzhou University, Lanzhou, Gansu Province, China.
Department of Physics Science and Technology, Lanzhou University, Lanzhou, Gansu Province, China.
PeerJ Comput Sci. 2025 Jun 5;11:e2934. doi: 10.7717/peerj-cs.2934. eCollection 2025.
Accurate book genre classification is essential for library organization, information retrieval, and personalized recommendations. Traditional classification methods, which often rely on manual categorization and metadata, struggle with hybrid genres and evolving literary trends. To address these limitations, this study proposes a hybrid deep learning model that integrates visual and textual features for enhanced genre classification. Specifically, we employ InceptionV3, an advanced convolutional neural network architecture, to extract visual features from book cover images, and Bidirectional Encoder Representations from Transformers (BERT) to analyze textual data from book titles. A scaled dot-product attention mechanism fuses these multimodal features, dynamically weighting each modality's contribution according to its contextual relevance. Experimental results on the BookCover30 dataset show that the proposed model outperforms baseline approaches, achieving a balanced accuracy of 0.7951 and an F1-score of 0.7920 and surpassing both standalone image-based and text-based classifiers. These findings highlight the potential of deep learning to improve automated genre classification, offering a scalable and adaptable solution for libraries and digital platforms. Future research may focus on expanding dataset diversity, optimizing computational efficiency, and addressing biases in classification models.
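The fusion step described above can be pictured with a short sketch. The PyTorch code below is illustrative only, not the authors' implementation: it assumes 2048-dimensional pooled InceptionV3 features, 768-dimensional BERT [CLS] embeddings of the title, and the 30 genre classes of BookCover30, and it fuses the two modality tokens with scaled dot-product attention, softmax(QK^T / sqrt(d))V; the projection width, pooling choice, and classifier head are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionClassifier(nn.Module):
    """Illustrative multimodal fusion via scaled dot-product attention.

    A sketch under stated assumptions, not the paper's exact model:
    both modality vectors are projected into a shared space, attended
    over as a two-token sequence, pooled, and classified.
    """
    def __init__(self, img_dim=2048, txt_dim=768, d_model=512, num_classes=30):
        super().__init__()
        # Project each modality into a shared d_model-dimensional space.
        self.img_proj = nn.Linear(img_dim, d_model)
        self.txt_proj = nn.Linear(txt_dim, d_model)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, img_feat, txt_feat):
        # img_feat: (B, 2048) pooled InceptionV3 features (assumed shape)
        # txt_feat: (B, 768) BERT [CLS] title embedding (assumed shape)
        img = self.img_proj(img_feat).unsqueeze(1)   # (B, 1, d_model)
        txt = self.txt_proj(txt_feat).unsqueeze(1)   # (B, 1, d_model)
        tokens = torch.cat([img, txt], dim=1)        # (B, 2, d_model)
        # Scaled dot-product self-attention over the two modality tokens:
        # weights each modality by its contextual relevance.
        d = tokens.size(-1)
        scores = tokens @ tokens.transpose(1, 2) / d ** 0.5  # (B, 2, 2)
        weights = F.softmax(scores, dim=-1)
        fused = (weights @ tokens).mean(dim=1)       # (B, d_model), mean-pooled
        return self.classifier(fused)                # (B, num_classes) logits

# Smoke test with random stand-ins for the backbone outputs.
model = AttentionFusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 30])
```

Mean-pooling the attended tokens is one simple readout; concatenating them or attending with a learned query are equally plausible variants of the fusion the abstract describes.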