Kernel Proposal Network for Arbitrary Shape Text Detection.

Authors

Zhang Shi-Xue, Zhu Xiaobin, Hou Jie-Bo, Yang Chun, Yin Xu-Cheng

Publication

IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):8731-8742. doi: 10.1109/TNNLS.2022.3152596. Epub 2023 Oct 27.

Abstract

Segmentation-based methods have achieved great success for arbitrary shape text detection. However, separating neighboring text instances is still one of the most challenging problems due to the complexity of texts in scene images. In this article, we propose an innovative kernel proposal network (dubbed KPN) for arbitrary shape text detection. The proposed KPN can separate neighboring text instances by classifying different texts into instance-independent feature maps, while avoiding the complex aggregation process found in segmentation-based arbitrary shape text detection methods. To be concrete, our KPN predicts a Gaussian center map for each text image, which is used to extract a series of candidate kernel proposals (i.e., dynamic convolution kernels) from the embedding feature maps according to their corresponding keypoint positions. To enforce the independence between kernel proposals, we propose a novel orthogonal learning loss (OLL) via orthogonal constraints. Specifically, our kernel proposals contain important self-information learned by the network and location information obtained via position embedding. Finally, each kernel proposal individually convolves all embedding feature maps to generate an individual embedded map for its text instance. In this way, our KPN can effectively separate neighboring text instances and improve robustness against unclear boundaries. To the best of our knowledge, our work is the first to introduce the dynamic convolution kernel strategy to efficiently and effectively tackle the adhesion problem of neighboring text instances in text detection. Experimental results on challenging datasets verify the impressive performance and efficiency of our method. The code and model are available at https://github.com/GXYM/KPN.
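The core idea in the abstract can be illustrated with a small sketch: treat the embedding vector at each detected keypoint as a dynamic 1x1 convolution kernel, convolve it over the embedding map to get a per-instance response, and penalize pairwise similarity between kernels to enforce independence. This is a heavily simplified, hypothetical illustration in pure Python (the paper's actual implementation operates on deep CNN feature maps; all function names here are illustrative, not from the KPN codebase):

```python
import math

def extract_kernel_proposals(embedding, keypoints):
    """Pick the C-dim embedding vector at each keypoint (y, x) as a
    dynamic 1x1 convolution kernel (one proposal per text center)."""
    return [embedding[y][x] for (y, x) in keypoints]

def dynamic_convolve(embedding, kernel):
    """Apply one kernel proposal as a 1x1 convolution (a per-pixel dot
    product), yielding an instance-specific response map."""
    H, W = len(embedding), len(embedding[0])
    return [[sum(k * c for k, c in zip(kernel, embedding[y][x]))
             for x in range(W)] for y in range(H)]

def orthogonal_loss(kernels):
    """Penalize squared pairwise cosine similarity between kernel
    proposals, pushing distinct instances toward orthogonal embeddings
    (a simplified stand-in for the paper's OLL)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    loss, pairs = 0.0, 0
    for i in range(len(kernels)):
        for j in range(i + 1, len(kernels)):
            loss += cos(kernels[i], kernels[j]) ** 2
            pairs += 1
    return loss / max(pairs, 1)

# Toy 2x2 embedding map with 2 channels; two keypoints mark two "texts".
embedding = [[[1.0, 0.0], [0.0, 1.0]],
             [[1.0, 0.0], [0.0, 1.0]]]
kernels = extract_kernel_proposals(embedding, [(0, 0), (0, 1)])
response = dynamic_convolve(embedding, kernels[0])  # map for instance 0
loss = orthogonal_loss(kernels)  # 0.0 here: the two kernels are orthogonal
```

In this toy case the left column responds to the first kernel and the right column to the second, so each instance gets its own clean map; the orthogonal loss is zero exactly when the kernel proposals are mutually orthogonal.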

