Liu Chongyu, Liu Yuliang, Jin Lianwen, Zhang Shuaitao, Luo Canjie, Wang Yongpan
IEEE Trans Image Process. 2020 Aug 28;PP. doi: 10.1109/TIP.2020.3018859.
Scene text removal has attracted increasing research interest owing to its valuable applications in privacy protection, camera-based virtual-reality translation, and image editing. However, existing approaches fall short in real applications, mainly because they were evaluated on synthetic or unrepresentative datasets. To fill this gap and facilitate this research direction, this paper proposes a real-world dataset called SCUT-EnsText that consists of 3,562 diverse images selected from public scene text reading benchmarks; each image is scrupulously annotated to provide visually plausible erasure targets. With SCUT-EnsText, we design a novel GAN-based model termed EraseNet that can automatically remove text from natural images. The model is a two-stage network consisting of a coarse-erasure sub-network and a refinement sub-network. The refinement sub-network improves the feature representation and refines the coarse outputs to enhance removal performance. Additionally, EraseNet contains a segmentation head for text perception and a local-global SN-Patch-GAN with spectral normalization (SN) on both the generator and discriminator to maintain training stability and the congruity of the erased regions. Extensive experiments are conducted on both a previous public dataset and the brand-new SCUT-EnsText. EraseNet significantly outperforms existing state-of-the-art methods on all metrics, producing remarkably higher-quality results. The dataset and code will be made available at https://github.com/HCIILAB/SCUT-EnsText.
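To make the described design concrete, the following is a minimal PyTorch sketch of the abstract's two ideas: a two-stage (coarse then refinement) generator with an auxiliary segmentation head for text perception, and a patch-style discriminator whose convolutions are wrapped in spectral normalization. All layer counts, channel widths, and names here are illustrative assumptions, not the paper's actual EraseNet architecture (which also applies SN to the generator and uses both local and global discriminator branches).

```python
import torch
import torch.nn as nn


class TwoStageEraser(nn.Module):
    """Hypothetical coarse-to-refine erasure generator (illustrative only)."""

    def __init__(self, ch=16):
        super().__init__()
        # Stage 1: coarse erasure (image -> initial text-free estimate)
        self.coarse = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        # Auxiliary segmentation head: predicts a 1-channel text mask in [0, 1]
        self.seg_head = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Stage 2: refinement polishes the coarse output
        self.refine = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        coarse = self.coarse(x)
        mask = self.seg_head(x)
        refined = self.refine(coarse)
        return coarse, refined, mask


def sn_patch_discriminator(ch=16):
    """Patch discriminator with spectral normalization on each conv,
    the standard stabilization trick behind SN-PatchGAN-style critics."""
    sn = nn.utils.spectral_norm
    return nn.Sequential(
        sn(nn.Conv2d(3, ch, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
        sn(nn.Conv2d(ch, 1, 4, stride=2, padding=1)),  # patch-wise realness scores
    )
```

In training, the coarse output, refined output, and predicted mask would each carry their own loss terms, with the SN-wrapped discriminator providing the adversarial signal on the refined result.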