Ma Yukun, Lyu Chengzhen, Li Liangliang, Wei Yajun, Xu Yaowen
School of Software, Henan Institute of Science and Technology, Xinxiang, China.
School of Information Engineering, Henan Institute of Science and Technology, Xinxiang, China.
Front Neurosci. 2024 Apr 12;18:1362286. doi: 10.3389/fnins.2024.1362286. eCollection 2024.
Despite advances in face anti-spoofing technology, attackers continue to pose challenges with constantly evolving deception methods. The difficulty stems mainly from the growing complexity of attacks, together with the diversity of presentation modes, acquisition devices, and prosthetic materials. Moreover, the scarcity of negative (attack) sample data aggravates the problem by causing domain shift and impeding robust generalization. There is therefore a pressing need for more effective cross-domain approaches that strengthen a model's ability to generalize across different scenarios.
Our method improves the effectiveness of face anti-spoofing systems by analyzing pseudo-negative sample features, expanding the training data, and strengthening cross-domain generalization. We generate pseudo-negative features with a new algorithm and align them with real negative features using a KL divergence loss; this enriches the negative sample set, supports the training of a more robust feature classifier, and broadens the range of attacks the system can defend against.
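The abstract does not specify the implementation, but the alignment step can be sketched as follows: a minimal PyTorch example, assuming pseudo-negative features are produced by a hypothetical generator (here a simple Gaussian perturbation of spoof features, purely for illustration) and aligned to real negative features with a KL divergence loss over softmax-normalized feature vectors. The function names and the perturbation rule are illustrative assumptions, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def generate_pseudo_negatives(spoof_feats: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Create pseudo-negative features by perturbing real spoof features with
    Gaussian noise. The paper's actual generation algorithm is not described in
    the abstract, so this rule is a placeholder."""
    return spoof_feats + noise_std * torch.randn_like(spoof_feats)

def kl_alignment_loss(pseudo_feats: torch.Tensor, spoof_feats: torch.Tensor) -> torch.Tensor:
    """Align the distribution of pseudo-negative features with that of real
    spoof features via KL divergence on softmax-normalized feature vectors."""
    log_p = F.log_softmax(pseudo_feats, dim=-1)   # log-probabilities of pseudo-negatives
    q = F.softmax(spoof_feats, dim=-1)            # target distribution from real spoof features
    return F.kl_div(log_p, q, reduction="batchmean")

# Example: 32 spoof feature vectors of dimension 128
spoof_feats = torch.randn(32, 128)
pseudo_feats = generate_pseudo_negatives(spoof_feats)
loss = kl_alignment_loss(pseudo_feats, spoof_feats)
print(loss.item())
```

In practice, such an alignment term would be added to the classifier's loss so that the generated features remain statistically consistent with genuine attack features while still enlarging the negative sample pool.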
Through experiments on four public datasets (MSU-MFSD, OULU-NPU, Replay-Attack, and CASIA-FASD), we evaluate the model both within and across datasets under controlled experimental settings. Our method achieves favorable results in multiple experiments, including those conducted on smaller datasets.
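For reference only: cross-dataset evaluation on these four benchmarks is commonly organized as a leave-one-dataset-out protocol (e.g., train on OULU-NPU, CASIA-FASD, and Replay-Attack, then test on MSU-MFSD). The sketch below assumes hypothetical train_fn and eval_fn callables and is not taken from the paper.

```python
# Leave-one-dataset-out protocol for cross-domain face anti-spoofing evaluation.
# Dataset loaders, training, and metric computation are placeholders.
DATASETS = ["MSU-MFSD", "OULU-NPU", "Replay-Attack", "CASIA-FASD"]

def run_cross_dataset_protocol(train_fn, eval_fn):
    """Train on three datasets and evaluate on the held-out one."""
    results = {}
    for held_out in DATASETS:
        train_sets = [d for d in DATASETS if d != held_out]
        model = train_fn(train_sets)                  # train on the source domains
        results[held_out] = eval_fn(model, held_out)  # e.g., HTER / AUC on the target domain
    return results
```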
The controlled experiments demonstrate the effectiveness of our method. It consistently yields favorable results in both intra-dataset and cross-dataset evaluations, highlighting strong generalization. Its performance on small datasets further underscores its ability to handle unseen data beyond the training set.