Ben Tamou Abdelouahid, Benzinou Abdesslam, Nasreddine Kamal
ENIB, UMR CNRS 6285 LabSTICC, 29238 Brest, France.
Univ Brest, UMR CNRS 6285 LabSTICC, 29238 Brest, France.
J Imaging. 2022 Aug 1;8(8):214. doi: 10.3390/jimaging8080214.
In this paper, we address fish species identification in underwater video for marine monitoring applications such as the study of marine biodiversity. Video is the least disruptive monitoring method for fish but requires efficient image processing and analysis techniques to cope with challenging underwater environments. We propose two deep Convolutional Neural Network (CNN) approaches for fish species classification in unconstrained underwater environments. In the first approach, we use a traditional transfer learning framework and investigate a new technique based on training/validation loss curves for targeted data augmentation. In the second approach, we propose a hierarchical CNN classification that first classifies fish at the family level and then at the species level. To demonstrate the effectiveness of the proposed approaches, experiments are carried out on two benchmark datasets for automatic fish identification in unconstrained underwater environments. The proposed approaches yield accuracies of 99.86% and 81.53% on the Fish Recognition Ground-Truth dataset and the LifeCLEF 2015 Fish dataset, respectively.
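To make the hierarchical family-then-species idea concrete, the following is a minimal sketch, not the authors' implementation: it combines a pretrained backbone (transfer learning) with a family-level head and one species-level head per family, routing each image to the species head of its predicted family. The backbone choice (ResNet-50), the class counts, and all other details are illustrative assumptions.

```python
# Minimal hierarchical classifier sketch (illustrative only, not the paper's architecture).
import torch
import torch.nn as nn
from torchvision import models


class HierarchicalFishClassifier(nn.Module):
    def __init__(self, num_families, species_per_family):
        super().__init__()
        # Pretrained backbone reused as a shared feature extractor (transfer learning).
        # Downloads ImageNet weights on first use; pass weights=None for an untrained backbone.
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Stage 1: family-level classifier.
        self.family_head = nn.Linear(feat_dim, num_families)
        # Stage 2: one species-level classifier per family.
        self.species_heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_species) for n_species in species_per_family]
        )

    def forward(self, x):
        feats = self.backbone(x)
        family_logits = self.family_head(feats)
        # Route each image to the species head of its predicted family.
        family_pred = family_logits.argmax(dim=1)
        species_logits = [
            self.species_heads[f](feats[i].unsqueeze(0))
            for i, f in enumerate(family_pred.tolist())
        ]
        return family_logits, species_logits


if __name__ == "__main__":
    # Dummy example: 3 families containing 4, 2 and 5 species respectively.
    model = HierarchicalFishClassifier(num_families=3, species_per_family=[4, 2, 5])
    images = torch.randn(2, 3, 224, 224)
    family_logits, species_logits = model(images)
    print(family_logits.shape)              # torch.Size([2, 3])
    print([s.shape for s in species_logits])
```

In such a two-stage design, species predictions depend on the family decision, so family-level errors propagate; the appeal is that each species head only has to separate visually similar classes within one family.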