
A Novel Change Detection Method for Natural Disaster Detection and Segmentation from Video Sequence.

Affiliations

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China.

Technology and Engineering Center for Space Utilization, Chinese Academy of Science, Beijing 100094, China.

Publication

Sensors (Basel). 2020 Sep 7;20(18):5076. doi: 10.3390/s20185076.

Abstract

Change detection (CD) is critical for natural disaster detection, monitoring, and evaluation. Video satellites, a new type of satellite launched in recent years, can record motion during natural disasters. This poses a new problem for traditional CD methods, which can only detect areas whose radiometric and geometric information has changed substantially. Optical flow-based methods can track pixel-level motion at high speed; however, it is difficult to determine an optimal threshold for separating the changed from the unchanged areas in CD problems. To overcome these problems, this paper proposes a novel automatic change detection framework: OFATS (optical flow-based adaptive thresholding segmentation). Exploiting the characteristics of optical flow data, a new objective function based on the ratio of the maximum between-class variance to the minimum within-class variance is constructed. The framework has two key steps: motion detection based on optical flow estimation with a deep learning (DL) method, and changed-area segmentation based on adaptive threshold selection. Experiments on two groups of video sequences demonstrate that the proposed method achieves high accuracy, with F1 values of 0.98 and 0.94, respectively.
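The adaptive thresholding step described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: `ofats_threshold` is a hypothetical function name, and it simply performs an exhaustive Otsu-style search over candidate thresholds, scoring each by the ratio of between-class variance to within-class variance of the optical-flow magnitudes, as the abstract's objective function describes.

```python
import numpy as np

def ofats_threshold(magnitudes, n_candidates=256):
    """Pick a threshold over optical-flow magnitudes by maximizing the
    ratio of between-class variance to within-class variance."""
    mags = np.asarray(magnitudes, dtype=np.float64).ravel()
    candidates = np.linspace(mags.min(), mags.max(), n_candidates)[1:-1]
    best_t, best_score = candidates[0], -np.inf
    for t in candidates:
        low, high = mags[mags <= t], mags[mags > t]
        if low.size == 0 or high.size == 0:
            continue
        w0, w1 = low.size / mags.size, high.size / mags.size
        between = w0 * w1 * (low.mean() - high.mean()) ** 2  # between-class variance
        within = w0 * low.var() + w1 * high.var()            # within-class variance
        if within == 0:
            continue
        if between / within > best_score:
            best_score, best_t = between / within, t
    return best_t

# Synthetic flow magnitudes: static background vs. moving (changed) pixels.
rng = np.random.default_rng(0)
bg = rng.normal(0.2, 0.05, 5000)   # unchanged pixels: near-zero flow
fg = rng.normal(2.0, 0.3, 500)     # changed pixels: large flow
mags = np.concatenate([bg, fg])
t = ofats_threshold(mags)
mask = mags > t                    # binary changed-area mask
```

In a full pipeline, `magnitudes` would come from a per-pixel flow field estimated by a deep optical flow network, and `mask` would be the segmented changed area; the search here is brute force, whereas a practical variant could be formulated on a histogram for efficiency.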


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a0fd/7571020/3c8cf45dfac4/sensors-20-05076-g001.jpg
