Department of Network Engineering and Security, Jordan University of Science and Technology, Irbid 22110, Jordan.
Sensors (Basel). 2022 Jul 22;22(15):5467. doi: 10.3390/s22155467.
Visual crowdsensing applications using built-in cameras in smartphones have recently attracted researchers' interest. In disaster recovery applications, making the most of limited resources to acquire the most helpful images from the public is a challenge. Proposed solutions should adequately address several constraints, including limited bandwidth, limited energy resources, and interrupted communication links with the command center or server. Furthermore, data redundancy is considered one of the main challenges in visual crowdsensing. In distributed visual crowdsensing systems, photo sharing replicates data across nodes and increases the amount stored on each sensor node. As a result, if even one node can communicate with the server, more photos of the target region become available to the server. Methods for recognizing and removing redundant data provide a range of benefits, including decreased transmission costs and overall energy consumption. To handle interrupted communication with the server and the restricted resources of the sensor nodes, this paper proposes a distributed visual crowdsensing system for full-view area coverage. The target area is divided into virtual sub-regions, each of which is represented by a set of boundary points of interest. Then, based on the criteria for full-view area coverage, a specific data structure scheme is developed to represent each photo with a set of features. These features are extracted from each photo's geometric context parameters according to the full-view area coverage criteria. Finally, data redundancy removal algorithms are implemented based on the proposed clustering scheme to eliminate duplicate photos. As a result, each sensor node can filter redundant photos in distributed environments without requiring high computational complexity, extensive resources, or global awareness of all photos from all sensor nodes inside the target area.
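To make the scheme concrete, the following is a minimal sketch of the idea described above, not the paper's actual implementation: each photo carries geometric context parameters (camera position, shooting azimuth, field of view, sensing range, all hypothetical field names here), a coverage test checks whether a boundary point of interest is full-view covered from a given facing direction (within an assumed effective angle theta), and a local filter drops photos whose coverage feature vectors duplicate one already kept.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Photo:
    """Geometric context of one crowdsensed photo (hypothetical fields)."""
    cam_x: float    # camera position (metres)
    cam_y: float
    azimuth: float  # shooting direction (radians)
    fov: float      # angular field of view (radians)
    rng: float      # effective sensing range (metres)

def _ang_diff(a: float, b: float) -> float:
    """Smallest absolute angular difference between two angles."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def covers(photo: Photo, px: float, py: float, facing: float,
           theta: float = math.pi / 3) -> bool:
    """True if `photo` captures point (px, py) from a direction within
    the effective angle `theta` of the point's facing direction."""
    dx, dy = px - photo.cam_x, py - photo.cam_y
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > photo.rng:
        return False
    bearing = math.atan2(dy, dx)  # direction from camera to the point
    in_fov = _ang_diff(bearing, photo.azimuth) <= photo.fov / 2
    # The viewing direction at the point is the reverse bearing;
    # full-view coverage needs it within theta of the facing direction.
    good_angle = _ang_diff(bearing + math.pi, facing) <= theta
    return in_fov and good_angle

def feature_vector(photo, points, n_dirs=8, theta=math.pi / 3):
    """Bit-vector over (boundary point, quantised facing direction)
    pairs: which coverage requirements does this photo satisfy?"""
    return tuple(
        covers(photo, px, py, 2 * math.pi * k / n_dirs, theta)
        for (px, py) in points for k in range(n_dirs)
    )

def drop_redundant(photos, points):
    """Keep one photo per distinct, non-empty coverage feature vector.
    Runs locally on each node; no global view of other nodes' photos."""
    seen, kept = set(), []
    for p in photos:
        f = feature_vector(p, points)
        if any(f) and f not in seen:
            seen.add(f)
            kept.append(p)
    return kept
```

Grouping photos by identical feature vectors is a stand-in for the paper's clustering scheme; the point it illustrates is that redundancy can be decided from compact per-photo features rather than from image content or global knowledge.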
Compared to the most recent state of the art, the improvement ratio in the added value of the photos provided by the proposed method is more than 38%. In terms of traffic, the proposed method requires less data to be transferred between sensor nodes and between sensor nodes and the command center: the overall reduction in traffic exceeds 20%, and the overall savings in energy consumption exceed 25%. In the proposed system, sending photos between sensor nodes, as well as between sensor nodes and the command center, consumes less energy than in existing approaches, which require a considerable amount of photo exchange. Thus, the proposed technique effectively transfers only the most valuable photos needed.