Center for Public Security Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
Institute of Public Security, Kashi Institute of Electronics and Information Industry, Kashi 844000, China.
Sensors (Basel). 2024 Sep 11;24(18):5889. doi: 10.3390/s24185889.
Social sensing, which uses humans as sensors to collect disaster data, has emerged as a timely, cost-effective, and reliable data source. However, most research to date has focused on textual data. With advances in information technology, multimodal data such as images and videos are now widely shared on social media platforms, enabling deeper analysis of social sensing systems. This study proposes an analytical framework for extracting disaster-related spatiotemporal information from multimodal social media data. Using a pre-trained multimodal neural network and a location entity recognition model, the framework integrates disaster semantics with spatiotemporal information, enhancing situational awareness. A case study of the April 2024 heavy rain event in Guangdong, China, based on Weibo data, demonstrates that multimodal content correlates more strongly with rainfall patterns than textual data alone, offering a dynamic perception of disasters. These findings confirm the utility of multimodal social media data and lay a foundation for future research. The proposed framework supports emergency response, disaster relief, risk assessment, and witness discovery, and offers a viable approach for safety risk monitoring and early warning systems.
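To make the two-stage pipeline concrete, the sketch below illustrates one plausible realization: a pre-trained multimodal model (CLIP here, used as an illustrative stand-in for the paper's multimodal network) scores how strongly a post's image matches disaster-related text prompts, and a token-classification model extracts location entities from the post text. The model checkpoints, prompts, and entity labels are assumptions for illustration, not the authors' reported configuration.

```python
# Hedged sketch, assuming a CLIP-style multimodal encoder and a Chinese NER
# checkpoint; neither is confirmed as the paper's actual model choice.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, pipeline

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def disaster_relevance(image_path: str) -> float:
    """Score an image's disaster relevance as P(flood prompt) vs. an
    unrelated prompt. The prompts are illustrative assumptions."""
    prompts = ["a photo of urban flooding after heavy rain",
               "an ordinary photo unrelated to any disaster"]
    image = Image.open(image_path)
    inputs = proc(text=prompts, images=image, return_tensors="pt", padding=True)
    # logits_per_image: similarity of the image to each text prompt
    probs = clip(**inputs).logits_per_image.softmax(dim=1)
    return probs[0, 0].item()

# Location entity recognition; the checkpoint name is a placeholder for any
# Chinese NER model that emits location-type tags (e.g. LOC/GPE).
ner = pipeline("ner", model="ckiplab/bert-base-chinese-ner",
               aggregation_strategy="simple")

def extract_locations(text: str) -> list[str]:
    """Keep entities whose tag marks a place, to be geocoded downstream."""
    return [e["word"] for e in ner(text)
            if e["entity_group"] in ("LOC", "GPE")]
```

In a framework like the one described, posts whose images score above a relevance threshold would be retained, and the extracted location entities geocoded and joined with post timestamps to build the spatiotemporal disaster signal compared against rainfall records.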