Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification

Authors

Feng Jiangfan, Liu Yuanyuan, Wu Lin

Affiliation

College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.

Publication

Comput Intell Neurosci. 2017;2017:5169675. doi: 10.1155/2017/5169675. Epub 2017 Jun 19.

DOI: 10.1155/2017/5169675
PMID: 28706534
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5494773/
Abstract

With the popular use of geotagged images, more and more research effort has been devoted to geographical scene classification, where effective spatial feature selection can significantly boost final performance. The bag-of-visual-words (BoVW) model performs well at feature selection in geographical scene classification; however, it works effectively only if the provided feature extractor is well matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor so that it can learn more suitable visual vocabularies from geotagged images. Our approach outperforms standard BoVW for geographical scene classification on each of three datasets containing a variety of scene categories.
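The BoVW pipeline the abstract refers to can be sketched as four steps: extract local descriptors, cluster them into a visual vocabulary, encode each image as a histogram of visual-word occurrences, and train a classifier on the histograms. The sketch below uses flattened pixel patches as a stand-in for the CNN-derived descriptors the paper proposes, and synthetic two-class "images"; all names, patch sizes, and the vocabulary size are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def extract_patch_features(image, patch=8, stride=8):
    """Flattened patches as local descriptors.
    In the paper's setting these would be CNN activations instead."""
    feats = []
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            feats.append(image[y:y + patch, x:x + patch].ravel())
    return np.array(feats)

# Synthetic "images": two classes with different intensity statistics.
images = [rng.normal(c, 1.0, (32, 32)) for c in (0, 3) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

# 1) Pool local descriptors from all training images.
all_desc = np.vstack([extract_patch_features(im) for im in images])

# 2) Learn a visual vocabulary by k-means clustering of descriptors.
k = 16
vocab = KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)

# 3) Encode each image as a normalized histogram over the k visual words.
def bovw_histogram(image):
    words = vocab.predict(extract_patch_features(image))
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.vstack([bovw_histogram(im) for im in images])

# 4) Train a linear classifier on the histogram representations.
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The paper's contribution sits in step 1: replacing a hand-crafted extractor with CNN features so the vocabulary in step 2 fits the geotagged imagery better; steps 2 to 4 are the standard BoVW machinery.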


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/dce6a6921b8b/CIN2017-5169675.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/dffdf3dbf77a/CIN2017-5169675.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/b85a2d80fec4/CIN2017-5169675.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/d4b1c0b1e0f6/CIN2017-5169675.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/e63dd8d18cae/CIN2017-5169675.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/d11629efe99d/CIN2017-5169675.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/3e90a5c99d23/CIN2017-5169675.007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/1a13359d44b9/CIN2017-5169675.008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/4e81f76322dd/CIN2017-5169675.009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/fc0fa17ba06a/CIN2017-5169675.010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/b6abe109c3ec/CIN2017-5169675.011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/bef694f2aba4/CIN2017-5169675.012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/8848f767d2b6/CIN2017-5169675.013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a2d/5494773/4f3cd992e136/CIN2017-5169675.014.jpg

Similar articles

1
Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification.
Comput Intell Neurosci. 2017;2017:5169675. doi: 10.1155/2017/5169675. Epub 2017 Jun 19.
2
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Comput Intell Neurosci. 2018 Jan 18;2018:8639367. doi: 10.1155/2018/8639367. eCollection 2018.
3
Modeling global geometric spatial information for rotation invariant classification of satellite images.
PLoS One. 2019 Jul 19;14(7):e0219833. doi: 10.1371/journal.pone.0219833. eCollection 2019.
4
Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features.
Comput Intell Neurosci. 2017;2017:3792805. doi: 10.1155/2017/3792805. Epub 2017 Feb 16.
5
Pooling region learning of visual word for image classification using bag-of-visual-words model.
PLoS One. 2020 Jun 5;15(6):e0234144. doi: 10.1371/journal.pone.0234144. eCollection 2020.
6
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1904-16. doi: 10.1109/TPAMI.2015.2389824.
7
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
IEEE Trans Biomed Eng. 2017 Jun;64(6):1380-1392. doi: 10.1109/TBME.2016.2605627. Epub 2016 Sep 1.
8
Deep Attention-Based Spatially Recursive Networks for Fine-Grained Visual Recognition.
IEEE Trans Cybern. 2019 May;49(5):1791-1802. doi: 10.1109/TCYB.2018.2813971. Epub 2018 Mar 22.
9
A Hybrid Geometric Spatial Image Representation for scene classification.
PLoS One. 2018 Sep 12;13(9):e0203339. doi: 10.1371/journal.pone.0203339. eCollection 2018.
10
Texture-specific bag of visual words model and spatial cone matching-based method for the retrieval of focal liver lesions using multiphase contrast-enhanced CT images.
Int J Comput Assist Radiol Surg. 2018 Jan;13(1):151-164. doi: 10.1007/s11548-017-1671-9. Epub 2017 Nov 5.
