

A Deep Dive of Autoencoder Models on Low-Contrast Aquatic Images.

Affiliation

Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 23741, Taiwan.

Publication

Sensors (Basel). 2021 Jul 21;21(15):4966. doi: 10.3390/s21154966.

DOI: 10.3390/s21154966
PMID: 34372202
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8347128/
Abstract

Public aquariums and similar institutions often use video as a method to monitor the behavior, health, and status of aquatic organisms in their environments. These video footages take up a sizeable amount of space and require the use of autoencoders to reduce their file size for efficient storage. The autoencoder neural network is an emerging technique which uses the extracted latent space from an input source to reduce the image size for storage, and then reconstructs the source within an acceptable loss range for use. To meet an aquarium's practical needs, the autoencoder must have easily maintainable codes, low power consumption, be easily adoptable, and not require a substantial amount of memory use or processing power. Conventional configurations of autoencoders often provide results that perform beyond an aquarium's needs at the cost of being too complex for their architecture to handle, while few take low-contrast sources into consideration. Thus, in this instance, "keeping it simple" would be the ideal approach to the autoencoder's model design. This paper proposes a practical approach catered to an aquarium's specific needs through the configuration of autoencoder parameters. It first explores the differences between the two of the most widely applied autoencoder approaches, Multilayer Perceptron (MLP) and Convolution Neural Networks (CNN), to identify the most appropriate approach. The paper concludes that while both approaches (with proper configurations and image preprocessing) can reduce the dimensionality and reduce visual noise of the low-contrast images gathered from aquatic video footage, the CNN approach is more suitable for an aquarium's architecture. As an unexpected finding of the experiments conducted, the paper also discovered that by manipulating the formula for the MLP approach, the autoencoder could generate a denoised differential image that contains sharper and more desirable visual information to an aquarium's operation. 
Lastly, the paper has found that proper image preprocessing prior to the application of the autoencoder led to better model convergence and prediction results, as demonstrated both visually and numerically in the experiment. The paper concludes that by combining the denoising effect of MLP, CNN's ability to manage memory consumption, and proper image preprocessing, the specific practical needs of an aquarium can be adeptly fulfilled.
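The MLP-style pipeline the abstract describes, including the "differential image" obtained by subtracting the reconstruction from the source, can be sketched as follows. This is a minimal, untrained NumPy illustration of the structure only: the layer sizes, random weights, and sigmoid activations are assumptions for demonstration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-contrast grayscale frame (pixel values clustered in a narrow
# range), flattened for an MLP-style autoencoder. Sizes are illustrative.
frame = rng.uniform(0.4, 0.6, size=(32, 32))
x = frame.reshape(-1)          # 1024-dim input vector
latent_dim = 64                # bottleneck much smaller than the input

# Random (untrained) encoder/decoder weights -- a structural sketch only;
# a real model would learn these by minimizing reconstruction loss.
W_enc = rng.normal(0, 0.05, size=(latent_dim, x.size))
W_dec = rng.normal(0, 0.05, size=(x.size, latent_dim))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = sigmoid(W_enc @ x)         # encode: compress into the latent space
x_hat = sigmoid(W_dec @ z)     # decode: reconstruct within some loss

# The residual the paper reports as an unexpected finding: subtracting the
# reconstruction from the source yields a differential image.
diff = (x - x_hat).reshape(frame.shape)

print(z.shape, x_hat.shape, diff.shape)
```

The latent vector `z` is what would be stored in place of the full frame; only the compression-and-residual structure is shown here, since a trained model and real aquarium footage would be needed to reproduce the paper's denoising results.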


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/efc27c978874/sensors-21-04966-g011a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/624d18d734f7/sensors-21-04966-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/a8cba873b492/sensors-21-04966-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/de1248435f10/sensors-21-04966-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/e4b519f52fd1/sensors-21-04966-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/7ea0137baf7e/sensors-21-04966-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/d62c4b452f3d/sensors-21-04966-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/cec24d366844/sensors-21-04966-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/01e572574d61/sensors-21-04966-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/0412c7b9a09b/sensors-21-04966-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a252/8347128/35ef4b5e0550/sensors-21-04966-g010.jpg

Similar articles

1. A Deep Dive of Autoencoder Models on Low-Contrast Aquatic Images. Sensors (Basel). 2021 Jul 21;21(15):4966. doi: 10.3390/s21154966.
2. Dual Autoencoder Network with Separable Convolutional Layers for Denoising and Deblurring Images. J Imaging. 2022 Sep 13;8(9):250. doi: 10.3390/jimaging8090250.
3. ASSAF: Advanced and Slim StegAnalysis Detection Framework for JPEG images based on deep convolutional denoising autoencoder and Siamese networks. Neural Netw. 2020 Nov;131:64-77. doi: 10.1016/j.neunet.2020.07.022. Epub 2020 Jul 29.
4. A Convolutional Autoencoder Topology for Classification in High-Dimensional Noisy Image Datasets. Sensors (Basel). 2021 Nov 20;21(22):7731. doi: 10.3390/s21227731.
5. Enhancing the Breast Histopathology Image Analysis for Cancer Detection Using Variational Autoencoder. Int J Environ Res Public Health. 2023 Feb 27;20(5):4244. doi: 10.3390/ijerph20054244.
6. SACNN: Self-Attention Convolutional Neural Network for Low-Dose CT Denoising With Self-Supervised Perceptual Loss Network. IEEE Trans Med Imaging. 2020 Jul;39(7):2289-2301. doi: 10.1109/TMI.2020.2968472. Epub 2020 Jan 21.
7. MAN-C: A masked autoencoder neural cryptography based encryption scheme for CT scan images. MethodsX. 2024 Apr 28;12:102738. doi: 10.1016/j.mex.2024.102738. eCollection 2024 Jun.
8. Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks. Int J Comput Assist Radiol Surg. 2018 Dec;13(12):1905-1913. doi: 10.1007/s11548-018-1835-2. Epub 2018 Aug 29.
9. GenCoder: A Novel Convolutional Neural Network Based Autoencoder for Genomic Sequence Data Compression. IEEE/ACM Trans Comput Biol Bioinform. 2024 May-Jun;21(3):405-415. doi: 10.1109/TCBB.2024.3366240. Epub 2024 Jun 5.
10. Incorporation of residual attention modules into two neural networks for low-dose CT denoising. Med Phys. 2021 Jun;48(6):2973-2990. doi: 10.1002/mp.14856. Epub 2021 Apr 23.

Cited by

1. Yoga Meets Intelligent Internet of Things: Recent Challenges and Future Directions. Bioengineering (Basel). 2023 Apr 9;10(4):459. doi: 10.3390/bioengineering10040459.