
Film Effect Optimization by Deep Learning and Virtual Reality Technology in New Media Environment.

Affiliations

Department of Directing, Qingdao Film Academy, Qingdao City 266000, China.

Department of Media, Arts and Humanities, University of Sussex, Brighton BN1 9RH, UK.

Publication

Comput Intell Neurosci. 2022 May 20;2022:8918073. doi: 10.1155/2022/8918073. eCollection 2022.

DOI: 10.1155/2022/8918073
PMID: 35634038
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9142310/
Abstract

Today, new media technology has widely penetrated art forms such as film and television, which has changed the way of visual expression in the new media environment. To better solve the problems of weak immersion, poor interaction, and low degree of simulation, the present work uses deep learning technology and virtual reality (VR) technology to optimize the film playing effect. Firstly, the optimized extremum median filter algorithm is used to optimize the "burr" phenomenon and a low compression ratio of the single video image. Secondly, the Generative Adversarial Network (GAN) in deep learning technology is used to enhance the data of the single video image. Finally, the decision tree algorithm and hierarchical clustering algorithm are used for the color enhancement of VR images. The experimental results show that the contrast of a single-frame image optimized by this system is 4.21, the entropy is 8.66, and the noise ratio is 145.1, which shows that this method can effectively adjust the contrast parameters to prevent the loss of details and reduce the dazzling intensity. The quality and diversity of the specific types of images generated by the proposed GAN are improved compared with the current mainstream GAN method with supervision, which is in line with the subjective evaluation results of human beings. The Frechet Inception Distance value is also significantly improved compared with Self-Attention Generative Adversarial Network. It shows that the sample generated by the proposed method has precise details and rich texture features. The proposed scheme provides a reference for optimizing the interactivity, immersion, and simulation of VR film.
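The abstract's first step removes the "burr" (impulse noise) phenomenon with an optimized extremum median filter. The paper's exact optimization is not described in the abstract; a common baseline reading of "extremum median filter" — replace only pixels that are local window extrema with the window median, leaving all other pixels untouched so detail is preserved — can be sketched as follows (illustrative, not the authors' implementation):

```python
import numpy as np

def extremum_median_filter(img, size=3):
    """Detail-preserving impulse-noise filter.

    Only pixels equal to the minimum or maximum of their local window
    are replaced with the window median; all other pixels pass through
    unchanged. This is one standard reading of an 'extremum median
    filter' -- the paper's specific optimization may differ.
    """
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + size, j:j + size]
            v = img[i, j]
            if v == win.min() or v == win.max():  # likely impulse pixel
                out[i, j] = np.median(win)
    return out
```

On a frame with isolated salt-and-pepper spikes, this suppresses the spikes while leaving smooth gradients intact, which matches the abstract's claim of reducing noise without losing detail.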

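The abstract evaluates generated images with the Fréchet Inception Distance (FID), the Fréchet distance between Gaussian fits of two feature sets. As a reference for the metric only (real FID uses Inception-v3 activations, not raw features, and this is not the authors' code), the distance can be sketched as:

```python
import numpy as np

def frechet_distance(feat_a, feat_b):
    """Fréchet distance between Gaussian fits of two feature sets.

    d^2 = ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2 (Sa Sb)^(1/2)).
    Tr((Sa Sb)^(1/2)) is computed as the sum of square roots of the
    (real, non-negative) eigenvalues of Sa @ Sb, avoiding an explicit
    matrix square root.
    """
    feat_a = np.asarray(feat_a, dtype=float)
    feat_b = np.asarray(feat_b, dtype=float)
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    sa = np.cov(feat_a, rowvar=False)
    sb = np.cov(feat_b, rowvar=False)
    diff = mu_a - mu_b
    eigvals = np.linalg.eigvals(sa @ sb)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sa) + np.trace(sb) - 2.0 * tr_sqrt)
```

Identical feature distributions give a distance near zero; a mean shift or covariance mismatch increases it, which is why a lower FID indicates generated samples closer to real ones.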

Figure images (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/8ac36c1ad08f/CIN2022-8918073.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/ea2f4765c945/CIN2022-8918073.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/d15e04567c1d/CIN2022-8918073.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/fd0a3735fd1c/CIN2022-8918073.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/4862b2b0cd49/CIN2022-8918073.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/70d02977a13a/CIN2022-8918073.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/88e84614c04b/CIN2022-8918073.007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/509edf911c76/CIN2022-8918073.008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/75f9b696d844/CIN2022-8918073.009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/0a25a7329558/CIN2022-8918073.010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/96919bf936e2/CIN2022-8918073.011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/b7ce6c5f0654/CIN2022-8918073.012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/4ff5d1133a15/CIN2022-8918073.013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d836/9142310/83eff0d5b527/CIN2022-8918073.014.jpg

Similar articles

1
Film Effect Optimization by Deep Learning and Virtual Reality Technology in New Media Environment.
Comput Intell Neurosci. 2022 May 20;2022:8918073. doi: 10.1155/2022/8918073. eCollection 2022.
2
Learning ultrasound rendering from cross-sectional model slices for simulated training.
Int J Comput Assist Radiol Surg. 2021 May;16(5):721-730. doi: 10.1007/s11548-021-02349-6. Epub 2021 Apr 8.
3
Improving CBCT quality to CT level using deep learning with generative adversarial network.
Med Phys. 2021 Jun;48(6):2816-2826. doi: 10.1002/mp.14624. Epub 2021 May 14.
4
Unsupervised arterial spin labeling image superresolution via multiscale generative adversarial network.
Med Phys. 2022 Apr;49(4):2373-2385. doi: 10.1002/mp.15468. Epub 2022 Mar 7.
5
Artifact correction in low-dose dental CT imaging using Wasserstein generative adversarial networks.
Med Phys. 2019 Apr;46(4):1686-1696. doi: 10.1002/mp.13415. Epub 2019 Feb 14.
6
Deep unsupervised endoscopic image enhancement based on multi-image fusion.
Comput Methods Programs Biomed. 2022 Jun;221:106800. doi: 10.1016/j.cmpb.2022.106800. Epub 2022 Apr 26.
7
Generative Adversarial Network for Medical Images (MI-GAN).
J Med Syst. 2018 Oct 12;42(11):231. doi: 10.1007/s10916-018-1072-9.
8
Texture Image Compression Algorithm Based on Self-Organizing Neural Network.
Comput Intell Neurosci. 2022 Apr 10;2022:4865808. doi: 10.1155/2022/4865808. eCollection 2022.
9
Image denoising by transfer learning of generative adversarial network for dental CT.
Biomed Phys Eng Express. 2020 Sep 8;6(5):055024. doi: 10.1088/2057-1976/abb068.
10
Image Motion Deblurring Based on Deep Residual Shrinkage and Generative Adversarial Networks.
Comput Intell Neurosci. 2022 Jan 21;2022:5605846. doi: 10.1155/2022/5605846. eCollection 2022.
