Suppr 超能文献


AI and augmented reality for 3D Indian dance pose reconstruction cultural revival.

Affiliations

Department of Computer Science and Engineering, Anna University, Guindy Campus, Chennai, 600025, India.

Publication info

Sci Rep. 2024 Apr 4;14(1):7906. doi: 10.1038/s41598-024-58680-w.

DOI: 10.1038/s41598-024-58680-w
PMID: 38575710
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10994917/
Abstract

This paper delves into the specialized domain of human action recognition, focusing on the identification of Indian classical dance poses, specifically Bharatanatyam. Within the dance context, a "Karana" embodies a synchronized and harmonious movement encompassing body, hands, and feet, as defined by the Natyashastra. The essence of Karana lies in the amalgamation of nritta hasta (hand movements), sthaana (body postures), and chaari (leg movements). The Natyashastra codifies 108 karanas, showcased in the intricate stone carvings adorning the Nataraj temples of Chidambaram, where Lord Shiva's association with these movements is depicted. Automating pose identification in Bharatanatyam presents challenges due to the vast array of variations, encompassing hand and body postures, mudras (hand gestures), facial expressions, and head gestures. To simplify this intricate task, this research employs image processing and automation techniques. The proposed methodology comprises four stages: acquisition and pre-processing of images, involving skeletonization and data augmentation techniques; feature extraction from images; classification of dance poses using a deep-learning-based convolutional neural network model (InceptionResNetV2); and visualization of 3D models through mesh creation from point clouds. The use of advanced technologies, such as the MediaPipe library for body key point detection and deep learning networks, streamlines the identification process. Data augmentation, a pivotal step, expands small datasets, enhancing the model's accuracy. The convolutional neural network model showcased its effectiveness in accurately recognizing intricate dance movements, paving the way for streamlined analysis and interpretation. This innovative approach not only simplifies the identification of Bharatanatyam poses but also sets a precedent for enhancing accessibility and efficiency for practitioners and researchers in Indian classical dance.
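The abstract names MediaPipe key point detection and a feature-extraction stage but does not spell out how key points become classifier inputs. As a minimal illustrative sketch — assuming MediaPipe's 33-landmark pose output, where indices 23 and 24 are the left and right hips — one common normalization makes the points translation- and scale-invariant before feeding a classifier; the `pose_features` helper below is a plausible reconstruction, not the paper's actual code:

```python
import numpy as np

def pose_features(landmarks: np.ndarray) -> np.ndarray:
    """Normalize 33 MediaPipe-style (x, y) key points into a
    translation- and scale-invariant 66-dimensional feature vector."""
    hip_center = landmarks[[23, 24]].mean(axis=0)   # pelvis as the origin
    centered = landmarks - hip_center               # remove translation
    scale = np.linalg.norm(centered, axis=1).max()  # overall span of the pose
    return (centered / scale).ravel()               # flatten for a classifier

# Toy input: 33 synthetic (x, y) points, just to show the shapes involved.
rng = np.random.default_rng(0)
pts = rng.random((33, 2))
feats = pose_features(pts)
print(feats.shape)  # (66,)
```

A vector like this (or the skeletonized image itself, as the paper does) can then be passed to a classifier such as InceptionResNetV2 for karana recognition.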


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/0d20c841fc9d/41598_2024_58680_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/3811b54cb22b/41598_2024_58680_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/8cd523eaa89f/41598_2024_58680_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/4c11cd945926/41598_2024_58680_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/6dee1712fb9b/41598_2024_58680_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/3daaa5f80afe/41598_2024_58680_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/fbcf9dca75c1/41598_2024_58680_Figb_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/5d467d68247b/41598_2024_58680_Figc_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/87111930bbd1/41598_2024_58680_Figd_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/2f3007202e17/41598_2024_58680_Fige_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/053f57235223/41598_2024_58680_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/4d4643a11315/41598_2024_58680_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/a481c891660b/41598_2024_58680_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/1dd9cd8af02e/41598_2024_58680_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/97aa44609251/41598_2024_58680_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1bf7/10994917/1db678d1d425/41598_2024_58680_Fig10_HTML.jpg

Similar articles

1. AI and augmented reality for 3D Indian dance pose reconstruction cultural revival.
Sci Rep. 2024 Apr 4;14(1):7906. doi: 10.1038/s41598-024-58680-w.
2. Dance Action Recognition Model Using Deep Learning Network in Streaming Media Environment.
J Environ Public Health. 2022 Sep 12;2022:8955326. doi: 10.1155/2022/8955326. eCollection 2022.
3. Pose Estimation-Assisted Dance Tracking System Based on Convolutional Neural Network.
Comput Intell Neurosci. 2022 Jun 3;2022:2301395. doi: 10.1155/2022/2301395. eCollection 2022.
4. Image analysis and teaching strategy optimization of folk dance training based on the deep neural network.
Sci Rep. 2024 May 13;14(1):10909. doi: 10.1038/s41598-024-61134-y.
5. Analysis of Main Movement Characteristics of Hip Hop Dance Based on Deep Learning of Dance Movements.
Comput Intell Neurosci. 2022 May 23;2022:6794018. doi: 10.1155/2022/6794018. eCollection 2022.
6. The Use of Hand Gestures (Hastas) in Bharatanatyam for Creative Aging.
J Med Humanit. 2025 Jun;46(2):235-242. doi: 10.1007/s10912-024-09861-1. Epub 2024 Jun 24.
7. A dataset of Sattriya dance: Classical dance of Assam.
Data Brief. 2023 Dec 9;52:109878. doi: 10.1016/j.dib.2023.109878. eCollection 2024 Feb.
8. A Deep Learning-Based End-to-End Composite System for Hand Detection and Gesture Recognition.
Sensors (Basel). 2019 Nov 30;19(23):5282. doi: 10.3390/s19235282.
9. The Fusion Application of Deep Learning Biological Image Visualization Technology and Human-Computer Interaction Intelligent Robot in Dance Movements.
Comput Intell Neurosci. 2022 Sep 20;2022:2538896. doi: 10.1155/2022/2538896. eCollection 2022.
10. Dance-Specific Action Recognition Method Based on Double-Stream CNN in Complex Environment.
J Environ Public Health. 2022 Aug 30;2022:9327277. doi: 10.1155/2022/9327277. eCollection 2022.
