

DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila.

Affiliations

Computer Vision Laboratory, School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland.

Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland.

Publication Information

Elife. 2019 Oct 4;8:e48571. doi: 10.7554/eLife.48571.

DOI: 10.7554/eLife.48571
PMID: 31584428
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6828327/
Abstract

Studying how neural circuits orchestrate limbed behaviors requires the precise measurement of the positions of each appendage in three-dimensional (3D) space. Deep neural networks can estimate two-dimensional (2D) pose in freely behaving and tethered animals. However, the unique challenges associated with transforming these 2D measurements into reliable and precise 3D poses have not been addressed for small animals including the fly, Drosophila melanogaster. Here, we present DeepFly3D, a software that infers the 3D pose of tethered, adult Drosophila using multiple camera images. DeepFly3D does not require manual calibration, uses pictorial structures to automatically detect and correct pose estimation errors, and uses active learning to iteratively improve performance. We demonstrate more accurate unsupervised behavioral embedding using 3D joint angles rather than commonly used 2D pose data. Thus, DeepFly3D enables the automated acquisition of behavioral measurements at an unprecedented level of detail for a variety of biological applications.
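The core step the abstract describes — combining 2D keypoint detections from several calibrated cameras into a 3D pose — can be illustrated with a minimal linear triangulation (direct linear transform) sketch. This is a generic illustration of multi-view triangulation, not DeepFly3D's actual code; the camera matrices below are hypothetical.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear triangulation (DLT): recover one 3D point from its
    2D projections in several calibrated cameras.

    proj_mats: list of 3x4 camera projection matrices
    points_2d: list of (x, y) pixel coordinates, one per camera
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: x*(P[2]@X) - P[0]@X = 0, and
        # y*(P[2]@X) - P[1]@X = 0.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # The least-squares solution is the right singular vector of A
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical two-camera setup: identity intrinsics, second camera
# shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.array([[1.0, 0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0,  0.0],
               [0.0, 0.0, 1.0,  0.0]])
X_true = np.array([0.5, 0.2, 3.0])
pts = [(P @ np.append(X_true, 1.0))[:2] / (P @ np.append(X_true, 1.0))[2]
       for P in (P1, P2)]
X_rec = triangulate_point([P1, P2], pts)
```

With noise-free projections the reconstruction matches the true point to machine precision; with noisy detections the SVD gives the least-squares compromise across views, which is why additional error detection (e.g. the pictorial structures the paper mentions) is useful on top of plain triangulation.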

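The abstract reports that behavioral embeddings built from 3D joint angles outperform those built from 2D pose. A joint angle is simply the angle between the two limb-segment vectors meeting at a shared keypoint; a minimal sketch (the keypoint triplet is illustrative, not DeepFly3D's labeling):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3D keypoints a-b-c,
    e.g. the angle between two adjacent leg segments."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.arccos(np.clip(cos, -1.0, 1.0))

right_angle = joint_angle([1, 0, 0], [0, 0, 0], [0, 1, 0])   # π/2
straight    = joint_angle([1, 0, 0], [0, 0, 0], [-1, 0, 0])  # π
```

Unlike raw 3D coordinates, such angles are invariant to the animal's position and orientation, which is one plausible reason they embed behavior more cleanly.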

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/1135bb55619e/elife-48571-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/37eb1e8d26e2/elife-48571-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/ebaa30986edb/elife-48571-fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/b20de7344f67/elife-48571-fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/aa5fdbae93b0/elife-48571-fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/41e5d15bac61/elife-48571-fig6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/83a093591239/elife-48571-fig7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/c42ad32eb2b3/elife-48571-fig8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/1da7be2edcdb/elife-48571-fig9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/3fe5f93f31ec/elife-48571-fig10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/2e2d4f86e258/elife-48571-fig11.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/1648ccea234c/elife-48571-fig12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/dce8581c40d6/elife-48571-fig13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/ea0807ff06d1/elife-48571-resp-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/23e611e48f26/elife-48571-resp-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f329/6828327/7bcab5c01921/elife-48571-resp-fig3.jpg

Similar Articles

1
DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila.
Elife. 2019 Oct 4;8:e48571. doi: 10.7554/eLife.48571.
2
Anipose: A toolkit for robust markerless 3D pose estimation.
Cell Rep. 2021 Sep 28;36(13):109730. doi: 10.1016/j.celrep.2021.109730.
3
LiftPose3D, a deep learning-based approach for transforming two-dimensional to three-dimensional poses in laboratory animals.
Nat Methods. 2021 Aug;18(8):975-981. doi: 10.1038/s41592-021-01226-z. Epub 2021 Aug 5.
4
A deep learning approach for pose estimation from volumetric OCT data.
Med Image Anal. 2018 May;46:162-179. doi: 10.1016/j.media.2018.03.002. Epub 2018 Mar 10.
5
3D Human Pose Machines with Self-Supervised Learning.
IEEE Trans Pattern Anal Mach Intell. 2020 May;42(5):1069-1082. doi: 10.1109/TPAMI.2019.2892452. Epub 2019 Jan 14.
6
FicTrac: a visual method for tracking spherical motion and generating fictive animal paths.
J Neurosci Methods. 2014 Mar 30;225:106-19. doi: 10.1016/j.jneumeth.2014.01.010. Epub 2014 Feb 1.
7
LHPE-nets: A lightweight 2D and 3D human pose estimation model with well-structural deep networks and multi-view pose sample simplification method.
PLoS One. 2022 Feb 23;17(2):e0264302. doi: 10.1371/journal.pone.0264302. eCollection 2022.
8
Fast animal pose estimation using deep neural networks.
Nat Methods. 2019 Jan;16(1):117-125. doi: 10.1038/s41592-018-0234-5. Epub 2018 Dec 20.
9
Human Joint Angle Estimation Using Deep Learning-Based Three-Dimensional Human Pose Estimation for Application in a Real Environment.
Sensors (Basel). 2024 Jun 13;24(12):3823. doi: 10.3390/s24123823.
10
FlyLimbTracker: An active contour based approach for leg segment tracking in unmarked, freely behaving Drosophila.
PLoS One. 2017 Apr 28;12(4):e0173433. doi: 10.1371/journal.pone.0173433. eCollection 2017.

Cited By

1
A parametric finite element model of leg campaniform sensilla in Drosophila to study campaniform sensilla location and arrangement.
J R Soc Interface. 2025 May;22(226):20240559. doi: 10.1098/rsif.2024.0559. Epub 2025 May 7.
2
FlyVISTA, an integrated machine learning platform for deep phenotyping of sleep in Drosophila.
Sci Adv. 2025 Mar 14;11(11):eadq8131. doi: 10.1126/sciadv.adq8131. Epub 2025 Mar 12.
3
Mapping the landscape of social behavior.
Cell. 2025 Apr 17;188(8):2249-2266.e23. doi: 10.1016/j.cell.2025.01.044. Epub 2025 Mar 4.
4
A real-time, multi-subject three-dimensional pose tracking system for the behavioral analysis of non-human primates.
Cell Rep Methods. 2025 Feb 24;5(2):100986. doi: 10.1016/j.crmeth.2025.100986. Epub 2025 Feb 17.
5
NeuroMechFly v2: simulating embodied sensorimotor control in adult Drosophila.
Nat Methods. 2024 Dec;21(12):2353-2362. doi: 10.1038/s41592-024-02497-y. Epub 2024 Nov 12.
6
ONIX: a unified open-source platform for multimodal neural recording and perturbation during naturalistic behavior.
Nat Methods. 2025 Jan;22(1):187-192. doi: 10.1038/s41592-024-02521-1. Epub 2024 Nov 11.
7
Mapping the landscape of social behavior.
bioRxiv. 2024 Sep 27:2024.09.27.615451. doi: 10.1101/2024.09.27.615451.
8
Miniature linear and split-belt treadmills reveal mechanisms of adaptive motor control in walking Drosophila.
Curr Biol. 2024 Oct 7;34(19):4368-4381.e5. doi: 10.1016/j.cub.2024.08.006. Epub 2024 Aug 30.
9
A leg model based on anatomical landmarks to study 3D joint kinematics of walking in Drosophila.
Front Bioeng Biotechnol. 2024 Jun 26;12:1357598. doi: 10.3389/fbioe.2024.1357598. eCollection 2024.
10
Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools.
Nat Methods. 2024 Jul;21(7):1316-1328. doi: 10.1038/s41592-024-02319-1. Epub 2024 Jun 25.

References

1
Using DeepLabCut for 3D markerless pose estimation across species and behaviors.
Nat Protoc. 2019 Jul;14(7):2152-2176. doi: 10.1038/s41596-019-0176-0. Epub 2019 Jun 21.
2
Threshold-Based Ordering of Sequential Actions during Drosophila Courtship.
Curr Biol. 2019 Feb 4;29(3):426-434.e6. doi: 10.1016/j.cub.2018.12.019. Epub 2019 Jan 17.
3
Fast animal pose estimation using deep neural networks.
Nat Methods. 2019 Jan;16(1):117-125. doi: 10.1038/s41592-018-0234-5. Epub 2018 Dec 20.
4
Imaging neural activity in the ventral nerve cord of behaving adult Drosophila.
Nat Commun. 2018 Oct 22;9(1):4390. doi: 10.1038/s41467-018-06857-z.
5
DeepLabCut: markerless pose estimation of user-defined body parts with deep learning.
Nat Neurosci. 2018 Sep;21(9):1281-1289. doi: 10.1038/s41593-018-0209-y. Epub 2018 Aug 20.
6
Optogenetic dissection of descending behavioral control in Drosophila.
Elife. 2018 Jun 26;7:e34275. doi: 10.7554/eLife.34275.
7
FlyLimbTracker: An active contour based approach for leg segment tracking in unmarked, freely behaving Drosophila.
PLoS One. 2017 Apr 28;12(4):e0173433. doi: 10.1371/journal.pone.0173433. eCollection 2017.
8
Systematic exploration of unsupervised methods for mapping behavior.
Phys Biol. 2017 Feb 6;14(1):015002. doi: 10.1088/1478-3975/14/1/015002.
9
Mechanisms of Parkinson's Disease: Lessons from Drosophila.
Curr Top Dev Biol. 2017;121:173-200. doi: 10.1016/bs.ctdb.2016.07.005. Epub 2016 Jul 30.
10
Recovery of locomotion after injury in Drosophila melanogaster depends on proprioception.
J Exp Biol. 2016 Jun 1;219(Pt 11):1760-71. doi: 10.1242/jeb.133652. Epub 2016 Mar 18.