

Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation.

Authors

Jing Xingshuo, Qian Kun

Affiliations

School of Automation, Southeast University, Nanjing 210096, China.

Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Nanjing 210096, China.

Publication

Sensors (Basel). 2025 Jan 5;25(1):256. doi: 10.3390/s25010256.

DOI: 10.3390/s25010256
PMID: 39797047
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11723470/
Abstract

Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing for reducing data collection and annotation. However, such cross-sensor transfer is challenging due to the differences between sensors in internal light sources, imaging effects, and elastomer properties. By understanding the data collected from each type of visuotactile sensors as domains, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce cross-sensor domain gaps. We first propose a Global and Local Aggregation Bottleneck (GLAB) layer to compress features extracted by an encoder, enabling the extraction of features containing key information and facilitating unlabeled few-sample-driven learning. We introduce a Fourier-style transformation (FST) module and a prototype-constrained learning loss to promote global conditional domain-adversarial adaptation, bridging style-level gaps. We also propose a high-confidence guided teacher-student network, utilizing a self-distillation mechanism to further reduce content-level gaps between the two domains. Experiments on three cross-sensor domain adaptation and real-world robotic cross-sensor shape recognition tasks demonstrate that our method outperforms state-of-the-art approaches, particularly achieving 89.8% accuracy on the DIGIT recognition dataset.
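The Fourier-style transformation (FST) module is only named in the abstract; the standard idea behind Fourier-based style transfer for domain adaptation is to swap the low-frequency amplitude spectrum of a source image with that of a target-style image while keeping the source phase, which carries content. The sketch below shows that generic technique in NumPy; the function name, the `beta` window parameter, and all details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fourier_style_transfer(content_img, style_img, beta=0.1):
    """Replace the low-frequency amplitude spectrum of content_img with
    that of style_img, keeping content_img's phase (its structure).
    beta sets the half-width of the swapped low-frequency window as a
    fraction of the image size."""
    fc = np.fft.fft2(content_img, axes=(0, 1))
    fs = np.fft.fft2(style_img, axes=(0, 1))
    amp_c, pha_c = np.abs(fc), np.angle(fc)
    amp_s = np.abs(fs)

    # shift zero frequency to the center so the low frequencies form a block
    amp_c = np.fft.fftshift(amp_c, axes=(0, 1))
    amp_s = np.fft.fftshift(amp_s, axes=(0, 1))

    h, w = content_img.shape[:2]
    b_h, b_w = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # swap the central (low-frequency) amplitude block, which mostly
    # encodes global appearance / "style"
    amp_c[ch - b_h:ch + b_h, cw - b_w:cw + b_w] = \
        amp_s[ch - b_h:ch + b_h, cw - b_w:cw + b_w]

    amp_c = np.fft.ifftshift(amp_c, axes=(0, 1))
    # recombine the swapped amplitude with the original phase
    stylized = np.fft.ifft2(amp_c * np.exp(1j * pha_c), axes=(0, 1))
    return np.real(stylized)
```

With `beta=0`, no frequencies are swapped and the content image is returned unchanged; larger `beta` transfers more of the style image's global appearance at the risk of corrupting content.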

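The high-confidence guided teacher-student network is likewise described only at a high level. Two ingredients are standard in such self-distillation schemes: an exponential-moving-average (EMA) teacher that slowly tracks the student, and pseudo-label filtering that keeps only target-domain samples the teacher predicts confidently. The sketch below illustrates both under assumed names (`ema_update`, `high_confidence_pseudo_labels`, `teacher_probs`); it is a generic pattern, not the paper's code.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.999):
    """EMA teacher update: each teacher parameter drifts toward the
    corresponding student parameter at rate (1 - momentum)."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]

def high_confidence_pseudo_labels(teacher_probs, threshold=0.9):
    """From teacher softmax outputs of shape (N, num_classes), keep only
    samples whose top probability exceeds the threshold.
    Returns the argmax pseudo-labels and a boolean keep-mask."""
    confidence = teacher_probs.max(axis=1)
    labels = teacher_probs.argmax(axis=1)
    mask = confidence >= threshold
    return labels, mask
```

In training, the student would be updated by a supervised loss on the masked pseudo-labels, and the teacher refreshed with `ema_update` each step, so only confident target-domain predictions guide content-level adaptation.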

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/56bd45004765/sensors-25-00256-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/d2edd589a50d/sensors-25-00256-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/a0b0f662f42f/sensors-25-00256-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/53d8bc0693b2/sensors-25-00256-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/396603cbe4d0/sensors-25-00256-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/2801b4db56e5/sensors-25-00256-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/cdf658a1360c/sensors-25-00256-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/4ee21f4b5a54/sensors-25-00256-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/28902765c54d/sensors-25-00256-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/615f/11723470/6f074e2a63ee/sensors-25-00256-g010.jpg

Similar Articles

1. Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation. Sensors (Basel). 2025 Jan 5;25(1):256. doi: 10.3390/s25010256.
2. Histogram matching-enhanced adversarial learning for unsupervised domain adaptation in medical image segmentation. Med Phys. 2025 Mar 18. doi: 10.1002/mp.17757.
3. Source free domain adaptation for medical image segmentation with fourier style mining. Med Image Anal. 2022 Jul;79:102457. doi: 10.1016/j.media.2022.102457. Epub 2022 Apr 12.
4. Unsupervised cross-modality domain adaptation via source-domain labels guided contrastive learning for medical image segmentation. Med Biol Eng Comput. 2025 Feb 13. doi: 10.1007/s11517-025-03312-2.
5. Dual-view global and local category-attentive domain alignment for unsupervised conditional adversarial domain adaptation. Neural Netw. 2025 May;185:107129. doi: 10.1016/j.neunet.2025.107129. Epub 2025 Jan 8.
6. Domain-Adversarial-Guided Siamese Network for Unsupervised Cross-Domain 3-D Object Retrieval. IEEE Trans Cybern. 2022 Dec;52(12):13862-13873. doi: 10.1109/TCYB.2021.3139927. Epub 2022 Nov 18.
7. Unsupervised Domain Adaptation with Asymmetrical Margin Disparity loss and Outlier Sample Extraction. Neural Netw. 2023 Nov;168:602-614. doi: 10.1016/j.neunet.2023.09.045. Epub 2023 Sep 27.
8. GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force. Sensors (Basel). 2017 Nov 29;17(12):2762. doi: 10.3390/s17122762.
9. Multiscale unsupervised domain adaptation for automatic pancreas segmentation in CT volumes using adversarial learning. Med Phys. 2022 Sep;49(9):5799-5818. doi: 10.1002/mp.15827. Epub 2022 Jul 27.
10. A Structure-Aware Framework of Unsupervised Cross-Modality Domain Adaptation via Frequency and Spatial Knowledge Distillation. IEEE Trans Med Imaging. 2023 Dec;42(12):3919-3931. doi: 10.1109/TMI.2023.3318006. Epub 2023 Nov 30.

References Cited by This Article

1. TouchRoller: A Rolling Optical Tactile Sensor for Rapid Assessment of Textures for Large Surface Areas. Sensors (Basel). 2023 Feb 28;23(5):2661. doi: 10.3390/s23052661.
2. Deep Unsupervised Domain Adaptation with Time Series Sensor Data: A Survey. Sensors (Basel). 2022 Jul 23;22(15):5507. doi: 10.3390/s22155507.
3. Tactile Image Sensors Employing Camera: A Review. Sensors (Basel). 2019 Sep 12;19(18):3933. doi: 10.3390/s19183933.
4. The TacTip Family: Soft Optical Tactile Sensors with 3D-Printed Biomimetic Morphologies. Soft Robot. 2018 Apr;5(2):216-227. doi: 10.1089/soro.2017.0052. Epub 2018 Jan 3.
5. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.