Naddaf-Sh Amir-M, Baburao Vinay S, Zargarzadeh Hassan
Phillip M. Drayer Electrical Engineering Department, Lamar University, Beaumont, TX 77705, USA.
CRC-Evans, Houston, TX 77066, USA.
Sensors (Basel). 2025 Jan 6;25(1):277. doi: 10.3390/s25010277.
Automated ultrasonic testing (AUT) is a critical tool for infrastructure evaluation in industries such as oil and gas, and, while skilled operators manually analyze complex AUT data, artificial intelligence (AI)-based methods show promise for automating interpretation. However, improving the reliability and effectiveness of these methods remains a significant challenge. This study employs the Segment Anything Model (SAM), a vision foundation model, to design an AI-assisted tool for weld defect detection in real-world ultrasonic B-scan images. It uses a proprietary dataset of B-scan images generated from AUT data collected during automated girth weld inspections of oil and gas pipelines to detect a specific defect type: lack of fusion (LOF). The implementation integrates knowledge from the B-scan image context into the natural-image-based SAM 1 and SAM 2 through a fully automated, promptable process. As part of designing a practical AI-assisted tool, the experiments apply both vanilla and low-rank adaptation (LoRA) fine-tuning to the image encoder and mask decoder of different variants of both models, while keeping the prompt encoder unchanged. The results demonstrate that the method achieves improved performance compared to a previous study on the same dataset.
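To make the described adaptation setup concrete, the sketch below shows one way such a configuration could look in PyTorch with Meta's segment_anything package: the prompt encoder is frozen, the base weights of the image encoder and mask decoder are frozen, and trainable low-rank (LoRA) adapters are injected into their attention projection layers. This is not the authors' implementation; the checkpoint path, LoRA rank and alpha, target module names, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from segment_anything import sam_model_registry  # Meta's SAM 1 package


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update: W x + (alpha/r) * B(A(x))."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base projection stays frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a zero update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


def add_lora(module: nn.Module, names=("qkv", "q_proj", "v_proj")):
    """Recursively replace matching Linear submodules with LoRA-wrapped versions.

    The target names are assumptions and should be checked against the actual
    SAM implementation being fine-tuned.
    """
    for child_name, child in module.named_children():
        if isinstance(child, nn.Linear) and child_name in names:
            setattr(module, child_name, LoRALinear(child))
        else:
            add_lora(child, names)


# Load a SAM variant (ViT-B here; the study compares several variants).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # local path assumed

# Keep the prompt encoder unchanged, as stated in the abstract.
for p in sam.prompt_encoder.parameters():
    p.requires_grad = False

# LoRA setting: freeze base weights of image encoder and mask decoder, then add adapters.
# (For "vanilla" fine-tuning, skip the freezing and adapter injection below.)
for p in sam.image_encoder.parameters():
    p.requires_grad = False
for p in sam.mask_decoder.parameters():
    p.requires_grad = False
add_lora(sam.image_encoder)
add_lora(sam.mask_decoder)

# Optimize only what remains trainable (the injected adapter weights).
optimizer = torch.optim.AdamW(
    [p for p in sam.parameters() if p.requires_grad], lr=1e-4
)
```

The training loop itself (feeding B-scan images, automatically generated prompts, and LOF masks through the model) is omitted; the point of the sketch is only the freeze/adapt split between the prompt encoder and the image encoder plus mask decoder.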