Li Yen-Ting, Chan Yu-Cheng, Huang Chen-Che, Hsu Yu-Chang, Chen Ssu-Han
Circle AI Incorporation, Taipei, 114, Taiwan.
Center for Artificial Intelligence & Data Science, Ming Chi University of Technology, New Taipei City, 243, Taiwan.
Sci Rep. 2025 Jan 17;15(1):2311. doi: 10.1038/s41598-025-86323-1.
This study develops you only look once segmentation (YOLOSeg), an end-to-end instance segmentation model, and applies it to segmenting small particle defects embedded on a wafer die. YOLOSeg uses YOLOv5s as its basis and extends it with a UNet-like structure to form the segmentation head. YOLOSeg predicts not only the bounding boxes of particle defects but also the corresponding bounding polygons. Furthermore, YOLOSeg attempts to obtain a better set of weights by combining several training tricks, such as freezing layers, switching the mask loss, using auto-anchor, and introducing denoising diffusion probabilistic model (DDPM) image augmentation. Experimental results on the testing image set show that YOLOSeg's average precision (AP) and intersection over union (IoU) reach 0.821 and 0.732, respectively. Even when particle defects are extremely small, YOLOSeg far outperforms current instance segmentation models such as Mask R-CNN, YOLACT, YUSEG, and Ultralytics' YOLOv5s-segmentation. Additionally, preparing the training image set for YOLOSeg is time-saving because it requires neither collecting a large number of defective samples, nor annotating pseudo defects, nor designing hand-crafted features.
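The abstract mentions two ingredients that can be sketched concretely: a "switching mask loss" used as a training trick, and IoU as the evaluation metric. The sketch below is a minimal illustration, not the paper's exact formulation: the function names, the BCE-then-Dice choice of losses, and the `switch_epoch` schedule are all assumptions introduced here for illustration.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Binary cross-entropy over mask probabilities (pred in (0, 1)).
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|); it is less dominated by the
    # background when the foreground (a tiny particle defect) is very small.
    inter = np.sum(pred * target)
    return float(1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def switched_mask_loss(pred, target, epoch, switch_epoch=50):
    # Hypothetical "switching mask loss": begin with BCE for stable early
    # gradients, then switch to Dice. The schedule is an assumption; the
    # paper only states that the mask loss is switched during training.
    return bce_loss(pred, target) if epoch < switch_epoch else dice_loss(pred, target)

def mask_iou(pred, target, thr=0.5):
    # IoU between binarized predicted and ground-truth masks.
    p, t = pred >= thr, target >= thr
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else 1.0
```

For evaluation, `mask_iou` would be averaged over test instances to obtain a figure comparable to the reported 0.732.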