Zhao Xusheng, Dai Qiong, Bai Xu, Wu Jia, Peng Hao, Peng Huailiang, Yu Zhengtao, Yu Philip S
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6693-6707. doi: 10.1109/TNNLS.2024.3392575. Epub 2025 Apr 4.
Multiple instance learning (MIL) trains models from bags of instances, where each bag contains multiple instances, and only bag-level labels are available for supervision. The application of graph neural networks (GNNs) in capturing intrabag topology effectively improves MIL. Existing GNNs usually require filtering low-confidence edges among instances and adapting graph neural architectures to new bag structures. However, such asynchronous adjustments to structure and architecture are tedious and ignore their correlations. To tackle these issues, we propose a reinforced GNN framework for MIL (RGMIL), pioneering the exploitation of multiagent deep reinforcement learning (MADRL) in MIL tasks. MADRL enables the flexible definition or extension of factors that influence bag graphs or GNNs and provides synchronous control over them. Moreover, MADRL explores structure-to-architecture correlations while automating adjustments. Experimental results on multiple MIL datasets demonstrate that RGMIL achieves the best performance with excellent explainability. The code and data are available at https://github.com/RingBDStack/RGMIL.
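To make the abstract's pipeline concrete, the sketch below is a hypothetical, minimal illustration (not the authors' RGMIL code) of the two steps it mentions: building an intra-bag graph by keeping only high-confidence edges between instances, then applying a simple mean-aggregation GNN step and a mean readout to obtain a bag-level representation. All function names, the cosine-similarity edge score, and the threshold value are illustrative assumptions.

```python
# Hypothetical sketch of intra-bag graph construction + one GNN step.
# Not the RGMIL implementation; edge score, threshold, and pooling are
# illustrative choices only.
import math

def cosine(u, v):
    # Cosine similarity between two instance feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bag_graph(instances, threshold=0.9):
    # Keep an edge (i, j) only when similarity clears the confidence
    # threshold -- the "filtering low-confidence edges" step.
    n = len(instances)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(instances[i], instances[j]) >= threshold:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def gnn_mean_step(instances, adj):
    # One message-passing step: average each node with its neighbors
    # (self-loop included), a simplest-case GNN layer.
    out = []
    for i, x in enumerate(instances):
        neigh = [instances[j] for j in adj[i]] + [x]
        out.append([sum(col) / len(neigh) for col in zip(*neigh)])
    return out

def bag_embedding(instances, threshold=0.9):
    # Bag-level representation: graph build, one GNN step, mean readout.
    adj = bag_graph(instances, threshold)
    h = gnn_mean_step(instances, adj)
    return [sum(col) / len(h) for col in zip(*h)]
```

In RGMIL, factors such as the edge threshold and the GNN architecture are not fixed by hand as here; the abstract's point is that MADRL agents adjust them jointly, exploiting structure-to-architecture correlations.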