%0 Journal Article
%T DSP-Transformer: An Approach for Solving the Minimum Dominating Set Problem Based on Attention Mechanism
%A 刘斯豪
%A 何伟骅
%J Advances in Applied Mathematics
%P 916-926
%@ 2324-8009
%D 2025
%I Hans Publishing
%R 10.12677/aam.2025.144216
%X The Minimum Dominating Set Problem (DSP) has widespread applications in computer science and network optimization. However, its NP-completeness makes traditional heuristic methods computationally expensive and difficult to generalize. In this paper, we propose DSP-Transformer, a deep reinforcement learning approach based on the Transformer architecture, which efficiently solves the DSP through an encoder-decoder framework and approximate policy optimization. Since the attention mechanism is better suited to capturing latent relationships in graphs, our approach also achieves significant performance improvements over traditional message-passing-based encoders. Experimental results demonstrate that DSP-Transformer outperforms conventional heuristic algorithms on both the T1 benchmark dataset and synthetic ER random graph datasets, providing higher-quality solutions while significantly reducing computational cost. Once pretrained, DSP-Transformer generalizes to larger graph instances without requiring a fresh search for each instance, highlighting the great potential of deep learning for tackling combinatorial optimization problems.
%K Combinatorial Optimization
%K Minimum Dominating Set
%K Transformer
%K Attention Mechanism
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=113221