%0 Journal Article
%T Deep Sparse Gated Transformer for Image Deblurring
%A 路嘉伟
%J Journal of Image and Signal Processing
%P 325-332
%@ 2325-6745
%D 2025
%I Hans Publishing
%R 10.12677/jisp.2025.143029
%X Transformer-based image deblurring methods have achieved remarkable results. Most existing Transformer-based image restoration approaches adopt a self-attention plus feed-forward network pattern for their internal modules. To reduce the substantial computational overhead and time cost of such designs, this paper proposes a deep sparse gated self-attention solver that simultaneously fuses spatial and channel features. Using Top-k sparse selection and ReLU2 sparse activation, the method casts attention into a deeply sparse form, effectively eliminating the redundant representations produced by token-wise global interactions while strengthening channel feature fusion. In addition, a discriminative frequency-domain gating module is designed to adaptively preserve and enhance features beneficial to image restoration, further completing spatial feature fusion. A neural network built from these basic modules achieves state-of-the-art results on the GoPro benchmark dataset.
%K Image Deblurring
%K Neural Networks
%K Transformer
%K Self-Attention
%K Feature Fusion
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=118862