%0 Journal Article
%T CDNet: Using Spatial Vectors for Two-View Correspondence Learning Research
%A 李浩然
%J Computer Science and Application
%P 22-32
%@ 2161-881X
%D 2025
%I Hans Publishing
%R 10.12677/csa.2025.154074
%X Feature matching is a fundamental and crucial task in computer vision, aiming to find the correct correspondences (i.e., inliers) between a given pair of images. Strictly speaking, feature matching typically involves four steps: feature extraction, feature description, establishing an initial set of correspondences, and removing false correspondences (i.e., outlier removal). However, existing methods consider only the connections between corresponding points while neglecting the visual information available in the scene images. In this paper, we propose a novel pruning framework called Context Depth Net (CDNet) to accurately identify inliers and recover camera poses. We extract directional information from corresponding points as a cue to guide the pruning process, and utilize vector fields to better mine the deep spatial information between correspondences. Finally, we design a set of fusion modules to better integrate the spatial information. Experiments show that the proposed CDNet outperforms previously proposed methods on both indoor and outdoor datasets.
%K Vector Field
%K Context
%K Transformer
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=110876