%0 Journal Article
%T 基于矩阵分解和相似性保持的跨模态检索研究
Research on Cross-Modal Retrieval Based on Matrix Factorization and Similarity Preservation
%A 张心文
%J Computer Science and Application
%P 1264-1272
%@ 2161-881X
%D 2023
%I Hans Publishing
%R 10.12677/CSA.2023.136124
%X 早先的基于哈希的跨模态检索方法因为语义提取以及运行速度慢不适合于大数据场景。因此提出一种新的框架叫做独特相似哈希(Unique Similar Hashing, USH)。USH是一个两步学习的哈希方法,先学习哈希码再学习哈希函数。第一阶段,用核函数将数据非线性地投影到核空间,然后使用矩阵分解学习潜在空间。哈希码从潜在空间中学习而来,为了避免量化误差并不放松哈希码的离散约束,而是直接计算它的封闭解。在学习一个优质的哈希码之后,再学习一个哈希函数将原始样本映射到低维的汉明空间。在Wiki数据集上与最先进的方法进行验证,USH在mAP上取得较好结果,证明了该方法的有效性。
Earlier hash-based cross-modal retrieval methods are poorly suited to big-data scenarios because of weak semantic extraction and slow running speed. A new framework called Unique Similar Hashing (USH) is therefore proposed. USH is a two-step hashing method that first learns hash codes and then learns hash functions. In the first stage, the data are nonlinearly projected into a kernel space with kernel functions, and a latent space is then learned by matrix factorization. Hash codes are learned from this latent space; to avoid quantization error, the discrete constraint on the hash codes is not relaxed, and a closed-form solution is computed directly instead. After high-quality hash codes have been learned, a hash function is learned to map the original samples into a low-dimensional Hamming space. USH is compared with state-of-the-art methods on the Wiki dataset and achieves better mAP results, demonstrating the effectiveness of the approach.
%K 哈希方法
%K 跨模态检索
%K 矩阵分解
%K Hashing Method
%K Cross-Modal Retrieval
%K Matrix Factorization
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=67734
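
The abstract above outlines a two-step pipeline: kernelized features, a shared latent space learned by matrix factorization, discretely constrained hash codes, per-modality hash functions, and Hamming-ranking mAP for evaluation. The NumPy sketch below illustrates only that general recipe under stated assumptions; the RBF anchor kernelization, the alternating least-squares updates, the sign() binarization, and every function name here are illustrative choices, not the paper's actual USH formulation (in particular, USH computes a closed-form solution under the discrete constraint rather than thresholding a relaxed one).

# Minimal sketch of a two-step cross-modal hashing pipeline (assumptions noted above).
import numpy as np

def rbf_kernel_features(X, anchors, sigma=1.0):
    """Nonlinear projection: RBF similarities to a set of anchor points (assumed kernelization)."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def shared_latent_space(P1, P2, k, iters=30, lam=1e-3, seed=None):
    """Collective matrix factorization: P1 ~ V @ U1, P2 ~ V @ U2 with a shared latent V."""
    rng = np.random.default_rng(seed)
    n = P1.shape[0]
    V = rng.normal(size=(n, k))
    for _ in range(iters):
        # Ridge-regularized alternating least squares (illustrative update rules).
        U1 = np.linalg.solve(V.T @ V + lam * np.eye(k), V.T @ P1)
        U2 = np.linalg.solve(V.T @ V + lam * np.eye(k), V.T @ P2)
        A = U1 @ U1.T + U2 @ U2.T + lam * np.eye(k)
        V = np.linalg.solve(A, (P1 @ U1.T + P2 @ U2.T).T).T
    return V

def learn_hash_function(P, B, lam=1e-3):
    """Step two: ridge regression from kernel features to codes, used to hash unseen samples."""
    m = P.shape[1]
    return np.linalg.solve(P.T @ P + lam * np.eye(m), P.T @ B)

def mean_average_precision(query_codes, db_codes, query_labels, db_labels):
    """mAP under Hamming ranking with single-label ground truth."""
    aps = []
    for q, ql in zip(query_codes, query_labels):
        dist = (q != db_codes).sum(1)               # Hamming distance to every database item
        order = np.argsort(dist, kind="stable")
        rel = (db_labels[order] == ql).astype(float)
        if rel.sum() == 0:
            continue
        prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)
        aps.append((prec * rel).sum() / rel.sum())
    return float(np.mean(aps))

if __name__ == "__main__":
    # Toy image/text features and labels, purely for demonstration.
    rng = np.random.default_rng(0)
    X_img, X_txt = rng.normal(size=(200, 64)), rng.normal(size=(200, 32))
    labels = rng.integers(0, 5, size=200)
    P1 = rbf_kernel_features(X_img, X_img[:50])     # anchor points taken from the data
    P2 = rbf_kernel_features(X_txt, X_txt[:50])
    V = shared_latent_space(P1, P2, k=16, seed=0)
    B = np.where(V >= 0, 1, -1)                     # crude stand-in for USH's closed-form discrete step
    W1, W2 = learn_hash_function(P1, B), learn_hash_function(P2, B)
    codes_img = np.where(P1 @ W1 >= 0, 1, -1)       # image queries
    codes_txt = np.where(P2 @ W2 >= 0, 1, -1)       # text database
    print("image->text mAP:", mean_average_precision(codes_img, codes_txt, labels, labels))

The ridge-regression step in learn_hash_function mirrors the common two-step practice of fitting per-modality hash functions to pre-learned codes, so that unseen samples from either modality can be mapped into the same low-dimensional Hamming space and ranked by Hamming distance, as the abstract describes.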