oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Multimodal Task-Driven Dictionary Learning for Image Classification  [PDF]
Soheil Bahrampour, Nasser M. Nasrabadi, Asok Ray, W. Kenneth Jenkins
Computer Science , 2015,
Abstract: Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are mostly developed for single-modality scenarios, recent studies have demonstrated the advantages of feature-level fusion based on the joint sparse representation of the multimodal inputs. In this paper, we propose a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information. In this task-driven formulation, the multimodal dictionaries are learned simultaneously with their corresponding classifiers. The resulting multimodal dictionaries can generate discriminative latent features (sparse codes) from the data that are optimized for a given task such as binary or multiclass classification. Moreover, we present an extension of the proposed formulation using a mixed joint and independent sparsity prior which facilitates more flexible fusion of the modalities at feature level. The efficacy of the proposed algorithms for multimodal classification is illustrated on four different applications -- multimodal face recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared to the counterpart reconstructive-based dictionary learning algorithms, the task-driven formulations are more computationally efficient in the sense that they can be equipped with more compact dictionaries and still achieve superior performance.
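The joint sparsity prior at the heart of this formulation can be illustrated with a small numerical sketch. The code below is not the authors' task-driven bilevel algorithm; it is a plain proximal-gradient (ISTA) solver for the joint sparse coding subproblem, with illustrative dictionary sizes, showing how the l1/l2 penalty forces all modalities to activate the same atoms.

```python
import numpy as np

def joint_sparse_codes(X, D, lam=0.1, n_iter=200):
    """Joint sparse coding across modalities via proximal gradient (ISTA).

    X: list of signals x_m (one per modality); D: list of dictionaries D_m,
    all with the same number of atoms k. The l1/l2 (group) penalty couples
    the coefficients of atom j across every modality, so either all
    modalities use atom j or none of them does.
    """
    k = D[0].shape[1]
    A = np.zeros((len(D), k))                        # row m = codes of modality m
    L = max(np.linalg.norm(Dm, 2) ** 2 for Dm in D)  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        # gradient step on the per-modality data-fit terms
        G = np.stack([Dm.T @ (Dm @ am - xm) for Dm, am, xm in zip(D, A, X)])
        Z = A - G / L
        # group soft-threshold each atom's cross-modal coefficient column
        norms = np.linalg.norm(Z, axis=0, keepdims=True)
        A = Z * np.maximum(1.0 - (lam / L) / np.maximum(norms, 1e-12), 0.0)
    return A
```

The group soft-threshold zeroes an atom's coefficients in every modality at once, which is exactly the "collaboration among sources" the prior enforces: with two modalities whose signals are built from the same two atoms, the recovered supports coincide across modalities.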
Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning  [PDF]
Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman Ravindran
Computer Science , 2015,
Abstract: Recently there has been a lot of interest in learning common representations for multiple views of data. These views could belong to different modalities or languages. Typically, such common representations are learned using a parallel corpus between the two views (say, 1M images and their English captions). In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, V1 and V2) but parallel data is available between each of these views and a pivot view (V3). We propose a model for learning a common representation for V1, V2 and V3 using only the parallel data available between V1V3 and V2V3. The proposed model is generic and even works when there are n views of interest and only one pivot view that acts as a bridge between them. We focus on two specific downstream applications: (i) transfer learning between languages L1,L2,...,Ln using a pivot language L and (ii) cross-modal access between images and a language L1 using a pivot language L2. We evaluate our model on two datasets: (i) the publicly available multilingual TED corpus and (ii) a new multilingual multimodal dataset created and released as a part of this work. On both datasets, our model outperforms state-of-the-art approaches.
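The bridging idea can be caricatured with linear maps: learn projections of V1 and V2 into the pivot space from the two available parallel corpora, then compare V1 and V2 items there. This least-squares sketch (all shapes and names are illustrative) stands in for the paper's correlational neural network; it only demonstrates that no direct V1-V2 pairs are needed.

```python
import numpy as np

def fit_pivot_maps(V1, V3a, V2, V3b):
    """Learn linear maps sending views V1 and V2 into the pivot (V3) space,
    using only V1-V3 pairs (V1, V3a) and V2-V3 pairs (V2, V3b).
    Columns are paired samples; no V1-V2 pairs are ever seen.
    """
    A1, *_ = np.linalg.lstsq(V1.T, V3a.T, rcond=None)  # V3a ~= A1.T @ V1
    A2, *_ = np.linalg.lstsq(V2.T, V3b.T, rcond=None)  # V3b ~= A2.T @ V2
    return A1.T, A2.T

def cross_match(A1, A2, V1_new, V2_new):
    """Match each V1 item to its nearest V2 item through the pivot space."""
    E1, E2 = A1 @ V1_new, A2 @ V2_new
    dists = ((E1.T[:, None, :] - E2.T[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

On clean linear synthetic views generated from a shared latent variable, this pivot route matches V1 items to their V2 counterparts exactly, which is the cross-view access the paper targets with a far more expressive model.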
Active Dictionary Learning in Sparse Representation Based Classification  [PDF]
Jin Xu, Haibo He, Hong Man
Computer Science , 2014,
Abstract: Sparse representation, which uses dictionary atoms to reconstruct input vectors, has been studied intensively in recent years. A proper dictionary is key to the success of sparse representation. In this paper, an active dictionary learning (ADL) method is introduced, in which classification error and reconstruction error are considered as the active learning criteria for selecting the atoms used in dictionary construction. The learned dictionaries are then evaluated in sparse representation-based classification (SRC). Classification accuracy and reconstruction error are used to evaluate the proposed dictionary learning method, and its performance is compared with that of other methods, including unsupervised dictionary learning and the whole-training-data dictionary. Experimental results on the UCI data sets and a face data set demonstrate the effectiveness of the proposed method.
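For reference, the SRC decision rule that the learned dictionaries feed into can be sketched as follows. This is a generic SRC implementation with a plain ISTA lasso solver, not the authors' ADL atom-selection procedure; sizes and parameters are illustrative.

```python
import numpy as np

def src_classify(x, D, labels, lam=0.05, n_iter=300):
    """Sparse representation-based classification (SRC) sketch.

    D: training samples as columns (the dictionary); labels: class of each
    column. Code x over D with an l1 prior (ISTA), then assign the class
    whose atoms alone reconstruct x with the smallest residual.
    """
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant (spectral norm^2)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L                        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)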
Computational Intractability of Dictionary Learning for Sparse Representation  [PDF]
Meisam Razaviyayn, Hung-Wei Tseng, Zhi-Quan Luo
Computer Science , 2015,
Abstract: In this paper, we consider the dictionary learning problem for sparse representation. We first show that this problem is NP-hard via a polynomial-time reduction from the densest cut problem. Then, using successive convex approximation strategies, we propose efficient dictionary learning schemes that solve several practical formulations of this problem to stationary points. Unlike many existing algorithms in the literature, such as K-SVD, our proposed dictionary learning scheme is theoretically guaranteed to converge to the set of stationary points under certain mild assumptions. For the image denoising application, the performance and efficiency of the proposed dictionary learning scheme are comparable to those of the K-SVD algorithm in simulations.
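The kind of alternating scheme being analyzed here can be sketched generically: alternate l1 sparse coding with a least-squares (MOD-style) dictionary update. This is a simplification for illustration only; the paper's successive-convex-approximation updates, which carry the convergence guarantee, are different, and K-SVD updates atoms one at a time instead.

```python
import numpy as np

def dictionary_learn(X, k, lam=0.1, n_outer=20, n_ista=50, seed=0):
    """Generic alternating dictionary learning (a MOD-style sketch).

    Alternates (i) l1 sparse coding of all samples via ISTA and
    (ii) a least-squares dictionary update with column renormalization.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = rng.standard_normal((n, k))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((k, m))
    for _ in range(n_outer):
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(n_ista):                              # sparse coding step
            Z = A - D.T @ (D @ A - X) / L
            A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0)
        D = X @ np.linalg.pinv(A)                            # MOD dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)    # renormalize atoms
    # one final coding pass so the returned codes match the returned dictionary
    L = np.linalg.norm(D, 2) ** 2
    for _ in range(n_ista):
        Z = A - D.T @ (D @ A - X) / L
        A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0)
    return D, A
```

Each subproblem here is convex, but the joint problem is not, which is why, as the abstract notes, only convergence to stationary points can be hoped for in general.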
Sparse Representation on Graphs by Tight Wavelet Frames and Applications  [PDF]
Bin Dong
Mathematics , 2014, DOI: 10.1016/j.acha.2015.09.005
Abstract: In this paper, we introduce a new (constructive) characterization of tight wavelet frames on non-flat domains in both the continuum setting, i.e. on manifolds, and the discrete setting, i.e. on graphs; we discuss how fast tight wavelet frame transforms can be computed and how they can be effectively used to process graph data. We start by defining the quasi-affine systems on a given manifold $\mathcal{M}$ that are formed by generalized dilations and shifts of a finite collection of wavelet functions $\Psi:=\{\psi_j: 1\le j\le r\}\subset L_2(\mathbb{R})$. We further require that each $\psi_j$ is generated by some refinable function $\phi$ with mask $a_j$. We present the condition needed on the masks $\{a_j: 0\le j\le r\}$ so that the associated quasi-affine system generated by $\Psi$ is a tight frame for $L_2(\mathcal{M})$. Then, we discuss how the transition from the continuum (manifolds) to the discrete setting (graphs) can be done naturally. In order for the proposed discrete tight wavelet frame transforms to be useful in applications, we show how the transforms can be computed efficiently and accurately by proposing fast tight wavelet frame transforms for graph data (WFTG). Finally, we consider two specific applications of the proposed WFTG: graph data denoising and semi-supervised clustering. Utilizing the sparse representation provided by the WFTG, we propose $\ell_1$-norm based optimization models on graphs for denoising and semi-supervised clustering. On one hand, our numerical results show a significant advantage of the WFTG over the spectral graph wavelet transform (SGWT) of [1] for both applications. On the other hand, numerical experiments on two real data sets show that the proposed semi-supervised clustering model using the WFTG is overall competitive with the state-of-the-art methods developed in the literature of high-dimensional data classification, and is superior to some of these methods.
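The tightness property is easy to verify on a toy graph: pick spectral filters h, g with h^2 + g^2 = 1 on the spectrum of the normalized Laplacian, and analysis followed by synthesis reconstructs any signal exactly. The sketch below uses the slow exact eigendecomposition route with an assumed cosine/sine filter pair; the paper's WFTG replaces this with fast polynomial approximations.

```python
import numpy as np

def tight_graph_frame(A):
    """Two-band tight wavelet frame on a graph via the normalized Laplacian.

    Uses spectral filters h, g satisfying h^2 + g^2 = 1 on [0, 2] (the
    spectrum of the normalized Laplacian), so the pair of analysis
    operators (Th, Tg) forms a tight frame: Th @ Th + Tg @ Tg = I.
    """
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    Ln = np.eye(len(A)) - Dinv @ A @ Dinv     # normalized Laplacian, spectrum in [0, 2]
    lam, U = np.linalg.eigh(Ln)
    h = np.cos(np.pi * lam / 4)               # low-pass filter
    g = np.sin(np.pi * lam / 4)               # high-pass filter
    Th = U @ np.diag(h) @ U.T                 # low-pass analysis operator
    Tg = U @ np.diag(g) @ U.T                 # high-pass analysis operator
    return Th, Tg
```

Because cos^2 + sin^2 = 1 at every eigenvalue, energy is preserved across the two bands and any graph signal is recovered exactly from its frame coefficients.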
Supervised Dictionary Learning and Sparse Representation-A Review  [PDF]
Mehrdad J. Gangeh, Ahmed K. Farahat, Ali Ghodsi, Mohamed S. Kamel
Computer Science , 2015,
Abstract: Dictionary learning and sparse representation (DLSR) is a recent and successful mathematical model for data representation that achieves state-of-the-art performance in various fields such as pattern recognition, machine learning, computer vision, and medical imaging. The original formulation for DLSR is based on the minimization of the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although this formulation is optimal for solving problems such as denoising, inpainting, and coding, it may not lead to an optimal solution in classification tasks, where the ultimate goal is to make the learned dictionary and the corresponding sparse representation as discriminative as possible. This motivated the emergence of a new category of techniques, appropriately called supervised dictionary learning and sparse representation (S-DLSR), leading to dictionaries and sparse representations better suited to classification tasks. Despite many research efforts on S-DLSR, the literature lacks a comprehensive view of these techniques, their connections, advantages, and shortcomings. In this paper, we address this gap and provide a review of the recently proposed algorithms for S-DLSR. We first present a taxonomy that groups these algorithms into six categories based on the approach taken to include label information in the learning of the dictionary and/or sparse representation. For each category, we draw connections between the algorithms and present a unified framework for them. We then provide guidelines for applied researchers on how to represent and learn the building blocks of an S-DLSR solution based on the problem at hand. This review provides a broad yet deep view of the state-of-the-art methods for S-DLSR and allows for the advancement of research and development in this emerging area.
On the Invariance of Dictionary Learning and Sparse Representation to Projecting Data to a Discriminative Space  [PDF]
Mehrdad J. Gangeh, Ali Ghodsi
Computer Science , 2015,
Abstract: In this paper, it is proved that dictionary learning and sparse representation (DLSR) is invariant to a linear transformation. This subsumes the special case of transforming/projecting the data into a discriminative space. It is important because, recently, supervised dictionary learning algorithms have been proposed that include category information in the learning of the dictionary to improve its discriminative power. Among them, some approaches propose to learn the dictionary in a discriminative projected space. To this end, two approaches have been taken: first, assigning the discriminative basis as the dictionary, and second, performing dictionary learning in the projected space. Based on the invariance of dictionary learning to any transformation in general, and to a discriminative space in particular, we advocate the first approach.
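For the special case of an orthogonal transform W the invariance is immediate, since ||W(x - Da)|| = ||x - Da||, so coding Wx over WD minimizes exactly the same lasso objective; the paper proves the general statement. A quick numerical check with a plain ISTA solver and illustrative sizes:

```python
import numpy as np

def lasso_ista(x, D, lam=0.1, n_iter=400):
    """Plain ISTA for min_a 0.5 * ||x - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L                        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((15, 25))
D /= np.linalg.norm(D, axis=0)
x = D[:, :3] @ np.array([1.0, -0.5, 0.8])
W, _ = np.linalg.qr(rng.standard_normal((15, 15)))  # random orthogonal transform
a1 = lasso_ista(x, D)
a2 = lasso_ista(W @ x, W @ D)  # same objective: ||W(x - Da)|| = ||x - Da||
```

The two runs produce the same sparse codes, illustrating why projecting data and dictionary into a (here, orthogonal) transformed space changes nothing about the representation.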
A Split-and-Merge Dictionary Learning Algorithm for Sparse Representation  [PDF]
Subhadip Mukherjee, Chandra Sekhar Seelamantula
Computer Science , 2014,
Abstract: In big data image/video analytics, we encounter the problem of learning an overcomplete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm for parallel dictionary learning. The fundamental idea behind the algorithm is to learn a sparse representation in two phases. In the first phase, the whole training dataset is partitioned into small non-overlapping subsets, and a dictionary is trained independently on each subset. In the second phase, the dictionaries are merged to form a global dictionary. We show that the proposed algorithm is efficient in memory usage and computational complexity, and performs on par with the standard learning strategy that operates on the entire data set at once. As an application, we consider the problem of image denoising and present a comparative analysis of our algorithm and the standard learning techniques, which use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting the denoising performance.
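The two phases can be sketched as follows. The per-partition learner is a tiny MOD-style routine, and the merge rule used here (drop atoms nearly collinear with an atom already kept) is a simplification of the paper's merge procedure; all sizes and thresholds are illustrative.

```python
import numpy as np

def learn_local(Xp, k, lam=0.1, iters=10, seed=0):
    """Tiny MOD-style dictionary learner for one data partition."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Xp.shape[0], k))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((k, Xp.shape[1]))
    for _ in range(iters):
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(30):                                  # sparse coding (ISTA)
            Z = A - D.T @ (D @ A - Xp) / L
            A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0)
        D = Xp @ np.linalg.pinv(A)                           # least-squares update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D

def split_and_merge(X, n_parts, k_local, tau=0.95):
    """Phase 1: learn an independent dictionary on each disjoint subset.
    Phase 2: concatenate them and drop atoms nearly collinear (coherence
    above tau) with an atom already kept -- a simplified merge rule.
    """
    parts = np.array_split(X, n_parts, axis=1)
    D_all = np.hstack([learn_local(p, k_local, seed=i) for i, p in enumerate(parts)])
    kept = []
    for j in range(D_all.shape[1]):
        d = D_all[:, j]
        if np.linalg.norm(d) > 1e-6 and all(abs(d @ D_all[:, i]) <= tau for i in kept):
            kept.append(j)
    return D_all[:, kept]
```

Each partition fits in memory and the local learners can run in parallel; only the small per-partition dictionaries need to be gathered for the merge.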
Supervised learning of sparse context reconstruction coefficients for data representation and classification  [PDF]
Xuejie Liu, Jingbin Wang, Ming Yin, Benjamin Edwards, Peijuan Xu
Computer Science , 2015, DOI: 10.1007/s00521-015-2042-5
Abstract: The context of a data point, usually defined as the other data points in a data set, has been found to play important roles in data representation and classification. In this paper, we study the problem of using the context of a data point for its classification. Our work is inspired by the observation that only a few data points in the context of a data point are actually critical for its representation and classification. We propose to represent a data point as a sparse linear combination of its context, and to learn the sparse context in a supervised way to increase its discriminative ability. To this end, we propose a novel formulation for context learning that models the learning of the context parameters and the classifier in a unified objective and optimizes it with an alternating strategy in an iterative algorithm. Experiments on three benchmark data sets show its advantage over state-of-the-art context-based data representation and classification methods.
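A simplified, decoupled version of this pipeline: sparse-code each sample over its context with ISTA, then fit a linear classifier on the resulting codes. The paper instead learns the context coefficients and the classifier jointly in one objective; this sketch (illustrative sizes, least-squares classifier) only separates the two steps.

```python
import numpy as np

def context_codes(X, lam=0.1, n_iter=300):
    """Code each sample (column of X) as a sparse linear combination of the
    other samples -- its 'context' -- using plain ISTA. Row i of the returned
    matrix holds sample i's context coefficients.
    """
    n, m = X.shape
    C = np.zeros((m, m))
    for i in range(m):
        idx = [j for j in range(m) if j != i]   # context excludes the point itself
        D = X[:, idx]
        L = np.linalg.norm(D, 2) ** 2
        a = np.zeros(m - 1)
        for _ in range(n_iter):
            z = a - D.T @ (D @ a - X[:, i]) / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)
        C[i, idx] = a
    return C

def fit_linear_classifier(C, y):
    """Least-squares linear classifier on the context codes (labels +/-1)."""
    w, *_ = np.linalg.lstsq(C, y.astype(float), rcond=None)
    return w
```

Because the sparse codes of a sample concentrate on same-class context points, even this decoupled classifier separates well-clustered classes; the paper's joint objective sharpens the codes further.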
Latent Semantic Learning with Structured Sparse Representation for Human Action Recognition  [PDF]
Zhiwu Lu, Yuxin Peng
Computer Science , 2011, DOI: 10.1016/j.patcog.2012.09.027
Abstract: This paper proposes a novel latent semantic learning method for extracting high-level features (i.e. latent semantics) from a large vocabulary of abundant mid-level features (i.e. visual keywords) with structured sparse representation, which can help to bridge the semantic gap in the challenging task of human action recognition. To discover the manifold structure of mid-level features, we develop a spectral embedding approach to latent semantic learning based on the L1-graph, without the need to tune any parameter for graph construction as a key step of manifold learning. More importantly, we construct the L1-graph with structured sparse representation, which can be obtained by structured sparse coding with its structured sparsity ensured by novel L1-norm hypergraph regularization over mid-level features. In the new embedding space, we learn latent semantics automatically from abundant mid-level features through spectral clustering. The learnt latent semantics can be readily used for human action recognition with SVM by defining a histogram intersection kernel. Different from the traditional latent semantic analysis based on topic models, our latent semantic learning method can explore the manifold structure of mid-level features in both L1-graph construction and spectral embedding, which results in compact but discriminative high-level features. The experimental results on the commonly used KTH action dataset and the unconstrained YouTube action dataset show the superior performance of our method.
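The parameter-free L1-graph construction can be sketched directly: each sample is sparse-coded over all the others, and the absolute coefficients become (symmetrized) edge weights. This uses plain L1 coding via ISTA, not the paper's hypergraph-regularized structured sparse coding; sizes are illustrative.

```python
import numpy as np

def l1_graph(X, lam=0.1, n_iter=300):
    """Build an L1-graph over the columns of X (features x samples).

    Each sample is sparse-coded over all the other samples with ISTA, and
    the absolute coefficients become edge weights; no neighborhood size or
    kernel width needs to be tuned. The result is symmetrized.
    """
    n, m = X.shape
    W = np.zeros((m, m))
    for i in range(m):
        idx = [j for j in range(m) if j != i]
        D = X[:, idx]
        L = np.linalg.norm(D, 2) ** 2
        a = np.zeros(m - 1)
        for _ in range(n_iter):
            z = a - D.T @ (D @ a - X[:, i]) / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)
        W[i, idx] = np.abs(a)
    return (W + W.T) / 2
```

Spectral clustering on W then yields the latent semantics described above; because sparse codes concentrate on the most similar samples, within-cluster edges dominate cross-cluster ones.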
Copyright © 2008-2017 Open Access Library. All rights reserved.