Search Results: 1 - 10 of 22335 matches for "Jean Ponce"
All listed articles are free for downloading (OA Articles)
Closure of Smooth Maps in $W^{1,p}(B^3;S^2)$
Augusto C. Ponce, Jean Van Schaftingen
Mathematics, 2009
Abstract: For every $2 < p < 3$, we show that $u \in W^{1,p}(B^3; S^2)$ can be strongly approximated by maps in $C^\infty(\overline{B}^3; S^2)$ if, and only if, the distributional Jacobian of $u$ vanishes identically. This result was originally proved by Bethuel-Coron-Demengel-Helein, but we present a different strategy which is motivated by the $W^{2,p}$ case.
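For readers who have not met the key object of this abstract, one standard way to define the distributional Jacobian of $u \in W^{1,p}(B^3; S^2)$ with $p \ge 2$ is the following (normalizations vary between authors, so take this as indicative only):

$$ \operatorname{Det}(\nabla u) = \frac{1}{3} \operatorname{div} D(u) \quad \text{in } \mathcal{D}'(B^3), \qquad D(u) = \bigl( u \cdot (\partial_2 u \times \partial_3 u),\ u \cdot (\partial_3 u \times \partial_1 u),\ u \cdot (\partial_1 u \times \partial_2 u) \bigr). $$

Since $|u| = 1$ a.e. and $\nabla u \in L^p$ with $p \ge 2$, the vector field $D(u)$ lies in $L^{p/2} \subset L^1_{\mathrm{loc}}$, so the definition makes sense. For smooth maps the distributional Jacobian vanishes identically, whereas for the singular map $u(x) = x/|x|$ it is a Dirac mass at the origin; this dichotomy is what the theorem turns into a characterization.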
Density of bounded maps in Sobolev spaces into complete manifolds
Pierre Bousquet, Augusto Ponce, Jean Van Schaftingen
Mathematics, 2015
Abstract: Given a complete noncompact Riemannian manifold $N^n$, we study the density of the set of bounded Sobolev maps on the cube $(W^{1, p} \cap L^\infty) (Q^m; N^n)$ in the Sobolev space $W^{1, p} (Q^m; N^n)$ for $1 \le p \le m$. The density always holds when $p$ is not an integer. When $p$ is an integer, the density can fail, and we prove that a quantitative levelling property is equivalent to the density. This new condition is ensured by a uniform Lipschitz geometry or by bounds on the injectivity radius and on the curvature. As a byproduct, we give necessary and sufficient conditions for the density of the set of smooth maps $C^\infty (\overline{Q}^m; N^n)$ in the space $W^{1, p} (Q^m; N^n)$.
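As a point of notation (standard, not specific to this paper), the manifold-valued Sobolev spaces appearing in these abstracts are defined through an embedding $N^n \subset \mathbb{R}^\nu$:

$$ W^{1, p}(Q^m; N^n) = \bigl\{ u \in W^{1, p}(Q^m; \mathbb{R}^\nu) : u(x) \in N^n \text{ for a.e. } x \in Q^m \bigr\}, $$

equipped with the strong topology inherited from $W^{1, p}(Q^m; \mathbb{R}^\nu)$. The density questions above ask whether the subsets of bounded or smooth maps are dense in this set for that topology.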
Strong density for higher order Sobolev spaces into compact manifolds
Pierre Bousquet, Augusto Ponce, Jean Van Schaftingen
Mathematics, 2012, DOI: 10.4171/JEMS/518
Abstract: Given a compact manifold $N^n$, an integer $k \in \mathbb{N}_*$ and an exponent $1 \le p < \infty$, we prove that the class $C^\infty(\overline{Q}^m; N^n)$ of smooth maps on the cube with values into $N^n$ is dense with respect to the strong topology in the Sobolev space $W^{k, p}(Q^m; N^n)$ when the homotopy group $\pi_{\lfloor kp \rfloor}(N^n)$ of order $\lfloor kp \rfloor$ is trivial. We also prove the density of maps that are smooth except for a set of dimension $m - \lfloor kp \rfloor - 1$, without any restriction on the homotopy group of $N^n$.
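A classical example, recalled here only as an illustration of why the topological assumption is needed (in the first-order case $k = 1$): for $2 \le p < 3$, the map $u(x) = x/|x|$ belongs to $W^{1, p}(B^3; S^2)$ but cannot be strongly approximated by maps in $C^\infty(\overline{B}^3; S^2)$, because the topological degree of its restriction to small spheres around the origin equals $1$ and is, roughly speaking, stable under strong $W^{1, p}$ convergence when $p \ge 2$, while it vanishes for smooth maps. Consistent with the theorem, here $\lfloor kp \rfloor = 2$ and $\pi_2(S^2) \simeq \mathbb{Z}$ is nontrivial.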
Convex Sparse Matrix Factorizations
Francis Bach, Julien Mairal, Jean Ponce
Computer Science, 2008
Abstract: We present a convex formulation of dictionary learning for sparse signal decomposition. Convexity is obtained by replacing the usual explicit upper bound on the dictionary size with a convex rank-reducing term similar to the trace norm. In particular, our formulation introduces an explicit trade-off between the size and the sparsity of the decomposition of rectangular matrices. Using a large set of synthetic examples, we compare the estimation abilities of the convex and non-convex approaches, showing that although the convex formulation has a single local minimum, it can in some cases yield performance inferior to that of the local minima of the non-convex formulation.
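For background (generic notation of my own, not necessarily the exact formulation of the paper), the usual non-convex dictionary learning problem for data $X \in \mathbb{R}^{n \times m}$ reads

$$ \min_{D \in \mathbb{R}^{n \times k},\ A \in \mathbb{R}^{k \times m}} \ \tfrac{1}{2} \|X - D A\|_F^2 + \lambda \sum_{i=1}^{m} \|\alpha_i\|_1 \quad \text{subject to } \|d_j\|_2 \le 1, $$

where the $\alpha_i$ are the columns of $A$ (the sparse codes) and $k$ is the fixed dictionary size; non-convexity comes from the bilinear term $D A$ and from fixing $k$. The convex route alluded to in the abstract optimizes over the product $M = D A$ and replaces the explicit bound on $k$ by a rank-reducing norm. The trace norm is the prototypical such penalty, thanks to its variational characterization

$$ \|M\|_* = \min_{D A = M} \ \sum_j \|d_j\|_2\, \|\alpha^j\|_2, $$

where $\alpha^j$ denotes the $j$-th row of $A$; the paper's penalty is described as a related term that additionally trades off the size and the sparsity of the decomposition.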
Sparse Modeling for Image and Vision Processing
Julien Mairal, Francis Bach, Jean Ponce
Computer Science, 2014
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
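To make the sparse-coding step concrete, here is a minimal numpy sketch (mine, not taken from the monograph) that computes the sparse code of a signal x over a fixed dictionary D by solving the Lasso with ISTA (iterative soft-thresholding); the dictionary size, regularization weight, and iteration count are illustrative choices:

import numpy as np

def soft_threshold(z, t):
    # elementwise soft-thresholding, the proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_ista(D, x, lam, n_iter=500):
    # approximately solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 by ISTA
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)         # gradient of the quadratic fit term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# illustrative usage with a random unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x = D[:, :5] @ rng.standard_normal(5)    # a signal that is 5-sparse in D
alpha = sparse_code_ista(D, x, lam=0.1)
print(np.count_nonzero(np.abs(alpha) > 1e-6), "active atoms")

Dictionary learning then alternates this coding step with an update of D, as in the online algorithm sketched further below.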
Task-Driven Dictionary Learning
Julien Mairal, Francis Bach, Jean Ponce
Statistics, 2010, DOI: 10.1109/TPAMI.2011.156
Abstract: Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
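A task-driven objective of this kind can be written schematically as follows (my notation; the paper's formulation may differ in details such as the regularizer on the codes or on the model parameters):

$$ \min_{D,\, W} \ \mathbb{E}_{(x, y)} \Bigl[ \ell \bigl( y, W, \alpha^\star(x, D) \bigr) \Bigr] + \frac{\mu}{2} \|W\|_F^2, \qquad \alpha^\star(x, D) = \operatorname*{arg\,min}_{\alpha} \ \tfrac{1}{2} \|x - D \alpha\|_2^2 + \lambda \|\alpha\|_1, $$

a bilevel problem in which the sparse code $\alpha^\star(x, D)$ is the solution of an inner Lasso and both the dictionary $D$ and the task parameters $W$ are learned by stochastic gradient descent through this solution, which is differentiable with respect to $D$ under mild assumptions.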
Density of smooth maps for fractional Sobolev spaces $W^{s, p}$ into $\ell$ simply connected manifolds when $s \ge 1$
Pierre Bousquet, Augusto C. Ponce, Jean Van Schaftingen
Mathematics, 2012
Abstract: Given a compact manifold $N^n \subset \mathbb{R}^\nu$, $s \ge 1$ and $1 \le p < \infty$, we prove that the class of smooth maps on the cube with values into $N^n$ is strongly dense in the fractional Sobolev space $W^{s, p}(Q^m; N^n)$ when $N^n$ is $\lfloor sp \rfloor$ simply connected. When $sp$ is an integer, we prove weak density of smooth maps with values into $N^n$ when $N^n$ is $sp - 1$ simply connected. The proofs are based on the existence of a retraction of $\mathbb{R}^\nu$ onto $N^n$ except for a small subset of $N^n$ and on a pointwise estimate of fractional derivatives of compositions of maps in $W^{s, p} \cap W^{1, sp}$.
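For reference (standard definitions, not specific to the paper): for $0 < \sigma < 1$ the fractional space $W^{\sigma, p}(Q^m)$ is defined by finiteness of the Gagliardo seminorm

$$ |u|_{W^{\sigma, p}(Q^m)}^p = \int_{Q^m} \int_{Q^m} \frac{|u(x) - u(y)|^p}{|x - y|^{m + \sigma p}} \, dx \, dy, $$

and for non-integer $s = k + \sigma$ with $k \in \mathbb{N}$, $u \in W^{s, p}$ means $u \in W^{k, p}$ with $D^k u \in W^{\sigma, p}$; for integer $s$ one recovers the usual Sobolev spaces. Manifold-valued versions are defined, as before, by requiring $u(x) \in N^n$ almost everywhere.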
Online Learning for Matrix Factorization and Sparse Coding
Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro
Mathematics, 2009
Abstract: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.
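A minimal sketch (simplified, and mine rather than the authors') of an online dictionary-learning loop in the spirit of this abstract: each incoming sample is sparse-coded with the current dictionary, second-order statistics of past codes are accumulated, and the dictionary columns are refreshed by block coordinate descent. The helper sparse_code_ista is the one sketched earlier; batching, learning-rate schedules, and convergence monitoring are omitted.

import numpy as np
# assumes the sparse_code_ista helper sketched above is available

def update_dictionary(D, A, B, n_passes=1):
    # block coordinate descent on dictionary columns, given the accumulated
    # statistics A = sum of alpha alpha^T and B = sum of x alpha^T
    for _ in range(n_passes):
        for j in range(D.shape[1]):
            if A[j, j] < 1e-12:
                continue                               # unused atom, skip
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)  # project onto unit ball
    return D

def online_dictionary_learning(samples, n_atoms, lam, seed=0):
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)                     # unit-norm initial atoms
    A = np.zeros((n_atoms, n_atoms))                   # past codes' second moments
    B = np.zeros((dim, n_atoms))                       # data/code cross-products
    for x in samples:                                  # one pass over the stream
        alpha = sparse_code_ista(D, x, lam)            # sparse-coding step
        A += np.outer(alpha, alpha)
        B += np.outer(x, alpha)
        D = update_dictionary(D, A, B)                 # dictionary update step
    return D

Because A and B summarize all past codes, the memory footprint does not grow with the number of samples; mini-batches and downweighting of old statistics are natural refinements.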
Strong approximation of fractional Sobolev maps
Pierre Bousquet, Augusto C. Ponce, Jean Van Schaftingen
Mathematics, 2013, DOI: 10.1007/s11784-014-0172-5
Abstract: Brezis and Mironescu announced several years ago that for a compact manifold $N^n \subset \mathbb{R}^\nu$ and for real numbers $0 < s < 1$ and $1 \le p < \infty$ the class $C^\infty(\overline{Q}^m; N^n)$ of smooth maps on the cube with values into $N^n$ is dense with respect to the strong topology in the Sobolev space $W^{s, p}(Q^m; N^n)$ when the homotopy group $\pi_{\lfloor sp \rfloor}(N^n)$ of order $\lfloor sp \rfloor$ is trivial. The proof of this beautiful result is long and rather involved. Under the additional assumption that $N^n$ is $\lfloor sp \rfloor$ simply connected, we give a shorter proof of their result. Our proof for $sp \ge 1$ is based on the existence of a retraction of $\mathbb{R}^\nu$ onto $N^n$ except for a small subset in the complement of $N^n$ and on the Gagliardo-Nirenberg interpolation inequality for maps in $W^{1, q} \cap L^\infty$. In contrast, the case $sp < 1$ relies on the density of step functions on cubes in $W^{s, p}$.
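The interpolation inequality invoked here can be stated, in one common form and with constants left implicit, as follows: for $0 < s < 1$ and $q = s p$, every $u \in W^{1, q} \cap L^\infty$ belongs to $W^{s, p}$ and

$$ |u|_{W^{s, p}} \le C \, \|u\|_{L^\infty}^{1 - s} \, \|u\|_{W^{1, q}}^{s}, $$

which in particular gives the embedding $W^{1, sp} \cap L^\infty \hookrightarrow W^{s, p}$.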
Supervised Dictionary Learning
Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, Andrew Zisserman
Computer Science, 2008
Abstract: It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.
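One generic way to write such a jointly reconstructive and discriminative objective (schematic and in my own notation, in the spirit of the abstract rather than the paper's exact model), with a shared dictionary $D$ and class-decision parameters $w$:

$$ \min_{D,\, w} \ \sum_{i} \min_{\alpha_i} \Bigl[ \tfrac{1}{2} \|x_i - D \alpha_i\|_2^2 + \lambda \|\alpha_i\|_1 + \mu \, \ell \bigl( y_i, f_w(\alpha_i) \bigr) \Bigr], $$

where $f_w(\alpha) = w^\top \alpha + b$ in a linear variant (with a kernelized decision function in the general one) and $\mu$ balances reconstruction against discrimination.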