Search Results: 1 - 10 of 4510 matches for "Pascal Bianchi"
All listed articles are free for downloading (OA Articles)
Ergodic convergence of a stochastic proximal point algorithm
Pascal Bianchi
Mathematics, 2015
Abstract: The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates $(x_n)$ given by $x_{n+1} = (I+\lambda_n A(\xi_{n+1},\,.\,))^{-1}(x_n)$ where $(A(s,\,.\,):s\in E)$ is a collection of maximal monotone operators on a separable Hilbert space, $(\xi_n)$ is an independent identically distributed sequence of random variables on $E$ and $(\lambda_n)$ is a positive sequence in $\ell^2\backslash \ell^1$. The weighted averaged sequence of iterates is shown to converge weakly to a zero (assumed to exist) of the Aumann expectation ${\mathbb E}(A(\xi_1,\,.\,))$ under the assumption that the latter is maximal. We consider applications to stochastic optimization problems of the form $\min {\mathbb E}(f(\xi_1,x))$ w.r.t. $x\in \bigcap_{i=1}^m X_i$ where $f$ is a normal convex integrand and $(X_i)$ is a collection of closed convex sets. In this case, the iterations are closely related to a stochastic proximal algorithm recently proposed by Wang and Bertsekas.
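A minimal numerical sketch of the iteration above, not taken from the paper: it assumes the illustrative choice A(xi, x) = x - xi (the gradient of f(xi, x) = (x - xi)^2 / 2), for which the resolvent is available in closed form, together with illustrative step sizes and sample law.

import numpy as np

rng = np.random.default_rng(0)

def resolvent(x, xi, lam):
    # Resolvent (I + lam * A(xi, .))^{-1}(x) for the illustrative choice
    # A(xi, x) = x - xi, the gradient of f(xi, x) = (x - xi)^2 / 2.
    return (x + lam * xi) / (1.0 + lam)

x = 10.0                       # arbitrary initial point
weighted_sum, total_weight = 0.0, 0.0
for n in range(1, 10001):
    xi = rng.normal(loc=2.0)   # i.i.d. samples xi_n, here with mean 2
    lam = 1.0 / n ** 0.6       # positive step sizes in l^2 \ l^1
    x = resolvent(x, xi, lam)
    weighted_sum += lam * x    # weighted (ergodic) average of the iterates
    total_weight += lam

# E[A(xi_1, .)](x) = x - 2, whose unique zero is 2.
print("weighted average of the iterates:", weighted_sum / total_weight)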
High-Rate Vector Quantization for the Neyman-Pearson Detection of Correlated Processes
Joffrey Villard, Pascal Bianchi
Mathematics, 2010, DOI: 10.1109/TIT.2011.2158479
Abstract: This paper investigates the effect of quantization on the performance of the Neyman-Pearson test. It is assumed that a sensing unit observes samples of a correlated stationary ergodic multivariate process. Each sample is passed through an N-point quantizer and transmitted to a decision device which performs a binary hypothesis test. For any false alarm level, it is shown that the miss probability of the Neyman-Pearson test converges to zero exponentially as the number of samples tends to infinity, assuming that the observed process satisfies certain mixing conditions. The main contribution of this paper is to provide a compact closed-form expression of the error exponent in the high-rate regime, i.e., when the number N of quantization levels tends to infinity, generalizing previous results of Gupta and Hero to the case of non-independent observations. If d represents the dimension of one sample, it is proved that the error exponent converges at rate N^{2/d} to the one obtained in the absence of quantization. As an application, relevant high-rate quantization strategies which lead to a large error exponent are determined. Numerical results indicate that the proposed quantization rule can yield better performance than existing ones in terms of detection error.
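As a rough illustration of the trade-off studied here, under assumptions much simpler than the paper's, the sketch below quantizes i.i.d. scalar Gaussian samples and estimates the miss probability of a simple mean-based detector at a fixed false-alarm level, standing in for the exact Neyman-Pearson statistic; the quantizer design and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def uniform_quantizer(x, n_levels, lo=-4.0, hi=4.0):
    # Map every sample to the centre of one of n_levels uniform cells on [lo, hi].
    edges = np.linspace(lo, hi, n_levels + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_levels - 1)
    return (edges[idx] + edges[idx + 1]) / 2.0

def miss_probability(n_levels, n_samples=200, n_trials=5000, alpha=0.05):
    # H0: N(0,1) samples, H1: N(0.3,1) samples; the sample mean of the quantized
    # values is used as a simple test statistic, calibrated at false-alarm level alpha.
    t0 = uniform_quantizer(rng.normal(0.0, 1.0, (n_trials, n_samples)), n_levels).mean(axis=1)
    t1 = uniform_quantizer(rng.normal(0.3, 1.0, (n_trials, n_samples)), n_levels).mean(axis=1)
    threshold = np.quantile(t0, 1 - alpha)
    return np.mean(t1 <= threshold)          # fraction of missed detections

for n_levels in (2, 4, 8, 32):
    print(n_levels, "quantization levels -> empirical miss probability:", miss_probability(n_levels))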
A Coordinate Descent Primal-Dual Algorithm with Large Step Size and Possibly Non Separable Functions
Olivier Fercoq, Pascal Bianchi
Mathematics, 2015
Abstract: This paper introduces a coordinate descent version of the V\~u-Condat algorithm. By coordinate descent, we mean that only a subset of the coordinates of the primal and dual iterates is updated at each iteration, the other coordinates being kept at their previous values. Our method allows us to solve optimization problems with a combination of differentiable functions, constraints as well as non-separable and non-differentiable regularizers. We show that the sequences generated by our algorithm converge to a saddle point of the problem at stake, for a wider range of parameter values than previous methods. In particular, the condition on the step-sizes depends on the coordinate-wise Lipschitz constant of the differentiable function's gradient, which is a major feature allowing classical coordinate descent to perform so well when it is applicable. We illustrate the performance of the algorithm on a total-variation regularized least squares regression problem and on large scale support vector machine problems.
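The following sketch runs the plain, deterministic, full-update Vũ-Condat iteration on a 1-D total-variation regularized least-squares problem; the coordinate-descent variant introduced in the paper, which updates only a random subset of primal and dual coordinates with coordinate-wise step sizes, is not reproduced here, and the problem data and step sizes are illustrative.

import numpy as np

rng = np.random.default_rng(2)

# Toy problem: min_x 0.5 * ||x - b||^2 + lam * ||D x||_1   (1-D total-variation denoising)
n, lam = 100, 2.0
b = np.concatenate([np.zeros(50), 3.0 * np.ones(50)]) + 0.3 * rng.normal(size=n)
D = np.diff(np.eye(n), axis=0)           # finite-difference operator, ||D||^2 <= 4

tau, sigma = 0.25, 0.5                   # chosen so that 1/tau - sigma * ||D||^2 >= L_f / 2, with L_f = 1
x, y = np.zeros(n), np.zeros(n - 1)      # primal iterate and dual variable attached to D x
for _ in range(3000):
    x_new = x - tau * ((x - b) + D.T @ y)                       # explicit step on the smooth term
    y = np.clip(y + sigma * D @ (2.0 * x_new - x), -lam, lam)   # prox of the conjugate of lam * ||.||_1
    x = x_new

print("data fit:", 0.5 * np.sum((x - b) ** 2), "  total variation:", np.sum(np.abs(D @ x)))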
Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
Pascal Bianchi, Walid Hachem
Mathematics, 2015
Abstract: The purpose of this paper is to study the dynamical behavior of the sequence $(x_n)$ produced by the forward-backward algorithm $y_{n+1} \in B(u_{n+1}, x_n)$, $x_{n+1} = ( I + \gamma_{n+1} A(u_{n+1}, \cdot))^{-1}( x_n - \gamma_{n+1} y_{n+1} )$ where $A(\xi) = A(\xi, \cdot)$ and $B(\xi) = B(\xi, \cdot)$ are two functions valued in the set of maximal monotone operators on $\mathbb{R}^N$, $(u_n)$ is a sequence of independent and identically distributed random variables, and $(\gamma_n)$ is a sequence of vanishing step sizes. Following the approach of the recent paper~\cite{bia-(arxiv)15}, we define the operators ${\mathcal A}(x) = {\mathbb E}[ A(u_1, x) ]$ and ${\mathcal B}(x) = {\mathbb E} [ B(u_1, x)]$, where the expectations are the set-valued Aumann integrals with respect to the law of $u_1$, and assume that the monotone operator ${\mathcal A} + {\mathcal B}$ is maximal (sufficient conditions for maximality are provided). It is shown that with probability one, the interpolated process obtained from the iterates $x_n$ is an asymptotic pseudo trajectory in the sense of Bena\"{\i}m and Hirsch of the differential inclusion $\dot z(t) \in - ({\mathcal A} + {\mathcal B})(z(t))$. The convergence of the empirical means of the $x_n$'s towards a zero of ${\mathcal A} + {\mathcal B}$ follows, as well as the convergence of the sequence $(x_n)$ itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments.
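A scalar toy instance of the forward-backward iteration above, with the illustrative operators A(u, x) = x and B(u, x) = x - u, so that the zero of the mean operators is known in closed form; the random environment, step sizes and averaging below are assumptions of the sketch, not choices made in the paper.

import numpy as np

rng = np.random.default_rng(3)

c = 1.0                                   # illustrative operator A(u, x) = c * x (independent of u)
def resolvent_A(v, gamma):
    # (I + gamma * A(u, .))^{-1}(v) for A(u, x) = c * x
    return v / (1.0 + gamma * c)

x = 5.0
weighted_sum, total_weight = 0.0, 0.0
for n in range(1, 50001):
    u = rng.normal(loc=2.0)               # i.i.d. random environment u_n, mean 2
    gamma = 1.0 / n ** 0.7                # vanishing step sizes
    y = x - u                             # forward (explicit) step: a selection of B(u, x) = x - u
    x = resolvent_A(x - gamma * y, gamma) # backward (implicit) step on A
    weighted_sum += gamma * x
    total_weight += gamma

# (A + B)(x) = c*x + x - E[u] has its unique zero at x* = E[u] / (1 + c) = 1.
print("empirical weighted mean of the iterates:", weighted_sum / total_weight)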
Distributed on-line multidimensional scaling for self-localization in wireless sensor networks
Gemma Morral, Pascal Bianchi
Computer Science, 2015
Abstract: The present work considers the localization problem in wireless sensor networks formed by fixed nodes. Each node seeks to estimate its own position based on noisy measurements of the relative distance to other nodes. In a centralized batch mode, positions can be retrieved (up to a rigid transformation) by applying Principal Component Analysis (PCA) on a so-called similarity matrix built from the relative distances. In this paper, we propose a distributed on-line algorithm allowing each node to estimate its own position based on limited exchange of information in the network. Our framework encompasses the case of sporadic measurements and random link failures. We prove the consistency of our algorithm in the case of fixed sensors. Finally, we provide numerical and experimental results from both simulated and real data. The experiments on real data are conducted on a wireless sensor network testbed.
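The centralized batch procedure mentioned in the abstract (classical multidimensional scaling: double-centre the squared distances and apply PCA) can be sketched in a few lines; the distributed on-line algorithm that constitutes the paper's contribution is not reproduced here, and the network size and noise level are illustrative.

import numpy as np

rng = np.random.default_rng(4)

# Ground-truth positions of 20 fixed nodes in the plane (unknown to the nodes themselves).
positions = rng.uniform(0.0, 10.0, size=(20, 2))
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
dist = dist + 0.05 * rng.normal(size=dist.shape)       # noisy pairwise distance measurements
dist = (dist + dist.T) / 2.0                           # symmetrise the measurements
np.fill_diagonal(dist, 0.0)

# Classical MDS: double-centre the squared distances to obtain the similarity (Gram) matrix,
# then keep the two principal components.
n = dist.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                    # centring matrix
similarity = -0.5 * J @ (dist ** 2) @ J
eigval, eigvec = np.linalg.eigh(similarity)            # eigenvalues in ascending order
estimate = eigvec[:, -2:] * np.sqrt(eigval[-2:])       # positions, up to a rigid transformation

print("estimated coordinates of node 0:", estimate[0])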
Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
Pascal Bianchi, Jérémie Jakubowicz
Mathematics, 2011
Abstract: We introduce a new framework for the convergence analysis of a class of distributed constrained non-convex optimization algorithms in multi-agent systems. The aim is to search for local minimizers of a non-convex objective function which is supposed to be a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Under the assumption of a decreasing step size, it is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush-Kuhn-Tucker points. As an important feature, the algorithm does not require the double-stochasticity of the gossip matrices. It is in particular suitable for use in a natural broadcast scenario for which no feedback messages between agents are required. It is proved that our result also holds if the number of communications in the network per unit of time vanishes at moderate speed as time increases, allowing for potential savings of the network's energy. Applications to power allocation in wireless ad-hoc networks are discussed. Finally, we provide numerical results which support our claims.
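A minimal sketch of the two-step structure described above, a local projected stochastic gradient step followed by a gossip step, on a toy problem with scalar quadratic local utilities; for simplicity the gossip matrix below is doubly stochastic, whereas the paper notably dispenses with that requirement, and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(5)

n_agents = 5
targets = rng.uniform(-2.0, 2.0, size=n_agents)   # agent i's local utility: 0.5 * (x - targets[i])^2
x = rng.uniform(-5.0, 5.0, size=n_agents)          # one local estimate per agent
lo, hi = -1.0, 1.0                                 # common constraint set X = [-1, 1]

# Doubly stochastic gossip matrix (the paper also handles non-doubly-stochastic gossip).
W = 0.5 * np.eye(n_agents) + 0.5 * np.ones((n_agents, n_agents)) / n_agents

for n in range(1, 20001):
    step = 1.0 / n ** 0.7
    noisy_grad = (x - targets) + 0.1 * rng.normal(size=n_agents)   # local stochastic gradients
    x = np.clip(x - step * noisy_grad, lo, hi)                     # projected local gradient step
    x = W @ x                                                      # gossip step towards consensus

print("agent estimates (close to consensus):", x)
print("constrained minimizer of the sum of utilities:", np.clip(targets.mean(), lo, hi))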
A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
Pascal Bianchi, Walid Hachem, Franck Iutzeler
Mathematics, 2014
Abstract: Based on the idea of randomized coordinate descent of $\alpha$-averaged operators, a randomized primal-dual optimization algorithm is introduced, where a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by V\~u and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve asynchronously a distributed optimization problem. A network of agents, each having a separate cost function containing a differentiable term, seeks to find a consensus on the minimum of the aggregate objective. The method yields an algorithm where at each iteration, a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates.
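The opening idea, randomized coordinate updates of an averaged operator, can be illustrated on the simplest possible case: the gradient-descent map of a strongly convex quadratic, with each coordinate updated independently with some probability at every iteration. The primal-dual construction and the asynchronous distributed application of the paper are not reproduced, and the problem and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(6)

# T(x) = x - tau * grad f(x) with f(x) = 0.5 * x^T Q x - b^T x; for tau <= 1/L this map is
# averaged, and its unique fixed point solves Q x = b.
n = 20
M = rng.normal(size=(n, n))
Q = M @ M.T / n + np.eye(n)                        # symmetric positive definite
b = rng.normal(size=n)
tau = 1.0 / np.linalg.eigvalsh(Q).max()

x = np.zeros(n)
for _ in range(20000):
    active = rng.random(n) < 0.2                   # each coordinate is updated with probability 0.2
    grad = Q @ x - b
    x[active] -= tau * grad[active]                # apply the operator only on the active coordinates

print("residual ||Qx - b||:", np.linalg.norm(Q @ x - b))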
Neyman-Pearson Detection of a Gaussian Source using Dumb Wireless Sensors
Pascal Bianchi, Jeremie Jakubowicz, Francois Roueff
Mathematics, 2010
Abstract: We investigate the performance of the Neyman-Pearson detection of a stationary Gaussian process in noise, using a large wireless sensor network (WSN). In our model, each sensor compresses its observation sequence using a linear precoder. The final decision is taken by a fusion center (FC) based on the compressed information. Two families of precoders are studied: random iid precoders and orthogonal precoders. We analyse their performance in the regime where both the number of sensors k and the number of samples n per sensor tend to infinity at the same rate, that is, k/n tends to c in (0, 1). Contributions are as follows. 1) Using results from random matrix theory and from the theory of large Toeplitz matrices, it is proved that the miss probability of the Neyman-Pearson detector converges exponentially to zero, when the above families of precoders are used. Closed-form expressions of the corresponding error exponents are provided. 2) In particular, we propose a practical orthogonal precoding strategy, the Principal Frequencies Strategy (PFS), which achieves the best error exponent among all orthogonal strategies, and which requires very little signaling overhead between the central processor and the nodes of the network. 3) Moreover, when the PFS is used, a simplified low-complexity testing procedure can be implemented at the FC. We show that the proposed suboptimal test enjoys the same error exponent as the Neyman-Pearson test, which indicates a similar asymptotic behaviour of the performance. We illustrate our findings by numerical experiments on some examples.
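A toy Monte-Carlo comparison in the spirit of the abstract: each observation record is compressed by a linear precoder and detection is performed with an energy statistic calibrated at a fixed false-alarm level, standing in for the exact Neyman-Pearson test. A random i.i.d. precoder is compared with a low-frequency orthogonal precoder loosely inspired by the Principal Frequencies Strategy; the signal model and all parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)

n, m, trials, alpha = 128, 8, 2000, 0.05           # n samples per record, compressed to m values
sigma2 = 1.0                                       # noise variance
idx = np.arange(n)
cov_signal = 0.9 ** np.abs(idx[:, None] - idx[None, :])   # correlated (AR(1)-like) Gaussian source

def miss_probability(W):
    # The fusion centre only sees z = W @ observation; an energy statistic calibrated at
    # false-alarm level alpha stands in for the exact Neyman-Pearson statistic.
    obs0 = rng.multivariate_normal(np.zeros(n), sigma2 * np.eye(n), trials)                # H0: noise only
    obs1 = rng.multivariate_normal(np.zeros(n), cov_signal + sigma2 * np.eye(n), trials)   # H1: signal + noise
    t0 = np.sum((obs0 @ W.T) ** 2, axis=1)
    t1 = np.sum((obs1 @ W.T) ** 2, axis=1)
    threshold = np.quantile(t0, 1 - alpha)
    return np.mean(t1 <= threshold)

W_random = rng.normal(size=(m, n)) / np.sqrt(n)                 # random i.i.d. precoder
freqs = np.arange(m)[:, None] * idx[None, :]
W_lowfreq = np.cos(2.0 * np.pi * freqs / n) / np.sqrt(n)        # m lowest-frequency cosine rows (orthogonal)

print("miss probability, random i.i.d. precoder:", miss_probability(W_random))
print("miss probability, low-frequency orthogonal precoder:", miss_probability(W_lowfreq))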
Nearly Optimal Resource Allocation for Downlink OFDMA in 2-D Cellular Networks
Nassar Ksairi, Pascal Bianchi, Philippe Ciblat
Mathematics, 2010
Abstract: In this paper, we propose a resource allocation algorithm for the downlink of sectorized two-dimensional (2-D) OFDMA cellular networks assuming statistical Channel State Information (CSI) and fractional frequency reuse. The proposed algorithm can be implemented in a distributed fashion without the need for any central controlling unit. Its performance is analyzed assuming fast fading Rayleigh channels and Gaussian distributed multicell interference. We show that the transmit power of this simple algorithm tends, as the number of users grows to infinity, to the same limit as the minimal power required to satisfy all users' rate requirements, i.e., the proposed resource allocation algorithm is asymptotically optimal. As a byproduct of this asymptotic analysis, we characterize a relevant value of the reuse factor that only depends on an average state of the network.
Performance of a Distributed Stochastic Approximation Algorithm
Pascal Bianchi, Gersende Fort, Walid Hachem
Mathematics, 2012
Abstract: In this paper, a distributed stochastic approximation algorithm is studied. Applications of such algorithms include decentralized estimation, optimization, control or computing. The algorithm consists of two steps: a local step, where each node in a network updates a local estimate using a stochastic approximation algorithm with decreasing step size, and a gossip step, where a node computes a local weighted average of its estimate and those of its neighbors. Convergence of the estimates toward a consensus is established under weak assumptions. The approach relies on two main ingredients: the existence of a Lyapunov function for the mean field in the agreement subspace, and a contraction property of the random matrices of weights in the subspace orthogonal to the agreement subspace. A second-order analysis of the algorithm is also performed in the form of a Central Limit Theorem. The Polyak-averaged version of the algorithm is also considered.
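A toy sketch of the two steps described above, a local stochastic approximation step with decreasing step size followed by a gossip step taking a weighted average over neighbours, including the Polyak-averaged version; the network, gossip weights and observation model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(8)

n_agents = 4
theta = np.array([1.0, 2.0, 3.0, 4.0])   # agent i observes theta[i] corrupted by noise
x = np.zeros(n_agents)                    # local estimates
polyak = np.zeros(n_agents)               # Polyak (running) averages of the local estimates

# Doubly stochastic gossip weights on a ring of 4 agents.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

for n in range(1, 50001):
    step = 1.0 / n ** 0.7
    noisy_obs = theta + rng.normal(size=n_agents)
    x = x + step * (noisy_obs - x)        # local stochastic approximation step, decreasing step size
    x = W @ x                             # gossip step: weighted average with the neighbours
    polyak += (x - polyak) / n            # Polyak averaging of the iterates

print("Polyak-averaged consensus estimates:", polyak)   # each close to mean(theta) = 2.5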