Abstract:
Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) can be used to solve convex optimization problems that consist of a sum of two functions. Convergence rate estimates for these algorithms have received much attention lately. In particular, linear convergence rates have been shown by several authors under various assumptions. One such set of assumptions is strong convexity and smoothness of one of the functions in the minimization problem. The authors recently provided a linear convergence rate bound for such problems. In this paper, we show that this rate bound is tight for many algorithm parameter choices.
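The setting of this abstract (both strong convexity and smoothness) can be illustrated with a minimal sketch, not taken from the paper: Douglas-Rachford splitting applied to min f(x) + g(x) with two strongly convex, smooth quadratics whose proximal operators have closed forms. All names and the problem data below are illustrative assumptions.

```python
import numpy as np

# Illustrative problem (not from the paper): f(x) = 0.5*||x - a||^2 and
# g(x) = 0.5*||x - b||^2 are both strongly convex and smooth, so linear
# convergence of Douglas-Rachford splitting is expected.

a = np.array([1.0, -2.0])
b = np.array([3.0, 4.0])
gamma = 1.0  # step-size parameter of the splitting

def prox_f(v):
    # prox of gamma*f at v: argmin_x f(x) + ||x - v||^2 / (2*gamma)
    return (v + gamma * a) / (1.0 + gamma)

def prox_g(v):
    return (v + gamma * b) / (1.0 + gamma)

z = np.zeros(2)
for _ in range(100):
    x = prox_g(z)          # first prox step (the "shadow" point)
    y = prox_f(2 * x - z)  # prox evaluated at the reflected point
    z = z + y - x          # fixed-point update of the governing sequence

x_star = (a + b) / 2       # true minimizer of f + g for this toy problem
```

For this toy instance the shadow sequence x contracts geometrically toward x_star, consistent with the linear rates the abstract discusses.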

Abstract:
Convex optimization has become ubiquitous in most quantitative disciplines of science, including variational image processing. Proximal splitting algorithms have become popular for solving such structured convex optimization problems. Within this class of algorithms, Douglas--Rachford (DR) and the alternating direction method of multipliers (ADMM) are designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local convergence behaviour of DR (resp. ADMM) when the involved functions (resp. their Legendre-Fenchel conjugates) are moreover partly smooth. More precisely, when both functions (resp. their conjugates) are partly smooth relative to their respective manifolds, we show that DR (resp. ADMM) identifies these manifolds in finite time. Moreover, when these manifolds are affine or linear, we prove that DR/ADMM is locally linearly convergent. When the two functions are locally polyhedral, we show that the optimal convergence radius is given in terms of the cosine of the Friedrichs angle between the tangent spaces of the identified manifolds. This is illustrated by several concrete examples and supported by numerical experiments.

Abstract:
Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show linear convergence rate bounds for Douglas-Rachford splitting under strong convexity and smoothness assumptions. We show that these bounds generalize and/or improve on similar bounds in the literature and that the bounds are tight for the class of problems under consideration. For finite dimensional Euclidean problems, we show how the rate bounds depend on the metric that is used in the algorithm. We also show how to select this metric to optimize the bound. Many real-world problems do not satisfy both the smoothness and strong convexity assumptions. Therefore, we also propose heuristic metric and parameter selection methods to improve performance on a wider class of problems. The efficiency of the proposed heuristics is confirmed in a numerical example on a model predictive control problem, where improvements of more than one order of magnitude are observed.

Abstract:
Douglas-Rachford splitting is an algorithm that solves composite monotone inclusion problems, of which composite convex optimization problems are a subclass. Recently, several authors have shown local and global convergence rate bounds for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (which is Douglas-Rachford splitting applied to the dual problem). In this paper, we show global convergence rate bounds under various assumptions on the problem data. The presented bounds improve and/or generalize previous results from the literature. We also provide examples showing that the provided bounds are indeed tight for the different classes of problems under consideration for many algorithm parameters.

Abstract:
In this paper, we investigate the Douglas-Rachford method for two closed (possibly nonconvex) sets in Euclidean spaces. We show that under certain regularity conditions, the Douglas-Rachford method converges locally with R-linear rate. In convex settings, we prove that the linear convergence is global. Our study recovers recent results on the same topic.
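A hypothetical numerical illustration of this nonconvex setting, not taken from the paper: the Douglas-Rachford method for two sets in R^2, one of them nonconvex, namely the unit circle and a line that crosses it. Started near an intersection point, the iteration is observed to converge, in line with local linear convergence under regularity conditions. The specific sets and starting point below are my own choices.

```python
import numpy as np

# Two sets in R^2: the (nonconvex) unit circle A and the line
# B = {x : x_2 = 0.5}; they intersect at (+-sqrt(0.75), 0.5).

def proj_circle(z):
    # projection onto the unit circle (well defined for z != 0)
    return z / np.linalg.norm(z)

def proj_line(z):
    return np.array([z[0], 0.5])

z = np.array([0.9, 0.6])   # start near the intersection point
for _ in range(200):
    p = proj_circle(z)     # "shadow" point on the circle
    z = z + proj_line(2 * p - z) - p

shadow = proj_circle(z)
target = np.array([np.sqrt(0.75), 0.5])   # nearby intersection point
```

The shadow point lands on the intersection of the two sets, despite the lack of convexity.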

Abstract:
We provide a simple analysis of the Douglas-Rachford splitting algorithm in the context of $\ell^1$ minimization with linear constraints, and quantify the asymptotic linear convergence rate in terms of principal angles between relevant vector spaces. In the compressed sensing setting, we show how to bound this rate in terms of the restricted isometry constant. More general iterative schemes obtained by $\ell^2$-regularization and over-relaxation, including the dual split Bregman method, are also treated, which answers the question of how to choose the relaxation and soft-thresholding parameters to accelerate the asymptotic convergence rate. We make no attempt at characterizing the transient regime preceding the onset of linear convergence.
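A small sketch of the problem class this abstract treats, with toy data of my own choosing rather than the paper's setup: Douglas-Rachford splitting for $\ell^1$ minimization subject to a linear constraint, min ||x||_1 s.t. Ax = b, combining soft-thresholding (the prox of the $\ell^1$ norm) with projection onto the affine constraint set.

```python
import numpy as np

# Toy basis-pursuit instance (illustrative, not from the paper):
# minimize |x_1| + |x_2| subject to x_1 + x_2 = 1.
# Any minimizer has l1 norm exactly 1, since |x_1| + |x_2| >= |x_1 + x_2| = 1.

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
gamma = 1.0                     # soft-threshold level (step-size parameter)

def soft_threshold(v, t):
    # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

AAt_inv = np.linalg.inv(A @ A.T)   # A has full row rank here
def proj_affine(v):
    # projection onto {x : Ax = b}
    return v - A.T @ (AAt_inv @ (A @ v - b))

z = np.zeros(2)
for _ in range(200):
    x = soft_threshold(z, gamma)
    y = proj_affine(2 * x - z)
    z = z + y - x
```

After the loop, x is feasible and attains the optimal $\ell^1$ norm of 1 for this instance.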

Abstract:
Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusion and convex optimization problems that are built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable algorithms, which often obtain nearly state-of-the-art performance. In this paper, we provide a comprehensive convergence rate analysis of the Douglas-Rachford splitting (DRS), Peaceman-Rachford splitting (PRS), and alternating direction method of multipliers (ADMM) algorithms under various regularity assumptions including strong convexity, Lipschitz differentiability, and bounded linear regularity. The main consequence of this work is that relaxed PRS and ADMM automatically adapt to the regularity of the problem and achieve convergence rates that improve upon the (tight) worst-case rates that hold in the absence of such regularity. All of the results are obtained using simple techniques.
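The relaxed PRS family mentioned here can be sketched as the iteration z_{k+1} = (1 - lam) z_k + lam R_f(R_g(z_k)), where R_h = 2 prox_h - I is a reflected proximal map; lam = 1/2 recovers DRS and lam = 1 full PRS. The quadratic test problem below is a hypothetical illustration, not one of the paper's examples; with both functions strongly convex and Lipschitz differentiable, both extremes converge linearly, matching the regularity story in the abstract.

```python
import numpy as np

# f(x) = 0.5*||x - a||^2 and g(x) = 0.5*||x - b||^2: strongly convex
# and smooth, so relaxed PRS converges linearly for the lam values used.

a, b = np.array([0.0, 1.0]), np.array([2.0, 3.0])
gamma = 0.5

def prox_f(v):
    return (v + gamma * a) / (1.0 + gamma)

def prox_g(v):
    return (v + gamma * b) / (1.0 + gamma)

def reflect(prox, v):
    # reflected proximal map R = 2*prox - I
    return 2 * prox(v) - v

def relaxed_prs(lam, iters=200):
    z = np.zeros(2)
    for _ in range(iters):
        z = (1 - lam) * z + lam * reflect(prox_f, reflect(prox_g, z))
    return prox_g(z)        # shadow point: the candidate minimizer

x_dr = relaxed_prs(0.5)     # Douglas-Rachford splitting
x_pr = relaxed_prs(1.0)     # Peaceman-Rachford splitting
x_star = (a + b) / 2        # true minimizer of f + g
```

Both relaxation choices reach the same minimizer; only the contraction factor differs.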

Abstract:
The Douglas-Rachford splitting algorithm is a classical optimization method that has found many applications. When specialized to two normal cone operators, it yields an algorithm for finding a point in the intersection of two convex sets. This method for solving feasibility problems has attracted a lot of attention due to its good performance even in nonconvex settings. In this paper, we consider the Douglas-Rachford algorithm for finding a point in the intersection of two subspaces. We prove that the method converges strongly to the projection of the starting point onto the intersection. Moreover, if the sum of the two subspaces is closed, then the convergence is linear with the rate being the cosine of the Friedrichs angle between the subspaces. Our results improve upon existing results in three ways: first, we identify the location of the limit and thus reveal the method as a best approximation algorithm; second, we quantify the rate of convergence; and third, we carry out our analysis in general (possibly infinite-dimensional) Hilbert space. We also provide various examples as well as a comparison with the classical method of alternating projections.
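A finite-dimensional sketch of the subspace result, with an example of my own construction rather than one from the paper: two planes in R^3 whose intersection is the x-axis and whose Friedrichs angle is 45 degrees, so the predicted linear rate is cos(45°) = 1/sqrt(2). The shadow sequence converges to the projection of the starting point onto the intersection.

```python
import numpy as np

# Subspace A: the xy-plane.  Subspace B: the plane spanned by e1 and
# v = (0, 1, 1)/sqrt(2).  A ∩ B is the x-axis; the Friedrichs angle
# between A and B is 45 degrees, so cos = 1/sqrt(2) is the linear rate.

P_A = np.diag([1.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
e1 = np.array([1.0, 0.0, 0.0])
P_B = np.outer(e1, e1) + np.outer(v, v)   # orthogonal projection onto B

z0 = np.array([1.0, 2.0, 3.0])
z = z0.copy()
for _ in range(100):
    x = P_A @ z
    z = z + P_B @ (2 * x - z) - x

shadow = P_A @ z
target = np.array([1.0, 0.0, 0.0])   # projection of z0 onto A ∩ B (x-axis)
```

After 100 iterations the error is of order (1/sqrt(2))^100, i.e. at machine precision.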

Abstract:
The Douglas-Rachford algorithm is a classical and very successful method for solving optimization and feasibility problems. In this paper, we provide novel conditions sufficient for finite convergence in the context of convex feasibility problems. Our analysis builds upon, and considerably extends, pioneering work by Spingarn. Specifically, we obtain finite convergence in the presence of Slater's condition in the affine-polyhedral and in a hyperplanar-epigraphical case. Various examples illustrate our results. Numerical experiments demonstrate the competitiveness of the Douglas-Rachford algorithm for solving linear equations with a positivity constraint when compared to the method of alternating projections and the method of reflection-projection.
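A toy version of the feasibility problem from the numerical experiments, with data of my own choosing: Douglas-Rachford applied to finding x with Ax = b and x >= 0, alternating the projection onto the affine set {x : Ax = b} with the projection onto the nonnegative orthant. Slater's condition holds here (the affine set contains strictly positive points), the setting in which the paper obtains finite convergence.

```python
import numpy as np

# Toy instance (illustrative, not the paper's experiment):
# find x in R^2 with x_1 + x_2 = 1 and x >= 0.

A = np.array([[1.0, 1.0]])
b = np.array([1.0])

AAt_inv = np.linalg.inv(A @ A.T)   # A has full row rank
def proj_affine(v):
    # projection onto the affine set {x : Ax = b}
    return v - A.T @ (AAt_inv @ (A @ v - b))

def proj_orthant(v):
    # projection onto the nonnegative orthant
    return np.maximum(v, 0.0)

z = np.array([-3.0, 0.2])          # infeasible starting point
for _ in range(100):
    x = proj_affine(z)
    z = z + proj_orthant(2 * x - z) - x

x = proj_affine(z)                 # candidate feasible point
```

On this instance the iteration in fact reaches a feasible point after only a handful of steps, consistent with the finite-convergence results described above.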