Abstract:
We study the problem of finding the global Riemannian center of mass of a set of data points on a Riemannian manifold. Specifically, we investigate the convergence of constant step-size gradient descent algorithms for solving this problem. The challenge is that often the underlying cost function is neither globally differentiable nor convex, yet one would like guaranteed convergence to the global minimizer. After some necessary preparations, we state a conjecture which we argue is the best (in a sense described) convergence condition one can hope for. The conjecture specifies conditions on the spread of the data points, the step-size range, and the location of the initial condition (i.e., the region of convergence) of the algorithm. These conditions depend on the topology and the curvature of the manifold and can be conveniently described in terms of the injectivity radius and the sectional curvatures of the manifold. For manifolds of constant nonnegative curvature (e.g., the sphere and the rotation group in $\mathbb{R}^{3}$) we show that the conjecture holds true (we do this by proving and using a comparison theorem which seems to be of a different nature from the standard comparison theorems in Riemannian geometry). For manifolds of arbitrary curvature we prove convergence results which are weaker than the conjectured one (but still superior to the available results). We also briefly study the effect of the configuration of the data points on the speed of convergence.
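The constant step-size scheme discussed above can be sketched concretely on the unit sphere, a manifold of constant positive curvature and one of the cases where the abstract says the conjecture holds. This is a minimal illustrative sketch, not the paper's algorithm; the function names and the particular step size are our choices, and the step size is assumed to satisfy the relevant convergence condition for well-spread data.

```python
import numpy as np

def sphere_log(p, q):
    """Riemannian log map on the unit sphere: tangent vector at p pointing to q."""
    v = q - np.dot(p, q) * p
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nv < 1e-12 else (theta / nv) * v

def sphere_exp(p, v):
    """Riemannian exp map on the unit sphere."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * (v / nv)

def karcher_mean(points, step=0.5, iters=100):
    """Constant step-size gradient descent for the Riemannian center of mass.

    The negative gradient of f(x) = (1/2n) * sum_i d(x, p_i)^2 at x is the
    mean of the log maps of the data points, so each iteration moves along
    the exponential map in that direction with a fixed step size.
    """
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        g = np.mean([sphere_log(x, p) for p in points], axis=0)
        x = sphere_exp(x, step * g)
    return x
```

For two points placed symmetrically about the north pole, the iterates contract toward the pole, which is the unique center of mass for data confined to a small geodesic ball.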

Abstract:
The techniques and analysis presented in this paper provide new methods to solve optimization problems posed on Riemannian manifolds. A new point of view is offered for the solution of constrained optimization problems. Some classical optimization techniques on Euclidean space are generalized to Riemannian manifolds. Several algorithms are presented and their convergence properties are analyzed employing the Riemannian structure of the manifold. Specifically, two apparently new algorithms, which can be thought of as Newton's method and the conjugate gradient method on Riemannian manifolds, are presented and shown to possess, respectively, quadratic and superlinear convergence. Examples of each method on certain Riemannian manifolds are given with the results of numerical experiments. Rayleigh's quotient defined on the sphere is one example. It is shown that Newton's method applied to this function converges cubically, and that the Rayleigh quotient iteration is an efficient approximation of Newton's method. The Riemannian version of the conjugate gradient method applied to this function gives a new algorithm for finding the eigenvectors corresponding to the extreme eigenvalues of a symmetric matrix. Another example arises from extremizing the function $\mathop{\rm tr} {\Theta}^{\scriptscriptstyle\rm T}Q{\Theta}N$ on the special orthogonal group. In a similar example, it is shown that Newton's method applied to the sum of the squares of the off-diagonal entries of a symmetric matrix converges cubically.
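The Rayleigh quotient example above has a well-known concrete form: the Rayleigh quotient iteration, which the abstract identifies as an efficient approximation of Newton's method on the sphere. The sketch below is our transcription of that classical iteration, not code from the paper.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=10):
    """Rayleigh quotient iteration for a symmetric matrix A.

    Each step solves a shifted linear system with the current Rayleigh
    quotient as shift and renormalizes; convergence to an eigenpair is
    cubic, matching the cubic rate cited for Newton's method on the sphere.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = x @ A @ x  # Rayleigh quotient at the current iterate
        try:
            y = np.linalg.solve(A - rho * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break  # shift hit an eigenvalue exactly; x is already an eigenvector
        x = y / np.linalg.norm(y)
    return x, x @ A @ x
```

A few iterations typically drive the eigenpair residual $\|Ax - \rho x\|$ to machine precision.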

Abstract:
We investigate the low-energy behavior of the gradient flow of the $L^2$ norm of the Riemannian curvature on four-manifolds. Specifically, we show long time existence and exponential convergence to a metric of constant sectional curvature when the initial metric has positive Yamabe constant and small initial energy.

Abstract:
A Riemannian gradient algorithm based on the geometric structure of the manifold of all positive definite matrices is proposed to calculate the numerical solution of a linear matrix equation. In this algorithm, the geodesic distance on the curved Riemannian manifold is taken as the objective function and the geodesic curve is treated as the convergence path. The optimal variable step sizes corresponding to the minimum value of the objective function are also provided in order to improve the convergence speed. Furthermore, the convergence speed of the Riemannian gradient algorithm is compared with that of the traditional conjugate gradient method in two simulation examples; the proposed algorithm is found to converge faster than the conjugate gradient method.

1. Introduction

The linear matrix equation, where are arbitrary real matrices, is a nonnegative integer, and denotes the transpose of the matrix , arises in many fields, such as control theory, dynamic programming, and stochastic filtering [1–4]. In the past decades, there has been increasing interest in the solution of this equation. In the case of , some numerical methods, such as the Bartels-Stewart method, the Hessenberg-Schur method, and the Schur and QR decomposition methods, were proposed in [5, 6]. Based on the Kronecker product and the fixed point theorem in partially ordered sets, some sufficient conditions for the existence of a unique symmetric positive definite solution were given in [7, 8]. Ran and Reurings ([7, Theorem 3.3] and [9, Theorem 3.1]) pointed out that if is a positive definite matrix, then there exists a unique solution and it is symmetric positive definite. Recently, under the condition that (1) is consistent, Su and Chen presented an efficient numerical iterative method based on the conjugate gradient method (CGM) [10]. In addition, based on geometric structures on a Riemannian manifold, Duan et al. proposed a natural gradient descent algorithm to solve algebraic Lyapunov equations [11, 12]. Following them, we investigate the solution of (1) from the viewpoint of Riemannian manifolds. Note that the solution of (1) is a symmetric positive definite matrix, and the set of all symmetric positive definite matrices can be considered as a manifold. Thus, it is more convenient to investigate the solution problem with the help of the geometric structures on this manifold. To address such a need, a new framework is presented in this paper to calculate the numerical solution, which is based on the geometric structures on the
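The objective function above is the geodesic distance on the manifold of positive definite matrices. A minimal sketch of that distance under the standard affine-invariant metric is given below; the metric choice is our assumption (the paper's exact metric may differ), and the eigendecomposition route is just one convenient way to evaluate the matrix logarithm for symmetric positive definite arguments.

```python
import numpy as np

def spd_distance(X, Y):
    """Affine-invariant geodesic distance on the SPD manifold:
    d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F.
    """
    w, V = np.linalg.eigh(X)
    X_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T  # X^{-1/2} via eigendecomposition
    M = X_inv_sqrt @ Y @ X_inv_sqrt                   # congruence-transformed target
    lam = np.linalg.eigvalsh(M)                       # eigenvalues of an SPD matrix
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

For scalar multiples of the identity the formula reduces to $d(I, cI) = \sqrt{n}\,\log c$, which gives a quick sanity check.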

Abstract:
A new type of gradient estimate is established for diffusion semigroups on non-compact complete Riemannian manifolds. As applications, a global Harnack inequality with power and a heat kernel estimate are derived for diffusion semigroups on arbitrary complete Riemannian manifolds.

Abstract:
We prove existence and partial regularity of integral rectifiable $m$-dimensional varifolds minimizing functionals of the type $\int |H|^p$ and $\int |A|^p$ in a given Riemannian $n$-dimensional manifold $(N,g)$, $2\leq m<n$, under suitable assumptions on $N$ (at the end of the paper we give many examples of such ambient manifolds). To this aim we introduce the following new tools: some monotonicity formulas for varifolds in $\mathbb{R}^S$ involving $\int |H|^p$, to avoid degeneracy of the minimizer, and a sort of isoperimetric inequality to bound the mass in terms of the mentioned functionals.

Abstract:
In this note we present some gradient estimates for the diffusion equation $\partial_t u=\Delta u-\nabla \phi \cdot \nabla u$ on Riemannian manifolds, where $\phi$ is a $C^2$ function; these generalize estimates of R. Hamilton and Qi S. Zhang for the heat equation.

Abstract:
In this paper we study gradient estimates for the positive solutions of the porous medium equation: $$u_t=\Delta u^m$$ where $m>1$, which is a nonlinear version of the heat equation. We derive local gradient estimates of the Li-Yau type for positive solutions of porous medium equations on Riemannian manifolds with Ricci curvature bounded from below. As applications, several parabolic Harnack inequalities are obtained. In particular, our results improve the ones of Lu, Ni, V\'{a}zquez and Villani in [10]. Moreover, our results recover the ones of Davies in [4], Hamilton in [5] and Li and Xu in [7].

Abstract:
In this article, a new Riemannian conjugate gradient method is developed together with a global convergence analysis. The existing Fletcher-Reeves-type Riemannian conjugate gradient method is guaranteed to converge globally if it is implemented with the strong Wolfe conditions. On the other hand, the Dai-Yuan-type Euclidean conjugate gradient method generates globally convergent sequences under the weak Wolfe conditions. This article deals with a generalization of Dai-Yuan's Euclidean algorithm to a Riemannian algorithm that requires only the weak, not the strong, Wolfe conditions. The global convergence property of the proposed method is proved by means of the scaled vector transport associated with the differentiated retraction.
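For reference, the Euclidean Dai-Yuan update that the article generalizes reads, in standard notation (our transcription):

```latex
\beta_k^{\mathrm{DY}} \;=\; \frac{\|g_{k+1}\|^2}{d_k^{\top}(g_{k+1}-g_k)},
\qquad
d_{k+1} \;=\; -g_{k+1} + \beta_k^{\mathrm{DY}} d_k,
```

where $g_k = \nabla f(x_k)$, and the weak Wolfe conditions on the step size $\alpha_k$ are

```latex
f(x_k+\alpha_k d_k) \;\le\; f(x_k) + c_1 \alpha_k\, g_k^{\top} d_k,
\qquad
g(x_k+\alpha_k d_k)^{\top} d_k \;\ge\; c_2\, g_k^{\top} d_k,
\qquad 0 < c_1 < c_2 < 1.
```

In the Riemannian setting, $g_{k+1}$ lives in a different tangent space from $g_k$ and $d_k$, so the difference in the denominator is not defined as written; this is precisely why the article's scaled vector transport is needed to combine the two tangent vectors.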

Abstract:
We consider abstract operator equations $Fu=y$, where $F$ is a compact linear operator between Hilbert spaces $U$ and $V$, which are function spaces on \emph{closed, finite-dimensional Riemannian manifolds}, respectively. This setting is of interest in numerous applications such as computer vision and non-destructive evaluation. In this work, we study the approximation of the solution of the ill-posed operator equation with Tikhonov-type regularization methods. We prove well-posedness, stability, convergence, and convergence rates of the regularization methods. Moreover, we study the numerical analysis and the numerical implementation in detail. Finally, we provide numerical experiments for three different inverse problems.