Abstract:
We consider the inexact Newton methods $$ x_{n+1}^\delta=x_n^\delta-g_{\alpha_n}(F'(x_n^\delta)^* F'(x_n^\delta)) F'(x_n^\delta)^* (F(x_n^\delta)-y^\delta) $$ for solving nonlinear ill-posed inverse problems $F(x)=y$ using only the available noisy data $y^\delta$ satisfying $\|y^\delta-y\|\le \delta$ with a given small noise level $\delta>0$. We terminate the iteration by the discrepancy principle $$ \|F(x_{n_\delta}^\delta)-y^\delta\|\le \tau \delta<\|F(x_n^\delta)-y^\delta\|, \qquad 0\le n<n_\delta, $$ where $\tau>1$. Under certain conditions on $\{\alpha_n\}$ and $F$, we prove for a large class of spectral filter functions $\{g_\alpha\}$ the convergence of $x_{n_\delta}^\delta$ to a true solution as $\delta\rightarrow 0$. Moreover, we derive order optimal rates of convergence when certain H\"{o}lder source conditions hold. Numerical examples are given to test the theoretical results.
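As a concrete illustration of the iteration and the discrepancy-principle stopping rule (not taken from the paper), the following Python sketch uses the Tikhonov spectral filter $g_\alpha(\lambda)=1/(\lambda+\alpha)$, a geometrically decaying sequence $\alpha_n$, and a small two-dimensional toy operator $F$. All concrete choices (the operator, $\tau$, $\alpha_0$) are illustrative assumptions; a genuinely ill-posed problem lives in infinite dimensions, so the example only shows the mechanics of the method.

```python
import numpy as np

def F(x):
    # toy nonlinear forward operator (illustrative only, not ill-posed)
    return np.array([x[0] + x[1]**2, x[0] * x[1]])

def Fprime(x):
    # Jacobian (Frechet derivative) of F
    return np.array([[1.0, 2.0 * x[1]],
                     [x[1], x[0]]])

def inexact_newton(y_delta, delta, x0, alphas, tau=1.5):
    # x_{n+1} = x_n - g_{alpha_n}(A^*A) A^* (F(x_n) - y_delta) with the
    # Tikhonov filter g_alpha(lam) = 1/(lam + alpha), stopped by the
    # discrepancy principle ||F(x_n) - y_delta|| <= tau * delta
    x = x0.copy()
    for alpha in alphas:
        residual = F(x) - y_delta
        if np.linalg.norm(residual) <= tau * delta:
            break
        A = Fprime(x)
        x = x - np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ residual)
    return x

x_true = np.array([1.0, 2.0])
delta = 1e-4
rng = np.random.default_rng(0)
noise = rng.standard_normal(2)
y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)

alphas = [0.5 * 0.5**n for n in range(50)]          # geometrically decaying alpha_n
x_rec = inexact_newton(y_delta, delta, np.array([0.8, 1.8]), alphas)
```

The iteration stops at the first index whose residual drops below $\tau\delta$; as $\alpha_n\to 0$ the damped steps approach full Gauss-Newton steps.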

Abstract:
In this paper we consider the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed inverse problems. Under merely a Lipschitz condition on the Fr\'{e}chet derivative, we prove that this method, together with an a posteriori stopping rule, defines an order optimal regularization method if the solution is regular in some suitable sense.

Abstract:
Inexact Newton regularization methods have been proposed by Hanke and Rieder for solving nonlinear ill-posed inverse problems. Every such method consists of two components: an outer Newton iteration and an inner scheme providing increments by regularizing local linearized equations. The method is terminated by a discrepancy principle. In this paper we consider the inexact Newton regularization methods whose inner scheme is defined by Landweber iteration, the implicit iteration, asymptotic regularization, or Tikhonov regularization. Under certain conditions we obtain an order optimal convergence rate result which improves the suboptimal one of Rieder. In fact we obtain a more general order optimality result by considering these inexact Newton methods in Hilbert scales.

Abstract:
We consider a class of inexact Newton regularization methods for solving nonlinear inverse problems in Hilbert scales. Under certain conditions we obtain the order optimal convergence rate result.

Abstract:
For solving linear ill-posed problems, regularization methods are required when the right-hand side is contaminated by noise. In the present paper regularized solutions are obtained by implicit iteration methods in Hilbert scales. By exploiting operator monotonicity of certain functions and interpolation techniques in variable Hilbert scales, we study these methods under general smoothness conditions. Order optimal error bounds are given in case the regularization parameter is chosen either {\it a priori} or {\it a posteriori} by the discrepancy principle. For realizing the discrepancy principle, a fast algorithm is proposed which is based on Newton's method applied to properly transformed equations.
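To illustrate the idea of Newton's method on a transformed discrepancy equation, the following Python sketch treats only ordinary Tikhonov regularization (the Hilbert-scale and implicit-iteration machinery of the paper is omitted): in the transformed variable $\beta = 1/\alpha$ the squared residual is convex and decreasing, so Newton's method increases monotonically to the root of the discrepancy equation. The particular transformation and the toy problem are illustrative assumptions, not necessarily the ones used in the paper.

```python
import numpy as np

def tikhonov_discrepancy_newton(A, y_delta, tau_delta, beta0=1e-8, tol=1e-14, max_iter=200):
    # Solve ||A x_alpha - y_delta||^2 = tau_delta^2 for the Tikhonov
    # regularization parameter alpha by Newton's method in beta = 1/alpha;
    # each term c_i^2 / (1 + s_i^2 beta)^2 is convex and decreasing in beta,
    # so the iterates increase monotonically to the root.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    c = U.T @ y_delta
    c_range, c_orth = c[:s.size], c[s.size:]
    target = tau_delta**2

    def h(beta):
        # squared residual at alpha = 1/beta, minus the target
        return np.sum(c_range**2 / (1 + s**2 * beta)**2) + np.sum(c_orth**2) - target

    def hprime(beta):
        return np.sum(-2 * s**2 * c_range**2 / (1 + s**2 * beta)**3)

    beta = beta0
    for _ in range(max_iter):
        val = h(beta)
        if abs(val) <= tol:
            break
        beta -= val / hprime(beta)
    return 1.0 / beta

# toy linear problem with noisy right-hand side (illustrative)
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 15)) @ np.diag(1.0 / np.arange(1, 16)**2)
x_true = rng.standard_normal(15)
delta = 1e-3
noise = rng.standard_normal(30)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

alpha = tikhonov_discrepancy_newton(A, y_delta, 1.5 * delta)
```

Starting from a small $\beta_0$ (large $\alpha$) keeps all iterates on the monotone side of the root, so no safeguarding is needed.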

Abstract:
In this paper we propose an extension of the iteratively regularized Gauss--Newton method to the Banach space setting by defining the iterates via convex optimization problems. We consider some a posteriori stopping rules to terminate the iteration and present the detailed convergence analysis. The remarkable point is that in each convex optimization problem we allow non-smooth penalty terms including $L^1$ and total variation (TV) like penalty functionals. This enables us to reconstruct special features of solutions such as sparsity and discontinuities in practical applications. Some numerical experiments on parameter identification in partial differential equations are reported to test the performance of our method.

Abstract:
We consider the computation of stable approximations to the exact solution $x^\dag$ of nonlinear ill-posed inverse problems $F(x)=y$ with nonlinear operators $F:X\to Y$ between two Hilbert spaces $X$ and $Y$ by the Newton type methods $$ x_{k+1}^\delta=x_0-g_{\alpha_k} (F'(x_k^\delta)^*F'(x_k^\delta)) F'(x_k^\delta)^* (F(x_k^\delta)-y^\delta-F'(x_k^\delta)(x_k^\delta-x_0)) $$ in the case that the only available data is a noisy version $y^\delta$ of $y$ satisfying $\|y^\delta-y\|\le \delta$ with a given small noise level $\delta>0$. We terminate the iteration by the discrepancy principle, in which the stopping index $k_\delta$ is determined as the first integer such that $$ \|F(x_{k_\delta}^\delta)-y^\delta\|\le \tau \delta <\|F(x_k^\delta)-y^\delta\|, \qquad 0\le k<k_\delta, $$ where $\tau>1$. Under certain conditions on $\{\alpha_k\}$, $\{g_\alpha\}$ and $F$, we prove that $x_{k_\delta}^\delta$ converges to $x^\dag$ as $\delta\to 0$ and establish various order optimal convergence rate results. It is remarkable that we can even show the order optimality under merely the Lipschitz condition on the Fr\'{e}chet derivative $F'$ of $F$ if $x_0-x^\dag$ is smooth enough.
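The Newton type iteration above can be sketched numerically; with the Tikhonov filter $g_\alpha(\lambda)=1/(\lambda+\alpha)$ it becomes the iteratively regularized Gauss-Newton method. The following Python snippet is an illustrative toy (the operator, $\tau$, and $\alpha_0$ are assumptions, not from the paper); note how each update is anchored at $x_0$ rather than at the previous iterate.

```python
import numpy as np

def F(x):
    # toy nonlinear forward operator (illustrative only)
    return np.array([x[0] + x[1]**2, x[0] * x[1]])

def Fprime(x):
    # Jacobian (Frechet derivative) of F
    return np.array([[1.0, 2.0 * x[1]],
                     [x[1], x[0]]])

def newton_type(y_delta, delta, x0, alphas, tau=1.5):
    # x_{k+1} = x_0 - g_{alpha_k}(A^*A) A^* (F(x_k) - y_delta - A (x_k - x_0));
    # with the Tikhonov filter g_alpha(lam) = 1/(lam + alpha) this is the
    # iteratively regularized Gauss-Newton method, stopped by the
    # discrepancy principle ||F(x_k) - y_delta|| <= tau * delta
    x = x0.copy()
    for alpha in alphas:
        if np.linalg.norm(F(x) - y_delta) <= tau * delta:
            break
        A = Fprime(x)
        rhs = A.T @ (F(x) - y_delta - A @ (x - x0))
        x = x0 - np.linalg.solve(A.T @ A + alpha * np.eye(2), rhs)
    return x

x_true = np.array([1.0, 2.0])
delta = 1e-4
rng = np.random.default_rng(0)
noise = rng.standard_normal(2)
y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)

alphas = [0.5 * 0.5**k for k in range(100)]
x_rec = newton_type(y_delta, delta, np.array([0.8, 1.8]), alphas)
```

An equivalent form of the update is $x_{k+1} = x_k - (A^*A+\alpha_k I)^{-1}\bigl(A^*(F(x_k)-y^\delta) + \alpha_k(x_k - x_0)\bigr)$, which makes the extra regularizing pull toward $x_0$ explicit.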

Abstract:
The determination of solutions of many inverse problems usually requires a set of measurements, which leads to systems of ill-posed equations. In this paper we propose the Landweber iteration of Kaczmarz type with a general uniformly convex penalty functional. The method is formulated by using tools from convex analysis. The penalty term is allowed to be non-smooth, so as to include the $L^1$ and total variation (TV) like penalty functionals, which are significant in reconstructing special features of solutions such as sparsity and piecewise constancy in practical applications. Under reasonable conditions, we establish the convergence of the method. Finally we present numerical simulations on tomography problems and parameter identification in partial differential equations to demonstrate its performance.
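With the classical quadratic penalty $\tfrac12\|x\|^2$ the method reduces to the standard Landweber-Kaczmarz iteration in Hilbert space, which the following Python sketch illustrates on a toy system of two scalar equations (exact data and a fixed number of sweeps; all concrete choices are illustrative assumptions, and the paper's general uniformly convex penalty is not reflected here).

```python
import numpy as np

# toy system of two scalar equations F_i(x) = y_i
Fs = [lambda x: x[0] + x[1]**2,
      lambda x: x[0] * x[1]]
grads = [lambda x: np.array([1.0, 2.0 * x[1]]),  # adjoint F_0'(x)^*
         lambda x: np.array([x[1], x[0]])]       # adjoint F_1'(x)^*

def landweber_kaczmarz(y, x0, omega=0.05, sweeps=2000):
    # cycle over the equations, applying one Landweber step for each:
    # x <- x - omega * F_i'(x)^* (F_i(x) - y_i)
    x = x0.copy()
    for _ in range(sweeps):
        for Fi, Gi, yi in zip(Fs, grads, y):
            x = x - omega * Gi(x) * (Fi(x) - yi)
    return x

x_true = np.array([1.0, 2.0])
y = np.array([f(x_true) for f in Fs])
x_rec = landweber_kaczmarz(y, np.array([0.8, 1.8]))
```

Each sweep visits every equation once; with noisy data one would instead stop by a discrepancy criterion over all equations.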

Abstract:
We consider the nonstationary iterated Tikhonov regularization in Banach spaces which defines the iterates via minimization problems with a uniformly convex penalty term. The penalty term is allowed to be non-smooth, so as to include $L^1$ and total variation (TV) like penalty functionals, which are significant in reconstructing special features of solutions such as sparsity and discontinuities in practical applications. We present a detailed convergence analysis and obtain the regularization property when the method is terminated by the discrepancy principle. In particular we establish strong convergence and convergence in Bregman distance, which sharply contrast with the known results that only provide weak convergence for a subsequence of the iterative solutions. Some numerical experiments on linear integral equations of the first kind and parameter identification in differential equations are reported.
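In the Hilbert-space special case with quadratic penalty, the $n$-th minimization problem has the closed-form update $x_{n+1} = x_n + (A^*A + \alpha_n I)^{-1} A^*(y^\delta - A x_n)$. The following Python sketch illustrates this special case with the discrepancy-principle stopping rule on a toy linear problem (the matrix, noise level, and $\alpha_n$ are illustrative assumptions; the Banach-space, non-smooth-penalty setting of the paper is not reflected).

```python
import numpy as np

def iterated_tikhonov(A, y_delta, delta, alphas, tau=1.1):
    # nonstationary iterated Tikhonov with quadratic penalty:
    # x_{n+1} minimizes ||A x - y_delta||^2 + alpha_n ||x - x_n||^2, i.e.
    # x_{n+1} = x_n + (A^*A + alpha_n I)^{-1} A^* (y_delta - A x_n),
    # stopped by the discrepancy principle ||A x_n - y_delta|| <= tau * delta
    m = A.shape[1]
    x = np.zeros(m)
    n = 0
    while np.linalg.norm(A @ x - y_delta) > tau * delta and n < len(alphas):
        x = x + np.linalg.solve(A.T @ A + alphas[n] * np.eye(m),
                                A.T @ (y_delta - A @ x))
        n += 1
    return x, n

# toy linear problem with noisy data (illustrative)
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10)) @ np.diag(1.0 / np.arange(1, 11)**2)
x_true = rng.standard_normal(10)
delta = 1e-3
noise = rng.standard_normal(20)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

alphas = [0.5**n for n in range(100)]
x_rec, n_stop = iterated_tikhonov(A, y_delta, delta, alphas)
```

The geometrically decaying $\alpha_n$ makes the residual drop quickly once $\alpha_n$ falls below the smallest relevant singular value of $A^*A$.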