Abstract:
In this paper, the system of ordinary differential equations arising from a semi-discretization of the Sivashinsky equation is considered and its exact solution is obtained. The numerical results of the proposed method are then compared with those of a special full discretization of the Sivashinsky equation.

Abstract:
In [7], a new iterative method for solving linear systems of equations was presented, which can be considered as a modification of the Gauss-Seidel method. Then in [4] a different and more effective approach, called 2D-DSPM, was introduced. In this paper, we improve this method and give a generalization of it. Convergence properties of this generalization are also discussed. We finally give some numerical experiments to show the efficiency of the method and compare it with 2D-DSPM.

Abstract:
In this paper, we consider the system of linear equations $Ax=b$, where $A\in \Bbb{R}^{n \times n}$ is a crisp H-matrix and $b$ is a fuzzy $n$-vector. We then investigate the existence and uniqueness of a fuzzy solution to this system. The results can also be used for the class of M-matrices and strictly diagonally dominant matrices. Finally, some numerical examples are given to illustrate the presented theoretical results.

Abstract:
The approximate inverse (AINV) and the factored approximate inverse (FAPINV) are two known algorithms in the field of preconditioning of linear systems of equations. Both algorithms compute a sparse approximate inverse of a matrix in factored form and are based on computing two sets of vectors which are A-biconjugate. The FAPINV algorithm computes the inverse factors of a matrix dependently on each other, as opposed to the AINV algorithm, where the computations of the inverse factors are done independently of each other. In this paper, we show that, without any dropping, removing the dependence of the computations of the inverse factors in the FAPINV algorithm results in the AINV algorithm.
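As a small dense illustration (a sketch of the general idea, not either paper's implementation), the A-biconjugation underlying both algorithms can be written as a Gram-Schmidt-like process: starting from Z = W = I, later columns are orthogonalized against the current pivot so that W^T A Z becomes diagonal; without dropping, Z D^{-1} W^T is exactly A^{-1}.

```python
import numpy as np

def biconjugation(A):
    """Compute unit upper-triangular factors Z and W such that
    W.T @ A @ Z is diagonal, via a Gram-Schmidt-like A-biconjugation
    process (no dropping, dense arithmetic for illustration only)."""
    n = A.shape[0]
    Z = np.eye(n)                       # columns z_j
    W = np.eye(n)                       # columns w_i
    for i in range(n):
        d = W[:, i] @ A @ Z[:, i]       # pivot w_i^T A z_i (assumed nonzero)
        for j in range(i + 1, n):
            # make z_j A-orthogonal to w_i, and w_j A^T-orthogonal to z_i
            Z[:, j] -= ((W[:, i] @ A @ Z[:, j]) / d) * Z[:, i]
            W[:, j] -= ((Z[:, i] @ A.T @ W[:, j]) / d) * W[:, i]
    D = W.T @ A @ Z                     # diagonal up to round-off
    return Z, W, np.diag(D)
```

Without dropping, the factors satisfy A^{-1} = Z D^{-1} W^T; sparse approximate inverses arise by discarding small entries during this process.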

Abstract:
In this paper, a method based on sparse-sparse iteration for computing a sparse incomplete factorization of the inverse of a symmetric positive definite matrix is proposed. The resulting factored sparse approximate inverse is used as a preconditioner for solving symmetric positive definite linear systems of equations by the preconditioned conjugate gradient algorithm. Some numerical experiments on test matrices from the Harwell-Boeing collection, comparing the numerical performance of the presented method with a well-known existing algorithm, are also given.
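For reference, the preconditioned conjugate gradient loop into which such a factored approximate inverse is plugged can be sketched as follows (a generic textbook PCG, not the paper's specific code; the preconditioner is passed as a function applying M^{-1}):

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxit=1000):
    """Preconditioned conjugate gradient for a symmetric positive
    definite matrix A. M_solve(r) applies the preconditioner, i.e.
    returns M^{-1} r; for a factored approximate inverse
    M^{-1} ~ Z D^{-1} Z^T this is just sparse matrix-vector products."""
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # preconditioned search direction
        rz = rz_new
    return x
```

With the identity as preconditioner this reduces to plain CG; a good factored sparse approximate inverse cuts the iteration count at the cost of two extra sparse products per step.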

Abstract:
In order to solve an initial value problem by the variational iteration method, a sequence of functions is produced which converges to the solution under some suitable conditions. In the nonlinear case, after a few iterations the terms of the sequence become complicated, and therefore, computing a highly accurate solution would be difficult or even impossible. In this paper, for one-dimensional initial value problems, we propose a new approach which is based on approximating each term of the sequence by a piecewise linear function. Moreover, the convergence of the method is proved. Three illustrative examples are given to show the superiority of the proposed method over the classical variational iteration method.
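To make the idea concrete for a first-order IVP u' = f(t, u), u(0) = u0: with the standard Lagrange multiplier lambda = -1 the variational iteration reduces to u_{k+1}(t) = u0 + int_0^t f(s, u_k(s)) ds. The following is a minimal sketch of the piecewise-linear idea (an illustration under these assumptions, not the paper's algorithm): each iterate is stored only at grid nodes, treated as piecewise linear, and advanced with the trapezoidal rule.

```python
import numpy as np

def vim_piecewise_linear(f, u0, t_end=1.0, n=1000, iters=15):
    """Variational iteration for u' = f(t, u), u(0) = u0, with
    multiplier lambda = -1, i.e. u_{k+1}(t) = u0 + int_0^t f(s, u_k(s)) ds.
    Each iterate is represented by its nodal values (a piecewise linear
    function) and the integral is evaluated by the trapezoidal rule,
    so the terms of the sequence never grow in symbolic complexity."""
    t = np.linspace(0.0, t_end, n + 1)
    h = t[1] - t[0]
    u = np.full_like(t, u0)            # initial guess u_0(t) = u0
    for _ in range(iters):
        g = f(t, u)                    # integrand at the grid nodes
        # cumulative trapezoidal rule: int_0^{t_i} f(s, u_k(s)) ds
        integral = np.concatenate(([0.0], np.cumsum(h * (g[1:] + g[:-1]) / 2)))
        u = u0 + integral
    return t, u
```

For u' = u, u(0) = 1, the iterates converge to the grid values of e^t up to the O(h^2) quadrature error, while the symbolic iterates of classical VIM would accumulate ever-longer polynomial terms.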

Abstract:
In this paper, to solve a broad class of complex symmetric linear systems, we recast the complex system in a real formulation and apply the generalized successive overrelaxation (GSOR) iterative method to the equivalent real system. We then investigate its convergence properties and determine its optimal iteration parameter as well as its corresponding optimal convergence factor. In addition, the resulting GSOR preconditioner is used to precondition Krylov subspace methods such as GMRES for solving the real equivalent formulation of the system. Finally, we give some numerical experiments to validate the theoretical results and compare the performance of the GSOR method with the modified Hermitian and skew-Hermitian splitting (MHSS) iteration.
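As a hedged dense sketch (an illustration of the splitting, not the paper's implementation): writing (W + iT)(x + iy) = p + iq with W symmetric positive definite and T symmetric gives the real block system [W, -T; T, W][x; y] = [p; q], and the GSOR sweep obtained from the block-diagonal splitting diag(W, W) with relaxation parameter a reads W x_{k+1} = (1-a) W x_k + a (T y_k + p), then W y_{k+1} = (1-a) W y_k + a (q - T x_{k+1}).

```python
import numpy as np

def gsor(W, T, p, q, alpha, maxit=500, tol=1e-10):
    """GSOR sweep for the real block form of (W + iT)(x + iy) = p + iq:
        [ W  -T ] [x]   [p]
        [ T   W ] [y] = [q]
    using the splitting with block diagonal diag(W, W)."""
    n = len(p)
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(maxit):
        x = np.linalg.solve(W, (1 - alpha) * (W @ x) + alpha * (T @ y + p))
        y = np.linalg.solve(W, (1 - alpha) * (W @ y) + alpha * (q - T @ x))
        # stop once both block residuals are small
        if (np.linalg.norm(W @ x - T @ y - p) +
                np.linalg.norm(T @ x + W @ y - q)) < tol:
            break
    return x, y
```

In the GSOR literature the optimal parameter is reported in the form alpha* = 2 / (1 + sqrt(1 + rho^2)) with rho the spectral radius of W^{-1}T; in practice each half-step solves a system with W, so only a factorization of W is needed.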

Abstract:
Identifying key genes involved in a particular disease is an important problem in biomedical research. The GeneRank model is based on the PageRank algorithm and preserves many of its mathematical properties. The model brings together gene expression information with a network structure and ranks genes based on the results of microarray experiments combined with network information, for example from gene annotations (GO). In the present study, we present a new preconditioned conjugate gradient algorithm to solve the GeneRank problem and study its properties. Some numerical experiments are given to show the effectiveness of the suggested preconditioner.
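The GeneRank linear system (I - d W D^{-1}) r = (1 - d) ex, with W the symmetric adjacency matrix of the gene network, D the degree matrix, d the damping factor and ex the expression values, becomes symmetric positive definite after the substitution r = D x: (D - d W) x = (1 - d) ex. The sketch below solves this reformulation with CG and a simple Jacobi preconditioner as an illustrative stand-in (the paper's preconditioner is more elaborate):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import LinearOperator, cg

def generank(W, ex, d=0.5):
    """Rank genes by solving (I - d W D^{-1}) r = (1 - d) ex via the
    equivalent SPD system (D - d W) x = (1 - d) ex, with r = D x.
    W: symmetric 0/1 sparse adjacency matrix of the gene network,
    ex: nonnegative expression change values, 0 < d < 1."""
    deg = np.maximum(np.asarray(W.sum(axis=1)).ravel(), 1.0)
    A = diags(deg) - d * W              # symmetric positive definite
    dia = A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda r: r / dia)   # Jacobi
    x, info = cg(A, (1.0 - d) * ex, M=M)
    return deg * x, info                # recover r = D x
```

The ranking is the ordering of the returned vector; symmetry of the reformulated system is what makes conjugate gradient (rather than a nonsymmetric solver) applicable.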

Abstract:
In this paper, we present a preconditioned variant of the generalized successive overrelaxation (GSOR) iterative method for solving a broad class of complex symmetric linear systems. We study conditions under which the spectral radius of the iteration matrix of the preconditioned GSOR method is smaller than that of the GSOR method and determine the optimal values of iteration parameters. Numerical experiments are given to verify the validity of the presented theoretical results and the effectiveness of the preconditioned GSOR method.

Abstract:
In this paper, the generalized shift-splitting preconditioner is implemented for saddle point problems with a symmetric positive definite (1,1)-block and a symmetric positive semidefinite (2,2)-block. The proposed preconditioner is extracted from a stationary iterative method which is unconditionally convergent. Moreover, a relaxed version of the proposed preconditioner is presented and some properties of the eigenvalue distribution of the corresponding preconditioned matrix are studied. Finally, some numerical experiments on test problems arising from the finite element discretization of the Stokes problem are given to show the effectiveness of the preconditioners.
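A heavily hedged sketch of the kind of stationary iteration such a preconditioner is extracted from (the exact form in the paper may differ; here we assume the saddle point system M = [A, B^T; -B, C] with A symmetric positive definite, C symmetric positive semidefinite, and the splitting matrix P = (1/2)[alpha*I + A, B^T; -B, beta*I + C] with shifts alpha, beta > 0):

```python
import numpy as np

def shift_splitting_iteration(A, B, C, f, g, alpha, beta,
                              maxit=200, tol=1e-10):
    """Stationary iteration u_{k+1} = u_k + P^{-1}(b - M u_k) for the
    saddle point system M u = b with (assumed block structure)
        M = [[A, B^T], [-B, C]],  b = [f; g],
    and generalized shift-splitting matrix
        P = 0.5 * [[alpha*I + A, B^T], [-B, beta*I + C]].
    The same P can serve as a preconditioner for Krylov methods."""
    n, m = A.shape[0], C.shape[0]
    M = np.block([[A, B.T], [-B, C]])
    P = 0.5 * np.block([[alpha * np.eye(n) + A, B.T],
                        [-B, beta * np.eye(m) + C]])
    b = np.concatenate([f, g])
    u = np.zeros(n + m)
    for _ in range(maxit):
        r = b - M @ u
        if np.linalg.norm(r) < tol:
            break
        u += np.linalg.solve(P, r)      # one preconditioned correction
    return u[:n], u[n:]
```

Each step requires one solve with P; in practice P is applied via a sparse factorization rather than the dense solve used in this sketch.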