oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Optimized Steffensen-Type Methods with Eighth-Order Convergence and High Efficiency Index
F. Soleymani
International Journal of Mathematics and Mathematical Sciences, 2012, DOI: 10.1155/2012/932420
Abstract: Steffensen-type methods are practical for solving nonlinear equations because such schemes need no derivative evaluations per iteration. This work contributes two new multistep classes of Steffensen-type methods for finding a solution of the nonlinear equation f(x) = 0. The new techniques can be viewed as generalizations of Steffensen's one-step method. Theoretical proofs of the main theorems are furnished to establish the eighth-order convergence. Per computing step, the derived methods require only four function evaluations. Experimental results are also given to further support the underlying theory and to allow conclusions about the efficiency of the developed classes.
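For context on the efficiency claim: by the Kung-Traub conjecture, a method using d function evaluations per step has optimal order 2^(d-1), so four evaluations make order eight optimal, with efficiency index 8^(1/4) ≈ 1.682, versus 2^(1/2) ≈ 1.414 for the classical one-step Steffensen scheme these classes generalize. Below is a minimal Python sketch of that classical baseline, not the paper's eighth-order methods; the function names are illustrative.

def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Classical one-step Steffensen iteration:
        x_{n+1} = x_n - f(x_n)**2 / (f(x_n + f(x_n)) - f(x_n)).
    Derivative-free, two function evaluations per step, quadratically
    convergent near a simple root of f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        denom = f(x + fx) - fx  # equals fx * f[x, x + fx] (divided difference)
        if denom == 0.0:
            break               # flat region; give up
        x -= fx * fx / denom
    return x

# Example: the real root of x**3 - 2 is 2**(1/3) ~ 1.2599
print(steffensen(lambda x: x**3 - 2, 1.3))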
ON THE LOCAL CONVERGENCE OF A TWO-STEP STEFFENSEN-TYPE METHOD FOR SOLVING GENERALIZED EQUATIONS
ARGYROS, IOANNIS K.; HILOUT, SAÏD
Proyecciones (Antofagasta), 2008, DOI: 10.4067/S0716-09172008000300007
Abstract: We use a two-step Steffensen-type method [1], [2], [4], [6], [13]-[16] to solve a generalized equation in a Banach space setting under Hölder-type conditions introduced by us in [2], [6] for nonlinear equations. Using some ideas given in [4], [6] for nonlinear equations, we provide a local convergence analysis with the following advantages over the related works [13]-[16]: finer error bounds on the distances involved and a larger radius of convergence. An application is also provided.
A class of optimal eighth-order Steffensen-type iterative methods for solving nonlinear equations and their basins of attraction  [PDF]
Anuradha Singh, J. P. Jaiswal
Mathematics, 2014,
Abstract: This article is concerned with solving a nonlinear equation by iterative methods that require no derivative evaluation per iteration. To that end, this work contributes a new class of optimal eighth-order Steffensen-type methods. A theoretical proof is given to establish the eighth-order convergence. Numerical comparisons have been carried out to show the effectiveness of the contributed scheme.
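The basins of attraction in the title are typically rendered by running the iteration from a grid of complex starting points and coloring each point by the root it reaches. A minimal sketch of that standard procedure, here driven by the classical Steffensen step on z^3 - 1 rather than the paper's eighth-order schemes (numpy and matplotlib assumed available):

import numpy as np
import matplotlib.pyplot as plt

def basins(f, roots, extent=1.5, n=200, iters=40, tol=1e-6):
    """Label each complex starting point by the root the classical
    Steffensen iteration converges to (0 = no convergence)."""
    grid = np.linspace(-extent, extent, n)
    img = np.zeros((n, n), dtype=int)
    for i, y in enumerate(grid):
        for j, x in enumerate(grid):
            z = complex(x, y)
            for _ in range(iters):
                if abs(z) > 1e6:           # clearly diverging
                    break
                fz = f(z)
                denom = f(z + fz) - fz
                if denom == 0:
                    break
                z -= fz * fz / denom
            for k, r in enumerate(roots, start=1):
                if abs(z - r) < tol:
                    img[i, j] = k
                    break
    return img

cube_roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
plt.imshow(basins(lambda z: z**3 - 1, cube_roots),
           extent=[-1.5, 1.5, -1.5, 1.5], origin="lower")
plt.title("Basins of attraction: Steffensen on z**3 - 1")
plt.show()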
Self-accelerating two-step Steffensen-type methods with memory and their applications on the solution of nonlinear BVPs  [PDF]
Quan Zheng, Xiuhui Guo, Fengxi Huang
Open Journal of Applied Sciences (OJAppS), 2012, DOI: 10.4236/ojapps.2012.24B017
Abstract: In this paper, seven self-accelerating iterative methods with memory are derived from an optimal two-step Steffensen-type method without memory for solving nonlinear equations. Their orders of convergence are proved to be increased, numerical examples are presented to verify the theoretical results, and applications to solving systems of nonlinear equations and BVPs of nonlinear ODEs are illustrated.
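The "memory" in such methods is a free parameter re-estimated from quantities already computed, which raises the order at no extra function-evaluation cost. A minimal sketch of the classical one-parameter device (a Traub-style self-accelerating Steffensen iteration, not one of the paper's seven methods): with fixed gamma the order is 2, while updating gamma toward -1/f'(root) lifts the R-order to about 1 + sqrt(2) ≈ 2.41.

def steffensen_with_memory(f, x0, gamma=0.01, tol=1e-12, max_iter=50):
    """Parametric Steffensen step x -> x - f(x)/f[x, w] with
    w = x + gamma*f(x), plus the self-accelerating memory update
    gamma = -1/f[x, w]; the divided difference tends to f'(root)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)   # divided difference f[x, w]
        if dd == 0.0:
            break
        x, gamma = x - fx / dd, -1.0 / dd
    return x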
SOME NEW DERIVATIVE-FREE METHODS FOR SOLVING NONLINEAR EQUATIONS
Gustavo Fernández Torres, Francisco Rubén Castillo Soria
Academic Research International, 2012,
Abstract: This paper proposes two new iterative methods for solving nonlinear equations. In contrast to the classical Newton's method, the proposed methods use no derivatives; furthermore, only two evaluations of the function are needed per iteration. When the starting value is selected close to the root, the order of convergence of the proposed methods is 2. The derivation also recovers classical methods such as the secant and Steffensen's methods as alternatives to the usual construction. Numerical examples show that the proposed methods match the performance of Newton's method with the advantage of being derivative-free, and they are more efficient than other derivative-free methods.
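For comparison with the two classical schemes this abstract recovers: the secant method reuses the previous iterate, so it needs only one new function evaluation per step at local order (1 + sqrt(5))/2 ≈ 1.618, while Steffensen's method (sketched earlier) spends two evaluations for order 2. A minimal secant sketch:

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: derivative-free, one new function evaluation
    per iteration, superlinearly convergent near a simple root."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1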
SOR-Steffensen-Newton Method to Solve Systems of Nonlinear Equations
Applied Mathematics, 2012, DOI: 10.5923/j.am.20120202.05
Abstract: In this paper, we present the SOR-Steffensen-Newton (SOR-SN) algorithm for solving systems of nonlinear equations. We study the convergence of the method, and its computational aspects are examined through numerical experiments. In comparison with the SOR-Newton, SOR-Steffensen, and SOR-Secant methods, our method performs better in CPU time and number of iterations.
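Hybrids of this kind interleave a nonlinear SOR (Gauss-Seidel-style) sweep over the components with a one-dimensional root-finding update per component. The sketch below illustrates that general structure with a damped Steffensen-style coordinate update; it is a generic illustration under these assumptions, not the paper's SOR-SN algorithm.

import numpy as np

def nonlinear_sor_steffensen(F, x0, omega=1.0, sweeps=100, tol=1e-10):
    """Generic nonlinear SOR sweep: approximate dF_i/dx_i by a
    Steffensen-style difference quotient with step h = F_i(x) and
    apply a damped coordinate update (omega is the SOR parameter)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        if np.linalg.norm(F(x)) < tol:
            break
        for i in range(x.size):
            fi = F(x)[i]
            if fi == 0.0:
                continue
            probe = x.copy()
            probe[i] += fi                 # derivative-free probe, h = F_i(x)
            dd = (F(probe)[i] - fi) / fi   # ~ dF_i/dx_i
            if dd != 0.0:
                x[i] -= omega * fi / dd    # damped Gauss-Seidel update
    return x

# Example: F(x, y) = (x**2 - y - 1, y**2 - x - 1); root at the golden ratio
F = lambda v: np.array([v[0]**2 - v[1] - 1.0, v[1]**2 - v[0] - 1.0])
print(nonlinear_sor_steffensen(F, [1.5, 1.5]))  # ~ [1.618, 1.618]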
New Efficient Steffensen-Type Method for Solving Nonlinear Equations  [PDF]
J. P. Jaiswal
Mathematics, 2013,
Abstract: In the present paper, by approximating the derivatives in the fourth-order method of Kou et al. \cite{Kou} by central difference quotients, we obtain a new derivative-free modification of this method. We prove that the resulting method preserves the order of convergence without computing any derivative. Finally, numerical tests confirm that our method performs better than other well-known Steffensen-type methods.
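The substitution described here replaces each derivative by a central difference quotient whose step is f(x) itself, so the approximation error decays with the residual and the order of the underlying method is preserved. Not having Kou et al.'s fourth-order formulas at hand, the sketch applies the same substitution to plain Newton iteration as an illustration:

def derivative_free_newton(f, x0, tol=1e-12, max_iter=100):
    """Newton's method with f'(x) replaced by the central difference
    quotient (f(x + f(x)) - f(x - f(x))) / (2 f(x)); the step h = f(x)
    shrinks with the residual, preserving quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dprime = (f(x + fx) - f(x - fx)) / (2.0 * fx)  # ~ f'(x)
        if dprime == 0.0:
            break
        x -= fx / dprime
    return x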
Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions  [cached]
Ravi P. Agarwal, Ram U. Verma
Fixed Point Theory and Applications, 2009, DOI: 10.1155/2009/957407
Abstract: We survey recent advances in the general theory of maximal (set-valued) monotone mappings and their role in convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while other works have also played a significant part, notably the generalization of Rockafellar's proximal point algorithm to the relaxed proximal point algorithm by Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the over-relaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity and of applying it to first-order evolution equations/inclusions.
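The relaxed proximal point iteration referred to above averages the current iterate with the resolvent J_c = (I + cM)^(-1) of the monotone operator M: x_{k+1} = (1 - rho) * x_k + rho * J_c(x_k), with relaxation rho in (0, 2) ("over-relaxed" when rho > 1). A minimal concrete instance, assuming M is the subdifferential of the absolute value so the resolvent is soft-thresholding; the papers above treat general maximal (η)-monotone operators:

import numpy as np

def soft_threshold(v, c):
    """Resolvent (I + c * d|.|)**(-1), i.e. the prox of c*|.|."""
    return np.sign(v) * np.maximum(np.abs(v) - c, 0.0)

def relaxed_ppa(x0, c=1.0, rho=1.5, iters=30):
    """Relaxed proximal point iteration
        x_{k+1} = (1 - rho) * x_k + rho * J_c(x_k),
    here with J_c = soft-thresholding; for rho in (0, 2) the iterates
    converge to a zero of the operator (the minimizer 0 of |.|)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (1.0 - rho) * x + rho * soft_threshold(x, c)
    return x

print(relaxed_ppa(np.array([3.0, -2.0, 0.5])))  # -> [0. 0. 0.] (approx)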