Local convergence for a Chebyshev-type method in Banach space free of derivatives

This paper is devoted to the study of a Chebyshev-type method free of derivatives for solving nonlinear equations in Banach spaces. Using the idea of a restricted convergence domain, we extend the applicability of Chebyshev-type methods. Our convergence conditions are weaker than those used in earlier studies, so the applicability of the method is extended. Numerical examples where earlier results cannot be applied to solve equations but our results can are also given in this study.


Introduction
Let F : Ω ⊆ B_1 → B_2 be a Fréchet differentiable operator between the Banach spaces B_1 and B_2. Due to its wide applications, finding a solution of the equation F(x) = 0 is an important problem in applied mathematics and computational sciences. Convergence analysis of iterative methods requires assumptions on the Fréchet derivatives of the operator F, which restricts the applicability of these methods.
In this paper we study the seventh convergence order Chebyshev-type method of [13], where [·, ·; F] denotes a divided difference of order one on Ω^2 and x_0 ∈ Ω is an initial point. Throughout this paper L(B_2, B_1) denotes the set of bounded linear operators from B_2 into B_1.
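The only structural requirement on a first-order divided difference is the secant identity [x, y; F](x − y) = F(x) − F(y). As an illustration (not the paper's specific choice), a standard column-wise construction for F : R^n → R^n can be sketched as follows:

```python
import numpy as np

def divided_difference(F, x, y):
    """A standard column-wise first-order divided difference [x, y; F]
    for F: R^n -> R^n. Column j mixes the leading components of x with
    the trailing components of y; by telescoping, the secant identity
    [x, y; F](x - y) = F(x) - F(y) holds exactly. Requires x_j != y_j."""
    n = x.size
    A = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate((x[:j + 1], y[j + 1:]))
        v = np.concatenate((x[:j], y[j:]))
        A[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return A

# Hypothetical test operator, used only to check the secant identity.
def F(z):
    return np.array([z[0] ** 2 + z[1] - 1.0, z[0] + np.sin(z[1])])

x = np.array([1.0, 0.5])
y = np.array([0.2, -0.3])
A = divided_difference(F, x, y)
print(np.allclose(A @ (x - y), F(x) - F(y)))  # True
```

Such divided differences let the method avoid Fréchet derivatives entirely, which is the point of a derivative-free scheme.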
The study of convergence of iterative algorithms involves two categories: semi-local and local convergence analysis. The semi-local convergence is based on information around an initial point to derive conditions ensuring the convergence of these algorithms, while the local convergence is based on information around a solution to obtain estimates of the computed radii of the convergence balls. Local results are important since they tell us about the degree of difficulty in choosing initial points.
The above method was studied in [13]. The convergence analysis in [13] is based on assumptions on the Fréchet derivatives of F up to order seven. In this study, we use only assumptions on the first Fréchet derivative of the operator F in our convergence analysis, so method (2) can be applied to solve equations where the earlier results [1,2,3,4,5,6,7,8,9,10,11,12,13,14] cannot be applied. The rest of the paper is structured as follows. In Section 2 we present the local convergence analysis of method (2). We also provide a radius of convergence, computable error bounds and a uniqueness result. Numerical examples are given in the last section.

Local convergence
We need a definition concerning the monotonicity of functions.
Moreover, T is increasing on D if a_1 ≤ a_3 and a_2 < a_4, or a_1 < a_3 and a_2 ≤ a_4, or a_1 < a_3 and a_2 < a_4, imply T(a_1, a_2) < T(a_3, a_4).
We have by (3) that h_1(0) < 0. Then, by (4), (5) and the intermediate value theorem, the equation h_1(t) = 0 has solutions in the interval (0, r_0). Denote by r_1 the smallest such zero.
Suppose that the scalar functions appearing below are continuous and nondecreasing. Define functions β, g_2 and h_2 on [0, r_0), and suppose that the corresponding conditions hold. We get by (6) that h_2(0) < 0, so by the intermediate value theorem the equation h_2(t) = 0 has solutions in the interval (0, r_0). Denote by r_2 the smallest solution of h_2(t) = 0 in the interval (0, r_0).

Define functions p_1 and h_{p_1} on the interval [0, r_0) by p_1(t) = ω_0(g_2(t)t, g_1(t)t). We have by the definition of the function ω_0 that h_{p_1}(0) < 0. Suppose that the equation h_{p_1}(t) = 0 has solutions in (0, r_0), and denote by r_{p_1} the smallest of them.

Define functions ϕ, g_3 and h_3 on the interval [0, r_{p_1}), and suppose that the corresponding conditions hold. We have that h_3(0) < 0. Denote by r_3 the smallest solution of the equation h_3(t) = 0 in the interval (0, r_0). Define the radius of convergence r by r = min{r_i}, i = 1, 2, 3.
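In practice each r_i is the smallest positive zero of the corresponding scalar function h_i, so computing the radius r reduces to one-dimensional root-finding on [0, r_0). The sketch below uses hypothetical linear Lipschitz-type majorants with constants L0 and L (not the paper's problem-specific functions ω_0, ω_1, etc.) purely to illustrate the computation of r = min{r_i}:

```python
# Illustrative radius computation r = min{r_i} with hypothetical
# majorant functions; the true h_i depend on the problem's
# omega-functions and differ from the linear stand-ins below.
L0, L = 1.0, 1.5
r0 = 1.0 / L0  # bound on the common domain [0, r0) of the h_i

def h1(t):
    return L * t / (1.0 - L0 * t) - 1.0

def h2(t):
    return 1.5 * L * t / (1.0 - L0 * t) - 1.0

def smallest_root(h, a, b, tol=1e-12):
    """Zero of h in (a, b) by bisection; assumes h(a) < 0 < h(b) and
    that h is increasing, as the h_i are on [0, r0), so the zero
    found is the smallest one."""
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if h(m) < 0 else (a, m)
    return 0.5 * (a + b)

radii = [smallest_root(h, 0.0, r0 * (1.0 - 1e-9)) for h in (h1, h2)]
r = min(radii)
print(radii, r)  # here r_1 = 0.4, r_2 = 4/13, so r = 4/13
```

Any root-finder works here; bisection is used only because the sign change h_i(0) < 0 < h_i(t) near r_0 is exactly what the intermediate value theorem argument above guarantees.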
Some alternatives to the aforementioned conditions are the following: the equation ω_0(δt, t) = 1 has positive solutions; denote by r_0 the smallest such solution. The functions v_0, ω_1, v, ω_2 and ω_3, defined on the same intervals as before, are increasing. Then, clearly, conditions (4), (7), (8) and (10) hold. We can now present the local convergence analysis of method (2).