An inertial parallel CQ subgradient extragradient method for variational inequalities with application to signal-image recovery

In this paper, we introduce an inertial parallel CQ subgradient extragradient method for finding a common solution of variational inequality problems. The novelty of this paper is the use of linesearch methods to handle the unknown constant L of L-Lipschitz continuous mappings. A strong convergence theorem is proved under suitable conditions in Hilbert spaces. Finally, we present applications to signal and image recovery and demonstrate the good efficiency of our proposed algorithm as the number of subproblems increases.


Introduction and Preliminaries
Let H be a real Hilbert space endowed with an inner product ⟨·, ·⟩ and the induced norm ∥·∥. A mapping A : H → H is said to be (i) monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ H; (ii) maximal monotone if it is monotone and its graph G(A) := {(x, Ax) : x ∈ H} is not a proper subset of the graph of any other monotone mapping; (iii) L-Lipschitz continuous if there exists a positive constant L such that ∥Ax − Ay∥ ≤ L∥x − y∥ for all x, y ∈ H.
It is well known that a monotone mapping A : H → H is maximal if and only if, for each (x, y) ∈ H × H such that ⟨x − u, y − v⟩ ≥ 0 for all (u, v) ∈ G(A), it follows that y = Ax. Let C be a nonempty closed convex subset of H and A : H → H a nonlinear operator. The variational inequality problem (VIP) can be formulated as the problem of finding a point x* ∈ C such that ⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C. (1) The set of solutions of VIP (1) is denoted by VI(A, C). The VIP (1) serves as a powerful mathematical tool and generalizes many mathematical models, in the sense that it includes many special problems [29] such as convex feasibility problems, linear programming problems, minimization problems, saddle-point problems, and hierarchical variational inequality problems. Many algorithms have been proposed and studied for solving VIP (1); most of these involve projection methods [5,6,10,11,39,40,43,46,47,51], although the convergence of the simplest projection method requires rather strong assumptions, namely that the operator is strongly monotone or inverse strongly monotone. It is well known that VI(A, C) is equivalent to the following fixed point equation (see [2,3,4,16,17,19,21,23,26,29,31,32,33,44]): x = P_C(x − λAx), λ > 0, that is, r_λ(x) := x − P_C(x − λAx) = 0. Using the idea of the projection method, Korpelevich [24] proposed the extragradient method for solving VIP (1) under the assumptions that the operator is Lipschitz continuous and pseudomonotone. If the closed convex set has a simple structure, so that projections onto it can be computed easily, the extragradient method is computable and very useful. However, the extragradient method requires two projections onto C per iteration to obtain the next approximation x_{n+1}.
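The two-projection recursion of the extragradient method can be sketched numerically as follows; the box constraint, the affine operator A(x) = x − a, and all parameter values below are illustrative choices, not taken from the paper:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Projection onto C = [lo, hi]^n, a set whose projection is trivial."""
    return np.clip(x, lo, hi)

def extragradient(A, proj, x0, lam, iters=500):
    """Korpelevich's extragradient method: two projections onto C per iteration."""
    x = x0.copy()
    for _ in range(iters):
        y = proj(x - lam * A(x))   # predictor step
        x = proj(x - lam * A(y))   # corrector step
    return x

# Illustrative monotone, 1-Lipschitz operator A(x) = x - a, whose VIP
# solution over C is simply P_C(a); lam must lie below 1/L = 1.
a = np.array([1.5, -0.3, 0.7])
x_star = extragradient(lambda x: x - a, project_box, np.zeros(3), lam=0.5)
```

With these choices the iterates converge to P_C(a) = (1.0, 0.0, 0.7), illustrating why a fixed point of x ↦ P_C(x − λAx) solves VI(A, C).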
Later on, Censor et al. [8] proposed the subgradient extragradient method for solving VIP (1), in which the second projection onto the closed convex set in the extragradient method is replaced by a projection onto a half-space. Censor et al. [7] combined the hybrid method with the subgradient extragradient method to obtain a strong convergence result. Recently, Gibali [15] suggested a self-adaptive subgradient extragradient method adopting Armijo-like searches [52] and obtained a convergence result for VI(A, C) in R^n when pseudomonotonicity and continuity of the operator are required.
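The computational appeal of replacing the second projection is that projecting onto a half-space has a closed form, costing only one inner product. A small sketch (the vectors below are illustrative):

```python
import numpy as np

def project_halfspace(u, w, y):
    """Projection of u onto T = {z : <w, z - y> <= 0}; unlike a general
    convex set, this projection is available in closed form."""
    viol = float(np.dot(w, u - y))
    if viol <= 0.0:
        return u.copy()                       # u already lies in T
    return u - (viol / float(np.dot(w, w))) * w

# Example: T = {z in R^2 : z_1 <= 1}, encoded by w = (1, 0), y = (1, 0).
p = project_halfspace(np.array([3.0, 2.0]),
                      np.array([1.0, 0.0]),
                      np.array([1.0, 0.0]))
# p = [1.0, 2.0]
```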
Very recently, Shehu and Iyiola [34] proposed a modified viscosity algorithm adopting an Armijo-line step size rule, called the viscosity-type subgradient extragradient line method, for a Lipschitz continuous monotone mapping whose Lipschitz constant is unknown, in an infinite dimensional Hilbert space. In this method, T_n := {z ∈ H : ⟨x_n − λ_n Ax_n − y_n, z − y_n⟩ ≤ 0}, ρ, µ ∈ (0, 1) and {α_n} ⊆ (0, 1).
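An Armijo-line backtracking search of this kind can be sketched as follows; the acceptance condition λ∥A(s) − A(y)∥ ≤ µ∥s − y∥ is the standard one in this literature and is assumed here, since the paper's displayed rule is not reproduced above:

```python
import numpy as np

def armijo_stepsize(A, proj, s, rho=0.5, mu=0.9, max_iter=50):
    """Backtracking search for lam = rho^l without knowing the Lipschitz
    constant; accepts once lam*||A(s) - A(y)|| <= mu*||s - y||
    (an assumed, standard Armijo-type condition)."""
    As = A(s)
    lam = 1.0
    for _ in range(max_iter):
        y = proj(s - lam * As)
        if np.allclose(s, y) or \
           lam * np.linalg.norm(As - A(y)) <= mu * np.linalg.norm(s - y):
            return lam, y
        lam *= rho
    return lam, y

# For the linear operator A(x) = 2x (Lipschitz constant 2) and proj = identity,
# the search accepts the first rho^l <= mu/2 = 0.45, i.e. lam = 0.25.
lam, y = armijo_stepsize(lambda x: 2.0 * x, lambda x: x, np.array([1.0, -1.0]))
```

Note that for a linear A the accepted step is exactly the first power of ρ below µ/L, so the search adapts to the unknown constant L automatically.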
Our interest in this paper is to study the common variational inequality problem (CVIP). The CVIP is to find a point x* ∈ ⋂_{i=1}^N VI(A_i, C), where A_i : H → H is a nonlinear operator for all i = 1, 2, ..., N. (4)
In 2012, Censor et al. [9] presented an algorithm for solving the CVIP (4) in which finitely many elements are computed in parallel at each iteration. The closed convex subsets C_n^1, C_n^2, ..., C_n^N are constructed, and x_{n+1} is obtained by projecting onto the intersection of these closed convex subsets. This method has been extensively used due to its simplicity, and many authors have improved it in various ways (see [14,18,20,25,31,35,36,37,48,49,50]).
Inspired by the previous results, we introduce a new algorithm by modifying the hybrid subgradient extragradient method, combining the inertial technique with an Armijo-line step size rule and a projection onto the intersection of half-spaces, to find a common solution of variational inequality problems (CVIP). We prove a strong convergence theorem under suitable conditions in Hilbert spaces. Moreover, we apply our main results to image and signal recovery problems.
Proof. Suppose ∥s_n − y_{n_0}^i∥ = 0 for some n_0 ≥ 1. Take l_i = n_0, which satisfies (6). Now suppose that ∥s_n − y_{n_1}^i∥ ≠ 0 for some n_1 ≥ 1 and assume the contrary. Then, by Lemma 6.3 of [12] and the fact that ρ ∈ (0, 1), and using the continuity of P_C, we have the corresponding convergence for all i = 1, 2, ..., N. We consider two cases: s_n ∈ C and s_n ∉ C.
(i) If s_n ∈ C, then s_n = P_C(s_n). Now, since ∥s_n − y_{n_1}^i∥ ≠ 0 and ρ_{n_1} ≤ 1, it follows from Lemma 6.3 of [12] that (7) holds. Letting n_1 → ∞ in (7), we reach a contradiction, and hence (6) is valid.
This is a contradiction. Therefore, Algorithm 2.1 is well defined and implementable.
Proof. Let x* ∈ F. For each i = 1, 2, ..., N, by the property of the metric projection P_{T_n^i}, we derive two estimates, and we then obtain a further bound from Algorithm 2.1 and Lemma 2.3 (ii) of [42]. Since A_i is a monotone operator for all i = 1, 2, ..., N, inequality (12) follows. Using (12) in (13), we obtain (14). Using the last inequality in (14), we arrive at (15). From (15) and s_n = x_n + θ_n(x_n − x_{n−1}), we obtain (16). From (12) and (16), we obtain inequality (8).
Lemma 2.4. Suppose that {x_n}, {y_n^i}, {z_n^i} are generated by Algorithm 2.1. Then, for each i = 1, ..., N, the following relations hold: Proof. (i) Since A_i is Lipschitz continuous, A_i is continuous. Thus, Lemma 2.1 of [38] ensures that VI(A_i, C) is closed and convex for all i = 1, ..., N. Hence, F is closed and convex. From the definitions of C_n and Q_n, we see that Q_n is closed and convex and C_n is closed. On the other hand, the relation defining C_n implies that C_n is convex. Moreover, for each u ∈ F, from Lemma 2.3 we obtain ∥z_n − u∥ ≤ ∥s_n − u∥. Thus, F ⊂ C_n for all n ≥ 1. Next, we show that F ⊂ C_n ∩ Q_n by induction. Indeed, F ⊂ Q_n, and from x_{n+1} = P_{C_n∩Q_n} x_1 and the characterization of the metric projection in Lemma 2.3 (iii) of [42], together with the definition of Q_{n+1}, we conclude that F ⊂ Q_{n+1}. Thus, by induction, F ⊂ C_n ∩ Q_n for all n ≥ 1. Since F ≠ ∅, P_F x_1 and x_{n+1} = P_{C_n∩Q_n} x_1 are well defined. (ii) We have x_n = P_{Q_n} x_1 and F ⊂ Q_n. For each u ∈ F, by the property of the projection P_{Q_n}, the sequence {∥x_n − x_1∥} is bounded, and so {x_n} is also bounded. From x_{n+1} ∈ Q_n and x_n = P_{Q_n} x_1, the sequence {∥x_n − x_1∥} is nondecreasing and hence convergent; taking n → ∞, we get ∥x_{n+1} − x_n∥ → 0. By the definition of C_n and x_{n+1} ∈ C_n, together with the definition of {θ_n} in Step 1 and (19), and the triangle inequality, we obtain (21). From (21) and the definition of i_n, we get (22). From Lemma 2.3 and the triangle inequality, for each u ∈ F, one has (25). From (22), (25), the boundedness of {s_n}, {x_n}, {y_n^i}, {z_n^i}, the condition Σθ_n∥x_n − x_{n−1}∥ < ∞, and (15), we obtain the desired limits for all i = 1, ..., N.
Theorem 2.5. Let C be a closed and convex subset of a real Hilbert space H. Suppose that {A_i}_{i=1}^N : H → H is a finite family of monotone mappings, the solution set F is nonempty, and Σθ_n∥x_n − x_{n−1}∥ < ∞. Then the sequences {x_n}, {y_n^i}, {z_n^i} generated by Algorithm 2.1 converge strongly to P_F x_1. Proof. By Lemma 2.4, F, C_n, Q_n are nonempty closed and convex subsets, and F ⊂ C_n ∩ Q_n for all n ≥ 1. Therefore, P_F x_1 and P_{C_n∩Q_n} x_1 are well defined. From Lemma 2.4, {x_n} is bounded. Assume that p is a weak cluster point of {x_n} and {x_{n_k}} is a subsequence of {x_n} converging weakly to p. Since ∥y_{n_k}^i − x_{n_k}∥ → 0, we have y_{n_k}^i ⇀ p. Now we prove that p ∈ F. Indeed, Lemma 2.3 of [42] ensures the maximal monotonicity of the associated mapping Q_i. From (27), (28), and the monotonicity of A_i, we find (29). Passing to the limit in (29) as k → ∞ and using (30) and y_{n_k}^i ⇀ p, we obtain ⟨x − p, y⟩ ≥ 0 for all (x, y) ∈ G(Q_i). This, together with the maximal monotonicity of Q_i, yields p ∈ F. Finally, we show that x_n → p = x† := P_F x_1. From (18) and x† ∈ F, this relation together with the weak lower semicontinuity of the norm implies the required inequality. By the definition of x†, p = x†, and so x_{n_k} → x†. Lemma 2.3 then ensures that the sequences {y_n^i}, {z_n^i} also converge strongly to P_F x_1.

Application to Signal Recovery
Signal processing concerns the analysis, modification, and synthesis of signals. Signal processing techniques can be used to improve transmission, storage efficiency, and subjective quality, and also to emphasize or detect components of interest in a measured signal. The signal processing problem can be modeled as the underdetermined linear equation system b = Bx + ν, where x ∈ R^N is the original signal with N components to be recovered, ν, b ∈ R^M are the noise and the noisy observed signal with M components, respectively, and B : R^N → R^M (M ≤ N) is a filter matrix. Finding the solutions of b = Bx + ν can be viewed as solving the least squares (LS) problem (31). The solution of (31) can be estimated by many well-known iterative methods [13,45], and many optimization-based algorithms have been proposed for solving signal recovery problems of the form (31); see [22,27,28]. In practice, the observation of a signal may be disturbed by several filters and noises. The goal in this paper is to find the original signal without knowing the type of filter and noise. Thus, we consider this problem as the following problem system.
where x is an original signal, B_i is a bounded linear operator, and b_i is a noisy observed signal for all i = 1, 2, ..., N. We can apply Algorithm 2.1 to solve problem (32) by setting A_i(x) := B_i^T(B_i x − b_i). Step 2. Compute y_n^i, where λ_n^i = ρ^{l_i} and l_i is the smallest nonnegative integer satisfying the linesearch condition. Step 3. Compute z_n^i. Step 4. Compute z_n, i.e., z_n = z_n^{i_n}, where i_n ∈ argmax{∥z_n^i − s_n∥ : i = 1, ..., N}.
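For problem (32), each operator takes the form A_i(x) = B_i^T(B_i x − b_i), the gradient of the least squares term, which is monotone and Lipschitz continuous. A minimal sketch with synthetic data (dimensions and seed are illustrative choices, not the paper's):

```python
import numpy as np

def make_ls_operator(B, b):
    """A_i(x) = B_i^T (B_i x - b_i): the gradient of 0.5*||B_i x - b_i||^2.
    Monotone and Lipschitz with constant ||B_i||^2, though the linesearch
    in the algorithm never needs this constant explicitly."""
    return lambda x: B.T @ (B @ x - b)

rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((8, 16)), rng.standard_normal((8, 16))
x_true = rng.standard_normal(16)
A1 = make_ls_operator(B1, B1 @ x_true)   # noiseless b_i = B_i x_true
A2 = make_ls_operator(B2, B2 @ x_true)
# x_true is a common solution of both subproblems: A_i(x_true) = 0.

# The farthest-element selection of Step 4 keeps the z_n^i furthest from s_n:
s = rng.standard_normal(16)
zs = [s - A1(s), s - A2(s)]              # stand-ins for z_n^1, z_n^2
z = max(zs, key=lambda zi: np.linalg.norm(zi - s))
```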
In this experiment, the parameters ρ_n, θ_n, and µ of the implemented algorithm for solving the signal recovery problem are set as in equation (7). The Cauchy error and the signal error are measured using the Euclidean norms ∥x_n − x_{n−1}∥_2 and ∥x_n − x∥_2, respectively. The performance of the proposed method at the n-th iteration is measured quantitatively by means of the signal-to-noise ratio (SNR), where x_n is the recovered signal at the n-th iteration obtained by the proposed method. The Cauchy error, signal error, and SNR quality of the proposed method for recovering the degraded signal are shown in Figures 12-14. The Cauchy error shows that the proposed method can be applied to the signal recovery problem, and the signal error confirms the convergence of the implemented algorithm. It is clearly seen that the solution of the signal recovery problem obtained by the proposed algorithm yields quality improvements of the observed signal.
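The SNR used for these plots can be computed as below; since the paper's displayed formula is not reproduced here, the common definition 20 log10(∥x∥ / ∥x − x_n∥) is assumed:

```python
import numpy as np

def snr(x, xn):
    """SNR in dB of the recovered signal xn against the true signal x,
    using the common definition 20*log10(||x|| / ||x - xn||) (an assumed
    form, since the paper's displayed formula is not reproduced)."""
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - xn))

x = np.array([1.0, 2.0, 3.0])
val = snr(x, x + 1e-3)   # a tiny recovery error gives a high SNR
```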

Application to Image Recovery Problem
Image restoration is the process of recovering an unknown image by denoising and deblurring. The image restoration problem can be considered as the following linear equation system (33): where x ∈ R^{n×1} is the original image and b ∈ R^{m×1} is the observed image, blurred by the matrix B ∈ R^{m×n} and corrupted by additive noise v. One technique for solving problem (33) is inverse filtering when the image is blurred by a known blurring matrix B. In some cases the inverse of the blurring matrix B is difficult to find, and convex minimization is used instead, namely the least squares (LS) problem (31).
In practice, we do not know the blurring matrix of an arbitrary degraded image. So, the goal of image restoration here is deblurring the image without knowing the blurring operator. This problem can be considered as the problem system (32), where x is the original true image, B_i is a blurring matrix, and b_i is the image blurred by B_i for all i = 1, 2, ..., N. Since B_i^T(B_i x − b_i) is Lipschitz continuous for each i = 1, 2, ..., N, we can apply our Algorithm 3.1 to solve problem (32) in the setting of image restoration.
To show the advantage of our Algorithm 3.1, we will use the following three different types of blurring matrices: (1) Gaussian blur of filter size 9 × 9 with standard deviation s = 4 (B_1).
We will test these three different blurring matrices on the following original Grey and RGB images.

Conclusions
In this paper, we solve common variational inequality problems by building an algorithm using the inertial technique with a parallel CQ subgradient extragradient method. We show the strong convergence of the algorithm under suitable assumptions on monotone and L-Lipschitz continuous operators whose constant L is unknown. We also apply our proposed algorithm to solve signal and image recovery problems. We find that our algorithm gains efficiency as the number of subproblems increases, in both signal and image recovery, as shown in the figures above.

Lemma 2.3.
Suppose that x* ∈ F and the sequences {y_n^i}, {z_n^i} are generated by Step 1 and Step 2 of Algorithm 2.1. Then the following estimates hold.

The original signal x with N = 256, M = 128 is generated by the uniform distribution on the interval [−2, 2] with m = 40 nonzero elements. The matrices B_1, B_2 and B_3 are Gaussian matrices generated by the MATLAB routine randn(M, N). The observations b_1, b_2 and b_3 with M = 128 are generated by adding white Gaussian noise with signal-to-noise ratios SNR = 20 (for B_1), SNR = 40 (for B_2), and SNR = 30 (for B_3), respectively. The process is started with randomly picked initial signal data x_1 with N = 256.
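The data generation just described can be sketched as follows (NumPy replaces MATLAB's randn, and the seed and the noise-scaling convention are assumptions made for reproducibility):

```python
import numpy as np

# Sketch of the experimental setup: a sparse signal of length N = 256 with
# m = 40 nonzeros drawn uniformly from [-2, 2], three Gaussian sensing
# matrices of size M x N with M = 128, and observations corrupted by white
# Gaussian noise at the prescribed SNR levels.
rng = np.random.default_rng(1)
N, M, m = 256, 128, 40

x = np.zeros(N)                                   # sparse original signal
support = rng.choice(N, size=m, replace=False)
x[support] = rng.uniform(-2.0, 2.0, size=m)       # m nonzeros on [-2, 2]

def observe(B, x, snr_db):
    """b = Bx + noise, with the noise scaled to the requested SNR in dB."""
    clean = B @ x
    noise = rng.standard_normal(M)
    noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10.0 ** (snr_db / 20.0))
    return clean + noise

Bs = [rng.standard_normal((M, N)) for _ in range(3)]       # B_1, B_2, B_3
bs = [observe(B, x, s) for B, s in zip(Bs, [20, 40, 30])]  # b_1, b_2, b_3
```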

Figures 5-7: Recovering the signal with SNR = 14 quality by B_1, B_2 and B_3. Next, we aim to find the solutions of the signal recovery problem (31) using two operators at a time in Algorithm 3.1. We show the performance of the pairs B_1, B_2; B_1, B_3; and B_2, B_3 with N = 256, M = 128.

Figures 8-10: Recovering the signal with SNR = 14 quality by the pairs B_1, B_2; B_1, B_3; and B_2, B_3. Next, we aim to find the solutions of the signal recovery problem (31) using all three operators in Algorithm 3.1. We show the performance of B_1, B_2, B_3 with N = 256, M = 128.

Figures 12-14: Cauchy error, signal error and SNR quality of the proposed method in recovering the observed signal.

Figures 15-16: The original Grey and RGB images of sizes 320 × 480 and 323 × 475 × 3, respectively. Three different types of blurred Grey and RGB images degraded by the blurring matrices B_1, B_2 and B_3 are shown in Figures 17-22.

Figures 23-28: The reconstructed Grey and RGB images with their PSNR for the three different cases using the proposed algorithm at the 10000th iteration, respectively. Next, we put two different blurring matrices into our Algorithm 3.1, so we can split the testing into the following three cases, with the 10000th iteration as the stopping criterion of the algorithm: Case IV: inputting B_1 and B_2 into Algorithm 3.1; Case V: inputting B_1 and B_3 into Algorithm 3.1; Case VI: inputting B_2 and B_3 into Algorithm 3.1.

Figures 29-34: The reconstructed Grey and RGB images with their PSNR for the three different cases using the proposed algorithm at the 10000th iteration, respectively. It can be seen from Figures 29-34 that the quality of restoration by Algorithm 3.1 when two different blurring matrices are used (N = 2) has improved compared with the previous result in every case; see Figures 23-28. The last case is inputting the three different blurring matrices B_1, B_2 and B_3 into Algorithm 3.1. The algorithm is stopped at the 10000th iteration. The results are shown in the following figures.

Figures 35-36: The reconstructed Grey and RGB images from the blurring operators B_1, B_2 and B_3 using the proposed algorithm at the 10000th iteration, respectively. Figures 35-36 show the reconstructed Grey and RGB images after 10000 iterations. It has been found that the quality of the recovered Grey and RGB images obtained with all three operators is the highest compared to the previous two settings. The Cauchy error criterion is ∥x_n − x_{n−1}∥ < 10^{−5}, and the figure error is defined as ∥x_n − x∥, where x is the original image. The performance of the proposed method at x_n in the image restoration process is measured quantitatively by means of the peak signal-to-noise ratio (PSNR), defined by PSNR(x_n) = 20 log_{10}(255/√MSE), where MSE = ∥x_n − x∥² and ∥x_n − x∥ is the Euclidean norm of vec(x_n − x). The Cauchy error plot shows the validity of Algorithm 3.1, while the figure error plot confirms the convergence of the proposed method, and the PSNR quality plot provides the quantitative measure of image quality.
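The PSNR just described can be computed directly; the standard 8-bit form 20 log10(255/√MSE) is assumed to be what the displayed formula intends, and, following the text, MSE is taken as ∥x_n − x∥² without dividing by the pixel count:

```python
import numpy as np

def psnr(xn, x):
    """PSNR in dB with 8-bit peak value 255; following the text, MSE is
    taken as ||x_n - x||^2 without dividing by the number of pixels."""
    mse = float(np.linalg.norm(xn - x)) ** 2
    return 20.0 * np.log10(255.0 / np.sqrt(mse))

x = np.full(100, 128.0)     # a flat toy "image" of 100 pixels
val = psnr(x + 1.0, x)      # a per-pixel error of 1 gray level
```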

Figures 37-39: Cauchy error, figure error and PSNR quality plots of the proposed method in all cases for Grey images.

Figures 40-42: Cauchy error, figure error and PSNR quality plots of the proposed method in all cases for RGB images. From Figures 37-42, it is clearly seen that the common solution of the deblurring problem with N ≥ 2 yields quality improvements of the reconstructed Grey and RGB images. This is another advantage of the proposed method: the common solution of two or more image deblurring problems can be used for restoration.

Figures 50-56: The reconstructed RGB images in all cases using the proposed Algorithm 3.1 with PSNR = 29.
This research was supported by the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF64-UoE002).