PERFORMANCE ANALYSIS OF THE BAT ALGORITHM AND THE CLONAL SELECTION ALGORITHM ON OPTIMIZATION PROBLEMS

Evolutionary algorithms are preferred by many researchers, especially those working in the field of optimization. Besides optimizing given problems, solving them with a small number of iterations is an important distinguishing feature of these algorithms. In this study, two evolutionary algorithms whose efficiency in optimization has been proven, the bat algorithm and the clonal selection algorithm, are compared using benchmark functions. According to the results obtained from the benchmark functions, the bat algorithm performed better than the clonal selection algorithm. Moreover, the bat algorithm reached high solution quality even in the early stages of optimization. This analysis can serve as a guide for performance comparisons of evolutionary algorithms in future studies.


INTRODUCTION
Over the past few years, there has been increasing interest in evolution-based algorithms and their applications across various fields of research (Adarsh et al.). However, no algorithm performs best on all problems (Wolpert, 1997). Thus, proposing new evolutionary algorithms, or modifications and hybridizations of existing ones, is still an open area for researchers. Instead of proposing new evolutionary algorithms, modifying the characteristics of existing ones according to the given problem is another method preferred by researchers (Dandy et al., 1996; Gong et al., 2007; Goyal and Patterh, 2016). The literature contains two powerful evolutionary algorithms whose efficiency for optimization has been proven: the bat algorithm (Yang, 2010) and the clonal selection algorithm (De Castro and Von Zuben, 2000). In this paper, we perform a performance analysis using their original characteristics, without applying any modifications. The main motivation of this work is to analyze their performance on a set of optimization tasks and to observe their weak and strong characteristics. It also aims to lay the groundwork for future studies that use hybrid versions of these algorithms or apply modifications targeting the observed weak characteristics.
The rest of the paper is organized as follows: Section 2 gives detailed information on the bat algorithm and the clonal selection algorithm. Section 3 explains the benchmark functions used in the experiments and discusses the results achieved by the algorithms. Lastly, Section 4 presents the conclusions.

THE DETAILS OF BAT ALGORITHM AND CLONALG
The bat algorithm (BA) and the clonal selection algorithm (CLONALG) are derived from Darwin's theory of evolution. Both use the common steps of evolution, such as selecting the best individuals, updating the population according to environmental changes, and eliminating weak individuals to increase the survival rate of the population.
The bat algorithm was first proposed by Yang (2010) as an adaptation of the echolocation behaviour of bats. Bats navigate well even in darkness using a highly developed system called echolocation. They emit sound pulses toward objects and, from the reflected echoes, can determine distances, coordinates, and even the identity of objects. The bat algorithm mimics this powerful echolocation behaviour. In BA, each bat in the population moves towards the prey with a velocity (V_i) at position (X_i), with frequency (f), pulse rate (r), and loudness (A). In each iteration, these values are updated for each bat moving towards the prey, until a bat reaches the food source. The steps of BA are given in Figure 1.

Figure 1:
The main steps of the Bat Algorithm.
Step 1. Initialize the population of bats with the variables velocity (V), position (X), frequency (f), rate (r), and loudness (A).
Step 2. Calculate the fitness value of each bat and rank them.
Step 3. Update the variables by applying the following equations: f_i = f_min + (f_max - f_min)β, V_i^t = V_i^(t-1) + (X_i^(t-1) - Best)f_i, and X_i^t = X_i^(t-1) + V_i^t.
Step 4. If (rand > r_i), select a solution among the best ones and generate a solution around it using X_new = Best + εA^t; otherwise, generate a new solution randomly.
Step 5. If (rand < A_i) and the new solution improves on the current one, accept it; then reduce the loudness A_i using α and increase the rate r_i using γ.
Step 6. Calculate the fitness value of each bat.
Step 7. Repeat Steps 3-6 until a stopping criterion is met.
In Figure 1, β is a random number in (0-1), f_min and f_max are the boundaries of the frequency (0-100), Best is the best solution achieved so far by the algorithm, α and γ are the constants in (0-1) that update the loudness (A) and rate (r), ε is a random number in (0-1), and A^t is the average loudness at time t.
CLONALG was first proposed by De Castro and Von Zuben (2000) as an adaptation of the immune system of organisms. When an intruder called an antigen enters an organism, antibodies are generated to bind it. When the same antigen enters the organism again, it is recognized and antibodies with high affinity values are generated. CLONALG uses this antigen-antibody relationship; its steps are explained in Figure 2.
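As a concrete illustration, the BA update rules of Figure 1 can be sketched in Python. This is only a minimal sketch, not the implementation used in the paper; the sphere objective, the bounds, and all parameter values below are illustrative assumptions.

```python
import math
import random

def sphere(x):
    """Illustrative objective with global minimum 0 at the origin."""
    return sum(v * v for v in x)

def bat_algorithm(obj, dim=2, n_bats=20, iters=500, f_min=0.0, f_max=2.0,
                  alpha=0.9, gamma=0.9, lower=-5.0, upper=5.0, seed=1):
    rng = random.Random(seed)
    clip = lambda v: max(lower, min(upper, v))
    X = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    A = [1.0] * n_bats                 # loudness, decreased on acceptance
    r = [0.5] * n_bats                 # pulse rate, increased on acceptance
    r0 = list(r)
    best = min(X, key=obj)
    for t in range(1, iters + 1):
        avg_A = sum(A) / n_bats
        for i in range(n_bats):
            beta = rng.random()
            f = f_min + (f_max - f_min) * beta                 # frequency
            V[i] = [v + (x - b) * f for v, x, b in zip(V[i], X[i], best)]
            cand = [clip(x + v) for x, v in zip(X[i], V[i])]
            if rng.random() > r[i]:    # local walk around the current best
                cand = [clip(b + rng.uniform(-1, 1) * avg_A) for b in best]
            # Accept only improvements, while the bat is still "loud".
            if rng.random() < A[i] and obj(cand) < obj(X[i]):
                X[i] = cand
                A[i] *= alpha                                  # quieter
                r[i] = r0[i] * (1.0 - math.exp(-gamma * t))    # more pulses
            if obj(X[i]) < obj(best):
                best = list(X[i])
    return best

best = bat_algorithm(sphere)
```

The greedy acceptance in Step 5 makes each bat's position improve monotonically, while the shrinking loudness turns the local walk around Best into an increasingly fine search.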

Figure 2:
The main steps of CLONALG.
Step 1. Initialize the population of antibodies.
Step 2. Calculate the affinity value of each antibody.
Step 3. Select the n best antibodies.
Step 4. Clone the selected antibodies proportionally to their affinity values.
Step 5. Apply mutation to the clones according to their affinity values.
Step 6. Calculate the affinity value of each antibody.
Step 7. Discard the n worst antibodies and introduce n randomly generated antibodies.
Step 8. Repeat Steps 2-7 until a stopping criterion is met.

EXPERIMENTAL RESULTS AND DISCUSSIONS
In this section, the benchmark functions considered for the performance analysis of BA and CLONALG are presented. Detailed information on the benchmarks is given in Table 1, and their 3D graphical representations are provided in Figure 3.

Table 1: Function expression | Range

The selected benchmark functions are quite effective for estimating the performance of BA and CLONALG in terms of convergence rate, number of iterations, and robustness. As can be seen from part (a) of Figure 3, the Shubert function has multiple local optima. Additionally, some of the local optima are quite close to the global optimum, which makes the function difficult for an algorithm to solve. Part (b) shows the Beale function, which has one global optimum located among four sharp levels. Part (c) is a representation of the Levy function. It has several local optima and three peak layers formed by multiple local points; therefore, finding its global optimum is a challenging task for an algorithm.
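The CLONALG loop of Figure 2 can likewise be sketched in Python. This is a hedged sketch rather than the paper's implementation: affinity is taken as the negative of the objective value (lower objective means higher affinity), and the population size, clone counts, and mutation schedule are illustrative assumptions.

```python
import random

def sphere(x):
    """Illustrative objective with global minimum 0 at the origin."""
    return sum(v * v for v in x)

def clonalg(obj, dim=2, pop=30, n_select=10, clone_factor=5,
            n_replace=5, iters=300, lower=-5.0, upper=5.0, seed=0):
    rng = random.Random(seed)

    def random_antibody():
        return [rng.uniform(lower, upper) for _ in range(dim)]

    def mutate(ab, rate):
        # Hypermutation: larger steps for antibodies with worse affinity.
        return [max(lower, min(upper, v + rng.gauss(0, rate))) for v in ab]

    antibodies = [random_antibody() for _ in range(pop)]
    for _ in range(iters):
        # Steps 2-3: rank by affinity and select the n best.
        antibodies.sort(key=obj)
        selected = antibodies[:n_select]
        # Steps 4-5: clone proportionally to affinity, mutate according to it.
        clones = []
        for rank, ab in enumerate(selected):
            n_clones = max(1, round(clone_factor * (n_select - rank) / n_select))
            rate = 0.5 * (rank + 1) / n_select   # worse rank -> larger steps
            clones.extend(mutate(ab, rate) for _ in range(n_clones))
        # Steps 6-7: keep the best, then replace the worst with random ones.
        antibodies = sorted(antibodies + clones, key=obj)[:pop]
        antibodies[-n_replace:] = [random_antibody() for _ in range(n_replace)]
    return min(antibodies, key=obj)

best = clonalg(sphere)
```

The random replacement in Step 7 preserves diversity but, as discussed later, can slow convergence when the iteration budget is small.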
In part (d), the Penalized function has three peak layers with multiple local points. Its global optimum is located at 0, and finding it while avoiding the local optima is a distinguishing test for an algorithm. In part (e), the Eggholder function is shown. As can be seen from the figure, it has a large number of local optima and is quite difficult to optimize. Its global optimum value is -959.64, and it is surrounded by many local optima.
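For reference, the Eggholder function discussed above can be written down directly. The expression below is the standard form of this benchmark, whose known global minimum is about -959.6407 at (512, 404.2319) within the range [-512, 512].

```python
import math

def eggholder(x, y):
    """Eggholder benchmark: highly multimodal, range [-512, 512] per variable."""
    a = -(y + 47.0) * math.sin(math.sqrt(abs(x / 2.0 + y + 47.0)))
    b = -x * math.sin(math.sqrt(abs(x - (y + 47.0))))
    return a + b

print(eggholder(512.0, 404.2319))   # close to the global minimum -959.6407
```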
In order to make a fair comparison between the algorithms, the same settings are used. As is known, when the number of iterations increases, both algorithms can reach the global optima of the given problems. Since BA and CLONALG are quite powerful at finding global optima, the maximum number of iterations is fixed at 10000, which is considered low for an optimization task. The population size is set to 100. The performance of BA and CLONALG depends heavily on the selection of their control parameters, and a combination of control parameters selected for one benchmark function may not perform well on another. Therefore, instead of fixing the control parameters for all functions, the best combination is selected for each function through observational experiments. The performance analysis was carried out on Windows 7 with an Intel i5 processor and 4 GB of RAM, using the C++ language.

The experimental results are given in Mean ± Stdev format for each function in Table 2, where Mean is the average over 40 independent trials and Stdev is the standard deviation of the same 40 runs. For the Shubert function, BA reached the global optimum in all 40 trials, whereas the average value obtained by CLONALG has a large deviation, which indicates low robustness. For the Beale and Levy functions, both algorithms showed similar performance and reached the global optimum. However, in terms of solution quality (Mean) and robustness (Stdev), BA provides better results than CLONALG.
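The Mean ± Stdev entries of Table 2 follow the usual recipe: the average and the sample standard deviation over the 40 independent runs. A small sketch (the trial values below are made-up placeholders, not results from the paper):

```python
import statistics

# Hypothetical outcomes of 40 independent runs of one algorithm.
trial_results = [0.01 * i for i in range(40)]

mean = statistics.mean(trial_results)
stdev = statistics.stdev(trial_results)   # sample standard deviation (n - 1)
print(f"{mean:.4f} ± {stdev:.4f}")
```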
For the Penalized function, BA obtained the global optimum with more robust values, and the convergence rate of CLONALG is not as good as that of BA. Since the Eggholder function has a large number of local optima, 10000 iterations may not be enough to optimize it. However, even at this early stage of optimization, the solution quality of BA is better than that of CLONALG, and its values are more robust, since the Stdev of BA is lower.
According to the experimental results, the following observations can be made:
i. Both algorithms are capable of finding the optimum points of the given benchmark functions.
ii. The results obtained for Beale and Levy indicate that CLONALG obtains slightly better results than BA.
iii. For all functions, whenever the mean values found by one algorithm are better than those of the other, the same holds for the standard deviation values, which indicates the robustness of that algorithm.
iv. In general, BA provides better results than CLONALG in terms of solution quality and robustness for the functions used in this experiment set.
v. In CLONALG, selecting antibodies with the best affinity values gives faster convergence. However, eliminating antibodies and introducing new random antibodies to the population can cause slow convergence, especially for a low number of iterations (Ulutas and Kulturel-Konak, 2011).
vi. BA combines desired features of several optimization algorithms (Yang, 2010) and uses control parameters; thus, good convergence speed is obtained, especially for a low number of iterations (Bin Basir and Binti Ahmad, 2014). However, BA is a parameter-dependent algorithm, and without careful parameter tuning its convergence speed suffers.

CONCLUSIONS
In this paper, a comparative performance analysis of two effective evolutionary algorithms, BA and CLONALG, is presented using benchmark functions. According to the experimental results obtained on the selected benchmarks, it can be concluded that BA performs better than CLONALG. Both algorithms proved capable of finding the optimum points of the given problems. In order to observe the difference between their performances, a low number of iterations was used.
It is seen that BA combines desired features of several optimization algorithms; hence, with good parameter tuning, good solution quality can be achieved in the early iterations. In CLONALG, eliminating antibodies and introducing random antibodies to the population can cause slow convergence in the early stages of the iterations.