Investigation of Slime Mould Algorithm and Hybrid Slime Mould Algorithms' Performance in Global Optimization Problems

The slime mould algorithm (SMA) is a recently proposed metaheuristic technique. While newly proposed algorithms on their own give satisfactory results in optimization problems, combining a recent algorithm with components of other algorithms can further improve its performance. In recent years, the leader SMA (LSMA) and equilibrium optimizer SMA (ESMA) methods, in which SMA is combined with different algorithms, have been proposed, and their advantages over SMA on different problems have been demonstrated. In this study, in order to assess how well these hybrids eliminate the disadvantages of SMA, such as a slow convergence rate and entrapment in local optima, the performance of SMA, LSMA, and ESMA was investigated on the CEC2020 test functions. The results obtained are statistically analyzed and reported in detail in the study.


Introduction
Metaheuristic algorithms have gained widespread popularity in recent years, largely because of their proficiency in tackling a broad range of optimization challenges [1]. Popular metaheuristic optimization algorithms in the literature include particle swarm optimization, genetic algorithms, differential evolution, and ant colony optimization, as well as more recently proposed algorithms such as the Grey Wolf Optimizer [2], Equilibrium Optimizer [3], Archimedes Optimization Algorithm [4], Spotted Hyena Optimizer [5], Aquila Optimizer [6], and Slime Mould Algorithm (SMA) [7]. While each metaheuristic algorithm has distinct benefits, according to the no free lunch theorem no single method can handle all optimization problems. The performance of a metaheuristic algorithm is largely determined by its capacity for exploration and exploitation [8]. As a result, numerous scholars continually propose new algorithms and improve upon existing methods. However, despite their various appealing properties, these algorithms do not always perform as expected.

The effectiveness of most metaheuristic optimization algorithms depends on the balance of two opposing aims, exploration and exploitation [9], also called diversification and intensification. Exploration guarantees that all areas of the solution domain are sufficiently investigated to provide an approximation of the global optimal solution. Exploitation directs the search effort toward the most effective solutions found so far, examining their neighborhoods for further improvements. These two objectives are addressed by search algorithms that use local search techniques, global search approaches, or a combination of both local and global searches; such combined algorithms are frequently referred to as hybridization [10].
Hybridization may take place in a variety of ways, including the following:

• Starting the algorithm with one method and then applying the second technique to the final population generated with the first technique,

• Merging one approach's distinctive operators into the other technique,

• Using local search to enhance the solution identified by global search, and so on.

In this study, the performance of SMA and its hybrid versions was investigated in global optimization problems. For this purpose, the leader SMA (LSMA) [11] and equilibrium optimizer SMA (ESMA) [12] methods suggested by Naik et al. were chosen. The performance of these three methods was examined on the current and widely used CEC2020 test suite. The CEC2020 benchmark consists of 10 optimization problems: unimodal functions, multimodal functions, hybrid functions, and composition functions. Furthermore, the performance of these methods was examined for different dimension values, and detailed analyses were carried out. Thus, the capabilities of the methods obtained by hybridizing a current optimization algorithm were compared with each other and with the original method on different types of problems.
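The first and third hybridization patterns above can be sketched in a few lines. The following is a minimal illustration only: the two "algorithms", their names, and the toy objective f(x) = x² are hypothetical stand-ins, not SMA or its variants.

```python
import random

# Toy objective: f(x) = x**2, minimized over [-10, 10].
def global_search(population, iters=50):
    # Population-based explorer: random perturbation with greedy acceptance.
    for _ in range(iters):
        population = [min(x, x + random.uniform(-1, 1), key=abs)
                      for x in population]
    return population

def local_search(x, iters=200, step=0.1):
    # Hill climber refining a single solution; accepts only improvements.
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if cand * cand < x * x:
            x = cand
    return x

# Hybridization: run the global method first, then refine its best
# solution with the local method (the third pattern above).
pop = [random.uniform(-10, 10) for _ in range(20)]
best = min(global_search(pop), key=abs)
refined = local_search(best)
```

Because the local phase only accepts improvements, the refined solution is never worse than the one the global phase handed over.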
The remainder of the paper is organized as follows: To begin with, Section 2 provides an overview of SMA, LSMA, and ESMA. Section 3 describes ten distinct functions drawn from the CEC2020 test functions. Section 4 contains the experimental findings for the test functions. Finally, in Section 5, conclusions are stated and recommendations for further study are made.

Slime Mould Algorithm
In this section, SMA and its hybrid versions, LSMA and ESMA, are explained and their mathematical expressions are given.

Original Slime Mould Algorithm
The mathematical model of SMA consists of three steps: approach food, wrap food, and grabble food. In this section, the mathematical structure of SMA is briefly explained [13].
Approach Food: To describe the slime mould's approaching behaviour as a mathematical equation, the following contraction rule is proposed:

$$\vec{X}(t+1) = \begin{cases} \vec{X}_b(t) + \vec{vb}\cdot\big(\vec{W}\cdot\vec{X}_A(t) - \vec{X}_B(t)\big), & r < p \\ \vec{vc}\cdot\vec{X}(t), & r \ge p \end{cases}$$

where $\vec{vb}$ is a parameter in $[-a, a]$, $\vec{vc}$ decreases linearly from one to zero, $t$ is the current iteration, $\vec{X}_b$ is the best individual found so far, $\vec{X}_A$ and $\vec{X}_B$ are two individuals randomly selected from the population, and $\vec{W}$ is the weight of the slime mould. The variable $p$ is given by

$$p = \tanh\lvert S(i) - DF\rvert$$

where $i \in 1, 2, \ldots, n$, $S(i)$ is the fitness of $\vec{X}$, and $DF$ is the best fitness in all iterations.

The weight $\vec{W}$ is given below:

$$\vec{W}(\mathrm{SmellIndex}(i)) = \begin{cases} 1 + r\cdot\log\!\Big(\dfrac{bF - S(i)}{bF - wF} + 1\Big), & S(i) \text{ ranks in the top fifty percent of the population} \\ 1 - r\cdot\log\!\Big(\dfrac{bF - S(i)}{bF - wF} + 1\Big), & \text{otherwise} \end{cases}$$

where $r$ represents a random value in $[0, 1]$, $bF$ denotes the best fitness in the current iteration phase, $wF$ means the worst fitness value, and $\mathrm{SmellIndex}$ specifies the series of sorted fitness values.
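As a sketch, the weight update above can be implemented as follows for a minimization problem. This is a minimal illustration, not the authors' code; the choice of log base 10 and the guard against a zero fitness spread are assumptions.

```python
import math
import random

def sma_weights(fitness):
    """Weight W for each slime mould under minimization (sketch of the
    weight formula; log base 10 and the zero-spread guard are assumptions)."""
    n = len(fitness)
    order = sorted(range(n), key=lambda i: fitness[i])   # SmellIndex
    bF, wF = fitness[order[0]], fitness[order[-1]]       # best / worst fitness
    denom = (bF - wF) if bF != wF else 1e-300            # avoid divide-by-zero
    W = [0.0] * n
    for rank, i in enumerate(order):
        r = random.random()
        term = r * math.log10(abs((bF - fitness[i]) / denom) + 1)
        # individuals ranking in the top half get an amplified weight
        W[i] = 1 + term if rank < n // 2 else 1 - term
    return W

weights = sma_weights([3.0, 1.0, 2.0, 5.0])
```

Note how the best-ranked individuals receive weights above 1 (pulling the search toward them) while the worst-ranked receive weights below 1.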

Wrap Food:
The following equation may be used to update the position of the slime mould:

$$\vec{X}(t+1) = \begin{cases} \mathrm{rand}\cdot(UB - LB) + LB, & \mathrm{rand} < z \\ \vec{X}_b(t) + \vec{vb}\cdot\big(\vec{W}\cdot\vec{X}_A(t) - \vec{X}_B(t)\big), & r < p \\ \vec{vc}\cdot\vec{X}(t), & r \ge p \end{cases}$$

where $\mathrm{rand}$ and $r$ stand for random values in $[0, 1]$, $z$ is the re-initialization probability, and $LB$ and $UB$ stand for the lower and upper search range limits.
Grabble Food: As the number of iterations rises, the value of $\vec{vb}$ oscillates randomly in $[-a, a]$ and eventually approaches zero, where $a = \operatorname{arctanh}(1 - t/T)$ and $T$ is the maximum number of iterations. The value of $\vec{vc}$ oscillates in $[-1, 1]$ and finally shrinks to zero.
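Putting the three rules together, one position update can be sketched as below. This is a hedged, simplified illustration (one-dimensional problem, weights fixed at 1), not a faithful reproduction of the paper's implementation.

```python
import math
import random

def sma_step(X, fitness, DF, t, T, lb, ub, z=0.03):
    """One SMA position update (sketch; 1-D problem, weights W fixed at 1
    for brevity)."""
    n = len(X)
    Xb = X[min(range(n), key=lambda i: fitness[i])]  # best individual
    a = math.atanh(1 - (t + 1) / T)   # vb ~ U[-a, a], shrinks toward 0
    b = 1 - (t + 1) / T               # vc ~ U[-b, b], shrinks toward 0
    newX = []
    for i in range(n):
        p = math.tanh(abs(fitness[i] - DF))  # trajectory probability
        if random.random() < z:              # elimination-and-dispersal
            x = lb + random.random() * (ub - lb)
        elif random.random() < p:            # approach food around the best
            A, B = random.randrange(n), random.randrange(n)
            x = Xb + random.uniform(-a, a) * (1.0 * X[A] - X[B])
        else:                                # oscillate in place (wrap food)
            x = random.uniform(-b, b) * X[i]
        newX.append(min(max(x, lb), ub))     # clamp to the search range
    return newX

random.seed(1)
newX = sma_step([0.5, -1.0, 2.0], [0.25, 1.0, 4.0], DF=0.25, t=0, T=100,
                lb=-5.0, ub=5.0)
```

As $t$ approaches $T$, both oscillation ranges collapse to zero, so late iterations shift from exploration to exploitation.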

Leader Slime Mould Algorithm
SMA's primary reliance on two random slime moulds and the population's best leader leads to poor exploitation as the convergence iterations progress. To eliminate this situation, LSMA has been proposed [11].
According to [11], the updating rule of the SMA concentration for the $i$-th slime mould $X_i = \{x_{i1}, x_{i2}, \ldots, x_{iD}\}$ for a $D$-dimensional problem with $N$ slime moulds is as follows:

$$X_i(t+1) = \begin{cases} r_2\cdot(UB - LB) + LB, & r_1 < z \\ X_G(t) + vb\cdot\big(W\cdot X_A(t) - X_B(t)\big), & r_1 \ge z \text{ and } r_2 < p \\ vc\cdot X_i(t), & r_1 \ge z \text{ and } r_2 \ge p \end{cases}$$

Here $r_1$ and $r_2$ are random values in the range of 0 and 1; $t$ is the current iteration; $UB$ and $LB$ are the upper and lower boundaries of the search space, respectively; $X_G$ is the global best concentration at the current iteration $t$; $vb$ represents the velocity that is spread evenly throughout its interval; $vc$ represents the velocity that goes from 1 to 0 in a linear fashion; $W$ represents the weight of the slime mould; $X_A$ and $X_B$ are two slime moulds chosen at random from the population of $N$; $p$ is the probability that determines the slime mould trajectory; $z$ is the elimination-and-dispersal rate, which is fixed at 0.03; and $i \in 1, 2, \ldots, N$.
The performance of the $i$-th slime mould is determined by its current fitness $S(i)$ and by the fitness of the global best concentration $F(X_G)$, which is formulated as:

$$p = \tanh\lvert S(i) - F(X_G)\rvert$$

Both the velocity $vb$ and the velocity $vc$ are uniformly distributed in the $[-a, a]$ and $[-b, b]$ ranges, respectively. The values of $a$ and $b$ are as follows:

$$a = \operatorname{arctanh}\!\Big(1 - \frac{t}{T}\Big) \quad \text{and} \quad b = 1 - \frac{t}{T}$$

The weight $W$ is calculated using the slime mould's local fitness value: for the minimization problem, the fitness values of the $N$ slime moulds are ranked in ascending order in iteration $t$.
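The two decay schedules for the oscillation ranges can be tabulated directly. A small sketch follows; the value of $T$ and the $t+1$ endpoint convention are illustrative choices, not taken from the paper.

```python
import math

def lsma_schedules(T):
    """Decay schedules (sketch): vb ~ U[-a, a] and vc ~ U[-b, b], with both
    ranges shrinking to zero as iteration t runs from 0 to T-1."""
    a = [math.atanh(1 - (t + 1) / T) for t in range(T)]
    b = [1 - (t + 1) / T for t in range(T)]
    return a, b

a, b = lsma_schedules(100)
```

Early iterations thus allow wide oscillations (exploration), and both ranges reach zero at the final iteration (pure exploitation).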

Equilibrium Optimizer Slime Mould Algorithm
The search pattern of SMA relies on differential information between two random slime moulds and the best slime mould, which may cause results to deviate from the optimum value. In EO, by contrast, the search is guided by an equilibrium pool of the top candidate solutions. In order to integrate the equilibrium pool into SMA and augment its properties, Naik et al. suggested the ESMA.
The slime mould finds food by the smell in the air. Assume there are $N$ slime moulds, the location of each given by the vector $X = [x_1, x_2, \ldots, x_D]'$. The $i$-th slime mould's starting location vector is generated at random as in Eq. (18):

$$X_i(0) = LB + r\cdot(UB - LB)$$

where $t$ denotes the current iteration number, and $UB$ and $LB$ are the upper and lower bounds, respectively. The new position in iteration $t+1$ is modeled as in Eq. (19).
Here $X_G$ is the global best value over the iterations so far. $X_A$ and $X_B$ are two randomly selected individuals at iteration $t$. The $r_1$ and $r_2$ values are random variables that take values between 0 and 1. The $z$ value is a constant, 0.03, representing the probability used in the slime mould's elimination-and-dispersal process.
The weighting factor for the slime mould at iteration $t$, denoted $\vec{W}$, is determined using the local fitness values. The fitness values are sorted in ascending order with $[\mathit{SmellOrder}, \mathit{SmellIndex}] = \operatorname{sort}(S)$, where $S = \{S_1, S_2, \ldots, S_N\}$. Thus, the value of $\vec{W}$ is calculated as in Eq. (20):

$$\vec{W}(\mathit{SmellIndex}(j)) = \begin{cases} 1 + r_3\cdot\log\!\Big(\dfrac{bF - S(j)}{bF - wF} + 1\Big), & j \text{ in the first half of the population} \\ 1 - r_3\cdot\log\!\Big(\dfrac{bF - S(j)}{bF - wF} + 1\Big), & \text{otherwise} \end{cases}$$

The $r_3$ value is a random variable that takes values between 0 and 1. $wF$ and $bF$ are the local worst ($wF = \mathit{SmellOrder}(N)$) and best ($bF = \mathit{SmellOrder}(1)$) fitness values, respectively, in the current iteration. The $p$ value is calculated as in Eq. (21):

$$p = \tanh\lvert S(i) - F(X_G)\rvert$$

It shows the decision probability of the trajectory of the slime mould, computed with the help of the other slime moulds' fitness values.
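The equilibrium-pool idea borrowed from EO can be sketched as follows. The four-best-plus-mean construction follows standard EO practice and is an assumption here; the paper's exact pool construction may differ.

```python
import random

def equilibrium_pool(population, fitness):
    """EO-style equilibrium pool (sketch): the four best solutions plus
    their arithmetic mean; the guiding candidate is drawn from this pool."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])[:4]
    best4 = [population[i] for i in order]
    mean = [sum(c) / len(best4) for c in zip(*best4)]  # component-wise average
    return best4 + [mean]

pop = [[1.0, 2.0], [0.5, 0.1], [3.0, 3.0], [0.2, 0.2], [5.0, 5.0]]
fit = [5.0, 0.26, 18.0, 0.08, 50.0]
pool = equilibrium_pool(pop, fit)
candidate = random.choice(pool)  # guiding solution for the next update
```

Drawing the guide from a pool rather than a single best solution reduces the chance of the whole population being pulled toward one local optimum.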

Results and Discussion
On the other hand, hybrid and composition functions are used to determine the algorithms' ability to avoid local optima and their balance between exploration and exploitation, as they have many local optima. Experiments in the study were carried out on a computer with the Windows 10 operating system, 32 GB RAM, and an Intel(R) Core i9-10900K (3.7 GHz) CPU. The special parameters of the SMA, LSMA, and ESMA algorithms were taken exactly as in the original articles. In order to make a fair evaluation under equal conditions, the number of iterations was set to 1000 and all experiments were run 20 times. In addition, the performances of the algorithms were compared for 3 different dimension values, taking the dimension as 5, 10, and 20. Table 2, Table 3, and Table 4 show the results for dimensions 5, 10, and 20, respectively. Average (Avg.), standard deviation (Std.), and minimum (Min.) values are given in the tables. In addition, for ease of reading, the best values found in each test function are shown in bold.

When Table 3 is examined, it is seen that the methods give the same average in 2 of the 10 test functions. LSMA gave the best average in 3 functions, ESMA gave the best average in 4 functions, and SMA gave the best average in 1 of them. The convergence curve and boxplot graphics for Dimension 10 are given in Figure 3 and Figure 4, respectively.
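The Avg./Std./Min. bookkeeping over independent runs can be reproduced with a sketch like the following. The dummy random-search objective is a stand-in, not a CEC2020 function; only the statistical protocol mirrors the paper's setup.

```python
import random
import statistics

def summarize(run_best_values):
    """Avg., Std. and Min. over independent runs, as reported in the tables."""
    return (statistics.mean(run_best_values),
            statistics.stdev(run_best_values),
            min(run_best_values))

# 20 independent runs of a dummy random-search 'optimizer' on f(x) = x**2;
# each run keeps the best objective value it has seen.
def dummy_run(iters=1000):
    return min(random.uniform(-5.0, 5.0) ** 2 for _ in range(iters))

random.seed(42)
results = [dummy_run() for _ in range(20)]
avg, std, best = summarize(results)
```

Reporting the mean alongside the standard deviation and minimum is what allows the bold "best average" comparisons in the tables to be read together with each method's run-to-run stability.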

When Table 4 is examined, it is seen that the methods give the same average in 2 of the 10 test functions. While ESMA gave the best average in 7 functions, LSMA gave the best result in only 1 of them. The convergence curve and boxplot graphics for Dimension 20 are given in Figure 5 and Figure 6, respectively.
In Table 5, the algorithm or algorithms that give the best value for each test function in different dimensions according to the average value are given. When Table 5 is examined, it is seen that the performance of the methods varies with the dimension in unimodal functions. In multimodal functions, ESMA achieved a better mean value. ESMA also gives relatively better results than the other methods in hybrid functions. Considering the composition functions, LSMA gave the best average value when the dimension was taken as 5. When the dimension is taken as 10, the performances of LSMA and ESMA are the same. Finally, ESMA gives better performance when the dimension is taken as 20. In the light of these experimental results, the ESMA method outperforms the other methods, SMA and LSMA, on the CEC2020 functions.

Conclusions
Metaheuristic methods have been used successfully in the literature for solving different problems. As the literature shows, there is no method that gives the best performance for every problem, which keeps researchers' interest in this subject high. For this reason, researchers aim to find better methods by suggesting hybrid versions of newly introduced algorithms. In this study, performance analyses were made by running different hybrid versions of the recently proposed SMA method on the CEC2020 test functions under equal conditions. The experimental results showed that ESMA performed better than the standard SMA and LSMA. This study is significant both for making one of the most recent metaheuristic optimization algorithms, SMA, and its variants easier for researchers to access, and for assisting them in selecting a suitable algorithm by providing a preliminary idea of the performance of metaheuristic algorithms they can use in their studies.

Figure 2. Boxplot of the compared methods when dimension is 5

The names and equations of these functions are listed in Table 1. Unimodal functions play a decisive role in assessing the convergence performance of algorithms. Multimodal functions are used to see whether an algorithm suffers from premature convergence and entrapment in local optima.

Table 1. CEC'2020 test functions and equations. ($\sigma_i$ defines which optimum is the global optimum; $\lambda_i$ is used to control each $g_i(x)$'s height; $w_i$ is the weight value for $g_i(x)$, calculated as below; $\sigma_i$ is also used to control each $g_i(x)$'s coverage range, a small $\sigma_i$ giving a narrow range for that $g_i(x)$.)

When Table 2 is examined, it is seen that the methods give the same average in 4 of the 10 test functions. While LSMA gave the best average in 3 functions, ESMA gave the best average in 2 functions. SMA, on the other hand, gave the best average in only one function. The convergence curve and boxplot graphics for Dimension 5 are given in Figure 1 and Figure 2, respectively.

Table 5. Best algorithm or algorithms according to the average value

Figure 1. Convergence curve of the compared methods when dimension is 5