In 2002, Passino proposed a new optimization technique inspired by the foraging strategy of the Escherichia coli (E. coli) bacteria present in human intestines, called the Bacterial Foraging Optimization Algorithm (BFOA) [1]. It is a population-based stochastic search algorithm introduced to solve optimization and control problems. Since its inception, BFOA has drawn the attention of many researchers from diverse fields as a high-performance optimizer and has been successfully applied to real-world problems such as optimal power control [2], image processing [3] and job scheduling [4], [5]. The advantages that motivate researchers to explore it lie in the simple foraging process the algorithm mimics: while foraging, E. coli bacteria move toward nutrient-rich regions and release chemical signals.
By sending these signals, an individual bacterium can communicate with the others. Healthy bacteria reproduce, while poor foragers are eliminated. The bacteria keep repeating these processes throughout their lifetime.
In BFOA, each individual bacterium in the search space represents a candidate solution to the optimization problem [6]. Each bacterium undergoes chemotactic steps in the direction of decreasing fitness (regions richer in nutrients). During this taxis, the bacteria communicate with one another so that the group swarms toward the global optimum. The bacteria are then evaluated according to their health and sorted in ascending order: the healthier half reproduces by splitting into two, while the other, less healthy half is eliminated from the search space. Finally, to explore more of the space, some bacteria are eliminated and reinitialized at random positions, so that unvisited regions are searched for the global minimum or maximum. For better understanding, this mechanism is illustrated below on an optimization problem.
Consider an optimization problem in which we need to find the minimum of J(θ), θ ∈ ℜ^p, when we have neither measurements nor an analytical description of the gradient ∇J(θ). This is a non-gradient optimization problem. BFOA does not rely on gradient information to operate; instead, it uses the nutrient concentration at a location in the search space as the fitness function. Let θ be the position of a bacterium and J(θ) the cost (nutrient concentration) at that position.
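The following minimal sketch illustrates the three nested loops described above (chemotaxis, reproduction, elimination-dispersal) on a simple sphere function standing in for J(θ). The population size, step size C, loop counts and elimination probability are illustrative values, not the settings recommended in [1], and the swim phase is shortened to a single step.

```python
import numpy as np

# Minimal BFOA sketch (illustrative, not Passino's full formulation [1]).
# J is the nutrient concentration (cost) at a location; lower is better.
def J(theta):
    return np.sum(theta ** 2)              # sphere function as a stand-in cost

rng = np.random.default_rng(0)
S, p = 20, 2                               # population size, problem dimension
Nc, Nre, Ned = 30, 4, 2                    # chemotaxis, reproduction, dispersal counts
C, p_ed = 0.1, 0.25                        # step size, elimination-dispersal probability

bacteria = rng.uniform(-5, 5, (S, p))      # initial positions theta^i

for l in range(Ned):                       # elimination-dispersal loop
    for k in range(Nre):                   # reproduction loop
        health = np.zeros(S)               # accumulated cost over a lifetime
        for j in range(Nc):                # chemotaxis loop
            for i in range(S):
                cost = J(bacteria[i])
                d = rng.normal(size=p)
                d /= np.linalg.norm(d)     # random tumble direction
                step = bacteria[i] + C * d
                if J(step) < cost:         # move only if nutrients improve
                    bacteria[i] = step
                health[i] += J(bacteria[i])
        # reproduction: the healthier half splits, the weaker half is discarded
        order = np.argsort(health)
        bacteria = np.concatenate([bacteria[order[:S // 2]]] * 2)
    # elimination-dispersal: some bacteria are re-initialized at random
    for i in range(S):
        if rng.random() < p_ed:
            bacteria[i] = rng.uniform(-5, 5, p)

best = min(bacteria, key=J)
print(best, J(best))
```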
An exact optimisation method is one that guarantees to find all optimal solutions; in principle, the optimality of the generated solution can be proven mathematically. Exact optimisation is therefore also termed mathematical optimisation. However, the exact approach is usually impractical, because the effort of solving an optimisation problem exactly grows more than polynomially with the problem size. For example, when a problem is solved by a brute-force approach, the execution time increases exponentially with the dimension of the problem.
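A small illustration of this scaling: exhaustively enumerating all assignments of n binary decision variables requires 2^n evaluations, so the running time grows exponentially with n (the objective below is a trivial stand-in).

```python
from itertools import product
import time

# Brute-force search over n binary decision variables: the number of
# candidate solutions is 2**n, so runtime grows exponentially with n.
def brute_force_best(n, cost):
    return min(product((0, 1), repeat=n), key=cost)

cost = lambda x: sum(x)                    # trivial stand-in objective
for n in (10, 15, 20):
    t0 = time.perf_counter()
    brute_force_best(n, cost)
    print(n, 2 ** n, f"{time.perf_counter() - t0:.3f}s")
```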
Two initial locations, one for the local search (random) and one for the global search (chaotic), are given, and the search starts from the initial population. From the initial solution (smell concentration), a first generation of population-size points (flies) is created. One half of the population searches locally and the other half globally for the best solution.
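A rough sketch of this two-population scheme, under stated assumptions: the local half perturbs the current best solution at random, while the global half draws samples from a logistic chaotic map. The map, the bounds and the objective are placeholders, since the text does not specify them.

```python
import numpy as np

def f(x):                                  # smell (fitness); lower is better
    return np.sum(x ** 2)

rng = np.random.default_rng(1)
n, dim = 20, 2
best = rng.uniform(-10, 10, dim)           # initial location
z = 0.7                                    # chaotic state (assumed logistic map)
for _ in range(100):
    flies = [best]
    for _ in range(n // 2):                # local half: random steps near best
        flies.append(best + rng.normal(scale=0.5, size=dim))
    for _ in range(n // 2):                # global half: chaotic samples
        coords = []
        for _ in range(dim):
            z = 4.0 * z * (1.0 - z)        # logistic map, stays in (0, 1)
            coords.append(z * 20.0 - 10.0) # rescale to the search bounds
        flies.append(np.array(coords))
    best = min(flies, key=f)               # keep the best smell found so far
print(best, f(best))
```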
1. For the broader scientific community: the project will contribute an implementable algorithm and software for the said class of problems, which will have real-life applications in engineering and science. With the scope of
Initial Population: We first decide the size of the population, and then the method by which its members are chosen. Choosing the population size is always a trade-off between the efficiency and the effectiveness of the GA implementation. In general, there appears to be some 'ideal' population size for a given string length, on the grounds that too small a population would not permit adequate exploration of the search space, while too large a population would damage efficiency to the point that no solution could be expected in a realistic amount of time [15].
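As a minimal illustration, the initialization step might look as follows for binary-encoded chromosomes; the population size and chromosome length are exactly the trade-off knobs discussed above, and the values used here are arbitrary.

```python
import random

# GA population initialization: members are random bit strings.
# pop_size is the trade-off knob: too small under-samples the search
# space, too large makes every generation expensive to evaluate.
def init_population(pop_size, chrom_len, seed=0):
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(chrom_len)]
            for _ in range(pop_size)]

population = init_population(pop_size=50, chrom_len=32)
print(len(population), population[0])
```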
— This paper reviews state-of-the-art nature-inspired metaheuristic algorithms for optimization, including the Firefly Algorithm (FA), PSO algorithms and the ABC algorithm. By implementing them in Matlab, we use worked examples to show how each algorithm works. The Firefly Algorithm is a recent swarm intelligence algorithm inspired by the flashing behavior and flash patterns of fireflies in nature, and it can effectively solve noisy non-linear mathematical optimization problems. The optimization results of both PSO and Firefly are obtained in Matlab and used to compare the two algorithms.
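As a sketch of the mechanism, the code below moves each firefly toward every brighter one with attractiveness β0·exp(−γr²) plus a small random step scaled by α. The parameter values and the sphere objective are illustrative assumptions, and Python is used here since the Matlab examples referred to above are not reproduced in the text.

```python
import numpy as np

# Minimal Firefly Algorithm sketch; lower cost = brighter firefly.
def f(x):
    return np.sum(x ** 2)                  # stand-in objective

rng = np.random.default_rng(2)
n, dim = 15, 2
alpha, beta0, gamma = 0.2, 1.0, 1.0       # illustrative parameters

X = rng.uniform(-5, 5, (n, dim))
for _ in range(100):
    I = np.array([f(x) for x in X])        # light intensity (cost)
    for i in range(n):
        for j in range(n):
            if I[j] < I[i]:                # j is brighter than i
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                I[i] = f(X[i])

best = X[np.argmin([f(x) for x in X])]
print(best, f(best))
```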
Transmission expansion planning (TEP) is now a significant power system optimization problem. The TEP problem is a large-scale, complex, nonlinear combinatorial problem of mixed-integer nature, in which the number of candidate solutions to be evaluated increases exponentially with system size. The objective of TEP is to determine the installation plans for new facilities, lines and other network equipment. The main goal of this paper centers on the application of Biogeography-Based Optimization (BBO), one of the mathematical methods (algorithms) for obtaining the optimal plan, to transmission planning systems.
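To make the BBO mechanism concrete, the sketch below performs rank-based migration on a generic continuous test function. The TEP cost model itself is not reproduced in the text, so a sphere function, linear migration rates and a small mutation probability are assumed here.

```python
import numpy as np

# BBO sketch: each habitat is a candidate solution; good habitats
# (low cost) emigrate features, poor habitats immigrate them.
def hsi(x):                                # habitat suitability (cost)
    return np.sum(x ** 2)

rng = np.random.default_rng(3)
n, dim = 10, 4
habitats = rng.uniform(-5, 5, (n, dim))

for _ in range(50):
    order = np.argsort([hsi(h) for h in habitats])
    habitats = habitats[order]             # best habitat first
    mu = np.linspace(1.0, 0.0, n)          # emigration: high for good habitats
    lam = 1.0 - mu                         # immigration: high for poor habitats
    new = habitats.copy()
    for i in range(n):
        for d in range(dim):
            if rng.random() < lam[i]:      # habitat i accepts an immigrant SIV
                j = rng.choice(n, p=mu / mu.sum())
                new[i, d] = habitats[j, d]
        if rng.random() < 0.05:            # simple mutation (assumed rate)
            new[i, rng.integers(dim)] = rng.uniform(-5, 5)
    habitats = new

best = min(habitats, key=hsi)
print(best, hsi(best))
```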
The PSO algorithm was developed from the social behavior patterns of organisms that live and interact within large groups. Because it converges faster than many other global optimization algorithms, PSO is easily applied to a variety of optimization problems. In the PSO technique, a population, called a swarm, of candidate solutions is encoded as particles in the search space. PSO begins with a random initialization of the population. The particles then move iteratively through the D-dimensional search space, updating their positions in search of the optimal solution. During the movement of the swarm, the vector Xi = (Xi1, Xi2, …, XiD) represents the current position of particle i, and Vi = (Vi1, Vi2, …, ViD) represents its velocity, which is kept in the range [−vmax, vmax]. The best previous position of a particle is denoted as its personal best Pbest, and the best position obtained by the whole population is denoted as the global best Gbest. PSO searches for the optimal solution by updating the velocity and position of each particle based on Pbest and Gbest; the next position of a particle is then computed from the new velocity value. This process is repeated for a fixed number of iterations or until a minimum error is achieved. The rate of change in the velocity and position of particle i in dimension d is given as

Vid = w·Vid + c1·r1·(Pbest,id − Xid) + c2·r2·(Gbest,d − Xid),
Xid = Xid + Vid,

where w is the inertia weight, c1 and c2 are the acceleration coefficients, and r1, r2 are random numbers drawn uniformly from [0, 1].
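A compact sketch of these updates, using a sphere function as a stand-in objective; the values of w, c1, c2, vmax and the search bounds are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

def f(x):
    return np.sum(x ** 2)                  # stand-in objective

rng = np.random.default_rng(4)
n, dim = 30, 2
w, c1, c2, vmax = 0.7, 1.5, 1.5, 1.0      # assumed parameter values

X = rng.uniform(-5, 5, (n, dim))           # positions Xi
V = rng.uniform(-vmax, vmax, (n, dim))     # velocities Vi
P = X.copy()                               # personal bests Pbest
pcost = np.array([f(x) for x in X])
G = P[np.argmin(pcost)].copy()             # global best Gbest

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)   # velocity update
    V = np.clip(V, -vmax, vmax)            # keep velocity in [-vmax, vmax]
    X = X + V                              # position update
    cost = np.array([f(x) for x in X])
    improved = cost < pcost                # refresh personal bests
    P[improved], pcost[improved] = X[improved], cost[improved]
    G = P[np.argmin(pcost)].copy()         # refresh global best

print(G, f(G))
```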
Swarm Intelligence is an emerging field for solving such combinatorial optimization problems. Swarm Intelligence, and in particular the Ant Colony metaheuristic, is inspired by the way ants search for food by building the shortest path between the food and the nest. Our objective in this project is to develop and enhance the Ant Colony metaheuristic for solving the Capacitated Vehicle Routing Problem.
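The sketch below shows the core Ant Colony mechanism (probabilistic edge selection by pheromone and distance, evaporation, reinforcement) on a plain tour-construction example. The capacity constraints of the full CVRP are omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8                                       # number of customer locations
pts = rng.uniform(0, 10, (n, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n)
tau = np.ones((n, n))                       # pheromone trails
alpha, beta, rho, ants = 1.0, 2.0, 0.5, 20  # assumed parameters

best_len, best_tour = np.inf, None
for _ in range(100):
    tours = []
    for _ in range(ants):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:                    # build a tour edge by edge
            i = tour[-1]
            cand = list(unvisited)
            # selection weight: pheromone^alpha * (1/distance)^beta
            w = (tau[i, cand] ** alpha) * ((1.0 / D[i, cand]) ** beta)
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt); unvisited.remove(nxt)
        length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1.0 - rho)                      # evaporation
    for length, tour in tours:              # reinforce short tours
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += 1.0 / length; tau[b, a] += 1.0 / length

print(best_tour, round(best_len, 2))
```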
It is a swarm-based intelligence algorithm influenced by the social behavior of animals, such as a flock of birds finding a food source or a school of fish protecting itself from a predator. A particle in PSO is analogous to a bird or fish flying through a search (problem) space. The movement of every particle is coordinated by a velocity that has both magnitude and direction. Each particle's position at any instant is influenced by its own best position and the position of the best particle in the problem space. The performance of a particle is measured by a fitness value that is problem specific. The PSO algorithm is similar to other evolutionary algorithms.
As real-world problems grow more complex and intricate day by day, researchers from various fields increasingly need fast, simple (with few control parameters) and effective optimization algorithms; new algorithms are required to cope with existing problems. SMO is a meta-heuristic, nature-inspired algorithm: a trial-and-error based, collective iterative strategy for global optimization over discrete and continuous spaces. It has performed better than other evolutionary and swarm intelligence based algorithms, giving superior results when tested on various benchmark problems. Because SMO involves few control parameters, it is easy to apply to many types of optimization problems. It is nevertheless evident that every algorithm has some inherent drawbacks; to overcome them, various modifications of the original SMO have been proposed. These modifications have enhanced the basic algorithm and improved its efficiency, so that it can be applied to further real-world complex optimization problems.
This approach proposes a "fuzzy multi-objective particle swarm optimization" (FMOPSO) for solving the TCQT (time-cost-quality trade-off) problem. The parameters of cost, time and quality are defined by fuzzy numbers, and a "fuzzy multi-attribute utility" procedure with constrained fuzzy arithmetic operations is used to evaluate and select the construction methods. The proposed method is justified and demonstrated through computational analyses.
To solve this kind of problem, many methods have been proposed, for instance NSGA-II [1], SPEA2 [2] and indicator-based EAs [3], [4]; these methods are essentially evolutionary algorithms, which start with an initial population.
where λ is a control parameter typically set to 2.59, and the state of the equation satisfies 0 ≤ y(n) ≤ 1. To generate the initial conditions for the neural network, equation (7) is first iterated 50 times and the values are discarded; it is then iterated again to initialize w0, w1, w2, w3, A0, B0, C0, D0, K0, K1, K2, K3, n0, n1, n2 and n3. For equation (7) to provide both randomness and reproducibility of the same initial conditions on every run, even on different computing machines, it must be ensured that the Key, the value λ = 2.59 and the 50 discarded warm-up iterations are used with the same precision.
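A sketch of this procedure is given below. Since equation (7) itself is not reproduced in the text, a simple expanding map y → (λ·y) mod 1, which is chaotic for λ > 1, stands in for it; the Key acts as the seed, while the 50 discarded warm-up iterations and the list of initialized parameters follow the text.

```python
LAM = 2.59          # control parameter from the text
WARMUP = 50         # initial iterations whose values are discarded

def step(y):
    # stand-in for equation (7), which is not reproduced in the text:
    # an expanding map on [0, 1), deterministic for a fixed float precision
    return (LAM * y) % 1.0

def init_params(key, names):
    y = key                          # the Key is the seed, 0 < key < 1
    for _ in range(WARMUP):          # iterate 50 times, discard the values
        y = step(y)
    params = {}
    for name in names:               # then draw one value per parameter
        y = step(y)
        params[name] = y
    return params

names = ["w0", "w1", "w2", "w3", "A0", "B0", "C0", "D0",
         "K0", "K1", "K2", "K3", "n0", "n1", "n2", "n3"]
print(init_params(key=0.3141592653589793, names=names))
```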
The effect of different factors on the performance characteristics over a given combination of runs can be examined by using an orthogonal array (OA). At the same time, an OA experimental design accommodates a number of input factors, permitting the study of their effects and interactions on the experimental output. The Taguchi optimization technique involves
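For illustration, the smallest two-level orthogonal array, L4(2^3), is shown below; in every pair of columns each level combination appears equally often, which is what allows main effects to be separated from only four runs. The array is standard; the check is only a demonstration of the balance property.

```python
import numpy as np
from itertools import combinations

# L4 (2^3) orthogonal array: 4 runs, 3 two-level factors (levels 0/1).
# In every pair of columns, each of the four level combinations
# (0,0), (0,1), (1,0), (1,1) appears exactly once, so main effects can
# be estimated from only 4 runs instead of the full 2^3 = 8.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# verify the balance property for every pair of columns
for a, b in combinations(range(3), 2):
    pairs = list(zip(L4[:, a], L4[:, b]))
    assert sorted(pairs) == [(0, 0), (0, 1), (1, 0), (1, 1)]
print("L4 is orthogonal:", L4.tolist())
```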
This set is passed to the OPF again to check for feasibility; if the solution cannot be improved further with those rules, the process stops. After the process has been repeated a specified number of times, the best candidate is selected as the solution. The paper we refer to [1] indicates that the solution obtained by the heuristic method is better than those of LR and GA, and that the time taken to produce the solution is significantly less than for those two methods.