CHAPTER 1
INTRODUCTION
This chapter discusses the project background, the problem statement, the project objectives, and the project scope.
1.1 PROJECT MOTIVATION
With the development of artificial intelligence in recent years, there has been growing interest in algorithms inspired by the observation of natural phenomena. These algorithms have proved to be viable methods for solving complex computational problems.
Various heuristic approaches have been adopted by researchers, including the genetic algorithm (Holland 1975), simulated annealing (Kirkpatrick et al. 1983), the immune system (Farmer et al. 1986), the ant system (Dorigo et al. 1996) and particle swarm optimization (Kennedy and Eberhart 1995; Kennedy and Eberhart 1997).
All of the above heuristics derive the desired problem solution from iterative cycles of interaction between population members. Unfortunately, no single algorithm can solve all optimization problems, and some algorithms are better suited to certain problems than others.
Optimization problems are widely encountered in many fields of science and technology. Such problems can be very complex because of the practical nature of the objective function or the model constraints. Most power system optimization problems, for example, have complex, nonlinear characteristics with heavy equality and inequality constraints.
Optimization is an important process in many industrial systems, and in this project it underlies the comparison of the algorithms on De Jong's fifth fitness function across different population sizes.
These algorithms have their own strengths and weaknesses, and new algorithms continue to emerge. Two such algorithms are discussed in this report.
In this chapter, we introduce the project, its purpose, and its applications. An overview of the system originally planned to be developed is also presented here.
The greedy technique is an algorithm design strategy that builds a solution through a sequence of choices, selecting at each step the option that looks best for the objective. Greedy algorithms produce good solutions to many mathematical problems whose aim is to find a configuration that maximizes or minimizes some value; they address optimization problems that proceed through a sequence of steps, with a set of choices at each step. Dynamic programming is an alternative approach that also determines the best choices, but a greedy algorithm always makes the choice that is best at the moment, maintaining the current best sub-solution as it goes, in the hope of producing an optimal overall solution. For some problems this local strategy is provably optimal: a greedy algorithm always gives an optimal solution to the minimum spanning tree (MST) problem. Examples of greedy algorithms include Dijkstra's shortest-path algorithm and Prim's and Kruskal's MST algorithms.
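As an illustration of the greedy strategy on the MST problem mentioned above, the following sketch implements Kruskal's algorithm (the example graph and function names are illustrative, not part of the project):

```python
# Sketch of a greedy algorithm: Kruskal's minimum spanning tree.
# At each step it commits to the cheapest edge that does not create a
# cycle -- the locally best choice, which for MST is globally optimal.

def kruskal(num_nodes, edges):
    """edges: list of (weight, u, v); returns (total_weight, chosen_edges)."""
    parent = list(range(num_nodes))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # greedy: try cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that would form a cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen
```

Sorting the edges once and never revisiting a committed choice is what makes the algorithm greedy; the union-find structure only serves to detect cycles cheaply.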
Multi-swarm optimization is a variant of particle swarm optimization (PSO) based on the use of multiple sub-swarms instead of one (standard) swarm. The general approach in multi-swarm optimization is that each sub-swarm focuses on a specific region while a diversification method decides where and when to launch the sub-swarms. The multi-swarm framework is especially suited to the optimization of multimodal problems, where multiple (local) optima exist.
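The multi-swarm framework can be sketched as follows: several independent PSO sub-swarms, each seeded in a different region of a multimodal one-dimensional function, with the overall best taken across all sub-swarms (the region boundaries, test function, and coefficient values here are illustrative assumptions, not from the project):

```python
# Minimal multi-swarm sketch: one PSO sub-swarm per region of a
# multimodal function; the best result across sub-swarms is kept.
import math
import random

def multimodal(x):
    # many local minima; the global minimum is at x = 0
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

def run_subswarm(lo, hi, particles=10, iters=100):
    pos = [random.uniform(lo, hi) for _ in range(particles)]
    vel = [0.0] * particles
    pbest = list(pos)
    gbest = min(pbest, key=multimodal)
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))     # social pull
            pos[i] += vel[i]
            if multimodal(pos[i]) < multimodal(pbest[i]):
                pbest[i] = pos[i]
                if multimodal(pos[i]) < multimodal(gbest):
                    gbest = pos[i]
    return gbest

# Diversification step: launch one sub-swarm per region, keep the best.
regions = [(-5, -2), (-2, 2), (2, 5)]
best = min((run_subswarm(lo, hi) for lo, hi in regions), key=multimodal)
```

Seeding each sub-swarm in its own region is the simplest possible diversification rule; more sophisticated schemes relaunch sub-swarms dynamically when they converge.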
An exact optimisation method is one that is guaranteed to find all optimal solutions; in principle, the optimality of the generated solution can be proved mathematically. Exact optimisation is therefore also termed mathematical optimisation. However, the exact approach is usually impractical, because the effort of solving an optimisation problem exactly typically grows exponentially with the problem size. For example, when a problem is solved by a brute-force approach, the execution time increases exponentially with respect to the dimensions of the problem.
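A toy example (assumed for illustration, not taken from the project) shows why brute-force exact optimisation scales exponentially: solving a 0/1 knapsack problem by enumerating all 2**n item subsets guarantees the optimum, but doubles the work with every extra item:

```python
# Brute-force exact solver for the 0/1 knapsack problem.
# Examines all 2**n subsets -- exponential in the number of items.
from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            if w <= capacity:               # feasibility check
                v = sum(values[i] for i in subset)
                if v > best_value:          # keep the best feasible subset
                    best_value, best_subset = v, subset
    return best_value, best_subset
```

With 20 items this already means over a million subsets, and with 40 items over a trillion, which is exactly the impracticality the paragraph above describes.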
The first key step is fuzzification, which uses trapezoidal membership functions on the input numbers to develop the initial fuzzy numbers.
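A minimal sketch of this fuzzification step, assuming a standard trapezoidal membership function with corners a <= b <= c <= d (the corner values in the test usage are illustrative):

```python
# Trapezoidal membership function used in fuzzification: maps a crisp
# input x to a membership degree in [0, 1].
def trapezoidal(x, a, b, c, d):
    """Degree of membership of x in the trapezoidal fuzzy number (a,b,c,d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:               # flat top of the trapezoid
        return 1.0
    if x < b:                     # rising edge
        return (x - a) / (b - a)
    return (d - x) / (d - c)      # falling edge
```

For example, with corners (0, 2, 8, 10), an input of 5 lies on the flat top and receives full membership, while inputs of 1 and 9 lie halfway up the edges and receive a membership degree of 0.5.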
Particle swarm optimization (PSO) is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) that particle has achieved so far, called pbest. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this global best is called gbest. Each particle therefore consists of data representing a possible solution, a velocity value indicating how much the data can be changed, and a personal best (pbest) value indicating the closest the particle's data has ever come to the target.
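The per-iteration update described above can be written, for a single one-dimensional particle, as the following sketch (the coefficient values w, c1, c2 are common textbook defaults, not values prescribed by this project):

```python
# One velocity/position update for a single 1-D PSO particle: inertia
# plus random pulls toward the personal best and the global best.
import random

def pso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = random.random(), random.random()
    velocity = (w * velocity
                + c1 * r1 * (pbest - position)    # pull toward pbest
                + c2 * r2 * (gbest - position))   # pull toward gbest
    return position + velocity, velocity
```

Note that when a particle sits exactly at both bests with zero velocity, the update leaves it in place, which is why the swarm settles once it agrees on an optimum.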
The original PSO algorithm was developed by Kennedy and Eberhart. It was inspired by swarming behaviour such as the schooling of fish and the flocking of birds. In nature, neither kind of swarm performs any global optimization; the collective behaviour instead provides, for example, a technique for confusing predators.
Obtaining a good, nearly optimal solution with a reasonable amount of computational effort is the major motivating factor for this method. Integer programming can find the exact optimal solution, but it is computationally intensive.
PSO developed from swarm intelligence and is inspired by the social behaviour of bird flocking and fish schooling. It is an optimization algorithm with implicit parallelism that can easily handle non-differentiable objective functions. The algorithm uses a number of particle vectors moving around the solution space in search of the optimal solution, with every particle acting as a point in N-dimensional space. Each particle keeps track of its position in the solution space at each iteration, and the best solution it has obtained so far is called its personal best (pbest); this value reflects the personal experience of each particle vector. The other best value tracked by PSO is the best found in the neighbourhood of that particle, called gbest.
Differential evolution (DE) has recently emerged as one of the most capable and flexible evolutionary optimizers for continuous parameter spaces. Because the development of the DE algorithm has been rapid and research on and with DE has reached an advanced state, there is a real need to study its recent aspects thoroughly. Considering the tremendous progress of DE research and its applications in many areas of science and technology, it is important to review the basic concepts in the most recent literature and to point out some important future avenues of research. Surveys of DE therefore summarize and organize the information on these developments, starting with the fundamental ideas and definition of differential evolution, covering hybridization of DE with other optimizers, and reviewing the multi-faceted literature on DE applications, as well as presenting some interesting open problems and future research issues.
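The classic DE scheme, often written DE/rand/1/bin, can be sketched compactly: mutation builds a donor vector from three distinct population members, binomial crossover mixes the donor with the target, and greedy selection keeps the better of target and trial (the F and CR values below are common defaults, and the sphere test function is an illustrative assumption):

```python
# One generation of DE/rand/1/bin for minimisation.
import random

def de_step(population, fitness, F=0.5, CR=0.9):
    n, dim = len(population), len(population[0])
    new_pop = []
    for i, target in enumerate(population):
        # mutation: three distinct members, all different from the target
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        donor = [population[r1][d] + F * (population[r2][d] - population[r3][d])
                 for d in range(dim)]
        # binomial crossover: at least one gene always comes from the donor
        j_rand = random.randrange(dim)
        trial = [donor[d] if (random.random() < CR or d == j_rand) else target[d]
                 for d in range(dim)]
        # greedy selection: never accept a worse vector
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

# Example usage: minimise the sphere function sum(x_d**2).
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(12)]
for _ in range(50):
    pop = de_step(pop, sphere)
best = min(pop, key=sphere)
```

Because selection is greedy, the best fitness in the population can never get worse from one generation to the next, which is one reason DE is considered robust on continuous problems.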
Due to the lack of an analytical mechanism, creating a design for a swarm-based system is difficult.