Classic Algorithms Collection
School of Information Science and Engineering
University of Jinan
Yuehui Chen  [email protected]
http://cilab.ujn.edu.cn
Genetic Algorithms
1. Foundations of Genetic Algorithms
   1.1 Introduction of Genetic Algorithms
   1.2 General Structure of Genetic Algorithms
   1.3 Major Advantages
2. Example with Simple Genetic Algorithms
   2.1 Representation
   2.2 Initial Population
   2.3 Evaluation
   2.4 Genetic Operators
3. Encoding Issue
   3.1 Coding Space and Solution Space
   3.2 Selection
4. Genetic Operators
   4.1 Conventional Operators
   4.2 Arithmetical Operators
   4.3 Direction-based Operators
   4.4 Stochastic Operators
5. Adaptation of Genetic Algorithms
   5.1 Structure Adaptation
   5.2 Parameters Adaptation
6. Hybrid Genetic Algorithms
   6.1 Adaptive Hybrid GA Approach
   6.2 Parameter Control Approach of GA
   6.3 Parameter Control Approach using Fuzzy Logic Controller
   6.4 Design of aHGA using Conventional Heuristics and FLC
Genetic Algorithms
1. Foundations of Genetic Algorithms
   1.1 Introduction of Genetic Algorithms
   1.2 General Structure of Genetic Algorithms
   1.3 Major Advantages
2. Example with Simple Genetic Algorithms
3. Encoding Issue
4. Genetic Operators
5. Adaptation of Genetic Algorithms
6. Hybrid Genetic Algorithms
1.1 Introduction of Genetic Algorithms
₪ Since the 1960s, there has been increasing interest in imitating living beings to develop powerful algorithms for NP-hard optimization problems.
₪ A commonly accepted term for such techniques is Evolutionary Computation or Evolutionary Optimization methods.
₪ The best-known algorithms in this class include:
■ Genetic Algorithms (GA), developed by Dr. Holland.
  Holland, J.: Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975; MIT Press, Cambridge, MA, 1992.
  Goldberg, D.: Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
■ Evolution Strategies (ES), developed by Dr. Rechenberg and Dr. Schwefel.
  Rechenberg, I.: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog, 1973.
  Schwefel, H.: Evolution and Optimum Seeking, John Wiley & Sons, 1995.
■ Evolutionary Programming (EP), developed by Dr. Fogel.
  Fogel, L., A. Owens & M. Walsh: Artificial Intelligence through Simulated Evolution, John Wiley & Sons, 1966.
■ Genetic Programming (GP), developed by Dr. Koza.
  Koza, J. R.: Genetic Programming, MIT Press, 1992.
  Koza, J. R.: Genetic Programming II, MIT Press, 1994.
1.1 Introduction of Genetic Algorithms
₪ Genetic Algorithms (GA), as powerful and broadly applicable stochastic search and optimization techniques, are perhaps the most widely known type of Evolutionary Computation method today.
₪ In the past few years, the GA community has turned much of its attention to optimization problems of industrial engineering, resulting in a fresh body of research and applications.
■ Goldberg, D.: Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
■ Fogel, D.: Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway, NJ, 1995.
■ Back, T.: Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York, 1996.
■ Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed., Springer-Verlag, New York, 1996.
■ Gen, M. & R. Cheng: Genetic Algorithms and Engineering Design, John Wiley, New York, 1997.
■ Gen, M. & R. Cheng: Genetic Algorithms and Engineering Optimization, John Wiley, New York, 2000.
■ Deb, K.: Multi-objective Optimization Using Evolutionary Algorithms, John Wiley, 2001.
₪ A bibliography on genetic algorithms has been collected by Alander.
■ Alander, J.: Indexed Bibliography of Genetic Algorithms: 1957-1993, Art of CAD Ltd., Espoo, Finland, 1994.
1.2 General Structure of Genetic Algorithms
₪ In general, a GA has five basic components, as summarized by Michalewicz.
  Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed., Springer-Verlag, New York, 1996.
1. A genetic representation of potential solutions to the problem.
2. A way to create a population (an initial set of potential solutions).
3. An evaluation function rating solutions in terms of their fitness.
4. Genetic operators that alter the genetic composition of offspring (selection, crossover, mutation, etc.).
5. Parameter values that the genetic algorithm uses (population size, probabilities of applying genetic operators, etc.).
1.2 General Structure of Genetic Algorithms
₪ Genetic Representation and Initialization:
■ The genetic algorithm maintains a population P(t) of chromosomes or individuals vk(t), k = 1, 2, …, popSize for generation t.
■ Each chromosome represents a potential solution to the problem at hand.
₪ Evaluation:
■ Each chromosome is evaluated to give some measure of its fitness eval(vk).
₪ Genetic Operators:
■ Some chromosomes undergo stochastic transformations by means of genetic operators to form new chromosomes, i.e., offspring.
■ There are two kinds of transformation:
  ■ Crossover, which creates new chromosomes by combining parts from two chromosomes.
  ■ Mutation, which creates new chromosomes by making changes in a single chromosome.
■ New chromosomes, called offspring C(t), are then evaluated.
₪ Selection:
■ A new population is formed by selecting the more fit chromosomes from the parent population and the offspring population.
₪ Best solution:
■ After several generations, the algorithm converges to the best chromosome, which hopefully represents an optimal or suboptimal solution to the problem.
1.2 General Structure of Genetic Algorithms
[Figure: The general structure of genetic algorithms — initial solutions are encoded into a population P(t) of binary chromosomes; crossover and mutation yield offspring CC(t) and CM(t); decoding and fitness computation evaluate the candidate solutions; roulette wheel selection forms the new population from P(t) + C(t); the loop repeats until the termination condition holds, then the best solution is output. Gen, M. & R. Cheng: Genetic Algorithms and Engineering Design, John Wiley, New York, 1997.]
1.2 General Structure of Genetic Algorithms
₪ Procedure of Simple GA

procedure: Simple GA
input: GA parameters
output: best solution
begin
  t ← 0;                                // t: generation number
  initialize P(t) by encoding routine;  // P(t): population of chromosomes
  fitness eval(P) by decoding routine;
  while (not termination condition) do
    crossover P(t) to yield C(t);       // C(t): offspring
    mutation P(t) to yield C(t);
    fitness eval(C) by decoding routine;
    select P(t+1) from P(t) and C(t);
    t ← t+1;
  end
  output best solution;
end
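The Simple GA procedure above can be sketched in Python. This is a minimal, hypothetical sketch rather than any author's implementation: it plugs a toy OneMax fitness (the number of 1-bits in the chromosome) into the generational loop, with one-cut-point crossover, bit-flip mutation, and fitness-proportional selection over P(t) + C(t); all function and parameter names here are ours.

```python
import random

def simple_ga(fitness, length=20, pop_size=10, p_c=0.25, p_m=0.01,
              max_gen=100, seed=1):
    """Generational loop mirroring the Simple GA pseudocode (sketch)."""
    rng = random.Random(seed)
    # initialize P(t) with random binary chromosomes
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(max_gen):
        offspring = []
        # crossover P(t) to yield C(t): one-cut-point crossover
        for _ in range(pop_size // 2):
            if rng.random() <= p_c:
                a, b = rng.sample(pop, 2)
                cut = rng.randrange(1, length)
                offspring += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # mutation P(t) to yield C(t): flip each gene with probability p_m
        for chrom in pop:
            mutant = [1 - g if rng.random() <= p_m else g for g in chrom]
            if mutant != chrom:
                offspring.append(mutant)
        # select P(t+1) from P(t) and C(t) by fitness-proportional sampling
        joint = pop + offspring
        weights = [fitness(c) for c in joint]
        pop = [list(c) for c in rng.choices(joint, weights=weights, k=pop_size)]
        best = max(pop + [best], key=fitness)  # track best-so-far
    return best

onemax = sum  # toy fitness: number of 1-bits
best = simple_ga(onemax)
print(onemax(best))
```

With fitness-proportional selection and best-so-far tracking, the returned chromosome is at least as fit as the best member of the initial random population.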
1.3 Major Advantages
■ Generally, an algorithm for solving an optimization problem is a sequence of computational steps which asymptotically converges to an optimal solution.
■ Most classical optimization methods generate a deterministic sequence of computations based on the gradient or higher-order derivatives of the objective function.
■ These methods are applied to a single point in the search space.
■ The point is then improved along the deepest descending direction gradually through iterations.
■ This point-to-point approach runs the danger of falling into local optima.
[Figure: Conventional method (point-to-point approach) — start from an initial single point, apply problem-specific improvement, and iterate until the termination condition holds.]
1.3 Major Advantages
■ Genetic algorithms perform a multi-directional search by maintaining a population of potential solutions.
■ This population-to-population approach helps the search escape from local optima.
■ The population undergoes a simulated evolution: at each generation the relatively good solutions are reproduced, while the relatively bad solutions die.
■ Genetic algorithms use probabilistic transition rules to select which solutions are reproduced and which die, so as to guide the search toward regions of the search space with likely improvement.
[Figure: Genetic algorithm (population-to-population approach) — start from an initial population of points, apply problem-independent improvement, and iterate until the termination condition holds.]
1.3 Major Advantages — Random Search + Directed Search

max f(x)
s.t. 0 ≤ x ≤ ub

[Figure: A multimodal fitness landscape f(x) over the search space 0 ≤ x ≤ ub, with sample points x1, …, x5 falling on several local optima and one global optimum.]
1.3 Major Advantages
₪ Example of Genetic Algorithm for Unconstrained Numerical Optimization (Michalewicz, 1996)

max f(x) = x·sin(10π x) + 1.0
s.t. -1.0 ≤ x ≤ 2.0
1.3 Major Advantages
₪ Genetic algorithms have received considerable attention regarding their potential as a novel optimization technique. There are three major advantages when applying genetic algorithms to optimization problems:
■ Genetic algorithms do not have many mathematical requirements about the optimization problems.
  ■ Due to their evolutionary nature, genetic algorithms search for solutions without regard to the specific inner workings of the problem.
  ■ Genetic algorithms can handle any kind of objective function and any kind of constraint, i.e., linear or nonlinear, defined on discrete, continuous, or mixed search spaces.
■ The ergodicity (遍历性) of evolution operators makes genetic algorithms very effective at performing global search (in probability).
  ■ Traditional approaches perform local search by a convergent stepwise procedure, which compares the values of nearby points and moves to the relatively optimal points.
  ■ Global optima can be found only if the problem possesses certain convexity properties that essentially guarantee that any local optimum is a global optimum.
■ Genetic algorithms provide great flexibility to hybridize with domain-dependent heuristics to make an efficient implementation for a specific problem.
Genetic Algorithms
1. Foundations of Genetic Algorithms
2. Example with Simple Genetic Algorithms
   2.1 Representation
   2.2 Initial Population
   2.3 Evaluation
   2.4 Genetic Operators
3. Encoding Issue
4. Genetic Operators
5. Adaptation of Genetic Algorithms
6. Hybrid Genetic Algorithms
2. Example with Simple Genetic Algorithms
₪ We explain in detail how a genetic algorithm actually works with a simple example.
₪ We follow the implementation of genetic algorithms given by Michalewicz.
■ Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed., Springer-Verlag, New York, 1996.
₪ The numerical example of an unconstrained optimization problem is given as follows:

max f(x1, x2) = 21.5 + x1·sin(4π x1) + x2·sin(20π x2)
s.t. -3.0 ≤ x1 ≤ 12.1
      4.1 ≤ x2 ≤ 5.8
2. Example with Simple Genetic Algorithms

max f(x1, x2) = 21.5 + x1·sin(4π x1) + x2·sin(20π x2)
s.t. -3.0 ≤ x1 ≤ 12.1
      4.1 ≤ x2 ≤ 5.8

f = 21.5 + x1 Sin[4 Pi x1] + x2 Sin[20 Pi x2];
Plot3D[f, {x1, -3.0, 12.1}, {x2, 4.1, 5.8},
  PlotPoints -> 19, AxesLabel -> {x1, x2, "f(x1, x2)"}];

by Mathematica 4.1
2.1 Representation
₪ Binary String Representation

The domain of xj is [aj, bj] and the required precision is four places after the decimal point.
The precision requirement implies that the range of the domain of each variable should be divided into at least (bj - aj)×10^4 size ranges.
The required bits (denoted by mj) for a variable are calculated as follows:

    2^(mj - 1) < (bj - aj)×10^4 ≤ 2^mj - 1

The mapping from a binary string to a real number for variable xj is completed as follows:

    xj = aj + decimal(substringj) × (bj - aj) / (2^mj - 1)
2.1 Representation
₪ Binary String Encoding

vj: 33 bits
    000001010100101001 | 101111011111110
     x1 (18 bits)          x2 (15 bits)

The precision requirement implies that the range of the domain of each variable should be divided into at least (bj - aj)×10^4 size ranges.

x1: (12.1 - (-3.0)) × 10,000 = 151,000;  2^17 < 151,000 ≤ 2^18 - 1,  so m1 = 18 bits
x2: (5.8 - 4.1) × 10,000 = 17,000;  2^14 < 17,000 ≤ 2^15 - 1,  so m2 = 15 bits

required bits: m = m1 + m2 = 18 + 15 = 33 bits
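The bit-length calculation above can be checked with a short Python sketch (our own helper name, not from the slides): it finds the smallest mj satisfying 2^(mj - 1) < (bj - aj)×10^precision ≤ 2^mj - 1.

```python
def required_bits(a, b, precision=4):
    """Smallest m such that 2**m - 1 covers (b - a) * 10**precision
    subintervals, i.e. 2**(m-1) < (b - a)*10**precision <= 2**m - 1."""
    n = (b - a) * 10 ** precision  # minimum number of size ranges
    m = 1
    while 2 ** m - 1 < n:
        m += 1
    return m

m1 = required_bits(-3.0, 12.1)  # domain of x1
m2 = required_bits(4.1, 5.8)    # domain of x2
print(m1, m2, m1 + m2)          # -> 18 15 33
```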
2.1 Representation
₪ Procedure of Binary String Encoding

input: domain [aj, bj] of xj, (j = 1, 2)
output: chromosome v

step 1: The domain of xj is [aj, bj] and the required precision is four places after the decimal point.
step 2: The precision requirement implies that the range of the domain of each variable should be divided into at least (bj - aj)×10^4 size ranges.
step 3: The required bits (denoted by mj) for a variable are calculated as follows:
        2^(mj - 1) < (bj - aj)×10^4 ≤ 2^mj - 1
step 4: A chromosome v is randomly generated, which has m genes, where m is the sum of mj (j = 1, 2).
2.1 Representation
₪ Procedure of Binary String Decoding

input: substringj
output: a real number xj

step 1: Convert a substring (a binary string) to a decimal number.
step 2: The mapping for variable xj is completed as follows:
        xj = aj + decimal(substringj) × (bj - aj) / (2^mj - 1)
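The decoding procedure is a one-liner in Python. A minimal sketch (helper name is ours), checked against chromosome v1 from the initial population below:

```python
def decode(substring, a, b):
    """Map a binary substring to a real number in [a, b]."""
    m = len(substring)
    return a + int(substring, 2) * (b - a) / (2 ** m - 1)

v1 = "000001010100101001101111011111110"  # 18-bit x1 part + 15-bit x2 part
x1 = decode(v1[:18], -3.0, 12.1)
x2 = decode(v1[18:], 4.1, 5.8)
print(round(x1, 6), round(x2, 6))  # -> -2.687969 5.361653
```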
2.2 Initial Population
₪ The initial population is randomly generated as follows:

v1  = [000001010100101001101111011111110] = [x1 x2] = [-2.687969 5.361653]
v2  = [001110101110011000000010101001000] = [x1 x2] = [ 0.474101 4.170144]
v3  = [111000111000001000010101001000110] = [x1 x2] = [10.419457 4.661461]
v4  = [100110110100101101000000010111001] = [x1 x2] = [ 6.159951 4.109598]
v5  = [000010111101100010001110001101000] = [x1 x2] = [-2.301286 4.477282]
v6  = [111110101011011000000010110011001] = [x1 x2] = [11.788084 4.174346]
v7  = [110100010011111000100110011101101] = [x1 x2] = [ 9.342067 5.121702]
v8  = [001011010100001100010110011001100] = [x1 x2] = [-0.330256 4.694977]
v9  = [111110001011101100011101000111101] = [x1 x2] = [11.671267 4.873501]
v10 = [111101001110101010000010101101010] = [x1 x2] = [11.446273 4.171908]
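Generating such a random initial population is straightforward. A hedged sketch (names and the seed are ours; a different seed yields a different population than the one listed above):

```python
import random

def init_population(pop_size, length, seed=None):
    """Randomly generate pop_size binary chromosomes of the given length."""
    rng = random.Random(seed)
    return ["".join(rng.choice("01") for _ in range(length))
            for _ in range(pop_size)]

pop = init_population(pop_size=10, length=33, seed=42)
print(len(pop), len(pop[0]))  # -> 10 33
```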
2.3 Evaluation
₪ The process of evaluating the fitness of a chromosome consists of the following three steps:

input: chromosome vk, k = 1, 2, ..., popSize
output: the fitness eval(vk)

step 1: Convert the chromosome's genotype to its phenotype, i.e., convert the binary string into real values xk = (xk1, xk2), k = 1, 2, …, popSize.
step 2: Evaluate the objective function f(xk), k = 1, 2, …, popSize.
step 3: Convert the value of the objective function into fitness. For a maximization problem, the fitness is simply equal to the value of the objective function:

    eval(vk) = f(xk), k = 1, 2, …, popSize

Example: (x1 = -2.687969, x2 = 5.361653)
f(x1, x2) = 21.5 + x1·sin(4π x1) + x2·sin(20π x2)
eval(v1) = f(-2.687969, 5.361653) = 19.805119
2.3 Evaluation
₪ An evaluation function plays the role of the environment, rating chromosomes in terms of their fitness.
₪ The fitness values of the above chromosomes are as follows:

eval(v1)  = f(-2.687969, 5.361653) = 19.805119
eval(v2)  = f(0.474101, 4.170144)  = 17.370896
eval(v3)  = f(10.419457, 4.661461) =  9.590546
eval(v4)  = f(6.159951, 4.109598)  = 29.406122
eval(v5)  = f(-2.301286, 4.477282) = 15.686091
eval(v6)  = f(11.788084, 4.174346) = 11.900541
eval(v7)  = f(9.342067, 5.121702)  = 17.958717
eval(v8)  = f(-0.330256, 4.694977) = 19.763190
eval(v9)  = f(11.671267, 4.873501) = 26.401669
eval(v10) = f(11.446273, 4.171908) = 10.252480

₪ It is clear that chromosome v4 is the strongest one and that chromosome v3 is the weakest one.
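The objective function of the example translates directly to Python; the values above can be reproduced (up to small rounding in the printed x-values):

```python
import math

def f(x1, x2):
    """Objective function of the numerical example."""
    return (21.5 + x1 * math.sin(4 * math.pi * x1)
                 + x2 * math.sin(20 * math.pi * x2))

print(round(f(-2.687969, 5.361653), 4))  # eval(v1), about 19.8051
print(round(f(6.159951, 4.109598), 4))   # eval(v4), about 29.4061
```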
2.4 Genetic Operators
₪ Selection:
■ In most practice, a roulette wheel approach is adopted as the selection procedure; it is a fitness-proportional selection and can select a new population with respect to the probability distribution based on fitness values.
■ The roulette wheel can be constructed with the following steps:

input: population P(t-1), C(t-1)
output: population P(t), C(t)

step 1: Calculate the total fitness for the population:

    F = Σ_{k=1}^{popSize} eval(vk)

step 2: Calculate the selection probability pk for each chromosome vk:

    pk = eval(vk) / F,  k = 1, 2, ..., popSize

step 3: Calculate the cumulative probability qk for each chromosome vk:

    qk = Σ_{j=1}^{k} pj,  k = 1, 2, ..., popSize

step 4: Generate a random number r from the range [0, 1].
step 5: If r ≤ q1, then select the first chromosome v1; otherwise, select the kth chromosome vk (2 ≤ k ≤ popSize) such that qk-1 < r ≤ qk.
2.4 Genetic Operators
₪ Illustration of Selection:

step 1: Calculate the total fitness F for the population:

    F = Σ_{k=1}^{10} eval(vk) = 178.135372

step 2: Calculate the selection probability pk for each chromosome vk:

    p1 = 0.111180, p2 = 0.097515, p3 = 0.053839, p4 = 0.165077, p5 = 0.088057,
    p6 = 0.066806, p7 = 0.100815, p8 = 0.110945, p9 = 0.148211, p10 = 0.057554

step 3: Calculate the cumulative probability qk for each chromosome vk:

    q1 = 0.111180, q2 = 0.208695, q3 = 0.262534, q4 = 0.427611, q5 = 0.515668,
    q6 = 0.582475, q7 = 0.683290, q8 = 0.794234, q9 = 0.942446, q10 = 1.000000

step 4: Generate a sequence of random numbers r from the range [0, 1]:

    0.301431, 0.322062, 0.766503, 0.881893, 0.350871,
    0.583392, 0.177618, 0.343242, 0.032685, 0.197577
2.4 Genetic Operators
₪ Illustration of Selection:

step 5: q3 < r1 = 0.301431 ≤ q4 means that chromosome v4 is selected for the new population; q3 < r2 = 0.322062 ≤ q4 means that chromosome v4 is selected again; and so on. Finally, the new population consists of the following chromosomes:

v1'  = [100110110100101101000000010111001] (v4)
v2'  = [100110110100101101000000010111001] (v4)
v3'  = [001011010100001100010110011001100] (v8)
v4'  = [111110001011101100011101000111101] (v9)
v5'  = [100110110100101101000000010111001] (v4)
v6'  = [110100010011111000100110011101101] (v7)
v7'  = [001110101110011000000010101001000] (v2)
v8'  = [100110110100101101000000010111001] (v4)
v9'  = [000001010100101001101111011111110] (v1)
v10' = [001110101110011000000010101001000] (v2)
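Steps 1-5 of roulette wheel selection can be sketched in Python (function name is ours). Feeding it the fitness values and random numbers from the illustration reproduces the selected indices:

```python
from bisect import bisect_left
from itertools import accumulate

def roulette_select(fitnesses, r):
    """Return the 1-based index k with q_{k-1} < r <= q_k, where q are
    cumulative fitness-proportional selection probabilities."""
    total = sum(fitnesses)                                  # step 1: F
    q = list(accumulate(fit / total for fit in fitnesses))  # steps 2-3
    return bisect_left(q, r) + 1                            # step 5

evals = [19.805119, 17.370896, 9.590546, 29.406122, 15.686091,
         11.900541, 17.958717, 19.763190, 26.401669, 10.252480]
rs = [0.301431, 0.322062, 0.766503, 0.881893, 0.350871,
      0.583392, 0.177618, 0.343242, 0.032685, 0.197577]
print([roulette_select(evals, r) for r in rs])
# -> [4, 4, 8, 9, 4, 7, 2, 4, 1, 2], i.e. v4, v4, v8, v9, v4, v7, v2, v4, v1, v2
```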
2.4 Genetic Operators
₪ Crossover (One-cut-point Crossover)
■ The crossover used here is the one-cut-point method, which randomly selects one cut point.
■ It exchanges the right parts of two parents to generate offspring.
■ Consider the two chromosomes below, where the cut point has been randomly selected after the 17th gene:

v1 = [100110110100101101000000010111001]
v2 = [001110101110011000000010101001000]
        crossing point after the 17th gene
c1 = [100110110100101100000010101001000]
c2 = [001110101110011001000000010111001]
2.4 Genetic Operators
₪ Procedure of One-cut-point Crossover:

procedure: One-cut-point Crossover
input: pC, parents Pk, k=1, 2, ..., popSize
output: offspring Ck
begin
  for k ← 1 to popSize/2 do          // popSize: population size
    if pC ≥ random [0, 1] then       // pC: the probability of crossover
      i ← 0; j ← 0;
      repeat
        i ← random [1, popSize];
        j ← random [1, popSize];
      until (i ≠ j)
      p ← random [1, l-1];           // p: the cut position, l: the length of chromosome
      Ci ← Pi[1: p] // Pj[p+1: l];   // "//" denotes concatenation
      Cj ← Pj[1: p] // Pi[p+1: l];
    end
  end
  output offspring Ck;
end
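The crossover itself is two string slices. A minimal sketch (function name is ours), checked against the worked example with the cut after the 17th gene:

```python
def one_cut_crossover(p1, p2, cut):
    """Exchange the right parts of two parent strings after position `cut`."""
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

v1 = "100110110100101101000000010111001"
v2 = "001110101110011000000010101001000"
c1, c2 = one_cut_crossover(v1, v2, cut=17)
print(c1)  # -> 100110110100101100000010101001000
print(c2)  # -> 001110101110011001000000010111001
```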
2.4 Genetic Operators
₪ Mutation
■ Mutation alters one or more genes with a probability equal to the mutation rate.
■ Assume that the 16th gene of the chromosome v1 is selected for mutation.
■ Since the gene is 1, it is flipped to 0. The chromosome after mutation is:

v1 = [100110110100101101000000010111001]
        mutating point at the 16th gene
c1 = [100110110100101001000000010111001]
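A bit-flip at a single position can be sketched as follows (function name is ours); flipping the 16th gene of v1 and flipping it back recovers the parent:

```python
def mutate_gene(chromosome, pos):
    """Flip the gene at 1-based position `pos` of a binary string."""
    i = pos - 1
    flipped = "1" if chromosome[i] == "0" else "0"
    return chromosome[:i] + flipped + chromosome[i + 1:]

v1 = "100110110100101101000000010111001"
c1 = mutate_gene(v1, 16)
print(c1)  # the 16th gene, a 1, becomes 0
```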
2. Example with Simple Genetic Algorithms
₪ Procedure of Mutation:

procedure: Mutation
input: pM, parents Pk, k=1, 2, ..., popSize
output: offspring Ck
begin
  for k ← 1 to popSize do       // popSize: population size
    for j ← 1 to l do           // l: the length of chromosome
      if pM ≥ random [0, 1] then          // pM: the probability of mutation
        Ck ← Pk[1: j-1] // Pk'[ j ] // Pk[ j+1: l ];   // Pk'[ j ]: the flipped gene
      end
    end
  end
  output offspring Ck;
end

₪ Illustration of Mutation: Assume that pM = 0.01. Over the whole population (10 chromosomes × 33 bits = 330 bit positions), the following bits draw a random number below pM and are selected for mutation:

bitPos   chromNum   bitNo   randomNum
105      4          6       0.009857
164      5          32      0.003113
199      7          1       0.000946
329      10         32      0.001282
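The mapping from a population-wide bit position to (chromosome number, bit number) in the table above is a single divmod. A small sketch (helper name is ours):

```python
def locate_bit(bit_pos, chrom_len=33):
    """Map a 1-based bit position over the whole population to
    (chromosome number, bit number), both 1-based."""
    chrom, bit = divmod(bit_pos - 1, chrom_len)
    return chrom + 1, bit + 1

for pos in (105, 164, 199, 329):
    print(pos, locate_bit(pos))
# prints 105 (4, 6) / 164 (5, 32) / 199 (7, 1) / 329 (10, 32)
```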
2. Example with Simple Genetic Algorithms
₪ Next Generation

v1'  = [100110110100101101000000010111001], f(6.159951, 4.109598)  = 29.406122
v2'  = [100110110100101101000000010111001], f(6.159951, 4.109598)  = 29.406122
v3'  = [001011010100001100010110011001100], f(-0.330256, 4.694977) = 19.763190
v4'  = [111110001011101100011101000111101], f(11.907206, 4.873501) =  5.702781
v5'  = [100110110100101101000000010111001], f(8.024130, 4.170248)  = 19.910250
v6'  = [110100010011111000100110011101101], f(9.342067, 5.121702)  = 17.958717
v7'  = [100110110100101101000000010111001], f(6.159951, 4.109598)  = 29.406122
v8'  = [100110110100101101000000010111001], f(6.159951, 4.109598)  = 29.406122
v9'  = [000001010100101001101111011111110], f(-2.687969, 5.361653) = 19.805119
v10' = [001110101110011000000010101001000], f(0.474101, 4.170248)  = 17.370896
2. Example with Simple Genetic Algorithms
₪ Procedure of GA for Unconstrained Optimization

procedure: GA for Unconstrained Optimization (uO)
input: uO data set, GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by binary string encoding;
  fitness eval(P) by binary string decoding;
  while (not termination condition) do
    crossover P(t) to yield C(t) by one-cut-point crossover;
    mutation P(t) to yield C(t);
    fitness eval(C) by binary string decoding;
    select P(t+1) from P(t) and C(t) by roulette wheel selection;
    t ← t+1;
  end
  output best solution;
end
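The whole procedure can be sketched in Python. This is a minimal illustration, not the slides' implementation: the 33-bit encoding (18 bits for x1, 15 bits for x2) follows the worked example, while the function names and the simplified select-then-vary ordering are assumptions.

```python
import math
import random

def f(x1, x2):
    return 21.5 + x1 * math.sin(4 * math.pi * x1) + x2 * math.sin(20 * math.pi * x2)

def decode(bits):
    # First 18 bits -> x1 in [-3.0, 12.1]; last 15 bits -> x2 in [4.1, 5.8].
    v1 = int("".join(map(str, bits[:18])), 2)
    v2 = int("".join(map(str, bits[18:])), 2)
    x1 = -3.0 + v1 * (12.1 - (-3.0)) / (2 ** 18 - 1)
    x2 = 4.1 + v2 * (5.8 - 4.1) / (2 ** 15 - 1)
    return x1, x2

def run_ga(pop_size=10, max_gen=1000, p_c=0.25, p_m=0.01, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(33)] for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(max_gen):
        fits = [f(*decode(c)) for c in pop]  # f > 3.6 on this domain, so raw
        for c, fit in zip(pop, fits):        # values work as roulette slots
            if fit > best_fit:
                best, best_fit = c[:], fit
        total = sum(fits)
        def spin():                          # roulette wheel selection
            r, acc = rng.uniform(0.0, total), 0.0
            for c, fit in zip(pop, fits):
                acc += fit
                if acc >= r:
                    return c
            return pop[-1]
        nxt = [spin()[:] for _ in range(pop_size)]
        for i in range(0, pop_size - 1, 2):  # one-cut point crossover
            if rng.random() < p_c:
                k = rng.randint(1, 32)
                nxt[i][k:], nxt[i + 1][k:] = nxt[i + 1][k:], nxt[i][k:]
        for c in nxt:                        # bit-flip mutation
            for j in range(33):
                if rng.random() < p_m:
                    c[j] = 1 - c[j]
        pop = nxt
    return best_fit, decode(best)
```

Running with the slide parameters (popSize 10, maxGen 1000, pC 0.25, pM 0.01) gives a best value in the neighborhood of the reported result, though the exact outcome depends on the random seed.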
2. Example with Simple Genetic Algorithms
₪ Final Result
■ The test run is terminated after 1000 generations.
■ We obtained the best chromosome in the 884th generation as follows:

    x1* = 11.622766
    x2* = 5.624329
    eval(v*) = f (11.622766, 5.624329) = 38.737524

max f (x1, x2) = 21.5 + x1·sin(4πx1) + x2·sin(20πx2)
s. t.  -3.0 ≤ x1 ≤ 12.1,  4.1 ≤ x2 ≤ 5.8
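The reported optimum can be checked numerically against the objective function (a quick sanity check, assuming the objective as stated above):

```python
import math

def f(x1, x2):
    return 21.5 + x1 * math.sin(4 * math.pi * x1) + x2 * math.sin(20 * math.pi * x2)

val = f(11.622766, 5.624329)  # should be close to the reported eval(v*)
```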
2. Example with Simple Genetic Algorithms
₪ Evolutional Process
₪ Simulation

maxGen: 1000    pC: 0.25    pM: 0.01
2. Example with Simple Genetic Algorithms
₪ Evolutional Process

max f (x1, x2) = 21.5 + x1·sin(4πx1) + x2·sin(20πx2)
s. t.  -3.0 ≤ x1 ≤ 12.1,  4.1 ≤ x2 ≤ 5.8

f = 21.5 + x1 Sin[4 Pi x1] + x2 Sin[20 Pi x2];
Plot3D[f, {x1, -3.0, 12.1}, {x2, 4.1, 5.8},
  PlotPoints -> 19, AxesLabel -> {x1, x2, "f(x1, x2)"}];
ContourPlot[f, {x1, -3.0, 12.1}, {x2, 4.1, 5.8}];

by Mathematica 4.1
Genetic Algorithms
1. Foundations of Genetic Algorithms
2. Example with Simple Genetic Algorithms
3. Encoding Issue
   3.1 Coding Space and Solution Space
   3.2 Selection
4. Genetic Operators
5. Adaptation of Genetic Algorithms
6. Hybrid Genetic Algorithms
3. Encoding Issue
₪ How to encode a solution of the problem into a chromosome is a key issue for genetic algorithms.
■ In Holland's work, encoding is carried out using binary strings.
■ For many GA applications, especially for problems from the industrial engineering world, the simple GA is difficult to apply directly because the binary string is not a natural coding.
₪ During the last ten years, various nonstring encoding techniques have been created for particular problems. For example:
■ Real number coding for constrained optimization problems
■ Integer coding for combinatorial optimization problems.
₪ Choosing an appropriate representation of candidate solutions to the problem at hand is the foundation for applying genetic algorithms to real-world problems; it conditions all the subsequent steps of the algorithm.
₪ For any application, it is necessary to analyze the problem carefully so as to arrive at an appropriate representation of solutions together with meaningful, problem-specific genetic operators.
3. Encoding Issue
₪ According to what kind of symbol is used:
■ Binary encoding
■ Real number encoding
■ Integer/literal permutation encoding
■ A general data structure encoding
₪ According to the structure of encodings:
■ One-dimensional encoding
■ Multi-dimensional encoding
₪ According to the length of the chromosome:
■ Fixed-length encoding
■ Variable-length encoding
₪ According to what kind of content is encoded:
■ Solution only
■ Solution + parameters
3.1 Coding Space and Solution Space
₪ A basic feature of genetic algorithms is that they work on the coding space and the solution space alternatively:
■ Genetic operations work on the coding space (chromosomes),
■ while evaluation and selection work on the solution space.
■ Natural selection is the link between chromosomes and the performance of their decoded solutions.

[Figure: the coding space (genotype space) and the solution space (phenotype space) are linked by encoding and decoding; genetic operations act on the former, evaluation and selection on the latter.]
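The decoding arrow in the figure (genotype → phenotype) can be sketched for binary strings as follows; the helper name and interface are illustrative assumptions:

```python
def decode(bits, lo, hi):
    """Map a binary genotype (list of 0/1 bits) to a real-valued
    phenotype in the interval [lo, hi]."""
    v = int("".join(str(b) for b in bits), 2)  # bits -> integer
    return lo + v * (hi - lo) / (2 ** len(bits) - 1)
```

Evaluation and selection then operate on the decoded real value, while crossover and mutation operate on the bit list itself.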
3.1 Coding Space and Solution Space
₪ For nonstring coding approaches, three critical issues arise concerning the encoding and decoding between chromosomes and solutions (or the mapping between phenotype and genotype):
■ The feasibility of a chromosome
■ Feasibility refers to whether or not a solution decoded from a chromosome lies in the feasible region of a given problem.
■ The legality of a chromosome
■ Legality refers to whether or not a chromosome represents a solution to a given problem.
■ The uniqueness of mapping
3.1 Coding Space and Solution Space
₪ Feasibility and legality are shown in Figure 1.1.

[Figure: chromosomes in the coding space may be illegal, or may decode to infeasible or feasible solutions in the solution space; the feasible area is a subset of the solution space.]
Fig. 1.1 Feasibility and Legality
3.1 Coding Space and Solution Space
₪ The infeasibility of chromosomes originates from the nature of the constrained optimization problem.
■ Whatever the method, conventional or genetic, it must handle the constraints.
■ For many optimization problems, the feasible region can be represented as a system of equalities or inequalities (linear or nonlinear).
■ For such cases, many efficient penalty methods have been proposed to handle infeasible chromosomes.
■ In constrained optimization problems, the optimum typically occurs at the boundary between feasible and infeasible areas.
■ The penalty approach forces the genetic search to approach the optimum from both sides of the feasible and infeasible regions.
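A minimal sketch of the penalty idea (the function name and penalty weight are illustrative assumptions): constraints are written as g(x) ≤ 0, and any violation is subtracted from the raw fitness.

```python
def penalized_fitness(f, x, constraints, penalty=1000.0):
    """Fitness of x for maximization: the raw objective minus a penalty
    proportional to the total constraint violation (g(x) <= 0 is feasible)."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) - penalty * violation
```

An infeasible point keeps a finite (but heavily discounted) fitness instead of being rejected outright, which is what lets the search approach the optimum from the infeasible side as well.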
3.1 Coding Space and Solution Space
₪ The illegality of chromosomes originates from the nature of encoding techniques.
■ For many combinatorial optimization problems, problem-specific encodings are used, and such encodings usually yield illegal offspring under a simple one-cut point crossover operation.
■ Because an illegal chromosome cannot be decoded to a solution, it cannot be evaluated; repairing techniques are usually adopted to convert an illegal chromosome to a legal one.
₪ For example, the well-known PMX operator is essentially a kind of two-cut point crossover for permutation representation together with a repairing procedure to resolve the illegitimacy caused by the simple two-cut point crossover.
₪ Orvosh and Davis have studied many combinatorial optimization problems using GAs.
■ Orvosh, D. & L. Davis: Using a genetic algorithm to optimize problems with feasibility constraints, Proc. of 1st IEEE Conf. on Evol. Compu., pp. 548-552, 1994.
■ It is relatively easy to repair an infeasible or illegal chromosome, and the repair strategy did indeed surpass other strategies such as the rejecting strategy or the penalizing strategy.
3.1 Coding Space and Solution Space
₪ The mapping from chromosomes to solutions (decoding) may belong to one of the following three cases:
■ 1-to-1 mapping
■ n-to-1 mapping
■ 1-to-n mapping
₪ The 1-to-1 mapping is the best one among the three cases and the 1-to-n mapping is the most undesirable one.
■ We need to consider these problems carefully when designing a new nonstring coding so as to build an effective genetic algorithm.

[Figure: 1-to-1, n-to-1, and 1-to-n mappings between the coding space and the solution space.]
3.2 Selection
₪ The principle behind genetic algorithms is essentially Darwinian natural selection.
₪ Selection provides the driving force in a genetic algorithm, and the selection pressure is critical in it.
■ Too much, and the search will terminate prematurely.
■ Too little, and progress will be slower than necessary.
■ Low selection pressure is indicated at the start of the GA search, in favor of a wide exploration of the search space.
■ High selection pressure is recommended at the end in order to exploit the most promising regions of the search space.
₪ Selection directs the GA search towards promising regions in the search space.
₪ During the last few years, many selection methods have been proposed, examined, and compared.
3.2 Selection
₪ Sampling Space
■ In Holland's original GA, parents are replaced by their offspring soon after they give birth.
■ This is called generational replacement.
■ Because genetic operations are blind in nature, offspring may be worse than their parents.
■ To overcome this problem, several replacement strategies have been examined.
■ Holland suggested that each offspring replace a randomly chosen chromosome of the current population as it was born.
■ De Jong proposed a crowding strategy.
■ De Jong, K.: An Analysis of the Behavior of a Class of Genetic Adaptive Systems, Ph.D. thesis, University of Michigan, Ann Arbor, 1975.
■ In the crowding model, when an offspring was born, one parent was selected to die. The dying parent was chosen as the one that most closely resembled the new offspring, using a simple bit-by-bit similarity count to measure resemblance.
3.2 Selection
₪ Sampling Space
■ Note that in Holland's works, selection refers to choosing parents for recombination, and the new population was formed by replacing parents with their offspring. They called this a reproductive plan.
■ Since Grefenstette and Baker's work, selection is used to form the next generation, usually with a probabilistic mechanism.
■ Grefenstette, J. & J. Baker: "How genetic algorithms work: a critical look at implicit parallelism," Proc. of the 3rd Inter. Conf. on GA, pp. 20-27, 1989.
■ Michalewicz gave a detailed description of simple genetic algorithms where offspring replaced their parents soon after they were born at each generation, and the next generation was formed by roulette wheel selection (Michalewicz, 1994).
3.2 Selection
₪ Stochastic Sampling
■ The selection phase determines the actual number of copies that each chromosome will receive based on its survival probability.
■ The selection phase consists of two parts:
■ Determine the chromosome's expected value;
■ Convert the expected values to the number of offspring.
■ A chromosome's expected value is a real number indicating the average number of offspring that the chromosome should receive. The sampling procedure is used to convert the real expected value to the number of offspring.
■ Roulette wheel selection
■ Stochastic universal sampling
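Both stochastic sampling schemes can be sketched in Python (function names and interfaces are illustrative assumptions; fitness values are assumed positive):

```python
import random

def roulette_wheel(fitnesses, n, rng=random):
    """Spin the wheel n times; slot sizes are proportional to fitness."""
    total = sum(fitnesses)
    picks = []
    for _ in range(n):
        r, acc = rng.uniform(0.0, total), 0.0
        for i, fit in enumerate(fitnesses):
            acc += fit
            if acc >= r:
                picks.append(i)
                break
    return picks

def stochastic_universal_sampling(fitnesses, n, rng=random):
    """One spin with n equally spaced pointers: each chromosome receives
    a count close to its expected value (lower variance than n spins)."""
    total = sum(fitnesses)
    step = total / n
    start = rng.uniform(0.0, step)
    picks, acc, i = [], fitnesses[0], 0
    for p in (start + j * step for j in range(n)):
        while acc < p:
            i += 1
            acc += fitnesses[i]
        picks.append(i)
    return picks
```

Roulette wheel sampling converts expected values to offspring counts with high variance; stochastic universal sampling guarantees each count differs from the expected value by less than one.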
3.2 Selection
₪ Deterministic Sampling
■ Deterministic procedures which select the best chromosomes from parents and offspring.
■ (μ + λ)-selection
■ (μ, λ)-selection
■ Truncation selection
■ Block selection
■ Elitist selection
■ Generational replacement
■ Steady-state reproduction
3.2 Selection
₪ Mixed Sampling
■ Contains both random and deterministic features simultaneously.
■ Tournament selection
■ Binary tournament selection
■ Stochastic tournament selection
■ Remainder stochastic sampling
3.2 Selection
₪ Regular Sampling Space
■ Contains all offspring but just part of the parents.

[Figure: selection based on the regular sampling space — offspring produced by crossover and mutation replace their parents in the population, and the next generation is selected from this replaced population.]
3.2 Selection
₪ Enlarged Sampling Space
■ Contains all parents and offspring.

[Figure: selection based on the enlarged sampling space — the next generation is selected from the union of all parents and all offspring produced by crossover and mutation.]
3.2 Selection
₪ Selection Probability
■ Fitness scaling has a twofold intention:
■ To maintain a reasonable differential between the relative fitness ratings of chromosomes.
■ To prevent a too-rapid takeover by some super chromosomes, in order to limit competition early on but stimulate it later.
■ Suppose that the raw fitness fk (e.g. the objective function value) is given for the k-th chromosome; the scaled fitness fk' is:

    fk' = g( fk )

■ The function g(·) may take different forms to yield different scaling methods.
3.2 Selection
₪ Scaling Mechanisms

Linear scaling:       fk' = a·fk + b
Power law scaling:    fk' = fk^α
Normalizing scaling:  fk' = (fk − fmin) / (fmax − fmin)   (for maximization problems, 0 ≤ fk' ≤ 1)
Boltzmann scaling:    fk' = e^( fk / T )
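The four scaling mechanisms translate directly into code (parameter defaults here are illustrative assumptions, not slide values):

```python
import math

def linear_scaling(f, a, b):
    """f' = a*f + b."""
    return a * f + b

def power_law_scaling(f, alpha):
    """f' = f**alpha."""
    return f ** alpha

def normalizing_scaling(f, f_min, f_max):
    """f' = (f - f_min) / (f_max - f_min), for maximization problems."""
    return (f - f_min) / (f_max - f_min)

def boltzmann_scaling(f, T):
    """f' = exp(f / T); lowering T over the run sharpens the differences
    between chromosomes, i.e. raises the selection pressure."""
    return math.exp(f / T)
```

Boltzmann scaling is a natural fit for the low-pressure-early, high-pressure-late schedule described above: start with a large temperature T and decrease it as generations pass.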
Genetic Algorithms
1. Foundations of Genetic Algorithms
2. Example with Simple Genetic Algorithms
3. Encoding Issue
4. Genetic Operators
   4.1 Conventional operators
   4.2 Arithmetical operators
   4.3 Direction-based operators
   4.4 Stochastic operators
5. Adaptation of Genetic Algorithms
6. Hybrid Genetic Algorithms
4. Genetic Operators
₪ Genetic operators are used to alter the genetic composition of chromosomes during reproduction.
₪ There are two common genetic operators:
■ Crossover
■ Operating on two chromosomes at a time and generating offspring by combining both chromosomes' features.
■ Mutation
■ Producing spontaneous random changes in various chromosomes.
₪ There is also an evolutionary operator:
■ Selection
■ Directing a GA search toward promising regions in the search space.
4. Genetic Operators
₪ Crossover can be roughly classified into four classes:
■ Conventional operators
■ Simple crossover (one-cut point, two-cut point, multi-cut point, uniform)
■ Random crossover (flat crossover, blend crossover)
■ Random mutation (boundary mutation, plain mutation)
■ Arithmetical operators
■ Arithmetical crossover (convex, affine, linear, average, intermediate)
■ Extended intermediate crossover
■ Dynamic mutation (nonuniform mutation)
■ Direction-based operators
■ Direction-based crossover
■ Directional mutation
■ Stochastic operators
■ Unimodal normal distribution crossover
■ Gaussian mutation
4.1 Conventional Operators
One-cut Point Crossover:

  parents (crossing point at the k-th position):
    x = [x1, x2, ..., xk, xk+1, ..., xn]
    y = [y1, y2, ..., yk, yk+1, ..., yn]
  offspring:
    x' = [x1, x2, ..., xk, yk+1, ..., yn]
    y' = [y1, y2, ..., yk, xk+1, ..., xn]

Random Mutation (Boundary Mutation):

  parent (mutating point at the k-th position):
    x  = [x1, x2, ..., xk, ..., xn]
  offspring:
    x' = [x1, x2, ..., xk', ..., xn]
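Both operators above are a few lines of Python (names and the 0-based indexing convention are illustrative assumptions; in boundary mutation xk' is set to the lower or upper bound of gene k):

```python
import random

def one_cut_crossover(x, y, k):
    """Exchange the tails of two parent lists after cut position k."""
    return x[:k] + y[k:], y[:k] + x[k:]

def boundary_mutation(x, k, lower, upper, rng=random):
    """Replace gene k with its lower or upper bound, chosen at random."""
    child = list(x)
    child[k] = lower[k] if rng.random() < 0.5 else upper[k]
    return child
```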
4.2 Arithmetical Operators
■ Crossover
■ Suppose that there are two parents x1 and x2; the offspring can be obtained by λ1x1 + λ2x2 with different multipliers λ1 and λ2:

    x1' = λ1x1 + λ2x2
    x2' = λ1x2 + λ2x1

  Convex crossover:  λ1 + λ2 = 1, λ1 > 0, λ2 > 0
  Affine crossover:  λ1 + λ2 = 1
  Linear crossover:  λ1 + λ2 ≤ 2, λ1 > 0, λ2 > 0

[Fig. 1.2 Illustration showing the convex, affine, and linear hulls: the convex hull lies between the parents, the affine hull extends along their line, and the linear hull spans the whole solution space R2.]
4.2 Arithmetical Operators
■ Nonuniform Mutation (Dynamic Mutation)
■ For a given parent x, if the element xk is selected for mutation, the resulting offspring is x' = [x1 ... xk' ... xn], where xk' is randomly selected from two possible choices:

    xk' = xk + Δ(t, xkU − xk)   or   xk' = xk − Δ(t, xk − xkL)

■ where xkU and xkL are the upper and lower bounds for xk.
■ The function Δ(t, y) returns a value in the range [0, y] such that the value of Δ(t, y) approaches 0 as t increases (t is the generation number):

    Δ(t, y) = y · ( 1 − r^( (1 − t/T)^b ) )

  where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.
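The two formulas above can be coded directly (names and the default b are illustrative assumptions):

```python
import random

def delta(t, y, T, b, rng=random):
    """Shrinking step Delta(t, y) = y * (1 - r**((1 - t/T)**b)):
    a value in [0, y] that approaches 0 as t approaches T."""
    r = rng.random()
    return y * (1.0 - r ** ((1.0 - t / T) ** b))

def nonuniform_mutation(x, k, lower, upper, t, T, b=2.0, rng=random):
    """Mutate gene k of x toward its upper or lower bound by a step
    whose size shrinks as the generation number t grows."""
    child = list(x)
    if rng.random() < 0.5:
        child[k] = x[k] + delta(t, upper[k] - x[k], T, b, rng)
    else:
        child[k] = x[k] - delta(t, x[k] - lower[k], T, b, rng)
    return child
```

Early in the run the operator searches the space uniformly; near generation T the steps become tiny, so it degenerates into fine local tuning.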
4.3 Direction-based Operators
■ These operators use the values of the objective function in determining the direction of the genetic search:
■ Direction-based crossover
■ Generates a single offspring x' from two parents x1 and x2 (with x2 not worse than x1) according to the following rule:

    x' = r · (x2 − x1) + x2,   where 0 < r ≤ 1.

■ Directional mutation
■ The offspring after mutation would be:

    x' = x + r · d,   where r is a random nonnegative real number

  and the direction d is derived from the objective values, with components

    di = ( f(x1, ..., xi + Δxi, ..., xn) − f(x1, ..., xi, ..., xn) ) / Δxi.
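The crossover rule is a one-liner per component; this sketch (function name and parent-ordering convention are assumptions) reorders the parents so the move is past the better one:

```python
import random

def direction_based_crossover(x1, f1, x2, f2, rng=random):
    """x' = r*(x2 - x1) + x2 with 0 < r <= 1, where x2 is the parent
    that is not worse (here: has the larger objective value f)."""
    if f1 > f2:
        x1, f1, x2, f2 = x2, f2, x1, f1  # ensure x2 is the better parent
    r = rng.random()  # in [0, 1); the slides specify 0 < r <= 1
    return [r * (b - a) + b for a, b in zip(x1, x2)]
```

The offspring extrapolates beyond the better parent along the improving direction, which is what gives the operator its hill-climbing flavor.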
4.4 Stochastic Operators
₪ Unimodal Normal Distribution Crossover (UNDX)
■ The UNDX generates two children from a region of normal distribution defined by three parents.
■ In the dimension defined by two parents p1 and p2, the standard deviation of the normal distribution is proportional to the distance d1 between parents p1 and p2.
■ In the other dimensions orthogonal to the first one, the standard deviation of the normal distribution is proportional to the distance d2 of the third parent p3 from the line.
■ This distance is also divided by √n in order to reduce the influence of the third parent.

[Figure: the normal distribution is centered on the axis connecting parents p1 and p2, with spread d1 along the axis and spread determined by the distance d2 of p3 orthogonal to it.]
4.4 Stochastic Operators
₪ Unimodal Normal Distribution Crossover (UNDX)

Assume
  P1 & P2 : the parent vectors
  C1 & C2 : the child vectors
  n : the number of variables
  d1 : the distance between parents p1 and p2
  d2 : the distance of parent p3 from the axis connecting parents p1 and p2
  z1 : a random number with normal distribution N(0, σ1²)
  zk : random numbers with normal distribution N(0, σ2²), k = 2, 3, ..., n
  α & β : certain constants

The children are generated as follows:

  m  = (P1 + P2) / 2
  C1 = m + z1·e1 + Σ(k=2..n) zk·ek
  C2 = m − z1·e1 − Σ(k=2..n) zk·ek
  z1 ~ N(0, σ1²),  zk ~ N(0, σ2²),  k = 2, 3, ..., n
  σ1 = α·d1,  σ2 = β·d2 / √n
  e1 = (P2 − P1) / |P2 − P1|,  ei ⊥ ej (i ≠ j)
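A pure-Python sketch of UNDX following the formulas above (the default α and β are illustrative assumptions, and the parents p1 and p2 must be distinct):

```python
import math
import random

def undx(p1, p2, p3, alpha=0.5, beta=0.35, rng=random):
    """Unimodal normal distribution crossover: sample two children around
    the midpoint of p1 and p2, with spread alpha*d1 along the p1-p2 axis
    and beta*d2/sqrt(n) along the orthogonal directions."""
    n = len(p1)
    m = [(a + b) / 2 for a, b in zip(p1, p2)]              # midpoint
    diff = [b - a for a, b in zip(p1, p2)]
    d1 = math.sqrt(sum(d * d for d in diff))
    e1 = [d / d1 for d in diff]                            # primary axis
    # distance d2 of p3 from the line through p1 with direction e1
    v = [c - a for a, c in zip(p1, p3)]
    proj = sum(x * y for x, y in zip(v, e1))
    perp = [x - proj * y for x, y in zip(v, e1)]
    d2 = math.sqrt(sum(x * x for x in perp))
    # orthonormal basis e2..en for the complement of e1 (Gram-Schmidt)
    basis = [e1]
    for i in range(n):
        cand = [1.0 if j == i else 0.0 for j in range(n)]
        for b in basis:
            dot = sum(x * y for x, y in zip(cand, b))
            cand = [x - dot * y for x, y in zip(cand, b)]
        norm = math.sqrt(sum(x * x for x in cand))
        if norm > 1e-9:
            basis.append([x / norm for x in cand])
        if len(basis) == n:
            break
    z1 = rng.gauss(0.0, alpha * d1)
    zs = [rng.gauss(0.0, beta * d2 / math.sqrt(n)) for _ in basis[1:]]
    c1, c2 = list(m), list(m)
    for i in range(n):
        step = z1 * e1[i] + sum(z * b[i] for z, b in zip(zs, basis[1:]))
        c1[i] += step
        c2[i] -= step
    return c1, c2
```

By construction the two children are symmetric about the midpoint m, matching the ± signs in the formulas.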
4.4 Stochastic Operators
₪ Gaussian Mutation (Evolution Strategy)

A chromosome in evolution strategies consists of two components (x, σ), where the first vector x represents a point in the search space and the second vector σ represents standard deviations.

An offspring (x', σ') is generated as follows:

  σ' = σ · e^N(0, Δσ)
  x' = x + N(0, σ')

where N(0, Δσ) is a vector of independent random Gaussian numbers with a mean of zero and standard deviations Δσ.
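The self-adaptive mutation of an ES individual (x, σ) might be sketched as follows (the function name and the default Δσ are illustrative assumptions):

```python
import math
import random

def es_mutation(x, sigma, d_sigma=0.1, rng=random):
    """Gaussian mutation of an ES individual (x, sigma): each step size
    is multiplied by a log-normal factor exp(N(0, d_sigma)), then each
    coordinate of x moves by N(0, new sigma)."""
    new_sigma = [s * math.exp(rng.gauss(0.0, d_sigma)) for s in sigma]
    new_x = [xi + rng.gauss(0.0, s) for xi, s in zip(x, new_sigma)]
    return new_x, new_sigma
```

Because the step sizes σ are part of the chromosome and mutate along with x, the strategy parameters self-adapt: individuals carrying well-tuned σ tend to produce better offspring and survive selection.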
Genetic Algorithms
1. Foundations of Genetic Algorithms
2. Example with Simple Genetic Algorithms
3. Encoding Issue
4. Genetic Operators
5. Adaptation of Genetic Algorithms
   5.1 Structure Adaptation
   5.2 Parameters Adaptation
6. Hybrid Genetic Algorithms
5. Adaptation of Genetic Algorithms
₪ Since genetic algorithms are inspired by the idea of evolution, it is natural to expect that adaptation is used not only for finding solutions to a given problem, but also for tuning the genetic algorithms to the particular problem.
₪ There are two kinds of adaptation of GAs.
■ Adaptation to Problems
■ Advocates modifying some components of genetic algorithms, such as representation, crossover, mutation, and selection, to choose an appropriate form of the algorithm to meet the nature of a given problem.
■ Adaptation to Evolutionary Processes
■ Suggests a way to tune the parameters of the changing configurations of genetic algorithms while solving the problem.
■ Divided into five classes:
  Adaptive parameter settings
  Adaptive genetic operators
  Adaptive selection
  Adaptive representation
  Adaptive fitness function
5.1 Structure Adaptation
₪ This approach requires a modification of the original problem into an appropriate form suitable for the genetic algorithms.
₪ It includes a mapping between potential solutions and binary representations, taking care of decoding or repair procedures, etc.
₪ For complex problems, such an approach usually fails to provide successful applications.

[Fig. 1.3 Adapting a problem to the genetic algorithms: the problem is adapted so that unmodified genetic algorithms can work on it.]
5.1 Structure Adaptation
₪ Various non-standard implementations of the GAs have been created for particular problems.
₪ This approach leaves the problem unchanged and adapts the genetic algorithms by modifying a chromosome representation of a potential solution and applying appropriate genetic operators.
₪ It is not a good choice to use the whole original solution of a given problem as the chromosome, because many real problems are too complex to have a suitable implementation of genetic algorithms with the whole solution representation.
Fig. 1.4 Adapting the genetic algorithms to a problem.
5.1 Structure Adaptation
₪ The approach is to adapt both the GAs and the given problem.
₪ GAs are used to evolve an appropriate permutation and/or combination of some items under consideration, and a heuristic method is subsequently used to construct a solution according to the permutation.
₪ The approach has been successfully applied in the area of industrial engineering and has recently become the main approach for the practical use of the GAs.
Fig. 1.5 Adapting both the genetic algorithms and the problem.
5.2 Parameters Adaptation
₪ The behavior of GAs is characterized by the balance between exploitation and exploration in the search space, which is strongly affected by the parameters of the GA.
■ Usually, fixed parameters are used in most applications of GAs and are determined with a set-and-test approach.
■ Since the GA is an intrinsically dynamic and adaptive process, the use of constant parameters is thus in contrast to the general evolutionary spirit.
₪ Therefore, it is a natural idea to modify the values of strategy parameters during the run of the genetic algorithm in one of the following three ways:
■ Deterministic: using some deterministic rule
■ Adaptive: taking feedback information from the current state of the search
■ Self-adaptive: employing some self-adaptive mechanism
5.2 Parameters Adaptation
₪ Deterministic Adaptation
■ The adaptation takes place if the value of a strategy parameter is altered by some deterministic rule.
₪ A time-varying approach is used, measured by the number of generations.
₪ For example, the mutation ratio is decreased gradually along with the elapse of generations by using the following equation:

    pM = 0.5 − 0.3 · (t / maxGen)

■ where t is the current generation number and maxGen is the maximum generation.
■ Hence, the mutation ratio will decrease from 0.5 to 0.2 as the number of generations increases to maxGen.
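As a quick sketch of this deterministic schedule (assuming maxGen is fixed before the run starts):

```python
def mutation_ratio(t, max_gen):
    """Deterministic schedule: pM falls linearly from 0.5 at t = 0
    to 0.2 at t = max_gen, driven only by the generation counter t."""
    return 0.5 - 0.3 * t / max_gen
```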
5.2 Parameters Adaptation
₪ Adaptive Adaptation
■ The adaptation takes place if there is some form of feedback from the evolutionary process, which is used to determine the direction and/or magnitude of the change to the strategy parameter.
■ Early approaches include Rechenberg's 1/5 success rule in evolution strategies, which was used to vary the step size of mutation.
■ Rechenberg, I.: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog, Stuttgart, Germany, 1973.
■ The rule states that the ratio of successful mutations to all mutations should be 1/5. Hence, if the ratio is greater than 1/5 then increase the step size, and if the ratio is less than 1/5 then decrease the step size.
■ Davis's adaptive operator fitness utilizes feedback on the success of a larger number of reproduction operators to adjust the ratios being used.
■ Davis, L.: "Applying adaptive algorithms to epistatic domains," Proc. of the Inter. Joint Conf. on Artif. Intel., pp. 162-164, 1985.
■ Julstrom's adaptive mechanism regulates the ratio between crossovers and mutations based on their performance.
■ Julstrom, B.: "What have you done for me lately? Adapting operator probabilities in a steady-state genetic algorithm," Proc. of the 6th Inter. Conf. on GA, pp. 81-87, 1995.
■ An extensive study of these kinds of learning-rule mechanisms has been done by Tuson and Ross.
■ Tuson, A. & P. Ross: "Cost based operator rate adaptation: an investigation," Proc. of the 4th Inter. Conf. on Para. Prob. Solving from Nature, pp. 461-469, 1996.
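The 1/5 success rule described above can be sketched as follows; the multiplicative factor of 0.85 is a conventional choice from the evolution strategies literature, not stated in the slide:

```python
def one_fifth_rule(sigma, success_ratio, factor=0.85):
    """Rechenberg's 1/5 success rule: if more than 1/5 of recent
    mutations improved the parent, enlarge the mutation step size;
    if fewer, shrink it; at exactly 1/5 leave it unchanged."""
    if success_ratio > 1 / 5:
        return sigma / factor   # too many successes: search more widely
    if success_ratio < 1 / 5:
        return sigma * factor   # too few successes: search more locally
    return sigma
```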
5.2 Parameters Adaptation
₪ Self-adaptive Adaptation
■ The adaptation enables strategy parameters to evolve along with the evolutionary process. The parameters are encoded into the chromosomes and undergo mutation and recombination.
■ The encoded parameters do not affect the fitness of chromosomes directly, but better values will lead to better chromosomes, and these chromosomes will be more likely to survive and produce offspring, hence propagating these better parameter values.
■ The parameters to self-adapt can be ones that control the operation of genetic algorithms, ones that control the operation of reproduction or other operators, or probabilities of using alternative processes.
■ Schwefel developed the method to self-adapt the mutation step size and the mutation rotation angles in evolution strategies.
■ Schwefel, H.: Evolution and Optimum Seeking, Wiley, New York, 1995.
■ Hinterding used a multi-chromosome representation to implement self-adaptation in the cutting stock problem with contiguity, where self-adaptation is used to adapt the probability of using one of the two available mutation operators, and the strength of the group mutation operator.
Mathematical Foundations of Genetic Algorithms

(1) The Schema Theorem  (2) The Building Block Hypothesis

Schemata
A schema is a similarity template for the gene strings of the individuals in a population; it describes a structure in which certain feature positions of the strings take identical values. In binary encoding, a schema is a string over the three-character alphabet (0, 1, *), where the symbol * stands for an arbitrary character, i.e., either 0 or 1.
Schema example: 10**1
Two Definitions
■ Definition 1: The number of fixed positions in schema H is called the order of schema H, denoted O(H). For example, O(10**1) = 3.
■ Definition 2: The distance between the first and the last fixed positions in schema H is called the defining length of schema H, denoted δ(H). For example, δ(10**1) = 4.
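These two definitions are straightforward to compute directly from the schema string:

```python
def order(H):
    """Order O(H): the number of fixed (non-'*') positions in schema H."""
    return sum(1 for c in H if c != '*')

def defining_length(H):
    """Defining length d(H): the distance between the first and the
    last fixed positions of schema H."""
    fixed = [i for i, c in enumerate(H) if c != '*']
    return fixed[-1] - fixed[0]
```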
Meaning of the Order and the Defining Length
The order reflects the difference in specificity between schemata: the higher the order, the more specific the schema and the fewer the samples it matches. Under the genetic operations, even schemata of the same order can behave differently, and the defining length reflects this difference in behavior.

The Schema Theorem
Schema Theorem: schemata with low order, short defining length, and average fitness above the population average receive an exponentially increasing number of samples in subsequent generations.
The schema theorem guarantees that the number of better schemata (the better solutions of the genetic algorithm) grows exponentially, providing a mathematical foundation for explaining the mechanism of genetic algorithms.

The Schema Theorem (continued)
The schema theorem shows that schemata with high average fitness, short defining length, and low order obtain an at least exponentially growing number of strings in successive generations. This is mainly because selection gives the best schemata more copies, the crossover operator does not easily disrupt frequently occurring schemata of short defining length, and the mutation probability is usually quite small, so mutation has almost no effect on these important schemata.
The Building Block Hypothesis
Building Block Hypothesis: genetic algorithms approach the global optimum through schemata of short defining length, low order, and high average fitness (building blocks), which are recombined with one another under the genetic operations.
The schema theorem guarantees that the number of samples of the better schemata grows exponentially, so that the genetic algorithm has the possibility of finding the global optimum; the building block hypothesis points out that, under the action of the genetic operators, the global optimum can be generated.
Convergence Analysis of Genetic Algorithms
For a genetic algorithm to achieve global convergence, it is first required that any initial population can reach the global optimum within a finite number of steps; second, the algorithm must use an elitist (best-preserving) operation to prevent the loss of the optimal solution. The factors related to convergence mainly include the population size, the selection operation, the crossover probability, and the mutation probability.

Effect of the Population Size on Convergence
Usually, if the population is too small it cannot provide enough sampling points, so the algorithm performs poorly; if the population is too large, although it adds optimization information and prevents premature convergence, it undoubtedly increases the computational cost, so convergence takes too long and the convergence speed appears slow.

Effect of the Selection Operation on Convergence
The selection operation lets high-fitness individuals survive with higher probability, thereby improving the global convergence of the genetic algorithm. If the elitist strategy is adopted, i.e., the best individual of the parent population is preserved, excluded from crossover and mutation, and passed directly into the next generation, the genetic algorithm can ultimately converge to the global optimum with probability 1.

Effect of the Crossover Probability on Convergence
The crossover operation acts on pairs of individuals to produce new individuals; in essence, it performs an effective search of the solution space. If the crossover probability is too large, the individuals in the population are renewed very quickly, and individuals with high fitness values are quickly destroyed; if the probability is too small, crossover is rarely performed, the search stagnates, and the algorithm fails to converge.

Effect of the Mutation Probability on Convergence
The mutation operation is a perturbation of the schemata in the population and helps to increase population diversity. However, if the mutation probability is too small it is difficult to generate new schemata, and if it is too large the genetic algorithm degenerates into a random search algorithm.

The Essence of Genetic Algorithms
In essence, a genetic algorithm is a series of operations on chromosome schemata: the selection operator passes the good schemata of the current population on to the next generation, the crossover operator recombines schemata, and the mutation operator perturbs schemata. Through these genetic operations, the schemata gradually evolve in a better direction, finally yielding the optimal solution of the problem.
Example - Pattern Recognition
0 1 0
0 1 0
0 1 0
0 1 0
Objective: Recognize a single character, the number 1.
In the genetic algorithm, a small population P with only 8 individuals is chosen to be evolved towards recognizing the character 1.
The target individual is x = [010010010010].
Initialization
[Initial population P: eight randomly generated 12-bit individuals (genotypes), shown in the original slide as 4 × 3 pixel grids.]

Genotype
As the goal is to generate individuals that are as similar as possible to the target individual, a straightforward way of determining fitness is by counting the number of similar bits between each individual of the population and the target individual. The number of different bits between two bitstrings is termed Hamming distance. For instance, the vector h of Hamming distances between the individuals of P and the target individual is:
h = [6,7,9,5,5,4,6,7].
Fitness evaluation
The fitness of the population can be measured by subtracting each individual's Hamming distance to the target from the chromosome length l = 12. Therefore, the vector of fitnesses becomes:
f = [f1, f2, f3, f4, f5, f6, f7, f8] = [6,5,3,7,7,8,6,5].
The ideal individual is the one whose fitness is f = 12. Therefore, the aim of the search to be performed by the GA is to maximize the fitness of each individual, until (at least) one individual of P has fitness f = 12.
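The fitness rule of this example can be written directly; a small sketch using the target bitstring from above:

```python
def hamming(a, b):
    """Number of differing bits between two equal-length bitstrings."""
    return sum(x != y for x, y in zip(a, b))

def fitness(individual, target):
    """Fitness = string length minus the Hamming distance to the
    target, so the ideal individual scores len(target) = 12 here."""
    return len(target) - hamming(individual, target)

target = "010010010010"
```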
phenotype
The final solution
Genetic Algorithms

1. Foundations of Genetic Algorithms
2. Example with Simple Genetic Algorithms
3. Encoding Issue
4. Genetic Operators
5. Adaptation of Genetic Algorithms
6. Hybrid Genetic Algorithms
   6.1 Adaptive Hybrid GA Approach
   6.2 Parameter Control Approach of GA
   6.3 Parameter Control Approach using Fuzzy Logic Controller
   6.4 Design of aHGA using Conventional Heuristics and FLC
6. Hybrid Genetic Algorithms
₪ One of the most common forms of hybrid GA is to incorporate local optimization as an add-on extra to the canonical GA.
₪ With a hybrid GA, the local optimization is applied to each newly generated offspring to move it to a local optimum before injecting it into the population.
₪ The genetic search is used to perform global exploration among the population, while the local search is used to perform local exploitation around chromosomes.
₪ There are two common forms of genetic local search. One features Lamarckian evolution and the other features the Baldwin effect. Both approaches use the metaphor that a chromosome learns (hill climbing) during its lifetime (generation).
₪ In the Lamarckian case, the resulting chromosome (after hill climbing) is put back into the population. In the Baldwinian case, only the fitness is changed and the genotype remains unchanged.
₪ The Baldwinian strategy can sometimes converge to a global optimum when the Lamarckian strategy converges to a local optimum using the same local search. However, the Baldwinian strategy is much slower than the Lamarckian strategy.
6. Hybrid Genetic Algorithms
₪ The early works which linked genetic and Lamarckian evolutionary theory included:
■ Grefenstette introduced Lamarckian operators into GAs.
■ David defined a Lamarckian probability for mutations in order to enable a mutation operator to be more controlled and to introduce some qualities of a local hill-climbing operator.
■ Shaefer added an intermediate mapping between the chromosome space and the solution space into a standard GA, which is Lamarckian in nature.
■ Kennedy gave an explanation of hybrid GAs with Lamarckian evolution theory.
6. Hybrid Genetic Algorithms
₪ Let P(t) and C(t) be parents and offspring in the current generation t.
₪ The general structure of hybrid GAs is described as follows:

procedure: Hybrid Genetic Algorithm
input: GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t);
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t);
    mutation P(t) to yield C(t);
    local search C(t);
    fitness eval(C);
    select P(t+1) from P(t) and C(t);
    t ← t + 1;
  end
  output best solution;
end
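The pseudocode above can be sketched in Python. The bit-flip hill climbing, one-point crossover, and truncation selection used here are illustrative choices for a binary-coded problem, not prescribed by the slide; the local search is Lamarckian, since the improved genotype itself is kept:

```python
import random

def hill_climb(ind, fit):
    """Greedy bit-flip local search (Lamarckian: the improved
    genotype replaces the original, not just its fitness)."""
    for i in range(len(ind)):
        base = fit(ind)
        ind[i] ^= 1           # try flipping bit i
        if fit(ind) <= base:  # not an improvement: undo the flip
            ind[i] ^= 1
    return ind

def hybrid_ga(fit, n_bits, pop_size=8, p_c=0.7, p_m=0.05, max_gen=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(max_gen):
        # crossover P(t) to yield C(t)
        kids = []
        while len(kids) < pop_size:
            a, b = rng.sample(pop, 2)
            if rng.random() < p_c:
                cut = rng.randrange(1, n_bits)
                kids += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
            else:
                kids += [a[:], b[:]]
        # mutation of C(t), then local search on each new offspring
        for kid in kids:
            for i in range(n_bits):
                if rng.random() < p_m:
                    kid[i] ^= 1
            hill_climb(kid, fit)
        # select P(t+1) from P(t) and C(t)
        pop = sorted(pop + kids, key=fit, reverse=True)[:pop_size]
    return max(pop, key=fit)

# Recover the 12-bit pattern from the section 2 example
target = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
best = hybrid_ga(lambda v: sum(x == y for x, y in zip(v, target)), 12)
```

On this bit-matching fitness the hill climber alone already reaches the optimum, which is exactly the point of the hybrid: the GA explores globally while local search finishes the job.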
6. Hybrid Genetic Algorithms
Hybrid GA based on Darwinian and Lamarckian evolution
Grefenstette, J.: "Lamarckian learning in multi-agent environment," Proc. of the 4th Inter. Conf. on GAs, pp. 303-310, 1991.
[Diagram: parents P1, P3, P6 selected from the population undergo crossover and mutation to give offspring P1′, P3′, P6′; hill climbing (local search) improves each offspring before replacement and selection form the new population.]
6.1 Adaptive Hybrid GA Approach

₪ Weaknesses of the conventional GA approach to the combinatorial nature of design variables:
■ Conventional GAs have no scheme for locating the local search area resulting from the GA loop.
   → Improvement: apply a local search technique to the GA loop.
■ Identifying the correct settings of the genetic parameters (such as population size and the probabilities of the crossover and mutation operators) is not an easy task.
   → Improvement: apply a parameter control approach to the GA.
6.1 Adaptive Hybrid GA Approach

₪ Applying a local search technique to the GA loop
■ Hill climbing method
Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed., New York: Springer-Verlag, 1996.
[Diagram: a multimodal fitness curve; repeated improvement steps climb to the nearest local optimum, which may not be the global optimum.]
Fig. 1.6 Hill climbing method
6.1 Adaptive Hybrid GA Approach

₪ Applying a local search technique to the GA loop
■ Iterative hill climbing method
■ Yun, Y. S. and C. U. Moon: "Comparison of Adaptive Genetic Algorithms for Engineering Optimization Problems," International Journal of Industrial Engineering, vol. 10, no. 4, pp. 584-590, 2003.
[Diagram: a multimodal fitness curve; starting from the solution found by the GA, hill climbing is repeated within a search range for local search, allowing the climb to escape a local optimum and reach the global optimum.]
Fig. 1.7 Iterative hill climbing method
6.1 Adaptive Hybrid GA Approach

₪ Procedure of the Iterative Hill Climbing Method in the GA loop

procedure: Iterative hill climbing method in GA loop (Yun and Moon, 2003)
input: a best chromosome vc
output: new best chromosome vn
begin
  select a best chromosome vc in the GA loop;
  randomly generate as many chromosomes as popSize in the neighborhood of vc;
  select the chromosome vn with the optimal fitness value of the objective function f among the set of new chromosomes;
  if f(vc) > f(vn) then
    vc ← vn;
  output new best chromosome vn;
end
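A sketch of this local search step in Python; the neighborhood here is generated by independent random bit flips around vc, and fitness is maximized. Both choices are illustrative assumptions, since the slide does not fix a neighborhood structure or an optimization direction:

```python
import random

def iterative_hill_climb(vc, fit, pop_size=8, flip_prob=0.1, seed=0):
    """Generate pop_size random neighbors of the current best
    chromosome vc and move to the best of them only if it improves
    on vc (written here for fitness maximization)."""
    rng = random.Random(seed)
    neighbors = [[bit ^ (rng.random() < flip_prob) for bit in vc]
                 for _ in range(pop_size)]
    vn = max(neighbors, key=fit)
    return vn if fit(vn) > fit(vc) else vc
```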
6.2 Parameter Control Approach of GA

■ Two Methodologies for Controlling Genetic Parameters
1. Using conventional heuristics
[1] Srinivas, M. & L. M. Patnaik: "Adaptive Probabilities of Crossover and Mutation in Genetic Algorithms," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 4, pp. 656-667, 1994.
[2] Mak, K. L., Y. S. Wong & X. X. Wang: "An Adaptive Genetic Algorithm for Manufacturing Cell Formation," International Journal of Manufacturing Technology, vol. 16, pp. 491-497, 2000.
2. Using artificial intelligence techniques, such as fuzzy logic controllers
[1] Song, Y. H., G. S. Wang, P. T. Wang & A. T. Johns: "Environmental/Economic Dispatch Using Fuzzy Logic Controlled Genetic Algorithms," IEE Proceedings on Generation, Transmission and Distribution, vol. 144, no. 4, pp. 377-382, 1997.
[2] Cheong, F. & R. Lai: "Constraining the Optimization of a Fuzzy Logic Controller Using an Enhanced Genetic Algorithm," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 30, no. 1, pp. 31-46, 2000.
[3] Yun, Y. S. & M. Gen: "Performance Analysis of Adaptive Genetic Algorithms with Fuzzy Logic and Heuristics," Fuzzy Optimization and Decision Making, vol. 2, no. 2, pp. 161-175, June 2003.
6.2 Parameter Control Approach of GA

■ Srinivas and Patnaik's Approach (IEEE-SMC, 1994)

Heuristic Updating Strategy
This scheme controls pC and pM using various fitness values at each generation:

    pC = k1 · (fmax − fcro) / (fmax − favg)   if fcro ≥ favg
    pC = k3                                   if fcro < favg

    pM = k2 · (fmax − fmut) / (fmax − favg)   if fmut ≥ favg
    pM = k4                                   if fmut < favg

where fmax : the maximum fitness value at each generation;
      favg : the average fitness value at each generation;
      fcro : the larger of the fitness values of the two chromosomes to be crossed;
      fmut : the fitness value of the i-th chromosome to which mutation with a rate pM is applied;
      k1, k2, k3, k4 : constants, each at most 1.0.
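A direct sketch of this scheme; the default k values are those recommended by Srinivas and Patnaik (k1 = k3 = 1.0, k2 = k4 = 0.5), and the formula assumes fmax > favg, i.e., a population that has not fully converged:

```python
def adaptive_rates(f_max, f_avg, f_cro, f_mut,
                   k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Srinivas & Patnaik's adaptive crossover/mutation probabilities.
    Assumes f_max > f_avg (otherwise the denominator vanishes)."""
    if f_cro >= f_avg:
        # above-average parent pairs are disrupted less by crossover
        p_c = k1 * (f_max - f_cro) / (f_max - f_avg)
    else:
        p_c = k3
    if f_mut >= f_avg:
        # above-average chromosomes are mutated less
        p_m = k2 * (f_max - f_mut) / (f_max - f_avg)
    else:
        p_m = k4
    return p_c, p_m
```

Note that the best chromosome (fcro = fmut = fmax) gets pC = pM = 0, so it is protected, while below-average chromosomes are recombined and mutated at the full rates k3 and k4.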
6.2 Parameter Control Approach of GA

₪ Parameter control approach using conventional heuristics

■ Mak et al.'s Approach (2000)

Heuristic Updating Strategy
This scheme controls pC and pM with respect to the fitness of the offspring at each generation.

procedure: regulation of pC and pM using the fitness of offspring (Mak et al., 2000)
input: GA parameters, pC(t−1), pM(t−1)
output: pC(t), pM(t)
begin
  if f̄offSize / f̄popSize ≥ 1.1 then
    pC(t) ← pC(t−1) + 0.05;  pM(t) ← pM(t−1) + 0.005;
  if f̄offSize / f̄popSize ≤ 0.9 then
    pC(t) ← pC(t−1) − 0.05;  pM(t) ← pM(t−1) − 0.005;
  if 0.9 < f̄offSize / f̄popSize < 1.1 then
    pC(t) ← pC(t−1);  pM(t) ← pM(t−1);
  output pC(t), pM(t);
end

where f̄offSize and f̄popSize denote the average fitness of the offspring and of the whole population, respectively.
6.3 Parameter Control Approach using Fuzzy Logic Controller

₪ Parameter Control Approach using a Fuzzy Logic Controller (FLC)

Song, Y. H., G. S. Wang, P. T. Wang & A. T. Johns: "Environmental/Economic Dispatch Using Fuzzy Logic Controlled Genetic Algorithms," IEE Proceedings on Generation, Transmission and Distribution, vol. 144, no. 4, pp. 377-382, 1997.

■ Basic Concept
The heuristic updating strategy for the crossover and mutation rates considers the changes of the average fitness of the GA population over two consecutive generations.
For example, in a minimization problem, we can define the change of the average fitness at generation t as:

    Δfavg(t) = f̄parSize(t) − f̄offSize(t)
             = (1/parSize) · Σ_{k=1}^{parSize} fk(t) − (1/offSize) · Σ_{k=1}^{offSize} fk(t)

where
    parSize : population size satisfying the constraints
    offSize : offspring size satisfying the constraints
6.3 Parameter Control Approach using Fuzzy Logic Controller

procedure: regulation of pC and pM using the average fitness
input: GA parameters, pC(t−1), pM(t−1), Δfavg(t−1), Δfavg(t), ε, γ
output: pC(t), pM(t)
begin
  if ε ≤ Δfavg(t−1) ≤ γ and ε ≤ Δfavg(t) ≤ γ then
    increase pC and pM for the next generation;
  if −γ ≤ Δfavg(t−1) ≤ −ε and −γ ≤ Δfavg(t) ≤ −ε then
    decrease pC and pM for the next generation;
  if −ε < Δfavg(t−1) < ε and −ε < Δfavg(t) < ε then
    rapidly increase pC and pM for the next generation;
  output pC(t), pM(t);
end
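These rules can be sketched directly. The step sizes (0.05 for pC, 0.005 for pM, doubled for the "rapidly increase" case) and the default ε and γ are illustrative assumptions, since the slide leaves them symbolic:

```python
def regulate(p_c, p_m, df_prev, df_curr, eps=0.001, gamma=0.1,
             step_c=0.05, step_m=0.005):
    """Regulate p_C and p_M from the average-fitness changes of two
    consecutive generations (minimization convention, as in the slide)."""
    if eps <= df_prev <= gamma and eps <= df_curr <= gamma:
        # steady improvement: increase the rates
        p_c, p_m = p_c + step_c, p_m + step_m
    elif -gamma <= df_prev <= -eps and -gamma <= df_curr <= -eps:
        # steady worsening: decrease the rates
        p_c, p_m = p_c - step_c, p_m - step_m
    elif -eps < df_prev < eps and -eps < df_curr < eps:
        # stagnation: rapidly increase the rates to escape
        p_c, p_m = p_c + 2 * step_c, p_m + 2 * step_m
    return p_c, p_m
```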
Implementation Strategy for the Crossover FLC

step 1: Inputs and output of the crossover FLC
The inputs of the crossover FLC are Δfavg(t−1) and Δfavg(t) of two consecutive generations, and its output is a change Δc(t) in the crossover rate pC.

step 2: Membership functions of Δfavg(t−1), Δfavg(t), and Δc(t)
The membership functions of the fuzzy input and output linguistic variables are illustrated in Figures 1.8 and 1.9, respectively. The discretized input and output values for Δfavg(t−1) and Δfavg(t) are given in Table 1.1; Δfavg(t−1) and Δfavg(t) are normalized into the range [−1.0, 1.0], and Δc(t) is normalized into the range [−0.1, 0.1], according to their corresponding maximum values.
6.3 Parameter Control Approach using Fuzzy Logic Controller

Implementation Strategy for the Crossover FLC

Fig. 1.8 Membership functions for Δfavg(t−1) and Δfavg(t)   Fig. 1.9 Membership function of Δc(t)

where: NR – Negative larger, NL – Negative large, NM – Negative medium, NS – Negative small, ZE – Zero, PS – Positive small, PM – Positive medium, PL – Positive large, PR – Positive larger.
6.3 Parameter Control Approach using Fuzzy Logic Controller

Implementation Strategy for the Crossover FLC

Table 1.1 Input and output results of discretization

    input x (normalized)   output level
    x ≤ −0.7                   −4
    −0.7 < x ≤ −0.5            −3
    −0.5 < x ≤ −0.3            −2
    −0.3 < x ≤ −0.1            −1
    −0.1 < x ≤ 0.1              0
    0.1 < x ≤ 0.3               1
    0.3 < x ≤ 0.5               2
    0.5 < x ≤ 0.7               3
    x > 0.7                     4
Implementation Strategy for Crossover FLCImplementation Strategy for Crossover FLC
step 3: Fuzzy decision tablestep 3: Fuzzy decision table
Use the same fuzzy decision table as the conventional work Use the same fuzzy decision table as the conventional work Song, Song, et alet al..
(1997), and the table is as follow: (1997), and the table is as follow:
Table 1.2 Fuzzy decision table for crossoverTable 1.2 Fuzzy decision table for crossover
6.6. 3 3 Parameter Control Approach using Fuzzy Logic ControllerParameter Control Approach using Fuzzy Logic Controller
                         Δf_avg(t-1)
    Δf_avg(t)   NR  NL  NM  NS  ZE  PS  PM  PL  PR
       NR       NR  NL  NL  NM  NM  NS  NS  ZE  ZE
       NL       NL  NL  NM  NM  NS  NS  ZE  ZE  PS
       NM       NL  NM  NM  NS  NS  ZE  ZE  PS  PS
       NS       NM  NM  NS  NS  ZE  ZE  PS  PS  PM
       ZE       NM  NS  NS  ZE  ZE  PS  PS  PM  PM
       PS       NS  NS  ZE  ZE  PS  PS  PM  PM  PL
       PM       NS  ZE  ZE  PS  PS  PM  PM  PL  PL
       PL       ZE  ZE  PS  PS  PM  PM  PL  PL  PR
       PR       ZE  PS  PS  PM  PM  PL  PL  PR  PR

(the table entries are the fuzzy values of Δc(t))
step 4: Defuzzification table for control actions
For simplicity, a defuzzification table for determining the action of the crossover FLC was set up. It is formulated as follows (Song et al., 1997):

Table 1.3 Defuzzification table for control action of crossover
                           x
             -4  -3  -2  -1   0   1   2   3   4
    y   -4   -4  -3  -3  -2  -2  -1  -1   0   0
        -3   -3  -3  -2  -2  -1  -1   0   0   1
        -2   -3  -2  -2  -1  -1   0   0   1   1
        -1   -2  -2  -1  -1   0   0   1   1   2
         0   -2  -1  -1   0   0   1   1   2   2
         1   -1  -1   0   0   1   1   2   2   3
         2   -1   0   0   1   1   2   2   3   3
         3    0   0   1   1   2   2   3   3   4
         4    0   1   1   2   2   3   3   4   4

(the table entries are the control actions Z(i, j))
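Transcribed into code (a sketch, not code from the slides), Table 1.3 reduces to a simple lookup; incidentally, every entry coincides with the closed form ⌈(i + j)/2⌉.

```python
import math  # used in the closing comment's closed-form check

# Table 1.3 transcribed as a lookup. Row index i and column index j each run
# over -4..4; Z_ROWS[i + 4][j + 4] is the control action Z(i, j).
Z_ROWS = [
    [-4, -3, -3, -2, -2, -1, -1,  0,  0],
    [-3, -3, -2, -2, -1, -1,  0,  0,  1],
    [-3, -2, -2, -1, -1,  0,  0,  1,  1],
    [-2, -2, -1, -1,  0,  0,  1,  1,  2],
    [-2, -1, -1,  0,  0,  1,  1,  2,  2],
    [-1, -1,  0,  0,  1,  1,  2,  2,  3],
    [-1,  0,  0,  1,  1,  2,  2,  3,  3],
    [ 0,  0,  1,  1,  2,  2,  3,  3,  4],
    [ 0,  1,  1,  2,  2,  3,  3,  4,  4],
]

def Z(i, j):
    """Control action for the index pair (i, j), each in -4..4.
    Every entry equals math.ceil((i + j) / 2)."""
    return Z_ROWS[i + 4][j + 4]
```

For example, `Z(-4, 3)` returns 0: a strongly negative change at t-1 and a moderately positive change at t cancel out to no control action.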
Implementation Strategy for Mutation FLC

The inputs of the mutation FLC are the same as those of the crossover FLC, and its output is a change in the mutation rate, Δm(t).
■ Coordinated Strategy between the FLC and GA

Fig. 1.10 Coordinated strategy between the FLC and GA

[Block diagram: the crossover FLC and the mutation FLC each take the differences of the average fitness evaluations of successive generations, eval(V; t) − eval(V; t−1) and eval(V; t−1) − eval(V; t−2), and feed the resulting changes Δc(t) and Δm(t) of the rates p_C and p_M back into the GA.]
₪ Detailed procedure for Implementing Crossover and Mutation FLCs

input: GA parameters, p_C(t-1), p_M(t-1), Δf_avg(t-1), Δf_avg(t)
output: p_C(t), p_M(t)

step 1: The input variables of the FLCs for regulating the GA operators are the changes of the average fitness in two continuous generations (t-1 and t), as follows:

    Δf_avg(t-1) = f_avg(t-1) - f_avg(t-2),   Δf_avg(t) = f_avg(t) - f_avg(t-1)

step 2: After normalizing Δf_avg(t-1) and Δf_avg(t), assign these values to the indexes i and j corresponding to the control actions in the defuzzification table (see Table 1.3).
step 3: Calculate the changes of the crossover rate Δc(t) and the mutation rate Δm(t) as follows:

    Δc(t) = 0.02 · Z(i, j),   Δm(t) = 0.002 · Z(i, j)

where the contents of Z(i, j) are the control actions corresponding to the values of Δf_avg(t-1) and Δf_avg(t) for defuzzification. The values 0.02 and 0.002 are given to regulate the increasing and decreasing ranges of the rates of the crossover and mutation operators.

step 4: Update the rates of the crossover and mutation operators by using the following equations:

    p_C(t) = p_C(t-1) + Δc(t),   p_M(t) = p_M(t-1) + Δm(t)

The adjusted rates should not exceed the range from 0.5 to 1.0 for p_C(t) and the range from 0.0 to 0.1 for p_M(t).
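Steps 1–4 can be sketched end to end as follows. This is an illustrative reconstruction, not code from the text: it assumes the fitness changes are already normalized into [-1.0, 1.0] so that the Table 1.1 bands apply, and it reproduces the Table 1.3 control actions with the closed form ⌈(i + j)/2⌉.

```python
import math

def discretize(x):
    # Table 1.1: map a normalized input to an index in -4..4
    # (x < -0.7 -> -4, -0.7 <= x < -0.5 -> -3, ..., x >= 0.7 -> 4)
    thresholds = [-0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7]
    return sum(x >= t for t in thresholds) - 4

def flc_update(p_c, p_m, dfavg_prev, dfavg_now):
    """One FLC step: return the regulated (p_C(t), p_M(t))."""
    i = discretize(dfavg_prev)            # index for normalized delta f_avg(t-1)
    j = discretize(dfavg_now)             # index for normalized delta f_avg(t)
    z = math.ceil((i + j) / 2)            # control action Z(i, j) of Table 1.3
    # step 4, clamped to [0.5, 1.0] for p_C and [0.0, 0.1] for p_M
    p_c = min(1.0, max(0.5, p_c + 0.02 * z))
    p_m = min(0.1, max(0.0, p_m + 0.002 * z))
    return p_c, p_m
```

For example, two generations of strongly rising average fitness push both rates up: `flc_update(0.7, 0.05, 0.8, 0.8)` gives approximately (0.78, 0.058).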
6.4 Design of aHGA using Conventional Heuristics and FLC

Design of adaptive hybrid Genetic Algorithms (aHGAs) using conventional heuristics and FLC

■ Implementing process of aHGAs
  Design of Canonical GA (CGA)
  Design of Hybrid GA (HGA)
  Design of various aHGAs
Design of Canonical GA (CGA)

For the canonical GA (CGA), we use a real-number representation instead of a bit-string one. The detailed implementation procedure for the CGA is as follows:
procedure: Canonical GA (CGA) (Gen & Cheng, 2000)
input: GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by random generation based on system constraints;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by non-uniform arithmetic crossover;
    mutation P(t) to yield C(t) by uniform mutation;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by elitist strategy in enlarged sampling space;
    t ← t + 1;
  end
  output best solution;
end
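The CGA pseudocode can be fleshed out as a minimal Python sketch on a toy problem (minimizing the sphere function). The bounds, rates, population size, and generation count are illustrative assumptions, and a plain whole-arithmetic crossover stands in for the non-uniform arithmetic crossover named in the pseudocode.

```python
import random

LOW, HIGH, DIM = -5.0, 5.0, 3         # illustrative real-coded search space
POP, P_C, P_M, GENS = 20, 0.7, 0.05, 100

def fitness(v):
    # higher is better: negate the sphere objective sum(x^2)
    return -sum(x * x for x in v)

def arithmetic_crossover(a, b):
    # convex blend of the two parents with a random weight
    lam = random.random()
    c1 = [lam * x + (1 - lam) * y for x, y in zip(a, b)]
    c2 = [(1 - lam) * x + lam * y for x, y in zip(a, b)]
    return c1, c2

def uniform_mutation(v):
    # reset one randomly chosen gene uniformly within its bounds
    v = v[:]
    v[random.randrange(DIM)] = random.uniform(LOW, HIGH)
    return v

def cga():
    pop = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(GENS):
        children = []
        for _ in range(POP // 2):
            a, b = random.sample(pop, 2)
            if random.random() < P_C:
                a, b = arithmetic_crossover(a, b)
            children += [a, b]
        children = [uniform_mutation(c) if random.random() < P_M else c
                    for c in children]
        # elitist strategy in the enlarged sampling space P(t) + C(t)
        pop = sorted(pop + children, key=fitness, reverse=True)[:POP]
    return max(pop, key=fitness)
```

Calling `cga()` returns the best real-valued vector found; because selection is elitist over the enlarged space, the best fitness never decreases across generations.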
Design of Hybrid GA (HGA): CGA with Local Search

For this HGA, the CGA procedure and the iterative hill climbing method (Yun & Moon, 2003) are used as a mixed type.
procedure: CGA with Local Search (HGA)
input: GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by random generation based on system constraints;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by non-uniform arithmetic crossover;
    mutation P(t) to yield C(t) by uniform mutation;
    local search C(t) by iterative hill climbing method (Yun & Moon, 2003);
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by elitist strategy in enlarged sampling space;
    t ← t + 1;
  end
  output best solution;
end
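The local-search step can be sketched as a generic iterative hill climber applied to one chromosome; the neighborhood step size and trial count below are illustrative assumptions, not the settings of Yun & Moon (2003).

```python
import random

def hill_climb(v, fitness, step=0.1, trials=20):
    """Iterative hill climbing sketch: repeatedly sample a random neighbor
    of the current best point and keep it only if its fitness improves."""
    best, best_fit = list(v), fitness(v)
    for _ in range(trials):
        cand = [x + random.uniform(-step, step) for x in best]
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best
```

Because only improving moves are accepted, the returned chromosome is never worse than the input one, which is why the HGA can apply it to every child in C(t) without risk.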
Design of aHGAs: HGAs with Conventional Heuristics

■ aHGA1: CGA with local search and adaptive scheme 1

For the first aHGA (aHGA1), we use the CGA procedure, the iterative hill climbing method and the procedures of the heuristic by Mak et al. (2000) as a mixed type.
procedure: CGA with Local Search and Adaptive Scheme 1 (aHGA1)
input: GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by random generation based on system constraints;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by non-uniform arithmetic crossover;
    mutation P(t) to yield C(t) by uniform mutation;
    local search C(t) by iterative hill climbing method;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by elitist strategy in enlarged sampling space;
    adaptive regulation of GA parameters using heuristic updating strategy (Mak et al., 2000);
    t ← t + 1;
  end
  output best solution;
end
■ aHGA2: CGA with local search and adaptive scheme 2

For the second aHGA (aHGA2), we use the CGA procedure, the iterative hill climbing method and the procedures of the heuristic by Srinivas and Patnaik (1994) as a mixed type.
procedure: CGA with Local Search and Adaptive Scheme 2 (aHGA2)
input: GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by random generation based on system constraints;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by non-uniform arithmetic crossover;
    mutation P(t) to yield C(t) by uniform mutation;
    local search C(t) by iterative hill climbing method;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by elitist strategy in enlarged sampling space;
    adaptive regulation of GA parameters using heuristic updating strategy (Srinivas and Patnaik, 1994);
    t ← t + 1;
  end
  output best solution;
end
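As a hedged sketch of the kind of heuristic updating strategy meant here, the well-known Srinivas and Patnaik (1994) rule scales the crossover and mutation rates by how close an individual's fitness is to the population maximum, leaving the full rates for below-average individuals; the constants k1 and k2 below are illustrative defaults, not values from the text.

```python
def adaptive_rates(f_prime, f_max, f_avg, k1=1.0, k2=0.5):
    """Return (p_c, p_m) for an individual (or parent pair) with fitness
    f_prime, given the population's maximum f_max and average f_avg.
    At-or-above-average individuals get rates proportional to their
    distance from f_max; below-average individuals get the full k1, k2."""
    if f_max == f_avg:            # degenerate (fully converged) population
        return k1, k2
    if f_prime >= f_avg:
        scale = (f_max - f_prime) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k1, k2
```

Note how the best individual gets zero disruption (`adaptive_rates(f_max, f_max, f_avg)` is (0.0, 0.0)), which preserves good solutions while still churning the weaker part of the population.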
Design of aHGAs: HGAs with FLC

■ flc-aHGA: CGA with local search and adaptive scheme of FLC

For the flc-aHGA, we use the CGA procedure, the iterative hill climbing method and the procedures of the FLC (Song et al., 1997) as a mixed type.
procedure: CGA with Local Search and Adaptive Scheme of FLC (flc-aHGA)
input: GA parameters
output: best solution
begin
  t ← 0;
  initialize P(t) by random generation based on system constraints;
  fitness eval(P);
  while (not termination condition) do
    crossover P(t) to yield C(t) by non-uniform arithmetic crossover;
    mutation P(t) to yield C(t) by uniform mutation;
    local search C(t) by iterative hill climbing method;
    fitness eval(C);
    select P(t+1) from P(t) and C(t) by elitist strategy in enlarged sampling space;
    adaptive regulation of GA parameters using FLC (Song et al., 1997);
    t ← t + 1;
  end
  output best solution;
end
Flowchart of the proposed algorithms
[Flowcharts of the five algorithms (CGA, HGA, aHGA1, aHGA2, flc-aHGA). Each shares the loop start → initial population → evaluation → crossover → mutation → selection → termination condition → stop; the HGA and all three aHGAs insert an iterative hill climbing step, and aHGA1, aHGA2, and flc-aHGA additionally apply adaptive scheme 1, adaptive scheme 2, and the adaptive FLC, respectively.]
Conclusion

₪ The Genetic Algorithms (GA), as powerful and broadly applicable stochastic search and optimization techniques, are perhaps the most widely known type of Evolutionary Computation methods, or Evolutionary Optimization, today.

₪ In this chapter, we have introduced the following subjects:
  ■ Foundations of Genetic Algorithms
  ■ Five basic components of Genetic Algorithms
  ■ Example with Simple Genetic Algorithms
  ■ Encoding Issue
  ■ Genetic Operators
  ■ Adaptation of Genetic Algorithms
    ■ Structure Adaptation and Parameter Adaptation
  ■ Hybrid Genetic Algorithms
    ■ Parameter control approach of GA
    ■ Hybrid Genetic Algorithm with Fuzzy Logic Controller