List of metaphor-based metaheuristics: Algorithms
1 Algorithms
1.1 Simulated annealing (Kirkpatrick et al., 1983)
1.2 Ant colony optimization (Dorigo, 1992)
1.3 Particle swarm optimization (Kennedy & Eberhart, 1995)
1.4 Harmony search (Geem, Kim & Loganathan, 2001)
1.5 Artificial bee colony algorithm (Karaboga, 2005)
1.6 Bees algorithm (Pham, 2005)
1.7 Glowworm swarm optimization (Krishnanand & Ghose, 2005)
1.8 Shuffled frog leaping algorithm (Eusuff, Lansey & Pasha, 2006)
1.9 Imperialist competitive algorithm (Atashpaz-Gargari & Lucas, 2007)
1.10 River formation dynamics (Rabanal, Rodríguez & Rubio, 2007)
1.11 Intelligent water drops algorithm (Shah-Hosseini, 2007)
1.12 Gravitational search algorithm (Rashedi, Nezamabadi-pour & Saryazdi, 2009)
1.13 Cuckoo search (Yang & Deb, 2009)
1.14 Bat algorithm (Yang, 2010)
1.15 Flower pollination algorithm (Yang, 2012)
1.16 Cuttlefish optimization algorithm (Eesa, Mohsin, Brifcani & Orman, 2013)
1.17 Artificial swarm intelligence (Rosenberg, 2014)
1.18 Duelist algorithm (Biyanto, 2016)
1.19 Killer whale algorithm (Biyanto, 2016)
1.20 Rain water algorithm (Biyanto, 2017)
1.21 Mass and energy balances algorithm (Biyanto, 2017)
Algorithms
Simulated annealing (Kirkpatrick et al., 1983)
Simulated annealing (SA) is a probabilistic technique inspired by a heat treatment method used in metallurgy. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For problems where finding the precise global optimum is less important than finding an acceptable local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.
Simulated annealing interprets slow cooling as a slow decrease in the probability of accepting worse solutions as it explores the solution space. Accepting worse solutions is a fundamental property of metaheuristics, because it allows a more extensive search for the optimal solution.
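As a concrete illustration, a minimal Python sketch of the acceptance/cooling loop might look as follows; the neighbour function, the geometric cooling schedule and the parameter values are illustrative choices, not part of the original formulation.

import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=10000):
    """Generic simulated annealing loop (illustrative parameters)."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)              # propose a nearby candidate
        fy = cost(y)
        # Accept better moves always; worse moves with probability exp(-delta/t)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                  # slow "cooling" of the temperature
    return best, fbest

# Example: minimise a simple one-dimensional function
best, val = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    x0=random.uniform(-10, 10),
)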
Ant colony optimization (Dorigo, 1992)
The ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result several variants have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs model-based search and shares some similarities with estimation of distribution algorithms.
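A compact sketch of the pheromone-based construction loop for a travelling salesman instance is shown below; the parameter values and the simple all-ants deposit rule are illustrative simplifications, not Dorigo's original Ant System settings.

import random

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    """Sketch of ant colony optimisation for a symmetric TSP distance matrix."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]           # pheromone levels on edges
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in visited]
                # Transition probability ~ pheromone^alpha * (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in choices]
                j = random.choices(choices, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone in proportion to tour quality
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len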
Particle swarm optimization (Kennedy & Eberhart, 1995)
Particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulae over each particle's position and velocity. Each particle's movement is influenced by its local best known position and is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
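The position and velocity updates can be sketched as follows; the inertia weight and acceleration coefficients are illustrative values, not prescribed by the original papers.

import random

def pso(cost, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Bare-bones particle swarm optimisation (illustrative constants)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_val = [cost(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + pull toward personal best + pull toward global best
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            f = cost(x[i])
            if f < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], f
                if f < gbest_val:
                    gbest, gbest_val = x[i][:], f
    return gbest, gbest_val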
PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was later simplified and observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications was made by Poli. More recently, a comprehensive review of theoretical and experimental work on PSO has been published by Bonyadi and Michalewicz.
Harmony search (Geem, Kim & Loganathan, 2001)
Harmony search is a phenomenon-mimicking metaheuristic introduced in 2001 by Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan. Harmony search is inspired by the improvisation process of jazz musicians. It has been criticized as being a special case of the well-established evolution strategies algorithm.
Harmony search (HS) is a relatively simple yet efficient evolutionary algorithm. In the HS algorithm a group of solutions is randomly generated and stored (the harmony memory). A new solution is generated using all the solutions in the harmony memory (rather than just the two parents used in a GA), and if this new solution is better than the worst solution in the harmony memory, the worst solution is replaced by the new one. Although HS is a relatively new metaheuristic, its effectiveness and advantages have been demonstrated in various applications such as the design of municipal water distribution networks, structural design, traffic routing, the load dispatch problem in electrical engineering, multi-objective optimization, rostering problems, clustering, classification and feature selection, to name a few. Detailed surveys of the applications of HS, including applications in data mining, are available in the literature.
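A minimal sketch of the harmony memory loop is given below, assuming illustrative values for the harmony memory size, the memory considering rate (HMCR), the pitch adjusting rate (PAR) and the bandwidth.

import random

def harmony_search(cost, dim, bounds=(-5.0, 5.0), hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, n_iter=5000):
    """Sketch of harmony search (HMCR/PAR/bandwidth values are illustrative)."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    values = [cost(h) for h in memory]
    for _ in range(n_iter):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # draw from harmony memory
                x = random.choice(memory)[d]
                if random.random() < par:              # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                      # random improvisation
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        f = cost(new)
        worst = max(range(hms), key=lambda i: values[i])
        if f < values[worst]:                          # replace the worst harmony
            memory[worst], values[worst] = new, f
    best = min(range(hms), key=lambda i: values[i])
    return memory[best], values[best]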
Artificial bee colony algorithm (Karaboga, 2005)
The artificial bee colony algorithm is a metaheuristic introduced by Karaboga in 2005 that simulates the foraging behaviour of honey bees. The ABC algorithm has three phases: employed bee, onlooker bee and scout bee. In the employed bee and onlooker bee phases, bees exploit the food sources by local searches in the neighbourhood of the selected solutions, using deterministic selection in the employed bee phase and probabilistic selection in the onlooker bee phase. In the scout bee phase, by analogy with abandoning exhausted food sources in the foraging process, solutions that are no longer beneficial to search progress are abandoned, and new solutions are inserted in their place to explore new regions of the search space. The algorithm has a well-balanced exploration and exploitation ability.
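A compact sketch of the three phases follows, with illustrative parameter values and a simple 1/(1+f) fitness mapping that assumes non-negative costs.

import random

def abc(cost, dim, n_food=20, limit=50, n_iter=200, bounds=(-5.0, 5.0)):
    """Compact artificial bee colony sketch (parameters are illustrative)."""
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    vals = [cost(f) for f in foods]
    trials = [0] * n_food

    def try_neighbour(i):
        """Local move around food source i toward/away from a random partner."""
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        cand = foods[i][:]
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        cand[d] = min(max(cand[d], lo), hi)
        f = cost(cand)
        if f < vals[i]:
            foods[i], vals[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                      # employed bee phase
            try_neighbour(i)
        fitness = [1.0 / (1.0 + v) for v in vals]    # assumes non-negative costs
        for _ in range(n_food):                      # onlooker bee phase
            i = random.choices(range(n_food), weights=fitness)[0]
            try_neighbour(i)
        for i in range(n_food):                      # scout bee phase
            if trials[i] > limit:                    # abandon exhausted sources
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = cost(foods[i]), 0
    best = min(range(n_food), key=lambda i: vals[i])
    return foods[best], vals[best]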
Bees algorithm (Pham, 2005)
The bees algorithm, in its basic formulation, was created by Pham and his co-workers in 2005 and further refined in the following years. Modelled on the foraging behaviour of honey bees, the algorithm combines global explorative search with local exploitative search. A small number of artificial bees (scouts) explores the solution space (environment) randomly for solutions of high fitness (highly profitable food sources), whilst the bulk of the population searches (harvests) the neighbourhood of the fittest solutions looking for the fitness optimum. A deterministic recruitment procedure, which simulates the waggle dance of biological bees, is used to communicate the scouts' findings to the foragers and to distribute the foragers depending on the fitness of the neighbourhoods selected for local search. Once the search in the neighbourhood of a solution stagnates, the local fitness optimum is considered to be found and the site is abandoned. In summary, the bees algorithm searches the most promising regions of the solution space concurrently, whilst continuously sampling it in search of new favourable regions.
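A simplified sketch of this scout/forager scheme is shown below (fixed patch size; the site-abandonment and neighbourhood-shrinking refinements of the full algorithm are omitted, and all counts are illustrative).

import random

def bees_algorithm(cost, dim, n_scouts=30, n_best=8, n_elite=3,
                   foragers_best=10, foragers_elite=20, patch=0.1,
                   n_iter=200, bounds=(-5.0, 5.0)):
    """Sketch of the scout/forager scheme with elite and non-elite sites."""
    lo, hi = bounds
    scouts = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_scouts)]
    for _ in range(n_iter):
        scouts.sort(key=cost)
        new_population = []
        for rank, site in enumerate(scouts[:n_best]):
            # Elite sites recruit more foragers than the remaining selected sites
            n_foragers = foragers_elite if rank < n_elite else foragers_best
            foragers = []
            for _ in range(n_foragers):
                cand = [min(max(s + random.uniform(-patch, patch), lo), hi) for s in site]
                foragers.append(cand)
            new_population.append(min(foragers + [site], key=cost))
        # Remaining bees scout randomly for new promising regions
        while len(new_population) < n_scouts:
            new_population.append([random.uniform(lo, hi) for _ in range(dim)])
        scouts = new_population
    best = min(scouts, key=cost)
    return best, cost(best)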
Glowworm swarm optimization (Krishnanand & Ghose, 2005)
Glowworm swarm optimization is a swarm intelligence optimization algorithm based on the behaviour of glowworms (also known as fireflies or lightning bugs). The GSO algorithm was developed and introduced by K. N. Krishnanand and Debasish Ghose in 2005 at the Guidance, Control, and Decision Systems Laboratory in the Department of Aerospace Engineering at the Indian Institute of Science, Bangalore, India.
The behaviour pattern of glowworms used by this algorithm is their apparent capability to change the intensity of their luciferin emission and thus appear to glow at different intensities.
The second part of the algorithm is what makes it different from other evolutionary multimodal optimization algorithms. This step allows glowworm swarms to automatically subdivide into subgroups that can converge to multiple local optima simultaneously; this property allows the algorithm to be used to identify multiple peaks of a multi-modal function and places it in the family of evolutionary multimodal optimization algorithms.
Shuffled frog leaping algorithm (Eusuff, Lansey & Pasha, 2006)
The shuffled frog leaping algorithm is an optimization algorithm used in artificial intelligence. It is comparable to a genetic algorithm.
Imperialist competitive algorithm (Atashpaz-Gargari & Lucas, 2007)
The imperialist competitive algorithm (ICA) is a computational method used to solve optimization problems of various types. Like most methods in the area of evolutionary computation, ICA does not need the gradient of the function in its optimization process. From a specific point of view, ICA can be thought of as the social counterpart of genetic algorithms (GAs): ICA is a mathematical model and computer simulation of human social evolution, while GAs are based on the biological evolution of species.
The algorithm starts by generating a set of random candidate solutions in the search space of the optimization problem. The generated random points are called the initial countries. Countries in this algorithm are the counterpart of chromosomes in GAs and particles in particle swarm optimization (PSO); each is an array of values of a candidate solution of the optimization problem. The cost function of the optimization problem determines the power of each country. Based on their power, some of the best initial countries (the countries with the lowest cost function value) become imperialists and start taking control of other countries (called colonies), forming the initial empires.
Two main operators of this algorithm are assimilation and revolution. Assimilation moves the colonies of each empire closer to the imperialist state in the space of socio-political characteristics (the optimization search space). Revolution brings sudden random changes in the position of some of the countries in the search space. During assimilation and revolution a colony might reach a better position than its imperialist; it then has the chance to take control of the entire empire by replacing the current imperialist state of the empire.
Imperialistic competition is another part of this algorithm. All the empires try to win this game and take possession of the colonies of other empires. In each step of the algorithm, based on their power, all the empires have a chance to take control of one or more of the colonies of the weakest empire.
The algorithm continues with the mentioned steps (assimilation, revolution, competition) until a stop condition is satisfied.
The above steps can be summarized as the pseudocode below.
0) Define the objective function: $f(\mathbf{x}),\quad \mathbf{x}=(x_{1},x_{2},\dots ,x_{d});$
1) Initialization of the algorithm: generate random solutions in the search space and create the initial empires.
2) Assimilation: colonies move towards the imperialist states in different directions.
3) Revolution: random changes occur in the characteristics of some countries.
4) Position exchange between a colony and an imperialist: a colony with a better position than its imperialist has the chance to take control of the empire by replacing the existing imperialist.
5) Imperialistic competition: imperialists compete to take possession of each other's colonies.
6) Eliminate powerless empires: weak empires lose their power gradually and are eventually eliminated.
7) If the stop condition is satisfied, stop; if not, go to 2.
8) End.
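A heavily simplified Python sketch of this loop is given below; colony assignment, competition and empire elimination are reduced to their simplest forms, and all parameter values are illustrative rather than taken from the original paper.

import random

def ica(cost, dim, n_countries=50, n_imperialists=5, n_iter=200,
        bounds=(-5.0, 5.0), assimilation=0.5, revolution_rate=0.1):
    """Heavily simplified sketch of the imperialist competitive loop above."""
    lo, hi = bounds
    countries = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_countries)]
    countries.sort(key=cost)
    imperialists = countries[:n_imperialists]
    colonies = countries[n_imperialists:]
    # Assign each colony to a random empire (the original uses power-proportional assignment)
    owner = [random.randrange(n_imperialists) for _ in colonies]

    for _ in range(n_iter):
        for c, col in enumerate(colonies):
            imp = imperialists[owner[c]]
            for d in range(dim):
                # Assimilation: move part of the way toward the imperialist
                col[d] += assimilation * random.random() * (imp[d] - col[d])
                # Revolution: occasional random jump in one coordinate
                if random.random() < revolution_rate:
                    col[d] = random.uniform(lo, hi)
            # Position exchange: a colony that beats its imperialist takes over
            if cost(col) < cost(imp):
                imperialists[owner[c]], colonies[c] = col, imp
        # Imperialistic competition (simplified): the weakest empire loses one colony
        # to the strongest empire.
        power = [cost(imp) for imp in imperialists]
        weakest = max(range(n_imperialists), key=lambda i: power[i])
        victims = [c for c, o in enumerate(owner) if o == weakest]
        if victims:
            owner[random.choice(victims)] = min(range(n_imperialists), key=lambda i: power[i])
    best = min(imperialists, key=cost)
    return best, cost(best)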
River formation dynamics (Rabanal, Rodríguez & Rubio, 2007)
River formation dynamics is based on imitating how water forms rivers by eroding the ground and depositing sediments (the drops act as the swarm). As the drops transform the landscape by increasing or decreasing the altitude of places, solutions are given in the form of paths of decreasing altitude. Decreasing gradients are constructed, and these gradients are followed by subsequent drops to compose new gradients and reinforce the best ones. This heuristic optimization method was first presented in 2007 by Rabanal et al. The applicability of RFD to other NP-complete problems has been studied, and the algorithm has been applied to fields such as routing and robot navigation. The main applications of RFD can be found in a detailed survey.
Intelligent water drops algorithm (Shah-Hosseini, 2007)
The intelligent water drops algorithm contains a few essential elements of natural water drops and of the actions and reactions that occur between a river's bed and the water drops that flow within it. The IWD algorithm was first introduced for the traveling salesman problem in 2007.
Almost every IWD algorithm is composed of two parts: a graph that plays the role of distributed memory, on which the soils of the different edges are preserved, and the moving part of the IWD algorithm, a small number of intelligent water drops. These intelligent water drops (IWDs) both compete and cooperate to find better solutions, and by changing the soils of the graph, the paths to better solutions become more reachable. IWD-based algorithms need at least two IWDs to work.
The IWD algorithm has two types of parameters: static and dynamic. Static parameters remain constant during the run of the IWD algorithm. Dynamic parameters are reinitialized after each iteration of the IWD algorithm. The pseudo-code of an IWD-based algorithm may be specified in eight steps:
1) Static parameter initialization
  a) Problem representation in the form of a graph
  b) Setting values for static parameters
2) Dynamic parameter initialization: soil and velocity of the IWDs
3) Distribution of IWDs on the problem's graph
4) Solution construction by IWDs along with soil and velocity updating
  a) Local soil updating on the graph
  b) Soil and velocity updating on the IWDs
5) Local search over each IWD's solution (optional)
6) Global soil updating
7) Total-best solution updating
8) Go to step 2 unless the termination condition is satisfied
Gravitational search algorithm (Rashedi, Nezamabadi-pour & Saryazdi, 2009)
The gravitational search algorithm is based on the law of gravity and the notion of mass interactions. The GSA algorithm uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In GSA, there is an isolated system of masses. Through the gravitational force, every mass in the system can see the situation of the other masses; the gravitational force is therefore a way of transferring information between the different masses (Rashedi, Nezamabadi-pour and Saryazdi 2009). In GSA, agents are considered as objects and their performance is measured by their masses. All these objects attract each other through the gravity force, and this force causes a global movement of all objects towards the objects with heavier masses. The heavy masses correspond to good solutions of the problem: the position of an agent corresponds to a solution of the problem, and its mass is determined using a fitness function. With the lapse of time, masses are attracted by the heaviest mass, which ideally represents an optimum solution in the search space. The GSA can thus be considered an isolated system of masses, a small artificial world of masses obeying the Newtonian laws of gravitation and motion. A multi-objective variant of GSA, called MOGSA, was first proposed by Hassanzadeh et al. in 2010.
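A sketch of the mass/force/velocity update described above follows; the exponentially decaying gravitational constant and the fitness-to-mass mapping are illustrative simplifications.

import math
import random

def gsa(cost, dim, n_agents=30, n_iter=200, g0=100.0, bounds=(-5.0, 5.0)):
    """Sketch of a gravitational search step (constants are illustrative)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    for t in range(n_iter):
        fit = [cost(p) for p in x]
        best, worst = min(fit), max(fit)
        # Map fitness to masses: better (lower-cost) agents get larger mass
        m = [(worst - f) / (worst - best + 1e-12) for f in fit]
        total = sum(m) + 1e-12
        mass = [mi / total for mi in m]
        g = g0 * math.exp(-20.0 * t / n_iter)        # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(x[i], x[j]) + 1e-12
                for d in range(dim):
                    # Randomly weighted sum of pairwise gravitational pulls
                    acc[d] += random.random() * g * mass[j] * (x[j][d] - x[i][d]) / dist
            for d in range(dim):
                v[i][d] = random.random() * v[i][d] + acc[d]
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
    best_i = min(range(n_agents), key=lambda i: cost(x[i]))
    return x[best_i], cost(x[best_i])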
Cuckoo search (Yang & Deb, 2009)
In operations research, cuckoo search is an optimization algorithm developed by Xin-She Yang and Suash Deb in 2009. It was inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of other host birds (of other species). Some host birds can engage in direct conflict with the intruding cuckoos. For example, if a host bird discovers that the eggs are not its own, it will either throw these alien eggs away or abandon its nest and build a new nest elsewhere. Some cuckoo species, such as the New World brood-parasitic Tapera, have evolved in such a way that female parasitic cuckoos are specialized in mimicking the colours and pattern of the eggs of a few chosen host species.
Bat algorithm (Yang, 2010)
The bat algorithm is a swarm-intelligence-based algorithm inspired by the echolocation behaviour of microbats. BA automatically balances exploration (long-range jumps around the global search space to avoid getting stuck around one local maximum) with exploitation (searching in more detail around known good solutions to find local maxima) by controlling the loudness and pulse emission rates of the simulated bats in the multi-dimensional search space.
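A minimal sketch of this mechanism is given below; for brevity the loudness and pulse rate are kept constant, whereas the full algorithm decreases the loudness and increases the pulse rate as a bat homes in on a solution.

import random

def bat_algorithm(cost, dim, n_bats=20, n_iter=200, fmin=0.0, fmax=2.0,
                  loudness=0.9, pulse_rate=0.5, bounds=(-5.0, 5.0)):
    """Sketch of the frequency/loudness/pulse-rate loop (constants are illustrative)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    fit = [cost(p) for p in x]
    gbest = x[min(range(n_bats), key=lambda i: fit[i])][:]

    for _ in range(n_iter):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * random.random()
            cand = x[i][:]
            for d in range(dim):
                v[i][d] += (x[i][d] - gbest[d]) * freq
                cand[d] = min(max(x[i][d] + v[i][d], lo), hi)
            # Local search around the current best with probability (1 - pulse_rate)
            if random.random() > pulse_rate:
                cand = [min(max(g + 0.01 * random.gauss(0, 1), lo), hi) for g in gbest]
            f = cost(cand)
            # Accept the move only if it improves and a loudness test passes
            if f < fit[i] and random.random() < loudness:
                x[i], fit[i] = cand, f
                if f < cost(gbest):
                    gbest = cand[:]
    return gbest, cost(gbest)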
Flower pollination algorithm (Yang, 2012)
The flower pollination algorithm is a metaheuristic developed by Xin-She Yang, based on the pollination process of flowering plants.
This algorithm is based on four idealized rules or assumptions about pollination; in particular, the switching between global and local pollination is controlled by a probability p, which determines the fraction of the overall pollination activities carried out by each mechanism.
These rules can be translated into the following updating equations:
$x_{i}^{t+1}=x_{i}^{t}+L\,(x_{i}^{t}-g_{*})$
$x_{i}^{t+1}=x_{i}^{t}+\epsilon\,(x_{i}^{t}-x_{k}^{t})$
where $x_{i}^{t}$ is the solution vector and $g_{*}$ is the current best solution found so far during the iterations. The switching between these two equations during the iterations is controlled by the probability $p$. In addition, $\epsilon$ is a random number drawn from a uniform distribution, and $L$ is a step size drawn from a Lévy distribution.
Lévy flights using Lévy steps are a powerful random walk because both global and local search capabilities can be carried out at the same time. In contrast with standard random walks, Lévy flights have occasional long jumps, which enable the algorithm to jump out of local valleys. Lévy steps obey the following approximation:
$L\sim {\frac {1}{s^{1+\beta }}},$
where $\beta$ is the Lévy exponent. It may be challenging to draw Lévy steps properly, but a simple way of generating a Lévy flight step $s$ is to use two normal distributions $u$ and $v$ with the transform
$s={\frac {u}{|v|^{1/\beta }}},$
with
$u\sim N(0,\sigma ^{2}),\quad v\sim N(0,1),$
where $\sigma$ is a function of $\beta$.
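Putting the two update rules together, a compact sketch might look as follows; the Lévy step uses the Mantegna-style transform above, and the values of β and the switch probability are illustrative choices.

import math
import random

def levy_step(beta=1.5):
    """Mantegna-style Lévy step: s = u / |v|**(1/beta), u ~ N(0, sigma^2), v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def flower_pollination(cost, dim, n_flowers=25, n_iter=500, p=0.8, bounds=(-5.0, 5.0)):
    """Sketch of the two update rules above with switch probability p."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_flowers)]
    vals = [cost(f) for f in x]
    g = x[min(range(n_flowers), key=lambda i: vals[i])][:]
    for _ in range(n_iter):
        for i in range(n_flowers):
            if random.random() < p:      # global pollination via Lévy flights
                cand = [x[i][d] + levy_step() * (x[i][d] - g[d]) for d in range(dim)]
            else:                        # local pollination: mix with a random flower x_k
                k = random.choice([j for j in range(n_flowers) if j != i])
                eps = random.random()
                cand = [x[i][d] + eps * (x[i][d] - x[k][d]) for d in range(dim)]
            cand = [min(max(c, lo), hi) for c in cand]
            f = cost(cand)
            if f < vals[i]:
                x[i], vals[i] = cand, f
                if f < cost(g):
                    g = cand[:]
    return g, cost(g)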
Cuttlefish optimization algorithm (Eesa, Mohsin, Brifcani & Orman, 2013)
(Figure: the six cases of reflection used by the cuttlefish.)
The cuttlefish optimization algorithm is a population-based search algorithm inspired by the skin colour-changing behaviour of cuttlefish, developed in 2013. It has two global search and two local search mechanisms.
The algorithm considers two main processes: reflection and visibility. The reflection process simulates the light reflection mechanism, while visibility simulates the visibility of matching patterns. These two processes are used as a search strategy to find the global optimal solution. The formulation for finding a new solution (newP) using reflection and visibility is as follows:
$newP = reflection + visibility$
CFA divides the population into four groups (G1, G2, G3 and G4). For G1 the algorithm applies cases 1 and 2 (the interaction between chromatophores and iridophores) to produce new solutions; these two cases are used for global search. For G2, the algorithm uses case 3 (the iridophores reflection operator) and case 4 (the interaction between iridophores and chromatophores) to produce new solutions, as a local search. For G3, the interaction between leucophores and chromatophores (case 5) is used to produce solutions around the best solution (local search). For G4, case 6 (the reflection operator of leucophores) is used as a global search, reflecting the incoming light without modification.
The equations used to calculate reflection and visibility for the four groups are described below.
Cases 1 and 2 (for G1):
$reflection[j]=R\cdot G_{1}[i].points[j]$
$visibility[j]=V\cdot (Best.points[j]-G_{1}[i].points[j])$
Cases 3 and 4 (for G2):
$reflection[j]=R\cdot Best.points[j]$
$visibility[j]=V\cdot (Best.points[j]-G_{2}[i].points[j])$
Case 5 (for G3):
$reflection[j]=R\cdot Best.points[j]$
$visibility[j]=V\cdot (Best.points[j]-AV_{Best})$
Case 6 (for G4):
$P[i].points[j]=random\cdot (upperLimit-lowerLimit)+lowerLimit,\quad i=1,2,\dots ,N;\ j=1,2,\dots ,d$
where $G_{1}$ and $G_{2}$ are Group 1 and Group 2, $i$ denotes the $i$-th element in the group, $j$ denotes the $j$-th point of the $i$-th element in group G, Best is the best solution, and $AV_{Best}$ denotes the average value of the Best points. R and V are two random numbers produced around zero, for example in the interval (-1, 1); R represents the degree of reflection, V represents the visibility degree of the final view of the pattern, and upperLimit and lowerLimit are the upper and lower limits of the problem domain.
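A compact sketch of the four-group update is given below; the ranges of R and V, the greedy acceptance step and the interpretation of AV_Best as the average coordinate of the best solution are simplifying assumptions rather than the published formulation.

import random

def cuttlefish(cost, dim, n_cells=40, n_iter=200, bounds=(-5.0, 5.0)):
    """Sketch of the four-group reflection/visibility update (simplified)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_cells)]
    best = min(pop, key=cost)[:]
    av_best = sum(best) / dim          # assumed: average coordinate of the best solution
    quarter = n_cells // 4
    for _ in range(n_iter):
        for i, cell in enumerate(pop):
            group = i // quarter if quarter else 0
            new = []
            for j in range(dim):
                r, v = random.uniform(-1, 1), random.uniform(-1, 1)
                if group == 0:      # cases 1-2 (G1): global search around the cell itself
                    val = r * cell[j] + v * (best[j] - cell[j])
                elif group == 1:    # cases 3-4 (G2): local search around the best point
                    val = r * best[j] + v * (best[j] - cell[j])
                elif group == 2:    # case 5 (G3): search around the best vs. its average
                    val = r * best[j] + v * (best[j] - av_best)
                else:               # case 6 (G4): random restart over the whole domain
                    val = random.random() * (hi - lo) + lo
                new.append(min(max(val, lo), hi))
            if cost(new) < cost(cell):  # greedy acceptance, added here for simplicity
                pop[i] = new
        cand = min(pop, key=cost)
        if cost(cand) < cost(best):
            best = cand[:]
            av_best = sum(best) / dim
    return best, cost(best)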
Artificial swarm intelligence (Rosenberg, 2014)
Artificial swarm intelligence refers to a real-time closed-loop system of human users connected over the internet and structured in a framework modelled after natural swarms, such that it evokes the group's collective wisdom as a unified emergent intelligence. In this way, human swarms can answer questions, make predictions, reach decisions, and solve problems by collectively exploring a diverse set of options and converging on preferred solutions in synchrony. Invented by Dr. Louis Rosenberg in 2014, the ASI methodology has become notable for its ability to make accurate collective predictions that outperform the individual members of the swarm. In 2016 an artificial swarm intelligence from Unanimous A.I. was challenged by a reporter to predict the winners of the Kentucky Derby; it picked the first four horses, in order, beating 540-to-1 odds.
Duelist algorithm (Biyanto, 2016)
The duelist algorithm is a gene-based optimization algorithm similar to genetic algorithms. It starts with an initial set of duelists. Duels determine the winner and loser of each match; the loser learns from the winner, while the winner tries new skills or techniques that may improve its fighting capabilities. The few duelists with the highest fighting capabilities are called champions. The champions train new duelists with similar capabilities, and these new duelists join the tournament as representatives of each champion. All duelists are then re-evaluated, and the duelists with the worst fighting capabilities are eliminated to keep the number of duelists constant.
Killer whale algorithm (Biyanto, 2016)
The killer whale algorithm is an algorithm inspired by the life of the killer whale. The philosophy of the algorithm is based on the movement patterns of killer whales in prey hunting and on killer whale social structure. The novelty of this algorithm is the incorporation of the killer whale's ability to memorize into the algorithm.
Rain water algorithm (Biyanto, 2017)
The physical movement of rain drops, following Newton's laws of motion, inspired the authors to create this algorithm. Each rain drop represents random values of the optimized variables, and the drops vary in mass and elevation. The drops fall to the ground in free-fall motion, with velocity proportional to the square root of the gravitational acceleration times the elevation. The next movement is uniformly accelerated motion as the rain drops travel to reach the lowest place on the ground. The lowest place on the ground corresponds to the optimum of the objective function of the algorithm.
Mass and energy balances algorithm (Biyanto, 2017)
Mass and energy balances are fundamental laws of physics which state that mass can neither be produced nor destroyed; it is conserved. Equally fundamental is the law of conservation of energy: although energy can change form, it cannot be created or destroyed either. The beauty of this algorithm is its capability to reach the global optimum solution while working as either a minimization or a maximization searching method.