Intelligent Optimization Algorithms: The Runge-Kutta Optimizer (RUN)
Abstract: The Runge-Kutta optimizer (RUN) is a metaheuristic optimization algorithm proposed in 2021. It uses the slope-calculation concept of the Runge-Kutta method as its search logic to guide the optimization, and it features strong search capability and fast convergence.
1. Algorithm Principle
1.1 Search Mechanism
The search mechanism of the algorithm is based on the RK method: a set of random solutions searches the decision space while balancing global exploration and local exploitation. The fourth-order RK (RK4) method is used.
SM=\frac{1}{6}(x_{RK})\Delta x\tag{1}
x_{RK}=k_1+2k_2+2k_3+k_4 \tag{2}
k_1=\frac{1}{2\Delta x}(rand*x_w-u*x_b)\tag{3}
u=round(1+rand)*(1-rand)\tag{4}
k_2=\frac{1}{2\Delta x}(rand*(x_w+rand_1*k_1*\Delta x)-(u*x_b+rand_2*k_1*\Delta x))\tag{5}
k_3=\frac{1}{2\Delta x}(rand*(x_w+rand_1*(k_2/2)*\Delta x)-(u*x_b+rand_2*(k_2/2)*\Delta x))\tag{6}
k_4=\frac{1}{2\Delta x}(rand*(x_w+rand_1*k_3*\Delta x)-(u*x_b+rand_2*k_3*\Delta x))\tag{7}
\Delta x=2*rand*|Stp| \tag{8}
Stp=rand*((x_b-rand*x_{avg})+\gamma) \tag{9}
\gamma=rand*(x_n-rand*(u-l)*exp(-4i/Max_i))\tag{10}
where $x_b$ and $x_w$ are the best and worst solutions in the population, respectively; $rand_1$ and $rand_2$ are random numbers in [0,1]; $x_{avg}$ is the mean of the population; $u$ and $l$ in Eq. (10) denote the upper and lower bounds of the decision variables; $i$ is the current iteration and $Max_i$ is the maximum number of iterations.
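The search mechanism above (Eqs. (1)-(10)) can be sketched in Python as follows. This is a minimal illustration with names of my own choosing, not the reference implementation; it also assumes that $rand_1$ and $rand_2$ are drawn once and reused across Eqs. (5)-(7), which the text leaves open.

```python
import numpy as np

def rk_search_mechanism(x_b, x_w, x_avg, x_n, ub, lb, i, max_i, rng):
    """Sketch of the RK search mechanism, Eqs. (1)-(10).
    x_b, x_w: best/worst solutions; x_avg: population mean;
    x_n: current solution; ub, lb: variable bounds."""
    dim = x_b.shape[0]
    # Eq. (10): gamma shrinks as iterations progress
    gamma = rng.random(dim) * (x_n - rng.random(dim) * (ub - lb)
                               * np.exp(-4 * i / max_i))
    # Eq. (9): step term built from the best solution and the mean
    stp = rng.random(dim) * ((x_b - rng.random(dim) * x_avg) + gamma)
    # Eq. (8): position increment
    dx = 2 * rng.random(dim) * np.abs(stp)
    # Eq. (4)
    u = np.round(1 + rng.random(dim)) * (1 - rng.random(dim))
    # Eq. (3)
    k1 = (rng.random(dim) * x_w - u * x_b) / (2 * dx)
    r1, r2 = rng.random(dim), rng.random(dim)  # rand_1, rand_2 (assumed shared)
    # Eq. (5)
    k2 = (rng.random(dim) * (x_w + r1 * k1 * dx)
          - (u * x_b + r2 * k1 * dx)) / (2 * dx)
    # Eq. (6)
    k3 = (rng.random(dim) * (x_w + r1 * (k2 / 2) * dx)
          - (u * x_b + r2 * (k2 / 2) * dx)) / (2 * dx)
    # Eq. (7)
    k4 = (rng.random(dim) * (x_w + r1 * k3 * dx)
          - (u * x_b + r2 * k3 * dx)) / (2 * dx)
    # Eqs. (1)-(2): weighted RK4 combination and search term SM
    x_rk = k1 + 2 * k2 + 2 * k3 + k4
    return x_rk * dx / 6.0, x_rk
```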
1.2 Position Update
The position update of the RUN algorithm is given by Eq. (11):
x_{n+1}=\begin{cases} (x_c+r*SF*g*x_c)+SF*SM+u*x_s, & if \, rand<0.5\\ (x_m+r*SF*g*x_m)+SF*SM+u*x_s', & else \end{cases}\tag{11}
x_s=randn*(x_m-x_c)\tag{12}
x_s'=randn*(x_{r1}-x_{r2})\tag{13}
where $r$ is an integer equal to 1 or -1; $g$ is a random number in [0,2]; $SF$ is an adaptive factor; $u$ is a random number. $SF$ is computed as follows:
SF=2*(0.5-rand)*f\tag{13}
f=a*exp(-b*rand*i/Max_i)\tag{14}
where $i$ is the current iteration, $Max_i$ is the maximum number of iterations, $a$ and $b$ are constants, and $rand$ is a random number in [0,1]. $x_c$ and $x_m$ are defined as follows:
x_c=\varphi x_n+(1-\varphi)*x_{r_1} \tag{15}
x_m=\varphi x_{best}+(1-\varphi)x_{lbest} \tag{16}
where $\varphi$ is a random number in [0,1]; $x_{best}$ is the global best solution; $x_{lbest}$ is the best position of each generation.
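A minimal sketch of the position update (Eq. (11)), together with SF (Eqs. (13)-(14)) and $x_c$, $x_m$ (Eqs. (15)-(16)). The values of the constants $a$ and $b$ and the form of the random number $u$ (taken from Eq. (4) here) are assumptions, since the text does not fix them; $sm$ is the search term from Eq. (1).

```python
import numpy as np

def run_position_update(x_n, x_best, x_lbest, x_r1, x_r2, sm, i, max_i,
                        rng, a=20.0, b=12.0):
    """Sketch of the position update, Eq. (11).
    a, b: assumed constant values (illustrative only)."""
    dim = x_n.shape[0]
    f = a * np.exp(-b * rng.random() * i / max_i)        # Eq. (14)
    sf = 2.0 * (0.5 - rng.random()) * f                  # Eq. (13): adaptive factor
    phi = rng.random(dim)
    x_c = phi * x_n + (1 - phi) * x_r1                   # Eq. (15)
    x_m = phi * x_best + (1 - phi) * x_lbest             # Eq. (16)
    r = rng.choice([-1, 1])                              # r: 1 or -1
    g = 2 * rng.random()                                 # g in [0, 2]
    u = np.round(1 + rng.random(dim)) * (1 - rng.random(dim))  # assumed: Eq. (4)
    if rng.random() < 0.5:
        x_s = rng.standard_normal(dim) * (x_m - x_c)     # Eq. (12)
        return (x_c + r * sf * g * x_c) + sf * sm + u * x_s
    x_s2 = rng.standard_normal(dim) * (x_r1 - x_r2)      # x_s', first Eq. (13)
    return (x_m + r * sf * g * x_m) + sf * sm + u * x_s2
```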
1.3 Enhanced Solution Quality (ESQ)
The algorithm applies an enhanced solution quality (ESQ) step to improve the quality of solutions and avoid local optima at each iteration. ESQ generates a new solution ($x_{new2}$) via the following formulas:
x_{new2}=\begin{cases} x_{new1}+r*w*|(x_{new1}-x_{avg})+randn|, & rand<0.5, w<1\\ (x_{new1}-x_{avg})+r*w*|(u*x_{new1}-x_{avg})+randn|, & rand<0.5, w\geq 1 \end{cases}\tag{17}
where:
w=rand(0,2)*exp(-c*i/Max_i) \tag{18}
x_{avg}=\frac{x_{r1}+x_{r2}+x_{r3}}{3} \tag{19}
x_{new1}=\beta*x_{avg}+(1-\beta)x_{best} \tag{20}
where $\beta$ is a random number in [0,1]; $c$ is a random number in [0,5]; $r$ is a random integer equal to 1, 0, or -1; $x_{best}$ is the global best position. The solution $x_{new2}$ computed here may be worse than the current solution. To further improve solution quality, another new solution $x_{new3}$ is generated, defined as follows:
x_{new3}=(x_{new2}-rand*x_{new2})+SF*(rand*x_{RK}+(v*x_b-x_{new2}))\tag{21}
where $v$ is a random number equal to $2*rand$.
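The ESQ step (Eqs. (17)-(21)) can be sketched as follows. The form of $u$ is assumed to follow Eq. (4), which the text leaves implicit; $sf$ and $x_{RK}$ come from the position-update stage.

```python
import numpy as np

def esq_step(x_r1, x_r2, x_r3, x_best, x_b, x_rk, sf, i, max_i, rng):
    """Sketch of enhanced solution quality (ESQ), Eqs. (17)-(21)."""
    dim = x_best.shape[0]
    x_avg = (x_r1 + x_r2 + x_r3) / 3.0                          # Eq. (19)
    beta = rng.random(dim)
    x_new1 = beta * x_avg + (1 - beta) * x_best                 # Eq. (20)
    c = 5.0 * rng.random()                                      # c in [0, 5]
    w = rng.uniform(0, 2) * np.exp(-c * i / max_i)              # Eq. (18)
    r = rng.choice([-1, 0, 1])                                  # random integer
    u = np.round(1 + rng.random(dim)) * (1 - rng.random(dim))   # assumed: Eq. (4)
    noise = rng.standard_normal(dim)
    # Eq. (17): two cases depending on w
    if w < 1:
        x_new2 = x_new1 + r * w * np.abs((x_new1 - x_avg) + noise)
    else:
        x_new2 = (x_new1 - x_avg) + r * w * np.abs((u * x_new1 - x_avg) + noise)
    v = 2.0 * rng.random()
    # Eq. (21): a further candidate built from x_new2, x_RK and the best solution
    x_new3 = ((x_new2 - rng.random() * x_new2)
              + sf * (rng.random() * x_rk + (v * x_b - x_new2)))
    return x_new2, x_new3
```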
The pseudocode of the algorithm is as follows:
Algorithm 1. The pseudo-code of RUN
Stage 1. Initialization
  Initialize a, b
  Generate the RUN population X_n (n = 1, 2, …, N)
  Calculate the objective function of each member of the population
  Determine the solutions x_w, x_b, and x_best
Stage 2. RUN operators
  for i = 1 : Max_i
    for n = 1 : N
      for l = 1 : D
        Calculate position x_{n+1,l} using Eq. (11)
      end for
      Enhance the solution quality
      if rand < 0.5
        Calculate position x_new2 using Eq. (17)
        if f(x_n) < f(x_new2)
          if rand < w
            Calculate position x_new3 using Eq. (21)
          end
        end
      end
      Update positions x_w and x_b
    end for
    Update position x_best
    i = i + 1
  end
Stage 3. return x_best
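The overall loop of Algorithm 1 can be condensed into a self-contained, highly simplified Python skeleton. The RK search mechanism and ESQ operators are replaced by a simple surrogate move toward the best solution, so this illustrates only the control flow (initialize, update each member, greedily accept, track the best), not the full RUN update.

```python
import numpy as np

def run_skeleton(obj, dim, n_pop, max_i, lb, ub, seed=0):
    """Simplified skeleton of the RUN loop (Algorithm 1).
    The per-member update is a surrogate, not Eqs. (1)-(21)."""
    rng = np.random.default_rng(seed)
    # Stage 1: initialize the population and evaluate it
    X = lb + rng.random((n_pop, dim)) * (ub - lb)
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()
    best_f = fit.min()
    # Stage 2: iterate the operators
    for i in range(1, max_i + 1):
        for n in range(n_pop):
            # surrogate for Eq. (11): decaying random step toward the best
            sf = 2 * (0.5 - rng.random()) * 20 * np.exp(-12 * rng.random() * i / max_i)
            cand = np.clip(X[n] + sf * rng.random(dim) * (best - X[n]), lb, ub)
            if obj(cand) < fit[n]:          # greedy acceptance
                X[n], fit[n] = cand, obj(cand)
        if fit.min() < best_f:              # update x_best
            best, best_f = X[fit.argmin()].copy(), fit.min()
    # Stage 3: return the best solution found
    return best, best_f
```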
2. Experimental Results
3. References
[1] Ahmadianfar I, Heidari A A, Gandomi A H, Chu X, Chen H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method[J]. Expert Systems with Applications, 2021, 181: 115079.
4. Matlab Code